id | title | abstract | authors | published_date | link | markdown
---|---|---|---|---|---|---
2307.16522 | Two-dimensional Dyck words | We propose different ways of lifting the notion of Dyck language from words
to 2-dimensional (2D) pictures, by means of new definitions of increasing
comprehensiveness. Two of the proposals are based on alternative definitions of
a Dyck language, which are equivalent over words but not on pictures. First,
the property that any two pairs of matching parentheses are well-nested or
disjoint is rephrased for rectangular boxes and leads to the well-nested Dyck,
$DW_k$. This is a generalization of the known Chinese box language, but, unlike
the Chinese boxes, $DW_k$ is not recognizable by a tiling system. Second, the
Dyck cancellation rule is rephrased as a neutralization rule, mapping a
quadruple of symbols representing the corners of a subpicture onto neutral
symbols. The neutralizable Dyck language $DN_k$ is obtained by iterating
neutralizations, starting from 2-by-2 subpictures, until the picture is wholly
neutralized. Third, we define the Dyck crossword $DC_k$ as the row-column
combination of Dyck word languages, which prescribes that each column and row
is a Dyck word. The relation between matching parentheses is represented in
$DC_k$ by an edge of a graph situated on the picture grid. Such edges form a
circuit, of path length multiple of four, of alternating row and column
matches. Length-four circuits have rectangular shape, while longer ones exhibit
a large variety of forms. A proper subset of $DC_k$, called quaternate, is also
introduced by excluding all circuits of length greater than 4. We prove that
$DN_k$ properly includes $DW_k$, and that it coincides with the quaternate
$DC_k$ such that the neutralizability relation between subpictures induces a
partial order. The 2D languages well-nested, neutralizable, quaternate and Dyck
crossword are ordered by strict inclusions. This work can be also seen as a
first step towards the definition of context-free picture languages. | Stefano Crespi Reghizzi, Antonio Restivo, Pierluigi San Pietro | 2023-07-31T09:39:33Z | http://arxiv.org/abs/2307.16522v2 | # Two-dimensional Dyck words
###### Abstract
We propose different ways of lifting the notion of Dyck language from words to 2-dimensional (2D) arrays of symbols, i.e., pictures, by means of new definitions of increasing comprehensiveness. Two of the proposals are based on alternative definitions of a Dyck language, which are equivalent over words but not on pictures.
First, the property that any two pairs of matching parentheses are either well-nested or disjoint, is rephrased for rectangular boxes and leads to the well-nested Dyck, \(DW_{k}\). The latter is a generalization of the known Chinese box language. We prove that, unlike the Chinese boxes, the language \(DW_{k}\) is not recognizable by a tiling system.
Second, the Dyck cancellation rule is rephrased as a neutralization rule, mapping a quadruple of symbols representing the corners of a subpicture onto neutral symbols. The neutralizable Dyck language \(DN_{k}\) is obtained by iterating neutralizations, starting from 2-by-2 subpictures, until the picture is wholly neutralized. Third, we define the Dyck crossword \(DC_{k}\) as the row-column combination of Dyck word languages, which prescribes that each column and row is a Dyck word. The relation between matching parentheses is represented in \(DC_{k}\) by an edge of a graph situated on the picture grid. Such edges form a circuit, of path length multiple of four, of alternating row and column matches. Length-four circuits have rectangular shape, while longer ones exhibit a large variety of forms. A proper subset of \(DC_{k}\), called quaternate, is also introduced by excluding all circuits of length greater than 4. We prove that \(DN_{k}\) properly includes \(DW_{k}\), and that it coincides with the quaternate \(DC_{k}\) such that the neutralizability relation between subpictures induces a partial order. The 2D languages well-nested, neutralizable, quaternate and Dyck crossword are ordered by strict inclusions. This work can also be seen as a first step towards the definition of context-free picture languages.
## 1 Introduction
The Dyck language is a fundamental concept in formal language theory. Its alphabet \(\{a_{1},\ldots,a_{k},\,a^{\prime}_{1},\ldots,a^{\prime}_{k}\}\), for any \(k\geq 1,\) is partitioned into the pairs \([a_{1},a^{\prime}_{1}],\ldots,[a_{k},a^{\prime}_{k}]\). The language is the set of all words that can be reduced to the empty word by cancellations of the form \(a_{i}a^{\prime}_{i}\rightarrow\varepsilon\). The centrality of the Dyck language is expressed by the Chomsky-Schutzenberger theorem [3] stating that any context-free language is the homomorphic image of the intersection of a Dyck language and a local one; intuitively, a
regular language is local if it is defined by the set of factors, of prefixes and of suffixes of length two.
Motivated by our interest in the theory of two-dimensional (2D), or picture, languages, we investigate the possibility of transporting the Dyck concept from one dimension to 2D. When moving from 1D to 2D, most formal language concepts and relationships drastically change. In particular, in 2D the Chomsky language hierarchy is blurred because the notions of regularity and context-freeness cannot be formulated for pictures without giving up some characteristic properties that hold for words. In fact, it is known [8] that the three equivalent definitions of regular languages, by means of a finite-state recognizer, by regular expressions, and by the homomorphism of local languages, produce in 2D three distinct language families. The third one gives the family of _tiling system recognizable languages_ (REC) [8], which many consider the best fit for regularity in 2D.
The situation is less satisfactory for context-free (CF) languages where a transposition in 2D remains problematic. None of the existing proposals of "context-free" picture grammars ([16, 10, 11, 14, 5, 6], a survey is [4]) match the expressiveness and richness of formal properties of 1D CF grammars. In this paper we make the first step towards a new definition of CF languages by means of the following 2D reformulation of the Chomsky-Schutzenberger theorem, that, to avoid destroying the rectangular structure of a picture, we take with a non-erasing homomorphism, as in [2, 12]. A context-free picture language is the homomorphic, letter-to-letter image of the intersection of a 2D Dyck language and a 2D local language. While the notion of 2D local language is well-known, we are not aware of any existing definitions of 2D Dyck language; we know of just one particular example, the _Chinese box language_ in [5], that intuitively consists of embedded or concatenated boxes, and was proposed to illustrate the expressiveness of the grammars there introduced; that language is not a satisfactory proposal, since it is in the family REC, hence "regular". Although a best definition of Dyck picture languages might not exist, it is worth formalizing and comparing several possible choices; this is our contribution, while the study of the resulting 2D CF languages is still under way and not reported here.
Our contribution includes four definitions of 2D "Dyck" languages based on various approaches, a study of their properties and the proofs of their inclusions.
It is time to describe the intuitions behind each proposal and, particularly, the properties of Dyck words that are preserved in each case.
Instead of open and closed parentheses, the elements of a Dyck alphabet in 2D are the four "corners" \(\ulcorner\), \(\urcorner\), \(\llcorner\), \(\lrcorner\); there may be \(k\geq 1\) distinct quadruples. Each corner quadruple encodes two 1D Dyck alphabets, one for rows, the other for columns. The row alphabet has the pairs \([\ulcorner,\urcorner]\) and \([\llcorner,\lrcorner]\), while the column alphabet has the pairs \([\ulcorner,\llcorner]\) and \([\urcorner,\lrcorner]\). When in a picture a corner quadruple is correctly laid on the four vertexes of a rectangular subpicture we say that it represents a rectangle.
We start from the simpler cases, the _well-nested Dyck language_\(DW_{k}\) and the _neutralizable Dyck language_\(DN_{k}\). In both cases, a picture is partitioned into rectangles, in the sense that each pixel is placed on a vertex of a rectangle. The difference between \(DW_{k}\) and \(DN_{k}\) resides in the relative positions that are permitted for the rectangles that cover a picture.
In a \(DW_{k}\) picture the rectangles are well nested and do not overlap each other; thus it is fair to say that the well-nesting property of parenthesized words is here preserved. This is the same constraint of the Chinese boxes [5], which however use a different alphabet that is not a Dyck alphabet.
The definition of \(DN_{k}\) is based on the observation that the Dyck cancellation rule can be replaced by a neutralization rule that maps a pair of matching parentheses onto a neutral symbol \(N\), \(a_{i}a_{i}^{\prime}\to NN\), so that a word is well parenthesized if it can be transformed to a word in \(N^{+}\) of the same length. In 2D the reformulation of the neutralization rule is: if a rectangle (having the four corner symbols as vertexes) includes only neutral symbols, then the whole subpicture is neutralized. A picture is in \(DN_{k}\) if all corner symbols are replaced with \(N\) by a sequence of neutralization steps. We prove the language inclusion \(DW_{k}\subset DN_{k}\).
The third approach is based on the crosswords of Dyck languages, a.k.a. row-column compositions. A picture is in \(DC_{k}\) if all rows and all columns are Dyck words. Crosswords have been studied for regular languages (e.g., in [9, 7]) but not, to our knowledge, for context-free ones. A little reflection suffices to realize that in \(DN_{k}\), hence also in \(DW_{k}\), the rows and columns are Dyck words, therefore the inclusion \(DN_{k}\subseteq DC_{k}\) is obvious.
The interesting question is whether the inclusion is strict. Surprisingly, \(DC_{k}\) comprises pictures that do not belong to \(DN_{k}\). A subclass of \(DC_{k}\), called _quaternate_, or \(DQ_{k}\), is the set of pictures covered by rectangles. We prove that \(DQ_{k}\) includes also not neutralizable pictures, which present a circularity in the precedence relation that governs the neutralization order.
But the family of Dyck crosswords includes a large spectrum of pictures where patterns other than rectangles are present. Each pattern is a closed path, called a _circuit_, made by alternating horizontal and vertical edges, each representing a Dyck match on a row or on a column. A circuit label is a string in the language \((\ulcorner\,\urcorner\,\lrcorner\,\llcorner)^{+}\), thus having a length that is a multiple of four. The circuit path may intersect itself one or more times on the picture grid; the case of zero intersections is the rectangle. We prove that for any value of \(h\geq 0\) there exist pictures in \(DC_{k}\) featuring a circuit of length \(4+8h\). We have examined some interesting types of Dyck crosswords that involve complex circuits, but much remains to be understood of the general patterns that are possible.
Section 2 lists basic concepts of picture languages and Dyck languages. Section 3 recalls the Chinese boxes language, defines the \(DW_{k}\) and \(DN_{k}\) languages, and studies their relations. Section 4 introduces the \(DC_{k}\) languages, exemplifies the variety of circuits they may contain, and defines the quaternate subclass \(DQ_{k}\). Section 5 proves the strict inclusions of the four above languages. Section 6 mentions open problems.
## 2 Preliminaries
All the alphabets to be considered are finite. The following concepts and notations for picture languages follow mostly [8]. A _picture_ is a rectangular array of letters over an alphabet. Given a picture \(p\), \(|p|_{row}\) and \(|p|_{col}\) denote the number of rows and columns, respectively; \(|p|=(|p|_{row},|p|_{col})\) denotes the _picture size_. The set of all non-empty pictures over \(\Sigma\) is denoted by \(\Sigma^{++}\).
A _domain_\(d\) of a picture \(p\) is a quadruple \((i,j,i^{\prime},j^{\prime})\), with \(1\leq i\leq i^{\prime}\leq|p|_{row}\), and \(1\leq j\leq j^{\prime}\leq|p|_{col}\). The _subpicture of \(p\)_ with domain \(d=(i,j,i^{\prime},j^{\prime})\), denoted by \(spic(p,d)\) is the (rectangular) portion of \(p\) defined by the top-left coordinates \((i,j)\) and by the bottom right coordinates \((i^{\prime},j^{\prime})\).
_Concatenations._ Let \(p,q\in\Sigma^{++}\). The _horizontal concatenation_ of \(p\) and \(q\) is denoted as \(p\obar q\) and it is defined when \(|p|_{row}=|q|_{row}\). Similarly, the _vertical concatenation_ \(p\ominus q\) is defined when \(|p|_{col}=|q|_{col}\). We also use the power operations \(p^{\obar k}\) and \(p^{\ominus k}\), \(k\geq 1\), their closures \(p^{\obar+}\), \(p^{\ominus+}\), and we extend the concatenations to languages in the obvious way.

The notation \(N^{m,n}\), where \(N\) is a symbol and \(m,n>0\), stands for a homogeneous picture of size \(m,n\). For later convenience, we extend this notation to the case where either \(m\) or \(n\) is 0, to introduce identity elements for vertical and horizontal concatenations: given a picture \(p\) of size \((m,n)\), by definition \(p\obar N^{m,0}=N^{m,0}\obar p=p\) and \(p\ominus N^{0,n}=N^{0,n}\ominus p=p\).
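For concreteness, these operations can be prototyped in a few lines. The sketch below (Python, with our own naming; pictures are simply tuples of equal-length strings) implements the two concatenations and their identity behaviour on the empty picture.

```python
# A picture is a tuple of rows (strings of equal length); () is the empty picture.
def hcat(p, q):
    """Horizontal concatenation p (obar) q, defined when the row counts agree."""
    if not p:
        return q
    if not q:
        return p
    assert len(p) == len(q), "horizontal concatenation needs equal heights"
    return tuple(rp + rq for rp, rq in zip(p, q))

def vcat(p, q):
    """Vertical concatenation p (ominus) q, defined when the column counts agree."""
    if not p:
        return q
    if not q:
        return p
    assert len(p[0]) == len(q[0]), "vertical concatenation needs equal widths"
    return p + q

p = ("ab", "cd")
print(vcat(p, p))   # ('ab', 'cd', 'ab', 'cd')
print(hcat(p, p))   # ('abab', 'cdcd')
```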
The _Simplot closure_[15] operation \(L^{**}\) is defined on a picture language \(L\) as the set of pictures \(p\) tessellated by pictures in \(L\), more precisely defined by the following condition:
\[\exists\text{ a partition of }\{1,\ldots,row(p)\}\times\{1, \ldots,col(p)\}\text{ into }n\geq 1\text{ domains }d_{1},\ldots,d_{n}\text{ of }p\] \[\text{ such that for all }1\leq i\leq n\text{ the subpicture }spic(p,d_{i})\text{ is in }L. \tag{1}\]
Notice that the concatenations \(L^{\obar k}\), \(L^{\ominus k}\) and their closures \(L^{\obar+}\), \(L^{\ominus+}\) are included in the Simplot closure of \(L\), which therefore is the most general way of assembling a picture starting from pictures in a given language.
To help understanding, consider a picture in \(L^{**}\) that is tessellated by 1-by-1, 1-by-2, 2-by-1, and 2-by-2 subpictures of \(L\), but cannot be obtained by horizontal and vertical concatenations alone.
We assume some familiarity with the basic properties of the family of REC languages [8], in particular with their definition by the projection of a local 2D language or equivalently, by the projection of the intersection of two domino languages for rows and for columns.
_Dyck alphabet and language._ The definition and properties of Dyck languages are basic concepts in formal language theory, yet we prefer to list them since each one of our developments for 2D languages differs with respect to the property it strives to generalize.
For a Dyck language \(D\subseteq\Gamma_{k}^{*}\), the alphabet has size \(|\Gamma_{k}|=2k\) and is partitioned into two sets of cardinality \(k\geq 1\), denoted \(\{a_{i}\mid 1\leq i\leq k\}\cup\{a_{i}^{\prime}\mid 1\leq i\leq k\}\).
The Dyck language \(D_{k}\) has several equivalent definitions. We recall the word congruence or _cancellation rule_ defined by \(a_{i}a_{i}^{\prime}=\varepsilon\): a word is in \(D_{k}\) if it is congruent to \(\varepsilon\), i.e., it can be erased to \(\varepsilon\) by repeated application of the cancellation rule. We say that in
a word \(x\in D_{k}\) two occurrences of terminals \(a_{i},\,a^{\prime}_{i}\)_match_ if they are erased together by an application of the cancellation rule on \(x\).
A characteristic of Dyck languages is that, in every word of \(D_{k}\), any two factors \(a_{i}ya^{\prime}_{i}\) and \(a_{j}wa^{\prime}_{j}\), with \(y,w\in\Gamma^{*}\) where \(a_{i},a^{\prime}_{i}\) and \(a_{j},a^{\prime}_{j}\) are matching pairs, are either disjoint or _well-nested_.
A (non-\(\varepsilon\)) Dyck word is _prime_ if it is not the concatenation of two Dyck words. The set of prime words can be defined [1] as the set \((D_{k}-\varepsilon)-(D_{k}-\varepsilon)^{2}\).
For future comparison with pictures, we introduce an equivalent definition of the Dyck language by means of the following _neutralization rule_ instead of the cancellation rule, since the latter does not work for pictures: erasing a subpicture would not result in a picture. Let \(N\notin\Gamma_{k}\) be a new terminal character called _neutral_. For every word in \((\Gamma_{k}\cup\{N\})^{*}\) define the congruence \(\approx\), for all \(1\leq i\leq k\), and for all \(m\geq 0\) as:
\[a_{i}\,N^{m}a^{\prime}_{i}\approx N^{m+2}. \tag{2}\]
A word \(x\in\Gamma^{*}_{k}\) is in \(D_{k}\) if it is \(\varepsilon\) or it is \(\approx\)-congruent to \(N^{|x|}\). An equivalent definition of the Dyck language is based on the observation that a Dyck word can be enlarged either by surrounding it with a matching pair of parentheses, or by concatenating it to another Dyck word. Therefore, the Dyck language over \(\Gamma_{k}\) can be defined through a _nesting accretion_ rule: given a word \(x\in\Gamma^{*}_{k}\), a nesting accretion of \(x\) is a word of the form \(a_{i}xa^{\prime}_{i}\). The language \(D_{k}\) can then be defined as the smallest set including the empty word and closed under concatenation and nesting accretion.
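Both word-level definitions are easy to operationalize. The following sketch (Python, over an illustrative two-pair alphabet of our own choosing) checks Dyck membership by the cancellation rule and, equivalently, by the neutralization congruence (2).

```python
PAIRS = {'(': ')', '[': ']'}          # a_i -> a'_i, illustrative alphabet with k = 2
CLOSERS = {v: k for k, v in PAIRS.items()}

def dyck_by_cancellation(word):
    """Repeatedly cancel a_i a'_i, implemented with the usual stack."""
    stack = []
    for ch in word:
        if ch in PAIRS:
            stack.append(ch)
        elif ch in CLOSERS:
            if not stack or stack.pop() != CLOSERS[ch]:
                return False
        else:
            return False
    return not stack

def dyck_by_neutralization(word):
    """Apply a_i N^m a'_i -> N^(m+2) until the word is all N (or no rule applies)."""
    w = list(word)
    changed = True
    while changed:
        changed = False
        for i in range(len(w)):
            if w[i] not in PAIRS:
                continue
            j = i + 1
            while j < len(w) and w[j] == 'N':
                j += 1
            if j < len(w) and w[j] == PAIRS[w[i]]:
                for t in range(i, j + 1):
                    w[t] = 'N'
                changed = True
    return all(ch == 'N' for ch in w)

for x in ["", "()", "([])()", "(]", "(()"]:
    assert dyck_by_cancellation(x) == dyck_by_neutralization(x)
```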
## 3 Box-based choices of Dyck picture languages
In this section we present two simple choices, called well-nested and neutralizable, each one conserving one of the characteristic properties of Dyck words.
To make the analogy with Dyck words more evident, we represent in 2D the parentheses pair \([\,,\,]\) by a quadruple of corners \(\ulcorner\), \(\urcorner\), \(\llcorner\), \(\lrcorner\). Then, inside a picture, such a quadruple matches if it is laid on the four vertexes of a rectangle (i.e., a subpicture), as we see in the example pictures, where each quadruple is identified by a color.
First, we focus on the nesting accretion definition of Dyck words and extend it to pictures by considering a quadruple of corners. The corresponding picture languages are called _well-nested Dyck_, denoted as \(DW_{k}\). Then, we extend the neutralization rule to 2D in a way that essentially preserves the following property: two matching parentheses that encompass a neutral word can be neutralized. Now, the two matching parentheses become a quadruple of symbols representing the corners of a (rectangular) subpicture already neutralized. The corresponding languages are called _neutralizable Dyck_ (\(DN_{k}\)).
### Well-nested Dyck language
The natural question of what should be considered a Dyck-like language in 2D received a tentative answer in [5], where a language of well-embedded rectangular boxes, called _Chinese boxes_, was presented as an example of the generative power of the tile rewriting grammars there introduced.
The alphabet is \(\Gamma=\{\ulcorner,\urcorner,\llcorner,\lrcorner,\bullet\}\); the corner symbols represent the four box vertexes and a horizontal/vertical string of bullets represents a box side. Instead of the original grammar formalism, we give a recursive definition.
Definition 1 (Chinese boxes [5]): Given a picture \(p\) of size \((n,m)\), with \(n,m\geq 0\), its Chinese accretion is the picture:
\[\left(\ulcorner\ \obar\ \bullet^{1,m}\ \obar\ \urcorner\right)\ominus\left(\bullet^{n,1}\ \obar\ p\ \obar\ \bullet^{n,1}\right)\ominus\left(\llcorner\ \obar\ \bullet^{1,m}\ \obar\ \lrcorner\right).\]
The Chinese box language is the smallest set including the empty picture and closed under Chinese accretion and Simplot closure.

For the well-nested Dyck language we use instead the Dyck picture alphabet \(\Delta_{k}=\{a_{i},b_{i},c_{i},d_{i}\mid 1\leq i\leq k\}\), where \(a_{i},b_{i},c_{i},d_{i}\) play the role of the corners \(\ulcorner,\urcorner,\llcorner,\lrcorner\) of the \(i\)-th quadruple; \(h_{r}\) and \(h_{c}\) denote the letter-to-letter mappings defined by \(h_{r}(a_{i})=c_{i}\), \(h_{r}(b_{i})=d_{i}\) and \(h_{c}(a_{i})=b_{i}\), \(h_{c}(c_{i})=d_{i}\).

Definition 2 (well-nested Dyck language \(DW_{k}\)): Given a picture \(p\) over \(\Delta_{k}\) (possibly empty), a word \(w_{r}\) of the Dyck language over the pairs \([a_{i},b_{i}]\) and a word \(w_{c}\) of the Dyck language over the pairs
\([a_{i},c_{i}]\), such that \(|w_{r}|=|p|_{col}\), \(|w_{c}|=|p|_{row}\), the _nesting accretion_ of \(p\) within \(w_{r},w_{c}\) is the picture:
\[(a_{i}\ \obar\ w_{r}\ \obar\ b_{i})\ominus(w_{c}\ \obar\ p\ \obar\ h_{c}(w_{c}))\ominus(c_{i}\ \obar\ h_{r}(w_{r})\ \obar\ d_{i})\,.\]
The language \(DW_{k}\) is the smallest set including the empty picture and closed under nesting accretion and Simplot closure (see (1) in Section 2).
Figure 1 (right) illustrates accretion and (left) shows a picture in \(DW_{1}\); for comparison a Chinese box picture of the same size is shown in the middle.
The definition can be explained intuitively by considering two distinct occurrences of a quadruple of matching corners: the subpictures delimited by each quadruple (i.e., their bounding boxes) are either disjoint, or included one into the other; or they overlap and a third box exists that "minimally" bounds both boxes. The third case is illustrated in Figure 1, left, by the overlapping blue and green boxes.
It is immediate to see that for any size \((2m,2n)\), \(m,n\geq 1\), there is a picture in \(DW_{k}\).
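The accretion step itself can be sketched concretely for \(k=1\). In the Python fragment below, the letter encoding \(a,b,c,d\) of the corners and the mappings \(h_{r}:a\mapsto c,\ b\mapsto d\) and \(h_{c}:a\mapsto b,\ c\mapsto d\) are our reading of Definition 2 as reconstructed above; the function is an illustrative sketch, not part of any cited software.

```python
# Nesting accretion for k = 1, letters a, b, c, d.
H_R = {"a": "c", "b": "d"}            # assumed bottom-row mapping h_r
H_C = {"a": "b", "c": "d"}            # assumed right-column mapping h_c

def accrete(p, w_r, w_c):
    """Surround picture p (tuple of rows) with row word w_r, column word w_c."""
    assert (len(w_r), len(w_c)) == ((len(p[0]) if p else 0), len(p))
    top    = ("a" + w_r + "b",)
    bottom = ("c" + "".join(H_R[x] for x in w_r) + "d",)
    middle = tuple(l + row + H_C[l] for l, row in zip(w_c, p)) if p else ()
    return top + middle + bottom

print(accrete((), "", ""))                    # ('ab', 'cd'): the smallest DW_1 picture
print(accrete(("ab", "cd"), "ab", "ac"))      # ('aabb', 'aabb', 'ccdd', 'ccdd')
```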
Theorem 1: _The language \(DW_{k}\) is not (tiling system) recognizable, for every \(k\geq 1\)._
Proof: By contradiction, assume that \(DW_{k}\) is recognizable. Without loss of generality, we consider only the case \(k=1\). Consider the following picture \(p\) in \(DW_{1}\): \(\begin{smallmatrix}a&b\\ c&d\end{smallmatrix}\). From closure properties of REC, the language \(p^{\obar+}\) is recognizable, hence also the language:

\[R=\left(a^{\obar+}\ \obar\ b^{\obar+}\right)\ominus\left((a\ominus c)\ \obar\ p^{\obar+}\ \obar\ (b\ominus d)\right)\ominus\left(c^{\obar+}\ \obar\ d^{\obar+}\right).\]

A picture in \(R\) has \(a^{+}b^{+}\) in the top row and \(c^{+}d^{+}\) in the bottom row. Let \(T\) be the language obtained by intersection of \(DW_{1}\) with \(R^{\ominus+}\). Therefore, both \(T\) and \(T^{\ominus+}\) are also recognizable; moreover, the first row of every picture in \(T^{\ominus+}\) has the form \(a^{n}b^{n}\). By applying the Horizontal Iteration Lemma of [8] (Lemma 9.1) to \(T^{\ominus+}\), there exists a (suitably large) picture \(t\) in \(T^{\ominus+}\) which can be written as the horizontal concatenation of the three (non-empty) pictures \(x,q,y\), namely \(t=x\ \obar\ q\ \obar\ y\), such that \(x\ \obar\ q^{\obar+}\ \obar\ y\) is also in \(T^{\ominus+}\), a contradiction with the fact that the top row of the pictures in \(T^{\ominus+}\) must be of the form \(a^{n}b^{n}\).
Figure 1: (Left) An example of picture in \(DW_{1}\) and (middle) the similar Chinese box version. (Right) Scheme of nesting accretion.
### Neutralizable Dyck language
We investigate a possible definition of Dyck picture languages by means of a neutralization rule analogous to the congruence (2) of Dyck word languages.
Definition 3 (neutralizable Dyck language): Let \(N\) be a new symbol not in \(\Delta_{k}\). The neutralization relation \(\overset{\nu}{\rightarrow}\subseteq\left(\left\{N\right\}\cup\Delta_{k}\right)^ {++}\times\left(\left\{N\right\}\cup\Delta_{k}\right)^{++}\), is the smallest relation such that for every pair of pictures \(p,p^{\prime}\) in \(\left(\left\{N\right\}\cup\Delta_{k}\right)^{++}\), \(p\overset{\nu}{\rightarrow}p^{\prime}\) if there are \(m,n\geq 2\) and \(1\leq i\leq k\), such that \(p^{\prime}\) is obtained from \(p\) by replacing a subpicture of \(p\) of the form:
\[\left(a_{i}\ominus N^{m-2,1}\ominus c_{i}\right)\obar N^{m,n-2}\obar\left(b_{i}\ominus N^{m-2,1}\ominus d_{i}\right) \tag{3}\]
with the picture of the same size \(N^{m,n}\).
The 2D _neutralizable Dyck language_, denoted with \(DN_{k}\subseteq\Delta_{k}^{++}\), is the set of pictures \(p\) such that there exists \(p^{\prime}\in N^{++}\) with \(p\overset{\nu}{\rightarrow}p^{\prime}\).
In other words, a \(DN_{k}\) picture is transformed into a picture in \(N^{++}\) by a series of neutralizations. It is obvious that the order of application of the neutralization steps is irrelevant for deciding if a picture is neutralizable.
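Definition 3 translates directly into a brute-force membership test for \(DN_{1}\). The following sketch (Python, letters \(a,b,c,d\) and neutral symbol \(N\); not intended to be efficient) repeatedly searches for a rectangle whose four corners carry matching letters and whose remaining cells are already neutral.

```python
def neutralizable(picture):
    """Return True iff the picture (list of strings over 'abcd') is in DN_1."""
    p = [list(row) for row in picture]
    R, C = len(p), len(p[0])

    def try_step():
        for i in range(R):
            for j in range(C):
                if p[i][j] != "a":
                    continue
                for i2 in range(i + 1, R):
                    for j2 in range(j + 1, C):
                        if (p[i][j2], p[i2][j], p[i2][j2]) != ("b", "c", "d"):
                            continue
                        corners = {(i, j), (i, j2), (i2, j), (i2, j2)}
                        inner_ok = all(p[r][c] == "N"
                                       for r in range(i, i2 + 1)
                                       for c in range(j, j2 + 1)
                                       if (r, c) not in corners)
                        if inner_ok:                     # rule (3): neutralize it
                            for r in range(i, i2 + 1):
                                for c in range(j, j2 + 1):
                                    p[r][c] = "N"
                            return True
        return False

    while try_step():
        pass
    return all(cell == "N" for row in p for cell in row)

print(neutralizable(["ab", "cd"]))                       # True
print(neutralizable(["aabb", "aabb", "ccdd", "ccdd"]))   # True
```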
Example 1 (neutralizations): The following picture \(p_{1}\) on the alphabet \(\Delta_{1}\) is in \(DN_{1}\) since it reduces to the neutral one by means of a sequence of six neutralization steps:
Neutralizations have been arbitrarily applied in top to bottom, left to right order.
By a proof almost identical to the one of Theorem 1, since the language \(T^{\ominus+}\) can be obtained from \(DN_{k}\) by intersection with a recognizable language, we have:
Theorem 2: _The language \(DN_{k}\) is not (tiling system) recognizable for every \(k\geq 1\)._
Although \(DW_{k}\) is defined by a diverse mechanism, the next inclusion is immediate.
Theorem 3: _The language \(DW_{k}\) is strictly included in \(DN_{k}\) for every \(k\geq 1\)._
Proof: The inclusion \(DW_{k}\subseteq DN_{k}\) is obvious since any picture in \(DW_{k}\) can be neutralized in accordance with Definition 3. Then the thesis follows since there exist neutralizable pictures \(p_{N}\) that cannot be obtained using nesting accretion.
Another picture in \(DN_{1}\setminus DW_{1}\) is in Figure 2.
## 4 Row-column combination of Dyck languages
We consider pictures whose rows and columns are Dyck words; more precisely, the rows and columns belong to Dyck word languages over the same alphabet but with different pairings of terminal characters. Such pictures, called Dyck crosswords, may be viewed as 2D analogues of Dyck words.
Following [8] we introduce the row-column combination operation that takes two word languages and produces a picture language.
Definition 4 (row-column combination a.k.a. crossword): Let \(S^{\prime},S^{\prime\prime}\subseteq\Sigma^{*}\) be two word languages, called _component languages_. The _row-column combination_ or _crossover_ of \(S^{\prime}\) and \(S^{\prime\prime}\) is the picture language \(L\) such that a picture \(p\in\Sigma^{++}\) belongs to \(L\) if, and only if, the words corresponding to each row (in left-to-right order) and to each column (in top-down order) of \(p\) belong to \(S^{\prime}\) and \(S^{\prime\prime}\), respectively.
The row-column combination of regular languages has received attention in the past since its alphabetic projection exactly coincides with the REC family [8]; some complexity issues for this case are addressed in the recent paper [7] where the combinations are called "regex crosswords". Moreover, given two regular languages \(S^{\prime},S^{\prime\prime}\), it is undecidable to establish whether their composition is empty. In this section, we investigate the properties of the row-column combination of a fundamental type of context-free languages, the Dyck ones.
The picture alphabet is the same as that of the \(DW_{k}\) and \(DN_{k}\) languages, here preferably represented by letters instead of corner symbols.
Definition 5 (Dyck crossword alphabet and language): Let \(\Delta_{k}=\{a_{i},b_{i},c_{i},d_{i}\mid 1\leq i\leq k\}\) be an alphabet. We associate \(\Delta_{k}\) with two different Dyck alphabets, the _Dyck row alphabet_ \(\Delta_{k}^{Row}\) and the _Dyck column alphabet_ \(\Delta_{k}^{Col}\) by means of the following matching pairs:

\[\begin{cases}\text{for }\Delta_{k}^{Row}:\ \{[a_{i},b_{i}]\mid 1\leq i\leq k\}\cup\{[c_{i},d_{i}]\mid 1\leq i\leq k\}\\ \text{for }\Delta_{k}^{Col}:\ \{[a_{i},c_{i}]\mid 1\leq i\leq k\}\cup\{[b_{i},d_{i}]\mid 1\leq i\leq k\}\end{cases}.\]
The corresponding Dyck languages, without \(\varepsilon\), are denoted by \(D_{k}^{Row}\subset{\Delta_{k}}^{+}\) and \(D_{k}^{Col}\subset{\Delta_{k}}^{+}\).
The _Dyck crossword language_\(DC_{k}\) is the row-column combination of \(D_{k}^{Row}\) and \(D_{k}^{Col}\).
In the following, we often consider only the language \(DC_{1}\), over the alphabet \(\{a,b,c,d\}\), when statements and properties of \(DC_{k}\) are straightforward generalizations of the \(DC_{1}\) case.
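Membership in \(DC_{1}\) reduces to independent word checks, one per row and one per column. A minimal sketch (Python, alphabet \(\{a,b,c,d\}\); the function names are ours):

```python
ROW_PAIRS = {"a": "b", "c": "d"}   # matching pairs of the row alphabet
COL_PAIRS = {"a": "c", "b": "d"}   # matching pairs of the column alphabet

def is_dyck(word, pairs):
    """Stack-based Dyck membership check for one word."""
    stack = []
    for ch in word:
        if ch in pairs:
            stack.append(ch)
        else:
            if not stack or pairs[stack.pop()] != ch:
                return False
    return not stack

def in_DC1(picture):
    """All rows are Dyck over ROW_PAIRS and all columns over COL_PAIRS."""
    rows_ok = all(is_dyck(row, ROW_PAIRS) for row in picture)
    cols_ok = all(is_dyck("".join(col), COL_PAIRS) for col in zip(*picture))
    return rows_ok and cols_ok

print(in_DC1(["ab", "cd"]))        # True
print(in_DC1(["abab", "cdcd"]))    # True
print(in_DC1(["ab", "ab"]))        # False: the columns 'aa' and 'bb' are not Dyck
```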
Remark 1: The choice in Definition 5 that the \(DC_{k}\) alphabet \(\Delta_{k}\) consists of one or more quadruples \(a_{i},b_{i},c_{i},d_{i}\), \(1\leq i\leq k\), is not just for continuity with the alphabet of the well-nested and neutralizable cases, but it is imposed by the following simple facts. For brevity we consider \(k=1\).
1. Let \(\Gamma\) be the binary alphabet \(\{e,e^{\prime}\}\). Let \(S^{\prime}\) and \(S^{\prime\prime}\) be the Dyck languages respectively for rows and for columns based on \(\Gamma\), with matching parentheses \((e,e^{\prime})\) for both rows and columns. Then, it is easy to see that the row-column combination of \(S^{\prime}\) and \(S^{\prime\prime}\) is empty, since it is impossible to complete a \(DC\) picture starting from a row containing word \(ee^{\prime}\). Moreover, the combination remains empty if we invert the matching for columns to \((e^{\prime},e)\).
2. Let the alphabet for words be \(\Gamma=\{e,e^{\prime},f,f^{\prime}\}\). Then, to obtain a non-empty combination, there is only one way (disregarding trivially equivalent letter permutations) of matching the letters, namely: for rows, \((e,e^{\prime}),(f,f^{\prime})\) and for columns \((e,f),(e^{\prime},f^{\prime})\). For instance, the choice \((e,f^{\prime}),(e^{\prime},f)\) for columns does not produce any \(DC_{1}\) picture. By renaming the letters of \(\Gamma\) as \(\Delta_{1}=\{a,b,c,d\}\) we regain the row/column Dyck alphabets of Definition 5; then, the matching \(\Delta_{1}^{Row}=\{[a,b],[c,d]\}\) and \(\Delta_{1}^{Col}=\{[a,d],[b,c]\}\) makes \(DC_{1}\) empty.
3. Let the alphabet for words have six letters \(\Gamma=\{e,e^{\prime},f,f^{\prime},g,g^{\prime}\}\). From part (i) it is easy to see that, no matter what matching is chosen for row and columns, two of the letters cannot occur in any picture of \(DC_{1}\). Therefore, it is enough to consider an alphabet of size multiple of four.
4. A consequence of the previous items is that the following property of Dyck words over a binary alphabet \(\{e,e^{\prime}\}\) does not hold for \(DC_{1}\): any Dyck word, e.g., \(e^{\prime}e\), occurs as a factor of some Dyck word, e.g., \(e\,e^{\prime}e\,e^{\prime}\); this is not true for the rows and the columns of Dyck crosswords because each one of the row/column Dyck alphabets contains two pairs of symbols, not just one. For instance the word \(ad\) is a forbidden factor of language \(D_{1}^{Row}\).
We state and prove some basic properties. It is easy to notice that \(DN_{k}\subseteq DC_{k}\): for instance, when neutralizing a subpicture, the neutralization of its two corners \((a_{i},b_{i})\) acts in that row as the neutralization rule for words in \(D_{k}^{row}\), and similarly for the other corners. We later prove that this inclusion is proper.
The result of not being tiling recognizable holds also for \(DC_{k}\):
Theorem 4: _For every \(k\geq 1\), the language \(DC_{k}\) is not (tiling system) recognizable._
Proof: The proof is essentially the same as that of Theorem 1, since also in this case the language \(T^{\ominus+}\) can be obtained from \(DC_{1}\) by intersection with a recognizable language.
The next property of \(DC_{k}\) is that any picture \(p\) that is partitioned into \(DC_{k}\) subpictures is also in \(DC_{k}\). This is obvious since each row of \(p\) is the concatenation of Dyck words, and similarly for columns. An analogous result holds for each language \(DN_{k}\) (for \(DW_{k}\) this holds by definition).
Theorem 5 (Invariance under Simplot operation): \((DC_{k})^{**}=DC_{k}\) _and_ \((DN_{k})^{**}=DN_{k}\)_._
Another question for any of the Dyck-like picture languages introduced is whether its row and column languages respectively saturate the horizontal and vertical Dyck word languages. We prove that this is the case for \(DN_{k}\) and \(DC_{k}\), but it is not for \(DW_{k}\). Let \(\Delta_{k}=\{a_{i},b_{i},c_{i},d_{i}\mid 1\leq i\leq k\}\). Let \(P\subseteq\Delta_{k}^{++}\) be a picture language and define the _row language_ of \(P\) as: \(\text{ROW}(P)=\{w\in\Delta_{k}^{+}\mid\text{there exist }p\in P\text{ and pictures }p^{\prime},p^{\prime\prime}\text{ over }\Delta_{k}\text{, possibly empty, such that }p=p^{\prime}\ominus w\ominus p^{\prime\prime}\}\). The column language of \(P\), \(\text{COL}(P)\), is defined analogously.
Theorem 6 (row/column languages): __
1. \(\textit{ROW}(DC_{k})=\textit{ROW}(DN_{k})=D_{k}^{Row}\)_,_ \(\textit{COL}(DC_{k})=\textit{COL}(DN_{k})=D_{k}^{Col}\)_._
2. \(\textit{ROW}(DW_{k})\subsetneq D_{k}^{Row}\)_,_ \(\textit{COL}(DW_{k})\subsetneq D_{k}^{Col}\)_._
Proof: Part (1): It is enough to prove that \(D_{k}^{Row}\subseteq\text{ROW}(DN_{k})\), since the other inclusion is obvious and the case for columns is symmetrical; moreover, \(DN_{k}\subseteq DC_{k}\), so there is no need to prove the statement for \(DC_{k}\). Without loss of generality, we consider only the case \(k=1\). We prove by induction on \(n\geq 2\) that for every word \(w\in D_{1}^{Row}\) of length \(n\) there exists a picture \(p\in DN_{1}\) of the form \(w_{1}\ominus w_{2}\ominus w\ominus w_{3}\) for \(w_{1},w_{2},w_{3}\in D_{1}^{Row}\). There are two base cases, the words \(ab\) and \(cd\). The word \(ab\) is (also) the third row in the \(DN_{1}\) picture \(ab\ominus cd\ominus ab\ominus cd\), while \(cd\) is (also) the third row in the \(DN_{1}\) picture \(ab\ominus ab\ominus cd\ominus cd\). The induction step has three cases: a word \(w\in D_{1}^{Row}\) of length \(n>2\) has the form \(w^{\prime}w^{\prime\prime}\), or the form \(aw^{\prime}b\), or the form \(cw^{\prime}d\), for some \(w^{\prime},w^{\prime\prime}\in D_{1}^{Row}\) of length less than \(n\). Let \(p^{\prime},p^{\prime\prime}\) be the pictures verifying the induction hypothesis for \(w^{\prime}\) and \(w^{\prime\prime}\) respectively. The case of concatenation \(w^{\prime}w^{\prime\prime}\) is obvious (just consider the picture \(p^{\prime}\obar p^{\prime\prime}\)). The case \(aw^{\prime}b\) can be solved by considering the picture \((a\ominus c\ominus a\ominus c)\obar p^{\prime}\obar(b\ominus d\ominus b\ominus d)\), which is in \(DN_{1}\). Similarly, for the case \(cw^{\prime}d\) just consider the \(DN_{1}\) picture \((a\ominus a\ominus c\ominus c)\obar p^{\prime}\obar(b\ominus b\ominus d\ominus d)\).
Part (2): The Dyck word \(abcd\) cannot be a row of a picture in \(DW_{k}\). In fact, every picture in \(DW_{1}\) of width 4 must be in the vertical concatenation closure of the set composed of the following two pictures, which do not include an \(abcd\) row:
### Matching-graph circuits
We present some patterns that occur in \(DC_{k}\) pictures. The simplest patterns are found in pictures that are partitioned into rectangular circuits connecting four elements, see, e.g., Figure 2, right, where an edge connects two symbols on the same row (or column) which match in the row (column) Dyck word. Notice that the graph made by the edges contains four disjoint circuits of length four, called _rectangles_ for brevity. Three of the circuits are nested inside the outermost one.

Figure 2: (Left) A \(DC_{1}\) picture whose cells are partitioned into 4 quadruples of matching symbols, identified by the same color (font). (Right) An alternative visualization by a graph using edges that connect matching symbols.
However, a picture in \(DC_{1}\) may also include circuits longer than four. In Figure 3 (left) we see a circuit of length 12, labeled by the word \((abdc)^{3}\), and on the right a circuit of length 36. Notice that when a picture on \(\Delta_{1}\) is represented by circuits, the node labels are redundant since they are uniquely determined on each circuit.
We formally define the graph, situated on the picture grid, made by such circuits.
Definition 6 (matching graph): The _matching graph_ associated with a picture \(p\in DC_{k}\), of size \((m,n)\), is a pair \((V,E)\) where the set \(V\) of nodes is the set \(\{1,\ldots n\}\times\{1\ldots m\}\) and the set \(E\) of edges is partitioned in two sets of row and column edges defined as follows, for all \(1\leq i\leq n,1\leq j\leq m\):
* for all pairs of matching letters \(p_{i,j},p_{i,j^{\prime}}\) in \(\Delta_{k}^{Row}\), with \(j<j^{\prime}\leq m\), there is a row (horizontal) edge connecting \((i,j)\) with \((i,j^{\prime})\),
* for all pairs of matching letters \(p_{i,j},p_{i^{\prime},j}\) in \(\Delta_{k}^{Col}\), with \(i<i^{\prime}\leq n\), there is a column (vertical) edge connecting \((i,j)\) with \((i^{\prime},j)\),
Therefore, there is a horizontal edge connecting two matching letters \(a_{i},b_{i}\) or \(c_{i},d_{i}\) that occur in the same row: e.g., the edge \((2,1)\leftrightarrow(2,4)\) of Figure 3, left. Analogously, there is a vertical edge connecting two matching letters \(a_{i},c_{i}\) or \(b_{i},d_{i}\), that occur in the same column: e.g., the edge \((2,2)\leftrightarrow(3,2)\) of Figure 3, left.
From elementary properties of Dyck languages it follows that the distance on the picture grid between two nodes connected by an edge is an odd number.
Theorem 7 (matching-graph circuits): _Let \(p\) be a picture in \(DC_{k}\). Then:_

1. _its matching graph_ \(G\) _is partitioned into disjoint simple circuits;_

2. _the clockwise visit of any such circuit, starting from one of its nodes with label_ \(a_{j}\)_, yields a word in the language_ \((a_{j}b_{j}d_{j}c_{j})^{+}\)_, for all_ \(1\leq j\leq k\)_._

Figure 3: Two pictures in \(DC_{1}\). (Left) The picture is partitioned into two circuits of length 12 and 4. (Right) The picture includes a circuit of length 36 and seven rectangular circuits. Its pattern embeds four partial copies (direct or rotated) of the left picture; in, say, the NW copy the “triangle” \(bdc\) has been changed to \(aaa\). Such a transformation can be reiterated to grow a series of pictures.
Proof: Part (1): By Definition 6, every node of \(G\) has degree 2, with one row edge and one column edge, since its corresponding row and column in picture \(p\) are Dyck words. Every node must be on a circuit, otherwise there would be a node of degree 1. Each circuit must be simple and the sets of nodes on two circuits are disjoint, else one of the nodes would have degree greater than 2. Part (2) is obvious, since from a node labeled \(a_{j}\) there is a row edge connecting with a node labeled \(b_{j}\), for which there is a column edge connecting with a \(d_{j}\), then a row edge connecting \(d_{j}\) with \(c_{j}\), etc., finally closing the circuit with a column edge connecting a \(c_{j}\) with the original \(a_{j}\).
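The matching graph and its circuits can be computed directly from the row and column matchings. The sketch below (Python, \(k=1\); names are our own) returns the circuits of a picture already known to be in \(DC_{1}\), and therefore also decides the quaternate property introduced in Definition 7 below (all circuits of length 4).

```python
ROW_PAIRS = {"a": "b", "c": "d"}
COL_PAIRS = {"a": "c", "b": "d"}

def matches(word, pairs):
    """Yield (open position, close position) pairs; assumes word is a Dyck word."""
    stack = []
    for pos, ch in enumerate(word):
        if ch in pairs:
            stack.append(pos)
        else:
            yield (stack.pop(), pos)

def circuits(picture):
    """Partition the cells of a DC_1 picture into matching-graph circuits."""
    row_edge, col_edge = {}, {}
    for i, row in enumerate(picture):
        for j1, j2 in matches(row, ROW_PAIRS):
            row_edge[(i, j1)], row_edge[(i, j2)] = (i, j2), (i, j1)
    for j, col in enumerate("".join(c) for c in zip(*picture)):
        for i1, i2 in matches(col, COL_PAIRS):
            col_edge[(i1, j)], col_edge[(i2, j)] = (i2, j), (i1, j)
    seen, result = set(), []
    for start in row_edge:
        if start in seen:
            continue
        cyc, node, use_row = [], start, True
        while node not in seen:                 # follow alternating row/column edges
            seen.add(node)
            cyc.append(node)
            node = row_edge[node] if use_row else col_edge[node]
            use_row = not use_row
        result.append(cyc)
    return result

p = ["ab", "cd"]
print(circuits(p))                            # [[(0, 0), (0, 1), (1, 1), (1, 0)]]
print(all(len(c) == 4 for c in circuits(p)))  # quaternate?
```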
Theorem 7 has a simple interpretation: to check that in a picture all rows and columns are Dyck words of respectively \(D_{k}^{Row}\) and \(D_{k}^{Col}\), we could proceed along each linear path. The process moves from an opening letter (say \(a\)) to its matching letter (\(b\)) on the same row, while verifying that the word between the two letters is correctly parenthesized; then, the process moves to the closed matching letter (\(d\)) on the column of \(b\), and so on, until the circuit is closed, or interrupted causing rejection. Such a checking of \(DC_{k}\) membership corresponds to a way of checking Dyck membership for words. Since a word is a picture of size \((1,n)\), its associated matching graph is the well-known rainbow representation of the syntax tree of the word, i.e., nested arcs connecting matching parentheses, as in \(a\,a\,b\,b\). A matching circuit then corresponds to the binary relation between the two ends of a rainbow arc. However, it is perhaps unexpected that, moving from 1D to 2D, the length of circular paths increases not just to \(2\times 2\), but without an upper bound, as proved below.
Notice that there exist pictures that are not in \(DC_{1}\), but which still can be partitioned into circuits with label in \((abdc)^{+}\) and having arcs following the correct directions (starting from a node label \(a\), going right, then down, then left and then up). For instance, in the picture:

all 8 columns and the first and fourth rows are Dyck words, while the second and third rows are not Dyck words. Still, it is easy to verify that the picture can be partitioned into "correct" circuits having label in \((abdc)^{+}\) (two circuits of length 12 and two circuits of length 4).
Theorem 8 (Unbounded circuit length): _For all \(h\geq 0\) there exists a picture in \(DC_{k}\) that contains a circuit of length \(4+8h\)._
Proof: We prove the statement for \(DC_{1}\), the general case being analogous. The case \(h=0\) is obvious. The case \(h>0\) is proved by induction on a sequence of pictures \(p_{(1)},\ldots p_{(h)}\) using as basis the \(DC_{1}\) picture \(p_{(1)}\) in Figure 4 (left), that has size \((m_{(1)},6)\), where \(m_{(1)}=4\), and contains a circuit of length \(12=4+8\), referred to as double-noose. Induction step. It extends picture \(p_{(h-1)}\), \(h>1\), by appending a copy of \(p_{(1)}\) underneath and making a few changes defined in Figure 4 (right). It is easy to see that the result is
a picture \(p_{(h)}\) of size \(\left(m_{(h-1)}+4,6\right)\) such that: \(p_{(h)}\in DC_{1}\) and \(p_{(h)}\) contains a circuit of length \(4+8h\).
Another series of pictures that can be enlarged indefinitely is the one in Figure 3, where the first two terms of the series are shown.
### Quaternate Dyck crosswords
The next definition forbids any cycle longer than 4 and keeps, e.g., the pictures in Figures 2 and 5.
Definition 7 (Quaternate \(DC_{k}\)): A Dyck crossword picture such that all its circuits are of length 4 is called _quaternate_; their language, denoted by \(DQ_{k}\), is the _quaternate Dyck language_.
## 5 Language inclusions
In this section we show the strict language inclusions existing between the alternative definitions of 2D Dyck languages.
Since \(DC_{k}\) pictures may contain circuits of length \(>4\), (e.g., in Figure 3) quaternate Dyck languages are strictly included in Dyck crosswords.
It is obvious that \(DN_{k}\subseteq DQ_{k}\); a natural question is then whether the inclusion is strict. To answer, we define a precedence relation between two rectangles of a \(DQ_{k}\) picture such that the first must be neutralized before the second.
Figure 4: Left. Picture \(p_{(1)}\) used as induction basis of Theorem 8. It is covered by a circuit of length \(4+8\cdot 1=12\) and by 3 rectangular circuits. Middle. Picture \(p_{(1)}\ominus p_{(1)}\), the four arcs to be deleted are in green, and the four nodes to be relabeled are in blue. Right. Inductive step: picture \(p_{(2)}\) is obtained from \(p_{(1)}\ominus p_{(1)}\) by canceling the four green arcs, relabeling the four blue nodes as shown (the corresponding rectangular circuit is in blue) and finally adding two arcs (blue) that join the double-noose circuits. A circuit of length \(4+8\cdot 2\) results. Notice that all length 4 circuits of \(p_{(h-1)}\) and \(p_{(1)}\) are unchanged in \(p_{(h)}\).
Definition 8 (precedence in neutralization): Let \(p\in DQ_{k}\) and let \(\alpha\) and \(\beta\) be two rectangles (i.e. length 4 circuits) occurring in \(p\). Rectangle \(\alpha\) has _priority_ over \(\beta\) if, and only if, one, two or four nodes of \(\alpha\) fall inside rectangle \(\beta\) or on its sides. (For three nodes it is impossible.) Let \(\prec\), the _precedence relation_, be the transitive closure of the priority relation.
Example 2 (precedence relation): The precedence relation for the picture in Figure 5, left, has the length-2 cycle \((1,1)\prec(3,3)\prec(1,1)\), blocking the neutralization process of the two rectangles evidenced by thicker lines. The picture in Figure 5, right, has a cycle of length 4.
Theorem 9 (neutralizable vs quaternate): _A picture in \(DQ_{k}\) is neutralizable if and only if its precedence relation is acyclic._
Proof: Let relation \(\prec\) be acyclic. Then sort the rectangles in topological order and apply neutralization starting from a rectangle without predecessors. When a rectangle is checked, all of its predecessors have already been neutralized, and neutralization can proceed until all rectangles are neutralized. The converse is obvious: if relation \(\prec\) has a cycle no rectangle in the cycle can be neutralized.
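Computationally, the theorem turns the neutralizability test for a quaternate picture into cycle detection on the priority relation. A sketch (Python; each rectangle is given by its top, left, bottom and right coordinates, a representation of our own choosing):

```python
def has_priority(alpha, beta):
    """True if at least one corner of alpha lies inside beta or on its sides."""
    (t1, l1, b1, r1), (t2, l2, b2, r2) = alpha, beta
    corners = [(t1, l1), (t1, r1), (b1, l1), (b1, r1)]
    return alpha != beta and any(t2 <= i <= b2 and l2 <= j <= r2 for i, j in corners)

def neutralization_order_exists(rects):
    """Kahn's algorithm: True iff the priority relation is acyclic."""
    succ = {r: [s for s in rects if has_priority(r, s)] for r in rects}
    indeg = {r: 0 for r in rects}
    for r in rects:
        for s in succ[r]:
            indeg[s] += 1
    queue = [r for r in rects if indeg[r] == 0]
    done = 0
    while queue:
        r = queue.pop()
        done += 1
        for s in succ[r]:
            indeg[s] -= 1
            if indeg[s] == 0:
                queue.append(s)
    return done == len(rects)

# Two nested rectangles: the inner one must be neutralized first, so the
# relation is acyclic and a neutralization order exists.
print(neutralization_order_exists([(0, 0, 3, 3), (1, 1, 2, 2)]))   # True
```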
From previous properties of the various 2D Dyck languages introduced in this paper, we obtain a strict linear hierarchy with respect to language inclusion.
Corollary 1 (hierarchy): \(DW_{k}\subsetneq DN_{k}\subsetneq DQ_{k}\subsetneq DC_{k}\)_._
## 6 Conclusion
By introducing some definitions of 2D Dyck languages we have made the first step towards a new characterization of 2D context-free languages by means of the Chomsky-Schutzenberger theorem suitably reformulated for picture languages. But, in our opinion, the mathematical study of the properties of 2D Dyck languages has independent interest, and much remains to be understood, especially for the richer case of Dyck crosswords. Very diverse patterns may occur in \(DC_{k}\) pictures, which we are currently unable to classify. The variety of patterns is related to the length of the circuits in the matching graph and to the number of intersection points in a circuit or between different circuits.

Figure 5: (Left) A quaternate picture with two overlapping rectangles (thicker lines) that mutually include only one node of the other. To avoid clogging, the rectangles in the specular right half of the picture are not drawn. Such a picture is not neutralizable (Definition 3). The precedence relation (Definition 8) is not acyclic since \((1,1)\prec(3,3)\prec(1,1)\), where each rectangle is identified by the coordinate of its north-west node. (Right) Another quaternate picture shows a cycle of length 4: \((1,2)\prec(4,1)\prec(3,4)\prec(2,3)\prec(1,2)\).
We mention two specific open problems. (i) The picture \(\begin{smallmatrix}a&b\\ c&d\end{smallmatrix}\) has just one circuit, which is therefore Hamiltonian; it is not known whether there exist any other Hamiltonian pictures in \(DC_{1}\). (ii) By Theorem 8 the length of circuits in \(DC_{1}\) pictures is unbounded. The question is whether, for all values \(n>1\), there is a \(DC_{1}\) picture containing a circuit of length \(4n\).
A related range of questions concerns the "productivity" of a circuit, meaning the existence of \(DC_{k}\) pictures incorporating the circuit. A simple formulation is: given a circuit situated in its bounding box, does a \(DC_{k}\) picture exist of a size equal or larger than the bounding box, such that the same circuit occurs within the picture?
**Acknowledgment**: We thank Matteo Pradella for helpful discussions.
|
2305.00539 | Dynein-driven self-organization of microtubules: An entropy- and
network-based analysis | Microtubules self-organize to form part of the cellular cytoskeleton. They
give cells their shape and play a crucial role in cell division and
intracellular transport. Strikingly, microtubules driven by motor proteins
reorganize into stable mitotic/meiotic spindles with high spatial and temporal
precision during successive cell division cycles. Although the topic has been
extensively studied, the question remains: What defines such microtubule
networks' spatial order and robustness? Here, we aim to approach this problem
by analyzing a simplified computational model of radial microtubule
self-organization driven by a single type of motor protein -- dyneins. We
establish that the spatial order of the steady-state pattern is likely
associated with the dynein-driven microtubule motility. At the same time, the
structure of the microtubule network is likely linked to its connectivity at
the beginning of self-organization. Using the continuous variation of dynein
concentration, we reveal hysteresis in microtubule self-organization, ensuring
the stability of radial filament structures. | Nikita Frolov, Bram Bijnens, Daniel Ruiz-Reynés, Lendert Gelens | 2023-04-30T17:45:14Z | http://arxiv.org/abs/2305.00539v2 | # Self-organization of microtubules: complexity analysis of emergent patterns
###### Abstract
Microtubules self-organize to form part of the cellular cytoskeleton. As such they give cells their shape and play a crucial role in cell division and intracellular transport. Past studies have identified diverse spatio-temporal patterns into which microtubules can organize when driven by motor proteins. The question remains whether there is an appropriate way to quantify these structures and gain new knowledge about the physical principles of self-organization in microtubule-motor mixtures. Here, we aim to approach this problem from a complexity science perspective. We introduce an entropy-based measure to evaluate the structural complexity of spatial patterns emerging in a simplified agent-based computational model of microtubule-motor interactions. Our results demonstrate that the proposed quantifier discriminates well between ordered, disordered, and intermediate structures. In addition, our study indicates that the transition to steady states in such a system is likely to be discontinuous and exhibits distinct properties of self-organized criticality.
pacs: 87.10.-c, 87.
motors can also be gained from theoretical and computational models. Important contributions in this direction were made by F. Nedelec and colleagues, who developed the simulation software Cytosim [20] and use it to analyze hidden aspects of complex MT-motor interactions behind the filament assembly [21; 22; 23; 24; 25; 26]. Remarkably, they have observed the emergence of particular MT structures such as asters and vortices, and they explained physical principles of their formation [22]. Later works have explored the stability of such patterns using nonlinear analysis and a dynamical systems approach [27; 28; 29]. Recently, Torisawa _et al._ have demonstrated both experimentally and theoretically a diverse spatio-temporal patterning of MT fibers ranging from densely networked to isolated asters [30].
Based on the variety of spatio-temporal structures that the MT-motor mixture can self-organize into, the question arises: how can one quantify such states and gain new knowledge about the system? In this paper, we address this question using a complexity science approach, which provides powerful mathematical and statistical tools for complex systems' analysis [31; 32]. Particularly, we characterize the MT-motor pattern formation in terms of the complexity of the spatial distribution of the MT filaments. We test our approach on a simplified computational model of the MT-motor interaction, in which the MT fibers of a fixed length are driven by only one type of motor protein - dynein. Our results show how the complexity of the MT pattern changes under the variation of molecular motor density. We demonstrate features of MT self-organization in terms of the microscopic motion of motor proteins. Finally, we discuss the properties of transition to a steady state pattern by relating spatial complexity with characteristics of the microscopic motion.
The paper is organized as follows. Section II describes the agent-based computational model of MT-motor interactions, its configuration and limitations. It also reports the details of MT pattern complexity evaluation. In Section III, we present main results of the study and put them into the context of current literature. Section IV provides a brief summary of this work and highlights open questions.
## II Methods
### Agent-based modeling
Agent-based models provide an accurate mathematical description of complex interactions in multi-component systems. Individual agents follow straightforward rules (usually simpler than those of global behavior), making it easier to interpret and control individual kinetic parameters. Though the equations are simple, complex behavior can arise in such systems. In this work, we perform agent-based modeling using Cytosim [20], an open-source cytoskeleton simulation tool.
Microtubules and molecular motors float around in liquid suspension, e.g., the cytoplasm in a cell. One can express this motion in terms of the Langevin equation, which describes the motion of a particle of mass \(m\) in a medium having a viscosity \(\gamma\) under a Brownian force \(\zeta(t)\), generated by collisions with other particles, and an external force \(\mathbf{F}_{\text{ext}}(\mathbf{r},t)\), which includes the bending and contraction forces acting on the particle:
\[m\frac{d^{2}\mathbf{r}}{dt^{2}}=-\gamma\frac{d\mathbf{r}}{dt}+\zeta(t)+\mathbf{F}_{\text{ ext}}(\mathbf{r},t). \tag{1}\]
Under the assumption of high friction, \(m/\gamma\ll 1\), Eq. (1) becomes:
\[\frac{d\mathbf{r}}{dt}=\mathbf{B}(t)+\mu\mathbf{F}_{\text{ext}}(\mathbf{r},t), \tag{2}\]
where \(\mathbf{B}\) is a so-called rescaled Brownian force and \(\mu\) is a particle's mobility. For \(N\) particles, Cytosim numerically solves the system of \(N\) equations (2). Larger objects, such as microtubule fibers, are represented as multiple points (particles) connected by springs.
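For intuition, Eq. (2) can be integrated with a simple Euler-Maruyama scheme. The sketch below (Python/NumPy) uses illustrative parameter values and a user-supplied external force; it is not the integrator implemented in Cytosim.

```python
import numpy as np

def simulate_particle(n_steps=5000, dt=0.02, mu=0.2, kBT=0.0042, f_ext=None):
    """Integrate the overdamped Eq. (2) for one particle in 2D (Euler-Maruyama)."""
    # Illustrative values: mobility mu in um/(pN s), thermal energy kBT in pN um.
    D = mu * kBT                        # diffusion coefficient via Einstein relation
    rng = np.random.default_rng(0)
    r = np.zeros((n_steps + 1, 2))      # trajectory of positions, in um
    for t in range(n_steps):
        force = np.zeros(2) if f_ext is None else np.asarray(f_ext(r[t], t * dt))
        noise = rng.normal(size=2) * np.sqrt(2.0 * D * dt)
        r[t + 1] = r[t] + mu * force * dt + noise
    return r

traj = simulate_particle()
print("net displacement [um]:", np.linalg.norm(traj[-1] - traj[0]))
```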
The motor proteins floating near a microtubule can bind to it with some probability. While bound to a microtubule, such a motor no longer obeys the Eq. (2) but moves along the fiber with a fixed velocity \(v_{\text{mot}}\). Motor complexes can bind to two fibers, thus creating a crosslink between them and generating forces that determine a fiber's motion and self-organization (Fig. 1B). The sign of \(v_{\text{mot}}\) defines the direction of the motor's motion along the fiber: positive \(v_{\text{mot}}\) for motion towards the plus-end, and negative \(v_{\text{mot}}\) for the opposite direction (towards the minus-end).
Bound motor proteins can detach themselves from the microtubule. The interval of time for which a particle remains bound to the microtubule is exponentially distributed as follows:
\[F(\Delta t)=\int_{0}^{\Delta t}dt\,b_{\text{off}}\,e^{-b_{\text{off}}t}. \tag{3}\]
Here, \(b_{\text{off}}\) is an off-rate at which a motor detaches, which is determined by an unbinding rate \(b_{\text{unbind}}\), an external force acting on a motor \(\mathbf{F}_{\text{mot}}\), and the binding force of a motor \(F_{\text{bind}}\):
\[b_{\text{off}}=b_{\text{unbind}}e^{F_{\text{mot}}/F_{\text{bind}}}. \tag{4}\]
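Equations (3)-(4) amount to drawing the bound interval from an exponential distribution whose rate is the force-dependent off-rate. A minimal sketch, using the unbinding rate of Table 1 and an assumed, illustrative load force:

```python
import numpy as np

def sample_bound_time(f_mot, f_bind, b_unbind=0.1, rng=None):
    """Draw the time a motor stays bound, with force-enhanced detachment (Eq. 4)."""
    if rng is None:
        rng = np.random.default_rng()
    b_off = b_unbind * np.exp(f_mot / f_bind)    # off-rate in 1/s
    return rng.exponential(1.0 / b_off)          # mean bound time is 1/b_off

# With F_bind = infinity (Table 1) the load leaves the off-rate at b_unbind = 0.1/s,
# so the mean bound time is 10 s; f_mot = 2 pN is an assumed load for illustration.
print(sample_bound_time(f_mot=2.0, f_bind=np.inf))
```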
_Assumptions_. In the current study, we considered only one type of molecular motor - dynein - that moves towards the "minus"-end of the microtubule (negative \(v_{\text{mot}}\)). We also considered microtubule fibers of a fixed length \(l\) neglecting their growth and shrinkage. Hence, we did not include freely distributed tubulin particles in our model. Finally, we consider a 2D model assuming that the microtubules and motor proteins float in an infinitely thin square chamber of cytoplasm with a side length \(l_{\text{ch}}>l\).
All the model parameters, except for the number of motor proteins \(N_{\text{mot}}\), have been fixed during the simulations and are provided in the Table 1. For convenience,
we present the amount of dyneins in terms of the concentration \(N_{\text{mot}}/l_{\text{ch}}^{2}\) [\(\mu\)m\({}^{-2}\)].
The simulation starts with a random distribution of \(N_{\text{mot}}\) dynein complexes over the chamber space at \(t=0\). The positions and orientations of the \(N_{\text{MT}}\) microtubule filaments are also initialized randomly. We simulate 100 s of the dynein-driven self-organization of the microtubules with a time step of \(\Delta t=0.02\) s (5000 iterations). An example of the simulation is presented in the bottom panel of Fig. 1B.
### Pattern complexity evaluation
Under the collective motion of dynein complexes bound to microtubules, the latter form specific star-like structures - asters - with "minus"-ends pointing towards their centers. Varying the concentration of motor proteins, one can switch from a network of weakly organized and poorly concentrated asters to a single aster accumulating all available filaments (Fig. 2A). A key issue is how to properly evaluate the global spatial pattern to gain insight into the transition to a self-organized steady state. Here, we handle this problem via a complexity science approach, specifically an entropy-based quantification of the microtubule structure.
_Minus-ends positions._ As mentioned before, with increasing dynein concentration, the spatial patterns change from randomly distributed and oriented microtubule filaments - high-complexity states - to formations of asters with the fibers' minus-ends directed towards their centers - lower-complexity states. In this context, it is logical to describe the complexity of the spatial pattern in terms of the minus-ends distribution. Fig. 2B presents the pipeline of the complexity-based quantification of the microtubule pattern. The left panel shows an exemplary distribution of the filaments (green lines) with a few asters formed in a steady state. One can see that most of the filaments are
\begin{table}
\begin{tabular}{l l} \hline \hline
**Model parameter** & **Value** \\ \hline
**Cytoplasm** & \\
Viscosity, \(\mu\) & 0.05 pN\,s/\(\mu\)m\({}^{2}\) \\
Chamber size, \(l_{\text{ch}}\) & 50 \(\mu\)m \\
**Microtubules** & \\
Number of fibers, \(N_{\text{MT}}\) & 2000 \\
Fiber length, \(l\) & 10 \(\mu\)m \\
**Motor protein complexes (dyneins)** & \\
Binding rate, \(b_{\text{bind}}\) & 10 s\({}^{-1}\) \\
Binding force, \(F_{\text{bind}}\) & \(\infty\) pN\({}^{\text{a}}\) \\
Unbinding rate, \(b_{\text{unbind}}\) & 0.1 s\({}^{-1}\) \\
Motor velocity, \(v_{\text{mot}}\) & \(-\)0.8 \(\mu\)m/s \\
Number of motor complexes, \(N_{\text{mot}}\) & [1000, 5000] \\ \hline \hline
\end{tabular}
* \({}^{\text{a}}\) No correction to \(b_{\text{bind}}\)
\end{table}
Table 1: Parameters used to configure the numerical model of microtubule-motor interaction in Cytosim.
Figure 1: **Self-organization of the microtubule networks.****A.** Top: experimental observations of compartmentalization in cell-free extracts (adapted from Fig. S3B, Cheng and Ferrell [9]). Bottom: mitotic spindle assembly (adapted from Fig. 1A, Wilbur and Heald [19]). **B.** Top: illustration of a simplified dynein-driven mechanism of MT network assembly used in the current study. Bottom: simulation of the MT network formation via a dynein-driven interaction. Here, orange circles and green lines represent dynein complexes and MT fibers, respectively. Orange arrowheads indicate the position of dynein loci, and black dashed circles show the boundaries of separate MT asters.
arranged in the asters, and some of them still freely float in the cytoplasm. The blue dots indicate the positions of the minus-ends, most of which, as expected, are concentrated around the centers of respective asters, resulting in almost 0 \(\mu\)m distance between the neighboring ends. Another spatial scale emerges from the distance between the formed asters, as indicated by solid and dashed red lines.
_Pairwise distances._ Based on the above discussion, it is reasonable to assume that the pairwise distances between the fibers' minus-ends contain information on how ordered the microtubule pattern is. We compute the pairwise distance matrix \(D\) as:
\[D_{ij}=||\mathbf{r}_{i}^{-}-\mathbf{r}_{j}^{-}||, \tag{5}\]
where \(\mathbf{r}_{i}^{-}\) is a position of the minus-end of the \(i^{th}\) microtubule fiber, and \(||\bullet||\) is a Euclidean norm. The middle
Figure 2: **Microtubule self-organization under variation of the dynein density.****A.** Evolution of the self-organized MT pattern under an increase of the total dynein concentration as captured by the end step of numerical simulation. **B.** Quantifying complexity of the MT pattern: (i) extracting locations of the MT fibers’ minus-ends (blue dots, right panel); (ii) computing their pairwise distances (middle panel); (iii) constructing the pairwise distance distribution (left panel) and evaluating its entropy. Solid and dashed red lines in the left and right panels indicate characteristic distances between neighboring and opposite asters, respectively. **C.** Left: evolution of the pairwise distance distribution with increasing total dynein concentration. Middle: the number of asters (orange circles) and entropy of the pattern (blue circles) versus the total dynein concentration. Right: relationship between entropy and the number of asters (right panel). Non-transparent circles show the median entropy estimate for a given number of asters, and the black curve represents an associated sigmoid fit.
panel in Fig. 2B displays the matrix \(D\) corresponding to the spatial pattern in the left panel. One can see that the matrix \(D\) reflects the spatial structure of the microtubule pattern, i.e., four clusters of closely located fiber ends (the deep blue regions 1, 3, 4, and 5 along the main diagonal) associated with respective asters. Alongside these clusters, the matrix isolates the group of freely distributed filaments (region 2).
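As a small illustration, the distance matrix of Eq. (5) can be computed directly from the minus-end coordinates; the sketch below uses SciPy and a synthetic set of positions in place of the simulation output.

```python
# Sketch of Eq. (5): pairwise Euclidean distances between minus-end positions.
# `minus_ends` is a hypothetical (N_MT, 2) array of minus-end coordinates in um.
import numpy as np
from scipy.spatial.distance import pdist, squareform

rng = np.random.default_rng(2)
minus_ends = rng.uniform(0.0, 50.0, size=(2000, 2))

D = squareform(pdist(minus_ends))   # D[i, j] = ||r_i^- - r_j^-||
print(D.shape)                      # (2000, 2000)
```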
_Pairwise distance distribution_. Here, we describe the pairwise distance distribution and quantify its complexity. The right panel in Fig. 2B shows the respective probability density function (PDF) of the pairwise distances \(D_{ij}\) (bin width of 1 \(\mu\)m). As expected, the distribution is bimodal, with local peaks around two spatial scales: (i) the 0-scale, i.e., the minus-ends concentrated near the asters' centers, and (ii) the scale of the distance between asters. Notably, there are a few local peaks around the second scale. One of them corresponds to the distance between neighboring asters, which is roughly \(2l=20\)\(\mu\)m (solid vertical line) - the closest distance for two asters to stay in equilibrium and not collapse into a single one. The other reflects the distance between the opposite asters (dashed vertical line \(\approx 2\sqrt{2}l=28.28\)\(\mu\)m). Finally, we quantify the complexity of this distribution using the Shannon entropy:
\[H=-\sum p_{i}\log p_{i}, \tag{6}\]
where \(p_{i}\) is the probability of the \(i^{th}\) bin. Indeed, with such a definition, a completely disordered state, having a normal distribution of pairwise distances with large variance, is characterized by the highest entropy. On the contrary, self-organization facilitates the nucleation of asters, resulting in the emergence of sharp peaks in the pairwise distance distribution, i.e., pattern simplification and entropy reduction.
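For concreteness, the sketch below assembles the pipeline of Fig. 2B on synthetic minus-end positions: pairwise distances, a histogram with a 1 \(\mu\)m bin width, and the entropy of Eq. (6). The positions are random placeholders, not simulation data.

```python
# Sketch of Eq. (6): Shannon entropy of the pairwise-distance distribution,
# using 1-um bins as in the text. The distance samples here are synthetic.
import numpy as np
from scipy.spatial.distance import pdist

rng = np.random.default_rng(3)
minus_ends = rng.uniform(0.0, 50.0, size=(500, 2))
distances = pdist(minus_ends)

bins = np.arange(0.0, distances.max() + 1.0, 1.0)      # 1-um bin width
p, _ = np.histogram(distances, bins=bins, density=False)
p = p / p.sum()                                         # bin probabilities
p = p[p > 0]                                            # avoid log(0)
H = -np.sum(p * np.log(p))
print(f"entropy H = {H:.2f}")
```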
### Curve-fitting and statistics
We fit curves to the data using the least-squares procedure curve_fit implemented in the scipy.optimize package for Python. The quality of fit is assessed via the \(R^{2}\)-score for scatterplots and via \(\chi^{2}\)-statistics for the distributions.
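A minimal sketch of this procedure is shown below, fitting a sigmoid to illustrative (number of asters, entropy) values and computing the \(R^{2}\)-score by hand; the data points are placeholders, not the simulation results.

```python
# Sketch of the fitting procedure: scipy.optimize.curve_fit with a sigmoid
# model and an R^2 score. The data arrays here are placeholders.
import numpy as np
from scipy.optimize import curve_fit

def sigmoid(x, a, b, x0, k):
    return a + b / (1.0 + np.exp(-k * (x - x0)))

n_asters = np.array([1, 2, 3, 4, 5, 6, 7, 8])
entropy = np.array([2.8, 2.9, 3.1, 3.4, 3.6, 3.8, 3.8, 3.8])

popt, _ = curve_fit(sigmoid, n_asters, entropy, p0=[2.8, 1.0, 4.0, 1.0], maxfev=10000)
residuals = entropy - sigmoid(n_asters, *popt)
r2 = 1.0 - np.sum(residuals**2) / np.sum((entropy - entropy.mean())**2)
print(popt, r2)
```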
## III Results and discussion
### Pattern complexity versus dynein concentration
We start by examining how the introduced indicators of spatial complexity are associated with the pattern formation induced by different concentrations of the dynein complexes (Fig. 2C). The left panel shows the evolution of the pairwise distance distribution with increasing dynein concentration. One can see that at concentrations \(\lessapprox\) 0.8 \(\mu\)m\({}^{-2}\), the distribution of pairwise distances is the flattest, reflecting a disordered spatial pattern with poorly nucleated microtubule asters. Increasing the concentration of molecular motors up to \(\approx\) 1.4 \(\mu\)m\({}^{-2}\) leads to the emergence and rise of peaks in the distribution around 0, 20, and 30 \(\mu\)m, indicative of an arrangement of more concentrated asters into a well-organized square-shaped structure. Between 1.4 and 1.85 \(\mu\)m\({}^{-2}\), only two peaks remain in the distribution, around 0 and 2\(l\)\(\mu\)m, and their magnitude considerably increases compared to the lower motor concentrations. This indicates the breaking of the square-state symmetry, transforming the spatial pattern into a triangular one, and then further into a bipolar one. Finally, at motor concentrations exceeding 1.85 \(\mu\)m\({}^{-2}\), microtubule fibers are accumulated in a single highly nucleated aster, as displayed by the only prominent sharp peak around 0 \(\mu\)m in the pairwise distance distribution. Similarly, earlier works have shown the transition from globally connected (disordered) microtubules to sparse concentrated metastructures by changing dynein concentration [21; 22]. A more recent study by Torisawa _et al._[30] has identified such a transition under the variation of motor/filament ratios.
The middle panel in Fig. 2C displays the entropy of the pairwise distance distribution alongside the number of asters versus the dynein concentration. The variation of the number of asters with an increasing number of motor proteins is naturally step-wise and discontinuous, having multiple plateaus. In turn, entropy continuously decreases with the growth of dynein concentration, a trend well approximated by a parabola with a negative quadratic coefficient.
Establishing an explicit association between entropy and the number of clusters (right panel in Fig. 2C), one can see that the medians are fitted well by a sigmoid curve. At the given simulation parameters, the patterns containing six or more asters are almost equally disordered, as indicated by the saturation of entropy around the value of 3.8. Conversely, the least complex states with 1 and 2 highly concentrated asters are at the opposite end of the S-curve, characterized by entropy values between 2.8 and 2.9. The gradual increase of entropy between these plateaus, i.e., for 3, 4, and 5 asters, reflects the increasingly complex arrangement of the asters into a global spatial pattern. While for 3 and 4 asters the non-equilibrium steady state reaches an ordered, spatially homogeneous structure (triangle and square, respectively), a 5-aster pattern is of higher spatial heterogeneity (see examples in Fig. 2A).
### Microscopic dynamics of the dynein complexes
After establishing a complexity-based approach to quantify the spatial pattern in a microtubule-motor system, we explore the microscopic dynamics of motor proteins behind pattern formation. More specifically, we are interested in how one can evaluate the collective motion of motor proteins, which features it exhibits, and how it determines the macroscopic state.
Let us start with the quantification of the motion of the molecular motors. The left panel in Fig. 3A shows
the distribution of motor proteins and microtubule fibers at the beginning of self-organization at a dynein concentration of \(1.2\ \mu\)m\({}^{-2}\). One can see that 20 s after the simulation started, a substantial fraction of motor proteins has bound to the respective microtubule filaments. This results in a slow collective motion along the fibers that arranges the filaments into asters. Indeed, one can also see the initiation of aster nucleation in the corners of the chamber (shown in a zoomed image). It is accompanied by a considerable increase in the local concentration of motor proteins at the nucleation loci, as presented in the density plot of the zoomed image. Together, these observations indicate that the system's evolution towards pattern formation is associated with a decrease of the motor protein velocity and an increase of their local concentration. To characterize the microscopic motion of molecular motors, we introduce two quantities: the maximal velocity and the maximal local concentration, assessed as the \(97.5^{th}\) percentile of the distributions of the respective variables (right panel in Fig. 3A). These quantities inform us of the maximum velocity and concentration that can be achieved in a given state (time moment).
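A hedged sketch of these two indicators, using synthetic samples in place of the per-motor speeds and binned local concentrations, is given below.

```python
# Sketch of the two microscopic indicators: maximal velocity and maximal local
# concentration, both taken as the 97.5th percentile of their distributions.
# `speeds` and `local_conc` are hypothetical per-motor / per-bin samples.
import numpy as np

rng = np.random.default_rng(4)
speeds = np.abs(rng.normal(0.0, 0.3, size=5000))        # um/s, illustrative
local_conc = rng.exponential(1.2, size=2500)            # motors per um^2, illustrative

v_max = np.percentile(speeds, 97.5)
c_max = np.percentile(local_conc, 97.5)
print(v_max, c_max)
```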
Fig. 3B illustrates how these quantities evolve in time at different concentrations of molecular motors. Regardless of the motor concentration, the maximal velocity dramatically decreases, more than 10-fold, during the first 5 s, as shown in the inset in the left panel. As discussed above, this transient process is likely associated with a rapid binding of the motor proteins to the filaments. After that, the maximal velocity decreases exponentially at a much slower rate. As the system approaches a steady state at a low concentration of molecular motors, the maximal velocity is of the order of \(v_{\rm mot}\), indicative of the motors' unceasing motion along the fibers, which, however, does not disturb the spatial pattern. At higher concentrations, the system abruptly switches to an almost motionless state of the dynein complexes after \(\approx\)50 s. In this case, the dyneins become locked in the centers of highly concentrated asters, and free motor proteins are almost absent. In turn, the maximal local density grows exponentially and saturates as the system approaches a non-equilibrium steady state (right panel); the saturation level grows with an increasing total concentration of motor proteins.
The described exponential decay of the motor proteins' velocity, accompanied by the exponential growth of their local concentration, is likely a hallmark of self-organized criticality as a mechanism giving rise to pattern formation from collective microtubule-motor interaction. To explore it in detail, we consider how the distribution of local dynein concentration evolves in time (Fig. 3C). One can see that the molecular motors are distributed homogeneously over the chamber 1 s after the start of the simulations (panels on the left). Additionally, the values of local concentration in square bins obey the Weibull distribution (top panel on the right, \(p>0.99\) via the \(\chi^{2}\)-test). Over time, the dyneins condense due to positive feedback in microtubule-motor interactions: dense clusters of motor proteins facilitate the alignment of filaments, which in turn reinforces the directed motion of the motors, increasing their concentration. This feedback contributes to an avalanche-like growth of the local dynein concentration, as reflected by the switch to power-law distributions \(PDF\sim(\rm local\ concentration)^{\alpha}\) (bottom panel on the right, \(p>0.99\) via the \(\chi^{2}\)-test). This observation suggests that the formation of microtubule networks is rooted in self-organized criticality [33] through the positive feedback in microtubule-motor interactions.
Our results complement seminal works by Nedelec _et al._ and by Sankararaman _et al._ that have demonstrated a \(1/r\) distribution of molecular motors near the aster's (and vortex's) center [23; 28]. The authors articulated that this distribution originates from the radial nature of these structures. Recently, Banks _et al._ have extended these results by developing a new data-driven approach to model the motor distribution in asters [34]. Although their model did not exhibit a specific functional form, it agreed with earlier works in the part of a peripheral power-law decay of the motor concentration.
Figure 4: **Phase transitions and scaling during microtubule pattern formation.****A.** Transition to stable MT patterns on (entropy, velocity) and (entropy, local density) planes for the different levels of total dynein concentration. Empty circles indicate steady states. **B.** Left: the relationship between dynein velocity and local dynein concentration. Black dashed lines indicate the area of critical dynamics exhibiting power law scaling between the two quantities. Right: zoom into the area of critical dynamics and corresponding power-law fits (solid lines). All lines have the same slope of \(\kappa=-0.156\).
### Phase transition in a microtubule-motor system
By now, we have separately quantified and discussed the formation of a spatial microtubule pattern and the underlying microscopic behavior of motor proteins. Complementing the above discussed, we aim to find the relationship between the macro- and microscopic dynamics and explore the transition to a non-equilibrium steady state from random initial conditions.
Fig. 4A shows the state space of the microtubule-motor system formed by a macroscopic state variable (y-axis) and the quantifiers of the microscopic motion of motor proteins - their maximal velocity (x-axis in the right panel) and their maximal local concentration (x-axis in the left panel). The scatterplots are color-coded with the values of total motor concentration. On the route to a steady state, the pattern's entropy decreases continuously with the exponential growth of the local dynein concentration. Notably, the slope of the trajectory depends on the total dynein concentration. Conversely, the variation of entropy under velocity decay proceeds discontinuously. This means that dramatic changes in the spatial organization of microtubules occur for small changes in the velocity of motor proteins. This observation indicates the irreversibility of self-organization in a microtubule-motor system and supports our previous discussion on its criticality.
Now, exploring the state space formed by the microscopic variables only in a log-log scale (Fig. 4B), one can see that the system converges from its initial state (top left corner) to a steady state (bottom right corner) drifting along the flat plateau associated with the active phase of microtubule pattern formation (i.e., a dynein driven alignment of the filaments). Zooming in on this part of the state space, we find that during this process, the maximal velocity of the dyneins is inversely power-law-correlated with their maximal local concentration:
\[[\text{velocity}]\sim[\text{local concentration}]^{\kappa}. \tag{7}\]
Remarkably, the value of slope in a log-log scale, or equivalently the value of the exponent in a power function, is invariant to the total dynein concentration (\(\kappa=-0.156\), \(R^{2}>0.8\)). The uncovered invariant scaling between the velocity and local concentration of dyneins implies that regardless of the amount of motor proteins, their accumulation preserves the same rate \(\kappa\).
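The exponent \(\kappa\) can be estimated as the slope of a linear fit in log-log coordinates. The sketch below illustrates this on a synthetic (local concentration, velocity) trajectory generated with a known exponent; it is not the simulation data.

```python
# Sketch of Eq. (7): estimating the exponent kappa as the slope of a linear
# fit in log-log coordinates between maximal velocity and maximal local
# concentration. The (c_max, v_max) trajectory below is synthetic.
import numpy as np

rng = np.random.default_rng(5)
c_max = np.logspace(0.0, 1.5, 60)
v_max = 0.8 * c_max ** (-0.156) * np.exp(rng.normal(0.0, 0.02, size=c_max.size))

kappa, intercept = np.polyfit(np.log(c_max), np.log(v_max), deg=1)
print(f"kappa = {kappa:.3f}")   # close to -0.156 for this synthetic trajectory
```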
## IV Conclusion
In this work, we characterized self-organization in a microtubule-motor system from a complexity science perspective. We introduced an entropy-based measure to evaluate the spatial (dis)order of the system's state. We then tested our approach on a simplified model, in which the assembly of microtubules is driven by only one type of motor protein - dynein. At low concentrations of molecular motors, a microtubule-motor mixture is organized into a disordered pattern, as characterized by the estimates of entropy. Such a pattern consists of multiple poorly concentrated and irregularly positioned asters - star-like assemblies of microtubules. With increasing concentration of motor proteins, the system switches to more ordered states, e.g., rectangular or triangular compositions of asters, exhibiting lower entropy. Finally, at a high concentration of dynein, the system reaches a state of minimal complexity - a single highly concentrated aster.
Further analysis of the microscopic motion of dyneins indicated that pattern formation in this system exhibits particular features of self-organized criticality, i.e., a power-law distribution of local dynein concentration. These results are consistent with earlier works [20; 34], which explored the steady-state organization of molecular motors in asters and demonstrated that it obeys a heavy-tailed distribution. The current work complements previous studies by analyzing the distributions of dynamical characteristics and highlighting that the power-law distribution of motor proteins holds not only within a single-aster domain but across the whole cytoplasmic space (volume).
Relating the evolution of characteristics of the microscopic motion of dyneins with entropy-based estimates of the (macroscopic) spatial pattern, we explored the properties of a phase transition in a microtubule-motor system. We showed that the pattern formation process also exhibits properties of a first-order transition as indicated by a dramatic change of spatial structure (pattern complexity) under a small self-tuned variation of a microscopic variable (dynein velocity). Moreover, we demonstrated that during the phase transition, microscopic variables (dynein velocity, and local concentration) establish a power-law scaling in which the exponent is invariant to the total concentration of molecular motors in the cytoplasm. This finding suggests that the kinetics of pattern formation follow the same rules and scaling properties, whereas the resulting spatial pattern depends on the threshold values of total dynein concentration.
To conclude, our study uses a complexity science approach to describe and explore the self-assembly of microtubules driven by interaction with molecular motors. It complements existing works and offers new insights into phase transitions in microtubule-motor mixtures. Prospectively, it unlocks a range of intriguing open questions, e.g., how the transition to steady states and their complexity would change under (i) the interaction with other types of motor proteins (a mixture of motor proteins of different types); (ii) mechanical interaction with other filament networks (F-actins) [35]; (iii) positioning of the MT asters [36].
###### Acknowledgements.
D.R-R. acknowledges support by the internal funds KU Leuven (grant no. PDM/20/153), the Ministry of Universities through the "Pla de Recuperacio, Transformacio i Resilencia", and by the EU (NextGenerationEU) together with the Universitat de les Illes Balears. L.G. acknowledges financial support by the Research-Foundation Flanders (FWO-Vlaanderen) (grant no. G074321N). We also thank Felix E. Nolet for valuable discussions on related topics leading up to this study.
## Data Availability
The configuration file for simulating microtubule-dynein interactions in Cytosim is available online [37].
|
2304.00176 | Improving extreme weather events detection with light-weight neural
networks | To advance automated detection of extreme weather events, which are
increasing in frequency and intensity with climate change, we explore
modifications to a novel light-weight Context Guided convolutional neural
network architecture trained for semantic segmentation of tropical cyclones and
atmospheric rivers in climate data. Our primary focus is on tropical cyclones,
the most destructive weather events, for which current models show limited
performance. We investigate feature engineering, data augmentation, learning
rate modifications, alternative loss functions, and architectural changes. In
contrast to previous approaches optimizing for intersection over union, we
specifically seek to improve recall to penalize under-counting and prioritize
identification of tropical cyclones. We report success through the use of
weighted loss functions to counter class imbalance for these rare events. We
conclude with directions for future research on extreme weather events
detection, a crucial task for prediction, mitigation, and equitable adaptation
to the impacts of climate change. | Romain Lacombe, Hannah Grossman, Lucas Hendren, David Lüdeke | 2023-03-31T23:38:54Z | http://arxiv.org/abs/2304.00176v1 | # Improving extreme weather events detection with light-weight neural networks
###### Abstract
To advance automated detection of extreme weather events, which are increasing in frequency and intensity with climate change, we explore modifications to a novel light-weight Context Guided convolutional neural network architecture trained for semantic segmentation of tropical cyclones and atmospheric rivers in climate data. Our primary focus is on tropical cyclones, the most destructive weather events, for which current models show limited performance. We investigate feature engineering, data augmentation, learning rate modifications, alternative loss functions, and architectural changes. In contrast to previous approaches optimizing for intersection over union, we specifically seek to improve recall to penalize under-counting and prioritize identification of tropical cyclones. We report success through the use of weighted loss functions to counter class imbalance for these rare events. We conclude with directions for future research on extreme weather events detection, a crucial task for prediction, mitigation, and equitable adaptation to the impacts of climate change.
## 1 Introduction
Climate action failure and extreme weather are two of the most severe global risks today (IPCC, 2022; World Economic Forum, 2022). Tropical cyclones, the most destructive extreme weather events (NOAA, 2022), have a rising and disproportionate impact on low and medium income countries (LMICs), yet research into their effects focuses mostly on high-income countries (Parks and Guinto, 2022). Studies of extreme weather and climate change rely on heuristics or expert judgment to label data, which leads to an inequitable global scientific focus, as well as discrepancies in predicted frequency, intensity, and attribution estimates. Improving automated detection of extreme weather events is thus paramount to fair attribution of climate loss and damages (Philip et al., 2020), and to developing the early warning and detection systems that will be critical for equitable adaptation to climate change (IPCC, 2022; Nguyen et al., 2013).
Since 2020, deep learning has shown great promise for semantic segmentation of weather patterns in climate simulation data (Prabhat et al., 2021). However, initial approaches have relied on complex architectures and hard to train models with very large numbers of parameters. A key area of research is the application of lighter-weight neural networks to semantic segmentation of tropical cyclones (TC) and atmospheric rivers (AR) (Kapp-Schwoerer et al., 2020).
Here we explore the application of the light-weight Context Guided convolutional neural network (CGNet) architecture to semantic segmentation of tropical cyclones in climate data. Input to our model is hand-labeled climate simulation data with channels that contain key atmospheric variables such as wind speed, moisture content, and atmospheric pressure for different time steps, latitudes, and longitudes. The output is a segmentation mask where each pixel takes a value corresponding to the background (BG), TC, or AR classes.
Specific challenges include the very small dataset size, inherent class imbalance of infrequent extreme events, unavoidable bias due to subjective human labeling, and limited capacity of the light-weight network. We report experiments with different hyper-parameters (loss function, learning rate), architecture (up-sampling), data augmentation, and feature engineering. We find that weighted loss functions aimed at compensating class imbalance provide the most significant improvement on recall of extreme weather events.
## 2 Related work
Initial inspiration for this work came from Prabhat et al. (2021) which trained a DeepLabV3+ convolutional neural net on the _ClimateNet_ expert-label dataset. This \(\sim\)50 million parameters model achieved an intersection over union (IoU) score (1) of 0.24 for TCs, and was the first to demonstrate that deep learning models trained on hand-labeled climate data could effectively perform semantic segmentation of extreme weather patterns. However, the DeepLabV3+ architecture is complex, heavy, and thus costly in terms of memory, training time, and associated carbon footprint.
In _Spatio-temporal segmentation and tracking of weather patterns with light-weight Neural Networks_, Kapp-Schwoerer et al. (2020) attempt to perform the same segmentation task on the _ClimateNet_ dataset with the much lighter-weight (\(\sim\)500,000 parameters) Context Guided neural architecture. They improve on Prabhat et al. (2021) with a IoU score of 0.34 and a recall of 0.57 for TCs, our primary class of interest. This model and its associated metrics form our performance baseline.
For a detailed presentation of Context Guided convolutional neural networks, we refer the reader to the original paper that introduced the CGNet architecture, _A light-weight Context Guided Network for semantic segmentation_ by Wu et al. (2021). To solve the class imbalance problem, we experimented with various loss functions reviewed in _Survey of loss functions for semantic segmentation_(Jadon, 2020). Lastly, we relied on _Deep Learning for the Earth Sciences_(Mudigonda et al., 2021) for general background on applying deep learning techniques to Earth Sciences.
## 3 Dataset & Features
We trained our neural net on _ClimateNet_, an open, community-sourced, human expert-labeled dataset of outputs from Community Atmospheric Model (CAM5.1) climate simulation runs for 459 time steps from 1996 to 2013. Each sample is a netCDF file containing a 1152 \(\times\) 768 array for one simulation time step, with each pixel mapping to: one (latitude, longitude) point with 34.8 km/pixel horizontal and 26.1 km/pixel vertical resolution near the Equator; 16 channels for key atmospheric variables, described in table 2 and visualized in figure 4; and one ground truth class label. The dataset is split into a training set of 398 (map, labels) pairs from 1996 to 2010, and a test set of 61 (map, labels) pairs spanning 2011 to 2013. For learning rate scheduling, we created a validation set of 56 (map, labels) pairs spanning 2008 to 2010, which we set aside from the training set to keep the test set consistent with our baseline.
The implementation by Kapp-Schwoerer et al. (2020) is trained on the following four channels: TMQ, total vertically integrated precipitable water; U850, zonal (east-west) winds at the 850 mbar pressure surface; V850, meridional (north-south) wind at the 850 mbar pressure surface; and PSL, atmospheric pressure at sea level. From the existing 16 channels, we engineered new features, _wind velocity_ and _wind vorticity_, to help the model identify TCs since they are characterized by high wind speeds and rotation. Wind velocity is the \(L_{2}\) norm of zonal and meridional components of the wind vector field (equation 11). Wind vorticity is the curl of the wind vector field around the earth radius axis (equation 10), a measure of the local rotation (Simpson, 2010). We pre-computed these engineered features at the 850 mbar pressure level and at the lowest altitude level.
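A possible way to pre-compute these engineered features from the U850 and V850 channels is sketched below. The finite-difference discretization via `np.gradient` and the uniform grid spacings (taken from the quoted per-pixel resolutions near the Equator) are our own simplifying assumptions; the paper's exact formulas are equations 10-11.

```python
# Sketch of the engineered features: wind speed as the L2 norm of (U850, V850)
# and relative vorticity as the curl of the horizontal wind field. A uniform
# grid spacing in metres (dx, dy) is assumed here for simplicity.
import numpy as np

def wind_features(u850, v850, dx, dy):
    speed = np.sqrt(u850**2 + v850**2)
    dv_dx = np.gradient(v850, dx, axis=1)   # axis 1: longitude
    du_dy = np.gradient(u850, dy, axis=0)   # axis 0: latitude
    vorticity = dv_dx - du_dy
    return speed, vorticity

u850 = np.random.randn(768, 1152)           # placeholder (lat, lon) grids
v850 = np.random.randn(768, 1152)
speed, vort = wind_features(u850, v850, dx=34.8e3, dy=26.1e3)
print(speed.shape, vort.shape)
```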
The output of the model is a (1152 \(\times\) 768) tensor of softmax probabilities for background, TC, or AR classes. Importantly, labels for the supervised learning of this task are segmentation maps that were hand-drawn by climate scientists as part of a community labeling exercise described in Prabhat et al. (2021). Figure 2 illustrates how labels were generated as a consensus between experts.
In an effort to reduce over-fitting to the relatively small training set, we explored data augmentation techniques. While transforming the image based on randomized longitude increments seemed promising, we observed that random translations along the longitude dimension immediately decreased performance. We hypothesize that this may be due to the importance of geography (relative positioning of continents and oceans) for atmospheric circulation and weather patterns. As a consequence, rather than providing additional data for training, data augmentation may act as a detriment to learning by precluding the learning of accurate geographical representations.
## 4 Methods
### Baseline Implementation and Performance
We established our baseline by training the Kapp-Schwoerer et al. (2020) implementation of the CGNet architecture for 15 epochs over the _ClimateNet_ training set, with a Jaccard loss (equation 4) based on the IoU for the 3 classes (background, AR, and TC).
We report recall as a key performance metric to minimize false negatives, which is especially important for identification of infrequent events. The baseline performance for TCs reaches an IoU score of 0.3396 and a recall of 0.5708 on the test set (see table 1). A higher performance on the train set (IoU score of 0.38 for TCs) indicates the model may also display some variance and over-fitting.
A fundamental challenge for climate event identification is the inherent imbalance of the data, since, by definition, the extreme events we aim to detect are very rare. We conclude from this analysis that the baseline implementation exhibits high bias, some variance, and relatively low recall.
### CGNet Architecture
The light-weight CGNet architecture introduced by Wu et al. (2021) follows the principle of "deep and thin" and is designed specifically for high-accuracy semantic segmentation while maintaining a small memory footprint. This is advantageous for reducing training time and model complexity.
Context Guided block.The basic unit of CGNet is a Context Guided (CG) block, presented in figure 1, which concatenates the output of normal and dilated convolutional layers to integrate local and surrounding context respectively. It uses 1x1 convolutions and average pooling to further refine the representation using a global context. The CG block reduces parameter count and memory footprint by employing channel-wise convolutions to lower computational cost across channels.
Figure 1: Above: Context Guided convolutional neural network (CGNet). Below: Context Guided block (CG) consisting of local feature extractor \(f_{loc}\), surrounding context extractor \(f_{sur}\), joint feature extractor \(f_{join}\), and global context extractor \(f_{glo}\) where \(\odot\) represents element-wise multiplication.
Architectural experimentations.In order to improve performance, we experimented with additional CNN + BatchNorm + ReLU layers to the model to produce a deeper network with the goal of learning more complex features. We also experimented with doubling the final up-sampling layer to increase resolution of the output predictions. Both of these attempts were unsuccessful at significantly improving performance.
Learning rate scheduler.Experimenting with learning rates greater or lower than the original (0.001) negatively affected IoU and Dice scores. To limit variance, we implemented learning rate (LR) scheduling and early termination for the Adam optimizer. This proved successful in reducing the over-fitting observed in the baseline.
### Addressing Imbalanced Classes
The foremost challenge presented by this task is the extreme data imbalance inherent to rare weather events. Prabhat et al. (2021) report 94% of pixels in the _ClimateNet_ data belonging to the background class. We find that TCs represent only 0.462% of pixels of the entire dataset (and ARs only 5.674%). This means that a naive model assigning _every pixel_ to the background class would reach 94% accuracy despite failing at its task.
To address this class imbalance, we experimented with modifying the loss landscape to better account for under-represented classes and improve performance on rare events such as TC and AR pixels. To that end, we leaned on the literature review by Jadon (2020) to select and implement additional performance metrics and loss functions for training and evaluation.
#### 4.3.1 Performance metrics
To fulfill our problem statement of improved detection of rare weather events in climate data, we explored performance metrics that better represent the model's capacity to learn that task. Specifically, we value detecting extreme events more than identifying their exact boundaries hand-labeled by experts, and aim to penalize missing relevant events more than over-predicting their geographical extent. Specifically, we implemented the following performance metrics:
* **Intersection over union:** our baseline model was trained to optimize for the IoU metric (equation 1), as usual for many computer vision problems.
* **Sorensen-Dice similarity** or Dice coefficient (equation 2) is a measure of the similarity between class predictions and ground truth that is widely used for image comparison.
* **Recall** or **Sensitivity:** we devised our training strategy to optimize for recall (equation 3) as a proxy for the ability to detect most true positives of the TC class.
#### 4.3.2 Weighted loss functions
To optimize for these metrics, we explored and implemented a broad set of loss functions designed to assign higher weights to rare classes, building on a review by Jadon (2020):
* **Jaccard loss:** used by our baseline model. Computes a differentiable estimate of the segmentation-map IoU from the softmax probabilities output by the classifier (equation 4).
* **Dice loss:** differentiable Dice coefficient computed from the softmax probabilities (equation 5).
* **Cross-entropy loss:** canonically used in multi-class classification problems, it helps balance under-represented classes. We used the pyTorch implementation of the cross entropy loss (equation 6) and weighted cross entropy loss (equation 7).
* **Focal Tversky loss:** a tunable loss function which gives higher weight to false positives, false negatives, and hard examples, by introducing hyper-parameters \(\beta\) and \(\gamma\) (equation 8).
* **Weighted Jaccard loss:** to normalize the relative weights of each class in the IoU estimate, we experimented with a custom loss function inspired by the Jaccard loss (equation 9); a minimal sketch of this idea is given after this list.
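The sketch below gives one possible PyTorch formulation of a class-weighted soft-Jaccard loss in the spirit of the last bullet; the weighting scheme and the example class weights are illustrative assumptions, not the exact equation 9.

```python
# A minimal PyTorch sketch of a class-weighted soft-Jaccard (IoU) loss.
# The weighting scheme and class weights below are illustrative, not the
# paper's exact formulation of the weighted Jaccard loss.
import torch
import torch.nn.functional as F

def weighted_jaccard_loss(logits, targets, class_weights, eps=1e-6):
    """logits: (B, C, H, W); targets: (B, H, W) integer labels."""
    num_classes = logits.shape[1]
    probs = F.softmax(logits, dim=1)
    onehot = F.one_hot(targets, num_classes).permute(0, 3, 1, 2).float()
    dims = (0, 2, 3)
    intersection = (probs * onehot).sum(dims)
    union = (probs + onehot - probs * onehot).sum(dims)
    iou = (intersection + eps) / (union + eps)
    w = class_weights / class_weights.sum()
    return 1.0 - (w * iou).sum()

# Example: up-weight the rare TC and AR classes, assuming class order (BG, TC, AR).
logits = torch.randn(2, 3, 64, 64)
targets = torch.randint(0, 3, (2, 64, 64))
loss = weighted_jaccard_loss(logits, targets, torch.tensor([0.05, 0.65, 0.30]))
print(loss.item())
```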
## 5 Results & Discussion
We report summary results for the baseline and six experiments in table 1, and corresponding precision-recall and ROC curves in figure 3. Table 4 reports detailed performance metrics for our experiments (except data augmentation due to performance drop), and figure 5 compares ground truth labels and baseline results with our predicted segmentation maps on a test set sample.
Tropical cyclones recall.While we measured IoU, Dice, precision, recall/sensitivity, and specificity scores for TC and AR events, our key results focus on: (i) recall performance to prioritize detection of positives given the severity of a positive event; and (ii) TCs specifically, the most destructive extreme weather events, for which previous models showed limited performance.
Key results.After comparing our models on the precision-recall and specificity-sensitivity curves, we found that our weighted Cross Entropy and weighted Jaccard loss models with engineered features and a learning rate scheduler achieve better recall than the baseline (0.7836 and 0.7944 compared to 0.5708, a performance gain of +37.3% and +39.2%, respectively). Our experiments with the baseline model with LR scheduler, with baseline loss on engineered data with LR scheduler, and with cross entropy loss on engineered data with LR scheduler performed worse or no better than the baseline (0.2447, 0.4681, and 0.5016, respectively).
Carbon footprint.Given the climate focus of this model and our goal of keeping it light-weight, we tracked and evaluated our carbon footprint during our experiments. Based on emissions factors from Lacoste et al. (2019), and approximately 40 hours of usage of an NVIDIA A100 GPU VM with 40GB of RAM, we estimate our model training emissions at around 6.24 kg CO\({}_{2e}\).
## 6 Conclusion
In conclusion, semantic segmentation of extreme weather events in climate data is a challenging problem. The small and imbalanced dataset makes improving on task performance difficult, and CGNet is an intentionally light-weight model with limited capacity. IoU alone is a poor performance metric for identification of rare extreme weather events and should be paired with recall to reflect the priority given to true positive predictions on under-represented classes.
We found success with weighted loss functions, and showed a significant (+39.2%) improvement in recall for our class of interest. We demonstrated that careful matching of loss functions and optimization algorithms with the task at hand can yield important performance gains, even for light-weight architectures with a much lower resource footprint than current trends in machine learning.
Because advances in light-weight segmentation are so new (the seminal CGNet paper was published in 2021), we have found no other applications of these novel architectures to climate data so far beyond the reported baseline. We hope our results will contribute to improving automated extreme weather events detection, which is of crucial importance to prediction, mitigation, and equitable adaptation to the increasing destructiveness of anthropogenic climate change.
\begin{table}
\begin{tabular}{l l c c c c c c c} \hline
& & **1: Baseline** & **2: Learning** & **3: Feature** & **4: Cross** & **5: Weighted** & **6: Focal** & **7: Weighted** \\
**Models & Metrics** & & **model** & **rate decay** & **engineering** & **entropy** & **cross entropy** & **Tversky** & **Jaccard** \\ \hline
**TC** & **IoU** & 0.3396 & 0.3492 & 0.3161 & 0.2228 & 0.2025 & 0.3160 & 0.2245 \\
& **Precision** & 0.4560 & 0.5346 & 0.4933 & 0.7134 & 0.2145 & 0.3701 & 0.2384 \\
& **Recall** & 0.5708 & 0.5016 & 0.4681 & 0.2447 & 0.7836 & 0.6836 & **0.7944** \\
& **Specificity** & 0.9962 & 0.9976 & 0.9973 & 0.9995 & 0.9841 & 0.9936 & 0.9860 \\
**AR** & **IoU** & 0.3983 & 0.4128 & 0.4147 & 0.3575 & 0.2932 & 0.3839 & 0.3411 \\
& **Precision** & 0.5429 & 0.5344 & 0.5425 & 0.6896 & 0.3069 & 0.4479 & 0.3714 \\
& **Recall** & 0.5993 & 0.6448 & 0.6377 & 0.4261 & 0.8680 & 0.7287 & 0.8068 \\
& **Specificity** & 0.9701 & 0.9667 & 0.9681 & 0.9886 & 0.8839 & 0.9468 & 0.9191 \\ \hline
\end{tabular}
\end{table}
Table 1: Summary results for baseline model and six experiments.
#### Acknowledgments
We would like to thank Lukas Kapp-Schwoerer, Andre Graubner, and their co-authors in Kapp-Schwoerer et al. (2020) for their implementation of CGNet on _ClimateNet_ data, the authors of Wu et al. (2021) for the original light-weight Context Guided network architecture, and the authors of Prabhat et al. (2021) and the climate sciences expert-labeling community for creating and annotating the _ClimateNet_ dataset, which made this study possible. We are also grateful to Andrew Ng, Kian Katanforoosh, and Sarthak Consul at Stanford University for their guidance and support.
#### Data Availability
The original _ClimateNet_ dataset is available at [https://portal.nersc.gov/project/ClimateNet/](https://portal.nersc.gov/project/ClimateNet/). The dataset with engineered features is available at [https://huggingface.co/datasets/rlacombe/ClimateNet/](https://huggingface.co/datasets/rlacombe/ClimateNet/).
We provide an online repository at [https://github.com/hannah141/ClimateNet](https://github.com/hannah141/ClimateNet) with: (i) our modified implementation of the CGNet model building on Kapp-Schwoerer et al. (2020); (ii) notebooks for download, exploration, and visualization of the _ClimateNet_ data set, generation of engineered features, and flexible model training on a Google Colab instance; and (iii) a baseline and six experimental models along with their training and evaluation metrics history.
#### Future Work
A critical issue with model training on _ClimateNet_ is the small and imbalanced nature of the dataset. Also, as is apparent in figure 2, individual labels appear to have some degree of subjectivity, and we suspect human-expert consensus labeling leads to unavoidable bias and high Bayes error. Training on historical observational data, expanding expert-labeling efforts, or learning event identification with more objective ground truth labels (e.g. building on previous work on TC centers identification (Nguyen et al., 2014)) has the potential to improve performance on this task.
A promising direction for that purpose is the _International Best Track Archive for Climate Stewardship_ (IBTrACS) dataset, a historical database of TC positions, wind speeds, and geographical extents maintained by NOAA (Knapp et al., 2018). In conjunction with weather re-analysis data services such as ERA5 (Copernicus Climate Change Service, 2017), this set of labels could enable training on a large corpus of observational data. Crucially, the _IBTrACS_ data set is global and covers oceanic basins where tropical cyclones with the most destructive impact on LMICs are forming.
This avenue for future work could generalize our models from simulations to observational data, a key step towards early warning and detection systems for equitable adaptation to climate change.
|
2309.03483 | DetermiNet: A Large-Scale Diagnostic Dataset for Complex
Visually-Grounded Referencing using Determiners | State-of-the-art visual grounding models can achieve high detection accuracy,
but they are not designed to distinguish between all objects versus only
certain objects of interest. In natural language, in order to specify a
particular object or set of objects of interest, humans use determiners such as
"my", "either" and "those". Determiners, as an important word class, are a type
of schema in natural language about the reference or quantity of the noun.
Existing grounded referencing datasets place much less emphasis on determiners,
compared to other word classes such as nouns, verbs and adjectives. This makes
it difficult to develop models that understand the full variety and complexity
of object referencing. Thus, we have developed and released the DetermiNet
dataset , which comprises 250,000 synthetically generated images and captions
based on 25 determiners. The task is to predict bounding boxes to identify
objects of interest, constrained by the semantics of the given determiner. We
find that current state-of-the-art visual grounding models do not perform well
on the dataset, highlighting the limitations of existing models on reference
and quantification tasks. | Clarence Lee, M Ganesh Kumar, Cheston Tan | 2023-09-07T05:13:52Z | http://arxiv.org/abs/2309.03483v1 | DetermiNet: A Large-Scale Diagnostic Dataset for Complex Visually-Grounded Referencing using Determiners
###### Abstract
State-of-the-art visual grounding models can achieve high detection accuracy, but they are not designed to distinguish between all objects versus only certain objects of interest. In natural language, in order to specify a particular object or set of objects of interest, humans use determiners such as "my", "either" and "those". Determiners, as an important word class, are a type of schema in natural language about the reference or quantity of the noun. Existing grounded referencing datasets place much less emphasis on determiners, compared to other word classes such as nouns, verbs and adjectives. This makes it difficult to develop models that understand the full variety and complexity of object referencing. Thus, we have developed and released the DetermiNet dataset 1, which comprises 250,000 synthetically generated images and captions based on 25 determiners. The task is to predict bounding boxes to identify objects of interest, constrained by the semantics of the given determiner. We find that current state-of-the-art visual grounding models do not perform well on the dataset, highlighting the limitations of existing models on reference and quantification tasks.
Footnote 1: [https://github.com/clarence-lee-sheng/DetermiNet](https://github.com/clarence-lee-sheng/DetermiNet) contains the dataset and code
## 1 Introduction
Humans combine visual and linguistic cues to perform object localization, referencing and quantification tasks on a daily basis. For example, when someone says "pass me a cup", we first locate any cups present, and then select one cup based on other criteria, such as the nearest or cleanest one. Deep learning models [5, 9, 10, 11, 16, 19, 29, 37, 39, 41] can localize objects impressively to achieve the first part of the task. However, the ability to deal with a variety of complex referencing and quantification to achieve the second part of the task has yet to be properly investigated.
A _determiner_ is an English part-of-speech (word class) that quantifies or references the noun following it. For instance, the determiner in "my apple" versus "your apple" takes reference from different owners. The number of apples being referenced differs for "some apples" versus "all apples". Such semantic differences are succinctly captured by determiners, and not by other word classes.
Determiners like "a", "the" and "my" are ubiquitous and among the most common English words [1, 22]. Most children learn to use determiners at a near-mastery level by 3 years of age [3, 6]. Since determiners play an important role in the semantics of a phrase, they are distinctly classified in natural language processing libraries [26, 35].
Unlike numerous nouns, verbs and adjectives, there are only about 50 determiners in the English language [22]. Nevertheless, determiners can be highly complex, and a hardcoded or fixed-rule approach to using or understanding determiners simply will not work. For instance, take the determiner "some" - in its simplest form, "some" refers to a relatively small number or quantity. However, this can be highly noun-specific and context-specific, _e.g_. the absolute physical quantities for "add some salt" versus "drink some water" are very different. Furthermore, determiners that describe ownership or possession, such as "my" and "your", are highly context-dependent and dynamic, as possession can change on the fly, _e.g_. after handing over an object. In general, there are many such subtleties and complexities for determiners. Hence, a learning-based approach is needed, along with suitable training data.
If state-of-the-art models could learn a schema of determiners [33, 20, 34], it could facilitate flexible combination in novel contexts [21, 17, 28] and improve visual reasoning. However, existing vision-language models such as CLIP [31] and BLIP-2 [23] do not capture the semantic organization of determiners well (see Supplementary Material), and there is no visual grounding dataset that focuses on Determiners. Existing grounded referring expression datasets [4, 13, 15, 18, 27, 36, 38] exclusively focus on "the" and "a", making an unambiguous reference to a specific single object. Some examples include "bottle with a lid", "the
blue truck in the bottom right corner" and "a bird that is close to the baby in a pink shirt". In other words, existing datasets focus on the noun, verb and adjective aspects of referring expressions, with "the" and "a" as the main determiners used.
Hence, as a first step towards bridging this gap, we developed the _DetermiNet_ diagnostic dataset [15] to benchmark current state-of-the-art (SOTA) algorithms on their potential for learning determiner concepts. As with CLEVR [15], good performance on DetermiNet is not an end-goal in itself, as knowledge of the dataset generation process can be used to hand-craft toy models that will not generalize to real-world determiner usage. The dataset uses a bounding box localization task, set in a highly-constrained instruction task context, and deals only with simplified determiner definitions. Even with all these simplifications, we find that SOTA methods do not perform well.
DetermiNet contains 250,000 synthetic images and captions covering 25 determiners. The images are designed with the premise of two avatars interacting at a table with objects. The captions consist of a determiner followed by a noun; the task context is that the viewer is asking the avatar in the image to "pass me {_determiner noun_}".
The task is to choose a set of objects that is consistent with the given {_determiner noun_} pair. Examples are "those apples" or "either orange". Beyond just object detection, the task tests the ability to understand the logical semantics that define various determiners (see Fig. 1), such as selecting the correct number of requested objects. Simply returning all or random instances of the queried noun would not lead to high performance. Since the focus of DetermiNet is on the logical schema of determiners, high levels of visual realism and diversity are not crucial for benchmarking the ability of algorithms to learn determiners.
Finally, we analyze the performance of SOTA models that were pre-trained to perform visual grounding, so as to see if SOTA deep learning models can learn to understand the logical schema governing determiners.
In summary, our contributions are as follows:
1. We developed DetermiNet, the first large-scale diagnostic dataset covering the _determiners_ word class, with 250,000 examples across 25 determiners from all four main types of determiners (Articles, Possessives, Demonstratives and Quantifiers).
2. We show that the core task of learning determiners is very challenging - even an oracle model struggles to learn the determiner schema from a few hundred examples and requires a large dataset.
3. We find that state-of-the-art visually-grounded models show only moderate results on DetermiNet, hence much more work is needed to perform well on the end-to-end task.
## 2 Related work
### Datasets
There has been substantial work in developing datasets for visual question answering and referring expressions. However, referring expression datasets that include egocentric points of view and provide full coverage of the determiner class are limited (see Table 1). While a dataset like Flickr30k Entities [30] contains some determiners, its coverage is narrow, with only 5.33% being non-articles. Furthermore, the captions do not consistently capture the semantics of the determiner. For example, although one particular caption specifies "some people...", all the people (_i.e_. many) are labelled instead of just a relatively small number of people. Lastly, Flickr30k Entities is used as a phrase grounding dataset rather than a referring expression dataset, hence it is excluded from Table 1.
### Tasks
A greater confluence between computer vision and natural language processing research has given rise to increasingly complex mixed-modality tasks such as Visual Question Answering (VQA) [4, 15, 36] and Referring Expression Comprehension (REC) [18, 27, 38].
The datasets for VQA and REC are similar in that the input comprises images or videos, and a language query is given as a caption. For VQA tasks, the model has to respond to the query by classifying the correct answer out of several potential choices. REC tasks are considered to be a harder problem as the model has to respond by predicting the bounding box coordinates or segmentation masks that identify the object of interest. Nevertheless, both tasks require a combined understanding of language attributes such as colour, shape and size, and visual attributes such as object classes and location.
The DetermiNet dataset is related to the REC task, where the model needs to identify the object of interest either using bounding boxes or segmentation masks. However, our task differs from existing REC tasks in two ways.
Firstly, DetermiNet's captions involve only two components, a determiner followed by a noun, instead of descriptive adjectives such as colours or shapes [15], or other nouns such as people or objects [18]. This forces models to learn
\begin{table}
\begin{tabular}{l|c|c|c|c|c|c|c}
**Datasets** & **A** & **P** & **D** & **Q** & **View** & **Images** & **Type** \\ \hline
RefCOCO [18] & Y & N & N & N & Exo & 19,994 & Real \\
RefCOCO+ [18] & Y & N & N & N & Exo & 19,992 & Real \\
RefCOCOg [27] & Y & N & N & N & Exo & 26,711 & Real \\
CLEVR-Ref+ [25] & Y & N & N & N & Exo & 99,992 & Synth \\
YouRefIt [8] & Y & N & N & N & Exo & 497,348 & Real \\ \hline
DetermiNet & Y & Y & Y & Y & **Ego** & 250,000 & Synth \\
\end{tabular}
\end{table}
Table 1: Comparison of datasets for referring expressions [14]. A, P, D, Q, Exo and Ego stand for Articles, Possessives, Demonstratives, Quantifiers, Exocentric and Egocentric respectively.
and reason using a new word class, instead of using visual features and spatial representations pre-learned from other visual datasets.
Secondly, REC tasks usually test the identification of a single object. However, DetermiNet requires models to predict multiple objects based on the query given, instead of identifying only a single instance. For example, if an image has three apples and two carrots and the query is "all apples", the model needs to predict all three bounding boxes instead of a single one. This is the biggest difference between DetermiNet and other REC tasks. DetermiNet allows the development of models to identify multiple objects that correspond to the determiner schema.
Since DetermiNet allows multiple solutions to be proposed, there can also be multiple combinations of possible solutions. For example, given the same image with three apples and two carrots, and the query "any apples", the total number of correct solutions quickly increases to \(C(3,1)+C(3,2)+C(3,3)=7\). The task evaluation metric should not penalise any valid solution and should accommodate the model's prediction accordingly. To our knowledge, there are no REC or VQA tasks that support multiple combinations of solutions.
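For concreteness, the count of valid selections in this example can be enumerated directly; the short Python sketch below (with hypothetical object identifiers) lists every non-empty subset of the three apples for the query "any apples".

```python
from itertools import combinations

apples = ["apple_1", "apple_2", "apple_3"]

# "any apples": every non-empty subset of the three apples is a valid answer.
valid = [set(c) for r in range(1, len(apples) + 1)
         for c in combinations(apples, r)]
print(len(valid))  # 7 = C(3,1) + C(3,2) + C(3,3)
```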
### Models
Existing visually grounded models can combine language and visual modalities to achieve superior performance on many downstream tasks such as those in Grounded Language and Visual Question and Answering.
Dual Encoder models such as MDETR [16] and GLIP [42] use an image encoder and a text encoder to encode the inputs before implementing a deep fusion or transformer layer to train the model on the image-caption pairs. Seq2Seq models such as OFA [39] follow GPT [32] in processing multimodal inputs using a byte sequence representation. A unified vocabulary approach to vision and language tasks is taken to perform the grounding tasks. SOTA models such as MDETR and OFA perform very well on visual grounding tasks, achieving 87.5% AP and 92.0% AP respectively on the RefCOCO dataset.
However, these models have been largely evaluated against referring expression datasets that depend on the spatial and visual attributes of objects. Hence, a more challenging dataset is needed to determine whether these SOTA models are robust enough to solve language-based and egocentric object referencing, as in natural language.
Figure 1: Organization and characteristics of the 25 determiners in DetermiNet.
## 3 The DetermiNet dataset
DetermiNet is the first visuo-linguistic dataset based on the determiners word class. Fig. 1 presents our determiner schema, which specifies which object and how many of those objects should be selected. The dataset was generated synthetically using this schema and focuses on the referencing and quantification of noun phrases. Determiners are largely used from an egocentric perspective, and their properties require models to perform deeper and more complex reasoning to accomplish the visual grounding task. Careful curation of the dataset was conducted to account for these complexities.
To provide a comprehensive coverage, our dataset includes all four main types of determiners [2, 22], namely:
**- Articles:** identify nouns which the speaker is referring to
**- Possessives:** signify ownership of the noun
**- Demonstratives:** isolate nouns that are being referred to
**- Quantifiers:** describe the amount of the referred noun
### Dataset design and construction
DetermiNet is a synthetically generated dataset based on an end-to-end pipeline developed in Unity. Scene and phrase generation were done through predefined scene configurations based on the scene chart. Since the logic governing determiners is unrelated to the level of visual realism, DetermiNet follows the approach of synthetic data with visual simplicity [12, 15, 24, 40]. For example, CLEVR [15] and CLEVRER [40] use only 3 shapes, 2 materials and 8 colors; the background is uniform.
### Dataset statistics
DetermiNet has a comprehensive coverage of 25 determiners. We generated 10,000 image-caption pairs per determiner, totaling 250,000 samples. We describe the breakdown of our train, test, and validation splits in Table 2.
In total, our dataset includes 15 object classes: 5 countables starting with consonant sounds (_e.g_. "a lemon"), 5 countables starting with vowel sounds (_e.g_. "an apple"), and 5 uncountable substances (_e.g_. "some grape juice"). Ground truths are determined by the object which the determiner is referring to. This referred object is then labelled as part of the ground truth annotations (Fig. 2). Variations indicate the number of different permutations of the object, while the number of objects spawned indicates the possible count of that particular item spawned in the scene. A summary of the scene and object variations is shown in Table 3.
### Scene generation and ground truth annotation
DetermiNet is based on the interaction of two avatars at a table. We randomly spawn the positions of objects, as well as generate different perspectives. Configuration parameters were used to determine the construction of each scene, providing a unified interface for scene generation. These configuration parameters follow the tree in Fig. 1, and can be adjusted to the user's own definitions. Attributes include type of object (countability), number of referred objects (plurality), spawn locations and distance from the viewer. Egocentric viewpoints of the viewer were generated by attaching the camera to the viewer's head and directing the camera to focus towards the center of the table. We varied the avatars' positions to generate multiple perspectives.
Images were rendered using Unity3D. Camera projections were used to check for visibility of the spawned objects and collision detectors were put in place to ensure that objects did not intersect. Different objects (tray, tables) were also sampled to be used as random spawn locations.
Mesh vertices were projected onto the camera's 2D space to extract bounding boxes for all objects, modeling a perfect object detector. Unity's Image Synthesis module was used to generate object segmentation masks.
### Phrase generation
DetermiNet uses the task context of "pass me {_determiner noun_}", _e.g_. "pass me an apple", "pass me that apple". For simplicity, we omitted "pass me". Hence, the phrases are simple captions with only a determiner and its noun phrase (Fig. 2), _e.g_. "an apple", "this apple", "some grape juice". Additionally, we follow this phrasing format while keeping errors in grammatical structure minimal. For example, "pass me all apples" is sufficient to capture the task instead of "pass me all the apples".
### Evaluation metric for DetermiNet
Since the task is to evaluate bounding box predictions, we used the detection evaluation metric used by COCO,
\begin{table}
\begin{tabular}{l|c|c|c}
**Splits** & **Samples** & **Objects** & **Ground truth b-boxes** \\ \hline
Train & 175000 & 2799790 & 460200 \\
Validation & 25000 & 399654 & 66023 \\
Test & 50000 & 799756 & 131460 \\
\end{tabular}
\end{table}
Table 2: Statistics for train, test and validation splits
\begin{table}
\begin{tabular}{l|c|c}
**Object** & **Variations** & **No. spawned in scene** \\ \hline
_Referred_ objects & 15 & 1-9 \\
All objects & 15 & 10-20 \\
Countables (consonant) & 5 & 1-20 \\
Countables (vowels) & 5 & 1-20 \\
Uncountables & 5 & 1-20 \\
Trays & 2 & 2 \\
Tables & 3 & - \\
Camera positions & 3 & - \\ \hline
\end{tabular}
\end{table}
Table 3: Scene variations
specifically the average precision (AP) metric with IoU thresholds ranging from 0.5 to 0.95.
The DetermiNet dataset contains scenarios where different combinations of solutions can be correct. For instance, for an image with three apples and a query specifying "an apple", there are three equally correct solutions. However, a correct bounding box prediction should only contain one bounding box instead of three. If all three bounding boxes are predicted, the evaluation metric should evaluate the prediction as one true positive and two false positives.
To account for multiple correct solutions during evaluation, we developed a ground truth correction function that compares the model's predicted bounding boxes against all the relevant bounding boxes that satisfy both the determiner and noun conditions. The function chooses the ground truth bounding box that has the highest IoU with the predicted bounding box, and discards the rest of the relevant ground truth bounding boxes based on the quantity specified by the determiner.
The modified ground truth annotations are then used to evaluate the predictions. This way, if a model predicts three bounding boxes instead of one, the prediction with the highest IoU and prediction score will be treated as true positive, and the other two predictions treated as false positive.
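The sketch below illustrates the correction logic described above; it is an illustration rather than the released implementation, and the function and variable names are hypothetical.

```python
def iou(a, b):
    """IoU of two boxes given as [x1, y1, x2, y2]."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area = lambda box: (box[2] - box[0]) * (box[3] - box[1])
    union = area(a) + area(b) - inter
    return inter / union if union > 0 else 0.0

def correct_ground_truth(pred_boxes, relevant_gt_boxes, quantity):
    """Keep, for evaluation, the `quantity` relevant ground-truth boxes that best
    match the predictions; the remaining relevant boxes are discarded so that
    alternative valid selections are not penalised."""
    kept, remaining = [], list(range(len(relevant_gt_boxes)))
    for pred in pred_boxes:
        if len(kept) == quantity or not remaining:
            break
        best = max(remaining, key=lambda j: iou(pred, relevant_gt_boxes[j]))
        kept.append(relevant_gt_boxes[best])
        remaining.remove(best)
    return kept
```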
## 4 Experiments
In this section, we verify the challenge posed by the dataset, namely referring to or quantifying objects of interest, using five models. Since the DetermiNet task is similar to the REC task, models need to predict bounding boxes, which were evaluated using the Average Precision (AP) evaluation metric. Before evaluation, the ground truth bounding box annotations were modified to account for multiple combinations of correct solutions.
### Random selection model
The first model is a random bounding box selection model (Fig. 3). This model has two components. The first is a perfect object detector (see 3.3) that tags all objects with class labels and their corresponding bounding boxes.
The second component sampled prediction scores between 0 and 1 from a uniform distribution and generated positive and negative masks based on a threshold of 0.5, which were used to select bounding boxes as predictions.
In short, the perfect object detector generated a list of bounding boxes and the attention mask randomly selected a subset of bounding boxes as predictions without using information of either determiner or noun.
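A minimal sketch of this baseline, assuming the perfect detector has already returned a list of boxes (all names hypothetical):

```python
import numpy as np

def random_selection(detected_boxes, seed=0):
    """Random-baseline sketch: score every perfectly-detected box uniformly in
    [0, 1] and keep those above 0.5, ignoring both determiner and noun."""
    rng = np.random.default_rng(seed)
    scores = rng.uniform(0.0, 1.0, size=len(detected_boxes))
    return [(box, s) for box, s in zip(detected_boxes, scores) if s > 0.5]
```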
### Neuro-Symbolic oracle model
The neuro-symbolic model (Fig. 3) was developed to isolate the main challenge of the dataset, which is to classify objects of interest based on the concept specified by the determiner. Hence, this model tackles the DetermiNet dataset as a classification problem, similar to VQA models.
Like the random selection model, a perfect object detector was used to identify all the object bounding boxes, class labels and the volume of liquid within each object. These three pieces of information were fed to a single feedforward layer with 128 units to embed the visual information.
A perfect text encoder converted the two-part caption specifying the determiner and the noun into two one-hot encoded vectors. The first one-hot vector of length 25 represented the determiner, and the second one-hot vector of length 16 represented the noun. The two vectors were concatenated and fed to another feedforward layer with 128 units to embed textual information.
The output of the two embedding layers were concatenated and fed to two feedforward layers, each with 256 units, followed by a final classification layer with sigmoid activation function.
A ground truth attention mask was generated by comparing all the objects detected in the image against the ground truth bounding boxes, such that applying the mask to the list of object bounding boxes detected by the perfect object detector yields the ground truth bounding boxes. The model was trained to predict the ground truth attention mask using binary cross entropy for 30 epochs.
The model's prediction scores from the classification layer and bounding boxes extracted by the perfect object detector were used for evaluation. The neuro-symbolic model can be considered to be an **oracle model**, as it received ground-truth information about all the objects in the image, and it only needs to learn to predict the correct bounding boxes given the determiner and noun.
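A minimal PyTorch sketch of such an oracle head is given below; the exact per-object feature composition (bounding box, a 16-way class one-hot and a liquid-volume scalar) is an assumption made for illustration, not the released architecture.

```python
import torch
import torch.nn as nn

class OracleClassifier(nn.Module):
    """Sketch of the neuro-symbolic oracle head: a visual embedding per object,
    a text embedding for the (determiner, noun) caption, and a per-object
    selection probability."""
    def __init__(self, obj_dim=4 + 16 + 1, txt_dim=25 + 16):
        super().__init__()
        self.visual = nn.Sequential(nn.Linear(obj_dim, 128), nn.ReLU())
        self.text = nn.Sequential(nn.Linear(txt_dim, 128), nn.ReLU())
        self.head = nn.Sequential(
            nn.Linear(256, 256), nn.ReLU(),
            nn.Linear(256, 256), nn.ReLU(),
            nn.Linear(256, 1), nn.Sigmoid(),   # per-object selection probability
        )

    def forward(self, obj_feats, caption):
        # obj_feats: (n_objects, obj_dim); caption: (txt_dim,) one-hot pair
        v = self.visual(obj_feats)
        t = self.text(caption).expand(obj_feats.shape[0], -1)
        return self.head(torch.cat([v, t], dim=-1)).squeeze(-1)

# Training would minimise nn.BCELoss() between this predicted mask and the
# ground-truth attention mask, as described above.
```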
Figure 2: Examples from DetermiNet, with image, phrase, target bounding boxes and segmentation masks shown.
### SOTA deep learning models
To verify the full challenge posed by DetermiNet, we fine-tuned three SOTA visual grounding models, OFA[39] with ResNet-152 backbone, GLIP [42] and MDETR [16] with ResNet-101 backbone for 5 epochs on our dataset.
OFA's weights were pretrained on RefCOCO and VG datasets, GLIP's weights were pretrained on O365, GoldC, CC3M and SBU datasets while MDETR's weights were pretrained on the RefCOCO, VG and Flickr datasets. Both image and captions were passed as inputs to the SOTA models, and the bounding box predictions were obtained as outputs. The object class prediction was not relevant to our DetermiNet task, so we set category ID to 1 for all predictions. While GLIP and MDETR models returned multiple bounding box predictions and scores, OFA is designed to predict only one bounding box per image.
## 5 Benchmarking models on DetermiNet
After correcting the ground truth annotations to account for multiple solutions, the random bounding box selection model demonstrates the worst performance of 9.8% AP. Even though the random model has the perfect object detection module, randomly selecting different quantities of different objects without considering the textual information leads to poor performance. This can be treated as the lower-bound performance for the DetermiNet dataset.
In contrast, the oracle demonstrates the highest performance of 93.5% AP (Table 4) as it receives object class and textual information while only needing to learn the determiner schema. Since the oracle model is only tested on semantics to provide a **rough upper-bound** for DetermiNet, its performance **should not be directly compared** against end-to-end models which learn both object detection and determiner semantics, and whose learning performance is difficult to disentangle. When the oracle uses MDETR object detection outputs instead of perfect detection, overall AP fell to 62.8%.
When comparing end-to-end finetuned models, OFA performs the worst, as it is only able to predict one bounding box, similar to the REC task condition, contributing to high false negatives. GLIP achieves 55.0% while MDETR achieves the best performance of 70.6% AP (Table 4). Although MDETR's bounding box predictions identify the referenced objects impressively, the model does not constrain its predictions according to the determiner schema, incurring high false positive predictions. Conversely, MDETR performs well on uncountable quantifiers and possessives (Fig. 4). This is likely because MDETR gets the raw RGB image as input, allowing it to understand and reason about volume levels within a cup or the presence of the referred object on the tray.
### Embedding of determiners
To study how the dataset is represented in both an untrained and a trained network, we extracted the neural activity of the layer before the attention mask classifier. The neural activity was clustered using Linear Discriminant Analysis, with the determiner labels as targets. Before training, the neural representations corresponding to the 25 determiners were highly overlapping and the centroid coordinates for each determiner class occupied the same space (Fig. 5, left).
As training progressed, the embedding of the 25 determiners evolved into clusters (Fig. 5, middle). The dendrogram (Fig. 5, right) represents the Euclidean distance between centroids after training. With training, the network learns a representation that seemingly corresponds to the organization of determiners in Figure 1.
Neural representations for "a" and "an" occupy the same subspace as they obey the same articles determiner schema. We can see similar clustering of determiner subclasses such as "both" and "neither" which fall under quantifiers and "this" and "that" which fall under demonstratives. However, some determiners such as "the" and "our" do not occupy the same subspaces as articles or possessives, suggesting that the model struggles to disentangle them. Surprisingly, unlike the oracle model, text encoders in established vision-language models such as CLIP [31] and BLIP-2 [23] do not demonstrate distinct organization of determiners **(see Supplementary Material).**
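The clustering step itself can be reproduced along the following lines; this scikit-learn sketch uses random placeholder activations, since the actual layer activations are model-specific.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

# `features` would be the activations of the layer before the attention-mask
# classifier (one row per sample); `determiner_ids` the 25-class labels.
# Random data is used here only to make the sketch runnable.
features = np.random.randn(1000, 256)
determiner_ids = np.random.randint(0, 25, size=1000)

lda = LinearDiscriminantAnalysis(n_components=2)
embedded = lda.fit_transform(features, determiner_ids)

# Centroid per determiner class, e.g. for the dendrogram of Fig. 5 (right).
centroids = np.stack([embedded[determiner_ids == c].mean(axis=0)
                      for c in range(25)])
```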
### Ablation study
To determine the importance of determiners and nouns in the DetermiNet task, we conducted ablation studies using oracle and MDETR models where the determiner, noun or both determiner and noun were masked during evaluation.
Masking determiners while feeding in the noun is similar to a query-based object detection task. The decrease in performance for the oracle model was 22.2% while MDETR suffered a decrease of 14.3%, suggesting that MDETR learnt to predict bounding boxes using most of the determiner concepts, though not as well as the oracle model.
When the determiner was given but the noun was masked, AP dropped significantly since the object to be identified was not known. Finally, when both determiner and noun were omitted, the oracle performed similarly to the lower bound random model while MDETR performed much worse since it also had to perform object detection.
Nevertheless, SOTA models do learn some determiner concepts, and lower performance can be attributed to errors in both object detection and bounding box classification.
### Dataset efficiency
Since 10,000 examples per determiner in the full dataset is presumably far beyond what humans require to learn determiners well, we trained the oracle and MDETR models on randomly sampled subsets (N=6) of DetermiNet training samples to determine how much data is needed for the models to learn the determiner schema.
Since the oracle has a perfect object detector and text encoder, the increase in oracle performance is attributed solely to the learning of determiner schema. Despite the isolation of training, the oracle model struggles to learn the concept of determiners even with 1,000 examples per determiner. This could be because the oracle model has 188,308 trainable parameters and a large dataset is needed to optimize the weights accordingly. Conversely, MDETR has 185 million parameters but was pre-trained to perform object detection. After fine-tuning MDETR with 1,000 examples per determiner, its performance matches the ablation condition where the model can achieve 56.3% without needing to learn determiners (Table 5), suggesting that the faster improvement is likely due to improved object detection in DetermiNet, rather than learning about determiners. Nevertheless, DetermiNet follows a scaling law that is consistent with other visual recognition tasks.
## 6 Transfer of learning to real images
We curated a dataset with 100 real world images and captions using images from COCO [7]. The oracle model achieved decent zero-shot performance on the real-image samples (78.1%), demonstrating a neural network's ability to generalize to real images if object detection works well.
Although MDETR was pre-trained on RefCOCO, it struggled to refer and quantify individual objects according to the determiner schema (10.4%) since RefCOCO did not account for such determiner concepts (Table 1) and instead predicted single bounding boxes for a collection of objects (Fig. 6). Fine-tuning MDETR on the synthetic DetermiNet significantly increased performance to 19.5% as the model learned to identify and quantify each object (Fig. 6, top row), suggesting that the determiner concepts learned
\begin{table}
\begin{tabular}{l|c|c|c}
**Samples** & **10** & **100** & **1000** \\ \hline
Oracle & 17.9\(\pm\)0.6 & 29.6\(\pm\)0.4 & 44.7\(\pm\)3.3 \\ \hline
MDETR & 2.8\(\pm\)1.0 & 33.5\(\pm\)1.2 & 55.0\(\pm\)0.8 \\ \hline
\end{tabular}
\end{table}
Table 6: AP@IoU=0.5:0.95 with standard deviation attained after training models on 10, 100, 1000 samples per determiner.
\begin{table}
\begin{tabular}{c|c|c}
**Ablation condition** & **Oracle** & **MDETR** \\ \hline \hline
Noun+ / Det+ & 93.5 & 70.6 \\
Noun+ / Det- & 71.3 & 56.3 \\
Noun- / Det+ & 11.3 & 11.3 \\
Noun- / Det- & 9.8 & 0.2 \\ \hline
\end{tabular}
\end{table}
Table 5: Ablation study with masked captions. Performance reported AP@IoU=0.5:0.95
from the synthetic dataset transferred to real images to a certain extent. However, MDETR still struggles with some determiner concepts such as "half" (Fig. 6, bottom row). The far lower MDETR performance could be due to poor object detection, separate from learning the semantics of determiners. The real-image test samples will be made available along with the synthetic DetermiNet.
## 7 Current limitations
Since the dataset focuses on referencing and quantification, we omitted the use of wh-determiners (_e.g_. "where", "what"), which are mainly used in question answering tasks. Since we constrained our captions to fit the task context of "pass me {determiner, noun}", comparison determiners such as "more" and "less" were left out for now, as they require multiple sets of nouns. Furthermore, gender-specific possessives such as "his" and "her" were omitted, as gender recognition is not the focus of this work. Additionally, the composition of multiple determiners (_e.g_. "pass me some of those apples") will be explored in future work.
Parameter-efficient finetuning, or adding the semantic module of the oracle to a trained detector such as MDETR, can serve as an additional evaluation to disentangle the learning performances of object detection and determiner semantics in end-to-end models.
## 8 Conclusion
We present the DetermiNet dataset to determine if models can learn object referencing and quantification for all four major determiner categories. The dataset accommodates multiple combinations of possible solutions, as in a natural language context. Since the dataset images and ground truth annotations were synthetically generated, it allows for rapid reconfiguration of parameters, scenes and object classes to increase the challenge posed by the dataset.
Our experiments demonstrate that although state-of-the-art visual grounding models are able to identify objects of interest, they do not perform well on the overall task. While they can learn the semantics of some determiners and transfer the concept to real images, they require large amounts of data to learn the determiner schema and struggle to handle ambiguity when considering multiple combinations of possible solutions.
In summary, DetermiNet highlights determiners as an important and complex but neglected word class, and formulates a common task framework for all 4 determiner types. It shows the current limitations of visual grounding models in learning determiner schemas for referencing and quantification. Good oracle results on real images suggest that the "determiner logic module" could be used for captioning, VQA, etc.
## 9 Acknowledgements
This was supported by an A*STAR CRF award (C.T.), and by the Design and Artificial Intelligence department, Singapore University of Technology and Design (C.L.).
\begin{table}
\begin{tabular}{l|c}
**Models (Tasks pretrained on)** & **AP@IoU=0.5:0.95** \\ \hline
Oracle & 78.1 \\ \hline
MDETR (Pretrained) & 10.4 \\
MDETR (Finetuned on DetermiNet) & 19.5 \\
\end{tabular}
\end{table}
Table 7: Zero-shot evaluation on real-image dataset
Figure 5: Clustering 25 determiners represented in the last feature layer of the oracle model using LDA.
Figure 6: Ground truth, pretrained MDETR, MDETR fine-tuned on DetermiNet and Oracle model predictions on 100 real images. |
2310.20339 | ExoRecovery: Push Recovery with a Lower-Limb Exoskeleton based on
Stepping Strategy | Balance loss is a significant challenge in lower-limb exoskeleton
applications, as it can lead to potential falls, thereby impacting user safety
and confidence. We introduce a control framework for omnidirectional recovery
step planning by online optimization of step duration and position in response
to external forces. We map the step duration and position to a human-like foot
trajectory, which is then translated into joint trajectories using inverse
kinematics. These trajectories are executed via an impedance controller,
promoting cooperation between the exoskeleton and the user.
Moreover, our framework is based on the concept of the divergent component of
motion, also known as the Extrapolated Center of Mass, which has been
established as a consistent dynamic for describing human movement. This
real-time online optimization framework enhances the adaptability of
exoskeleton users under unforeseen forces thereby improving the overall user
stability and safety. To validate the effectiveness of our approach,
simulations, and experiments were conducted. Our push recovery experiments
employing the exoskeleton in zero-torque mode (without assistance) exhibit an
alignment with the exoskeleton's recovery assistance mode, that shows the
consistency of the control framework with human intention. To the best of our
knowledge, this is the first cooperative push recovery framework for the
lower-limb human exoskeleton that relies on the simultaneous adaptation of
intra-stride parameters in both frontal and sagittal directions. The proposed
control scheme has been validated with human subject experiments. | Zeynep Özge Orhan, Milad Shafiee, Vincent Juillard, Joel Coelho Oliveira, Auke Ijspeert, Mohamed Bouri | 2023-10-31T10:24:37Z | http://arxiv.org/abs/2310.20339v1 | # ExoRecovery: Push Recovery with a Lower-Limb Exoskeleton
###### Abstract
Balance loss is a significant challenge in lower-limb exoskeleton applications, as it can lead to potential falls, thereby impacting user safety and confidence. We introduce a control framework for omnidirectional recovery step planning by online optimization of step duration and position in response to external forces. We map the step duration and position to a human-like foot trajectory, which is then translated into joint trajectories using inverse kinematics. These trajectories are executed via an impedance controller, promoting cooperation between the exoskeleton and the user. Moreover, our framework is based on the concept of the divergent component of motion, also known as the Extrapolated Center of Mass, which has been established as a consistent dynamic for describing human movement. This real-time online optimization framework enhances the adaptability of exoskeleton users under unforeseen forces thereby improving the overall user stability and safety. To validate the effectiveness of our approach, simulations, and experiments were conducted. Our push recovery experiments employing the exoskeleton in zero-torque mode (without assistance) exhibit an alignment with the exoskeleton's recovery assistance mode, that shows the consistency of the control framework with human intention. To the best of our knowledge, this is the first cooperative push recovery framework for the lower-limb human exoskeleton that relies on the simultaneous adaptation of intra-stride parameters in both frontal and sagittal directions. The proposed control scheme has been validated with human subject experiments.
## I Introduction
Lower-limb exoskeletons (LLEs), while promising for assisting those with walking impairments, face significant challenges in maintaining balance and stability, particularly in real-world scenarios with various external disturbances [1, 2, 3]. Balance is a generic term describing the body posture dynamics that prevent falling, and it is a crucial ability for upright standing [4, 5]. Researchers have identified three elementary strategies of human balance recovery: the ankle strategy, the hip strategy, and the stepping strategy with variable step duration [6, 4, 7, 8]. When disturbances are too large to handle with the ankle and hip strategies, they are tackled with the stepping strategy.
When individuals experience balance perturbations, executing a well-coordinated step can help restore equilibrium and prevent falls. By adjusting the position of the feet, the stepping strategy allows individuals to shift their center of mass (CoM), stabilize their posture, and counteract external disturbances. The implementation of an effective stepping strategy with a suitable step length and step duration is particularly important in LLEs, as it enables users to regain balance and navigate safely in challenging environments.
Although the majority of the studies focused on gait assistance with LLEs, balance control in standing and walking is crucial for the control of an LLE for posture and gait assistance [9, 10]. Recently, the balance strategies of humans also inspired researchers in the field of LLEs to assist the balance of users [10, 11].
Most of the effort in standing balance assistance takes advantage of actuated ankle joints [12, 13, 14, 15, 16, 17]. Other studies investigated the ankle, hip, and combined strategies in terms of subject-specific stability limits, slip prevention, and the importance of hip abd./add. for weight shift and lateral foot placement [1, 18, 19].
Duburcq et al. [20] pioneered a push recovery controller based on reinforcement learning, marking a substantial advancement. They successfully demonstrated reactive push recovery with a humanoid robot, utilizing deep reinforcement learning. Notably, this study did not involve direct interaction between the exoskeleton and the user, and the exoskeleton functioned primarily as a humanoid robot.
Zhang et al. [21] proposed a method based on the capture point (CP) concept to enhance the balance restoration capabilities of LLEs under significant interference conditions. However, this study did not address step duration optimization, and its focus was exclusively
Fig. 1: The 2nd prototype of autonomy lower-limb exoskeleton. A simplified kinematics model of the autonomy exoskeleton is illustrated.
on the sagittal plane. Similarly, in [22], an xCoM-based balance assistance strategy in the sagittal plane is proposed for disturbances in the forward and backward directions. Vallery et al. [23] proposed another xCoM-based balance controller that provides calculated feed-forward trajectories. The controller is triggered in case of balance loss and is used to keep users in an upright standing position for a stationary exoskeleton. In [24], a balance assistance strategy in the sagittal plane is suggested based on a zero-moment point (ZMP) model-based method. The ZMP has been used to generate the trajectory of movement during the stance phase, and assistive torques are designed based on the modulation of virtual potential energy.
It's worth noting that the CP, Divergent Component of Motion (DCM) and the xCoM share the same definition. The terms CP and DCM are commonly used within the robotics community, while xCoM is more frequently employed in the biomechanics community. Although CP and ZMP-based methods are widely used in the literature on humanoids [25, 26, 27, 28, 29] and also for walking pattern generation for exoskeletons [30, 19, 31], it has not been investigated in detail for step position and duration adaptation of exoskeletons in sagittal and frontal planes. Furthermore, these methods were tested in simulations, leaving room for further hardware validation [32, 33, 34, 35, 36]. Farkhatdinov et al. [37] introduced a push recovery mechanism for the human-exoskeleton. However, this approach primarily focused on applying assistive torque without model-based optimization or consideration of the system's dynamics.
As the exoskeleton user actively participates in the recovery step, it is crucial that the assistance does not override the user's behaviors. Instead, the exoskeleton and the user symbiotically move toward the human's desired recovery pose. To the best of our knowledge, currently, there is no robotic exoskeleton that supports adaptive step position and duration for balance recovery in the sagittal and frontal planes while collaboratively interacting with the exoskeleton user.
The present work describes an experimental study on exoskeleton-assisted recovery stepping together with its theoretical background for balance recovery in case of perturbation in healthy individuals. The introduced framework offers targeted joint trajectories to facilitate adjustments in stepping location, encompassing hip abd./add. for step-width adaptation, as well as knee and hip flex./ext. for step-length adaptation. In particular, the contributions of this paper are the following:
* We have developed a user-cooperative omnidirectional recovery stepping control strategy in case of a balance loss. The suggested strategy is implemented and verified through simulations and conducted in-lab experimental evaluations with human subjects.
* We propose a framework to detect a possible balance loss under severe perturbations based on divergent component of motion dynamics.
* We have implemented a bio-inspired recovery step trajectory based on human foot position during gait.
## II Methods
### _Linear Inverted Pendulum Model (LIPM)_
The LIPM has been widely utilized to describe the dynamics of the CoM for bipedal locomotion [38]. The LIPM assumes a constant rate of change of centroidal angular momentum and that the CoM moves within a horizontal plane at a constant height. In [17], it has been suggested that for symbiotic human-exoskeleton balance control, CoM kinematics-based feedback could be beneficial to precede physiological responses. Also, these assumptions are suitable for planning humanoid robot locomotion, as research on human walking indicates minimal variations in centroidal angular momentum and CoM height [39]. Based on these assumptions, the equations of motion for the LIPM can be derived as follows:
\[\ddot{\mathbf{x}}\!=\!\omega^{2}(\mathbf{x}\!-\!\mathbf{cop}) \tag{1}\]
in which \(\mathbf{x}\!=\![x_{com}\!,\!y_{com}]^{T}\) is the horizontal position of the CoM, \(\omega\!=\!\sqrt{\frac{g}{\Delta z}}\) is the natural frequency of the LIPM, and \(\mathbf{cop}\!=\![cop_{x}\!,\!cop_{y}]^{T}\) is the horizontal position of the center of pressure (CoP).
### _Divergent Component of Motion_
In this section, we provide an overview of the DCM concept's background. The dynamics of the CoM, as modeled by the LIPM, can be split into stable and unstable components. The unstable component is referred to as the DCM and is defined as follows:
\[\boldsymbol{\xi}\!=\!\mathbf{x}\!+\!\frac{\dot{\mathbf{x}}}{\omega} \tag{2}\]
Throughout this study, DCM is represented as \(\xi\) as the notation in [40]. From (2), the CoM dynamics is given by:
\[\dot{\mathbf{x}}\!=\!\omega(\boldsymbol{\xi}\!-\!\mathbf{x}) \tag{3}\]
By differentiating (2) and substituting (1), the DCM dynamics is expressed as :
\[\dot{\boldsymbol{\xi}}\!=\!\omega(\boldsymbol{\xi}\!-\!\mathbf{cop}) \tag{4}\]
Fig. (2) illustrates the relationship between DCM dynamics, CoM, and the CoP. By re-arranging DCM dynamics (4), the following ordinary differential equation (ODE) holds:
\[\dot{\boldsymbol{\xi}}\!-\!\omega\boldsymbol{\xi}\!=\!-\omega\;\mathbf{cop}_{0} \tag{5}\]
The solution to (5) writes:
\[\boldsymbol{\xi}(t)=e^{\int\omega dt}\left[\int(-\mathbf{cop}_{0}\,\omega)e^{-\int\omega dt}dt+\mathbf{C}\right], \tag{6}\]
where \(C\!\in\!\mathbb{R}^{2}\) is the vector of unknown coefficients that can be found by imposing the boundary conditions.
Fig. 2: DCM, CoM and CoP Points correlations for Centroidal Dynamics
Therefore, we can find these coefficients by solving the problem (6) either as an initial value problem, namely
\[\boldsymbol{\xi}(0)\!=\!\boldsymbol{\xi}_{0}\!=\!\mathbf{c}\mathbf{p}_{0}\!+ \!\mathbf{C}_{0}, \tag{7}\]
or as a final value problem:
\[\boldsymbol{\xi}(T)\!=\!\boldsymbol{\xi}_{T}\!=\!\mathbf{c}\mathbf{p}_{0}\!+ \!\mathbf{C}_{T}\,e^{\omega T}. \tag{8}\]
Therefore, by solving the equation (4) as an initial value problem, we arrive at the following equation that represents the time evolution of the DCM:
\[\boldsymbol{\xi}\!=\!(\boldsymbol{\xi}_{0}\!-\!\mathbf{c}\mathbf{p}_{0})\exp (\omega t)\!+\!\mathbf{c}\mathbf{p}_{0} \tag{9}\]
We also can solve the CoM dynamics (3) by treating it as an initial value problem:
\[\mathbf{x}\!=\!(\mathbf{x_{0}}\!-\!\boldsymbol{\xi}_{0})\exp(-\omega t)\!+ \!\boldsymbol{\xi}_{0} \tag{10}\]
As evident from the above equation, the CoM exhibits stable dynamics, with the exponential term being negative. However, the DCM exhibits unstable dynamics, characterized by a positive exponential term. This indicates that the difference between \(\xi_{0}\) and \(cop_{0}\) increases exponentially over time. Therefore, a prerequisite for ensuring the stability of the CoM trajectory is that the DCM trajectory remains stable. As stated in [41], the concept of DCM provides a relatively straightforward approach for formulating stability requirements in walking. A simple rule proves effective in ensuring walking stability: when placing the foot, position the Center of Pressure (CoP) at a certain distance behind and outward from the DCM at the moment of foot contact. This distance between the CoP and the DCM during foot placement is referred to as the DCM offset, and minimizing this distance is crucial for maintaining viable states.
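A short numerical illustration of Eqs. (9)-(10), with an assumed CoM height and post-push state, makes the contrast concrete: the DCM term grows exponentially while the CoM term decays.

```python
import numpy as np

# Closed-form evolution of DCM (Eq. 9) and CoM (Eq. 10) for a constant CoP.
omega = np.sqrt(9.81 / 0.9)       # natural frequency for an assumed 0.9 m CoM height
cop0 = 0.0                        # constant CoP (1-D example, sagittal axis)
x0, xdot0 = 0.02, 0.15            # CoM position and velocity after a push (illustrative)
xi0 = x0 + xdot0 / omega          # initial DCM

t = np.linspace(0.0, 0.8, 5)
xi = (xi0 - cop0) * np.exp(omega * t) + cop0   # DCM diverges exponentially
x = (x0 - xi0) * np.exp(-omega * t) + xi0      # CoM converges toward the DCM
print(np.round(xi, 3))
print(np.round(x, 3))
```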
### _Step Adaptation Controller_
This section presents the step adaptation mechanism based on DCM dynamics [28, 29]. More precisely, we present below a step adjustment strategy that optimizes the next step position and timing based on the measured DCM.
To find a DCM trajectory that satisfies both the initial and the final condition problems, the coefficient \(C_{0}\) must equal \(C_{T}\). Thus, by combining (7) and (8), one has:
\[\boldsymbol{\xi}_{0}\!-\!\mathbf{c}\mathbf{p}_{0}\!=\!(\boldsymbol{\xi}_{T}\! -\!\mathbf{c}\mathbf{p}_{0})e^{-\omega T}. \tag{11}\]
Now by defining \(\sigma\!=\!e^{\omega T}\) we obtain :
\[\boldsymbol{\xi}_{T}\!+\!\mathbf{c}\mathbf{p}_{0}(-1\!+\!\sigma)\!-\! \boldsymbol{\xi}_{0}\sigma\!=\!0. \tag{12}\]
Let \(\mathbf{c}\mathbf{p}_{T}\) represent the CoP position at the start of the next step, and \(\boldsymbol{\gamma_{T}}=\boldsymbol{\xi_{T}}-\mathbf{c}\mathbf{p}_{\mathbf{T}}\) denote the DCM offset for the next step (i.e, the end of this step). Therefore, straightforward calculations lead to:
\[\boldsymbol{\gamma_{T}}\!+\!\mathbf{c}\mathbf{p}_{T}\!+\!(\mathbf{c}\mathbf{ p}_{0}\!-\!\boldsymbol{\xi}_{0})\sigma\!=\!\mathbf{c}\mathbf{p}_{0}. \tag{13}\]
The step adjustment problem can be formalized as a constrained optimization problem, wherein the search variables consist of \(\gamma_{T}\), \(\mathbf{c}\mathbf{p}_{T}\), and \(\sigma\), and the cost function is appropriately defined in a quadratic manner. It's worth noting that the desired final DCM position and step timing are dependent on \(\gamma_{T}\) and \(\sigma\), respectively. Additionally, \(\mathbf{c}\mathbf{p}_{T}\) is assumed to be located at the center of the foot at the start of the next step. Therefore, we can treat this position as the target for the upcoming footstep placement. The selected cost function aims to minimize the deviation of the desired gait values from the nominal ones:
\[J=\alpha_{1}\left\|\mathbf{c}\mathbf{p}_{T}-\mathbf{c}\mathbf{p}_{T,nom}\right\|^{2}+\alpha_{2}\left\|\boldsymbol{\gamma}_{T}-\boldsymbol{\gamma}_{nom}\right\|^{2}+\alpha_{3}\,|\sigma-e^{\omega T_{nom}}|^{2}, \tag{14}\]
where \(\alpha_{1}\), \(\alpha_{2}\), \(\alpha_{3}\) are positive numbers and the next ZMP position \(\mathbf{c}\mathbf{p}_{T,nom}\), step duration \(T_{nom}\) and next DCM offset \(\gamma_{nom}\) are the desired values.
We also present the following set of inequality constraints:
\[\left[\begin{matrix}I_{2}&0_{2\times 1}&0_{2}\\ -I_{2}&0_{2\times 1}&0_{2}\\ 0_{1\times 2}&I_{1}&0_{1\times 2}\\ 0_{1\times 2}&-I_{1}&0_{1\times 2}\end{matrix}\right]\!\left[\begin{matrix}\mathbf{c}\mathbf{p}_{T}\\ \sigma\\ \boldsymbol{\gamma_{T}}\end{matrix}\right]\!\leq\!\left[\begin{matrix}\mathbf{c}\mathbf{p}_{T,max}\\ -\mathbf{c}\mathbf{p}_{T,min}\\ \sigma_{max}\\ -\sigma_{min}\end{matrix}\right]\!, \tag{15}\]
Here, \(\mathbf{c}\mathbf{p}_{T,max}\) and \(\mathbf{c}\mathbf{p}_{T,min}\) are in \(\mathbb{R}^{2}\), while \(\sigma_{max}\) and \(\sigma_{min}\) belong to \(\mathbb{R}\). These inequality constraints are established considering the constraints imposed by leg kinematics on the maximum step length and by the maximum achievable velocity on the minimum step duration. Lastly, the relationship described in (13) is considered as an equality constraint. Due to the quadratic and linear dependence of the cost function and constraints on the unknown variables, the entire framework can be formulated as a Quadratic Programming (QP) problem. At each control cycle, the QP problem is solved by substituting \(\xi_{0}\) with the current DCM position and dynamically shrinking the single support duration as the robot executes the step. It is worth noting that during push recovery with a human-exoskeleton, the controller attempts to minimize the DCM offset. However, at the end of the recovery step, the capturability constraint for stopping movement, which mandates a DCM offset of zero, is managed with the use of the exoskeleton, employing an ankle strategy. This is consistent with the concept of an assisting mode where humans and robots collaborate to maintain balance.
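A compact sketch of this QP is given below. The paper solves it with qpOASES on the embedded computer; cvxpy is used here only for brevity, and all numerical values (weights, bounds, nominal step) are illustrative assumptions.

```python
import numpy as np
import cvxpy as cvx

# Illustrative state and nominal gait values.
omega = np.sqrt(9.81 / 0.9)          # LIPM natural frequency for ~0.9 m CoM height
xi0   = np.array([0.12, 0.03])       # measured DCM at the current instant
cop0  = np.array([0.00, 0.00])       # current CoP (stance foot center)
cop_nom   = np.array([0.25, 0.10])   # nominal next footstep position
gamma_nom = np.array([0.02, 0.01])   # nominal DCM offset at the end of the step
T_nom     = 0.6                      # nominal step duration [s]
a1, a2, a3 = 1.0, 5.0, 0.5           # cost weights alpha_1..alpha_3

cop_T = cvx.Variable(2)              # next footstep (CoP) position
sigma = cvx.Variable()               # sigma = exp(omega * T)
gamma = cvx.Variable(2)              # DCM offset at the end of the step

cost = (a1 * cvx.sum_squares(cop_T - cop_nom)
        + a2 * cvx.sum_squares(gamma - gamma_nom)
        + a3 * cvx.square(sigma - np.exp(omega * T_nom)))

constraints = [
    # DCM relation (13): gamma + cop_T + (cop0 - xi0) * sigma == cop0
    gamma + cop_T + sigma * (cop0 - xi0) == cop0,
    # box constraints from leg kinematics and step-duration limits (15)
    cop_T <= np.array([0.45, 0.30]), cop_T >= np.array([-0.45, -0.30]),
    sigma <= np.exp(omega * 1.0),    sigma >= np.exp(omega * 0.3),
]

cvx.Problem(cvx.Minimize(cost), constraints).solve()
T_opt = np.log(sigma.value) / omega  # recover the adapted step duration
print("next footstep:", cop_T.value, " step duration:", T_opt)
```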
### _Control Strategy Implementation_
#### Iv-D1 autonomy exoskeleton
The LLE autonomy is developed to partially assist people who have walking impairments due to neuromuscular deficits [42]. The second prototype of the autonomy exoskeleton (autonomy v2) has 6 degrees of freedom (DoF), 3 active and 3 passive per leg, as in the first design. The 3 active DoF are on the hip (abd./add. and flex./ext.) and knee (flex./ext.) joints as shown in Fig. 1. The remaining passive DoF are at the ankle joint (eversion/inversion, dorsiflex./plantar flex., and abd./add.).
For hip and knee flex./ext. actuation, each unit consists of a brushless motor (BP4, Faulhaber AG, Switzerland) and a corresponding gearbox (42GPT, Faulhaber AG, Switzerland) with a 108:1 transmission ratio together with an integrated torque sensor at the actuator side. An additional cable transmission (2.6:1) is utilized for hip and knee flex./ext. actuation, while a ball screw transmission is used for hip abd./add..
The actuators, batteries, and electronics of the system are mainly placed in the back modules. The exoskeleton has three interfaces at the foot, the shank, and the trunk for the physical connection to the user. The weight of the device is about 20 kg including the batteries. The size of the lower body segments is adjustable according to user-specific
measurements. The controllers are implemented on the embedded computer of autonomy v2 (BeagleBone Black, Texas Instruments, USA). Wireless communication with autonomy v2 is established through a Wi-Fi module.
#### Iii-A2 Center of Mass Estimation
The CoM position is estimated based on the trunk roll and pitch angles that are reconstructed from the accelerometer and gyroscope data collected by the MPU6050 IMU module on the back module of the exoskeleton. The CoM is assumed to be at a constant height, coinciding with the sensor placement. The exoskeleton is considered as a rigid leg with a fixed length during the stance pose and for small angles.
#### Iii-A3 Balance Loss Detection
For effective balance assistance, real-time assessment of balance and timely detection of upcoming balance loss are paramount. Posturography, a reliable method for objectively quantifying postural sway and balance control, traditionally relies on force plate measurements to assess ground reaction forces [43]. A key metric in this assessment is the sway area, the \(95\%\) confidence ellipse around the mean postural sway in both anteroposterior and mediolateral directions [44].
Since the postural sway is based on the movement of the body's CoM [45], to set the limits for detecting these perturbations, we were inspired by the elliptical shape often used to represent the sway area. This choice allows us to effectively define boundaries within which DCM dynamics can be considered normal or indicative of a perturbation event.
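A minimal sketch of such an elliptical threshold on the DCM is shown below; the ellipse center and semi-axes are illustrative values, whereas in practice they would be tuned from the subject's posturography data.

```python
import numpy as np

def balance_loss_detected(xi, center, a, b):
    """Return True when the DCM leaves an elliptical 'normal sway' region.

    xi      : measured DCM position [x, y] (m)
    center  : center of the sway ellipse, e.g. the quiet-standing mean DCM
    a, b    : ellipse semi-axes in the anteroposterior / mediolateral directions
    """
    dx, dy = xi[0] - center[0], xi[1] - center[1]
    return (dx / a) ** 2 + (dy / b) ** 2 > 1.0

# Example: DCM estimated from CoM position and velocity, xi = x + xdot / omega
omega = np.sqrt(9.81 / 0.9)
x, xdot = np.array([0.05, 0.01]), np.array([0.30, 0.05])
xi = x + xdot / omega
print(balance_loss_detected(xi, center=(0.0, 0.0), a=0.08, b=0.05))
```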
#### Iii-A4 Inverse Kinematics of autonomy
Each side of the exoskeleton is modeled as a three-degree-of-freedom robotic arm, considering the foot as the end-effector and the trunk as the base, as shown in Fig. 1. The desired foot trajectory is denoted as \(x\),\(y\),\(z\), where \(l_{0}\) is the distance from the middle of the trunk connection to the center of rotation of the hip abd./add. joint, and \(l_{1}\) is the distance between the centers of rotation of the hip abd./add. and hip flex./ext. joints. The thigh length is shown as \(l_{2}\), and \(l_{3}\) is the shank length.
A geometric approach is followed to obtain an analytical equation for the inverse kinematics. Since the desired foot trajectory is designed in \(x\),\(y\),\(z\), the hip abd./add., hip flex./ext., and knee flex./ext. joint trajectories are generated through the inverse kinematics by using Eqn. 16.
\[\begin{split}\theta_{1}&=\frac{\pi}{2}+\arctan( \frac{y}{z})-\arctan(\frac{\sqrt{r^{2}\!-\!d^{2}}}{d})\\ \theta_{3}&=\arctan(\frac{-D}{\sqrt{1\!-\!D^{2}}} )\\ \theta_{2}&=\arctan(\frac{\sqrt{r^{2}\!-\!d^{2}}}{x })-\arctan(\frac{l_{2}\!+\!l_{3}\!\cos\!\theta_{3}}{l_{3}\!\sin\!\theta_{3}}) \end{split} \tag{16}\]
where \(r^{2}=y^{2}+z^{2}\) and \(D=\frac{r^{2}-l_{1}^{2}+x^{2}-l_{2}^{2}-l_{3}^{2}}{2\,l_{2}l_{3}}\). The \(\theta_{1}\), \(\theta_{2}\), and \(\theta_{3}\) are the hip abd./add., hip flex./ext., and knee flex./ext. joint angles, respectively.
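The sketch below implements Eq. (16) in Python; the interpretation of \(d\) as the hip offset \(l_{1}\) and the law-of-cosines denominator of \(D\) are assumptions made for illustration.

```python
import numpy as np

def autonomy_leg_ik(x, y, z, l1, l2, l3):
    """Analytical IK sketch for one 3-DoF leg, following Eq. (16).

    x, y, z : desired foot position in the trunk frame
    l1      : offset from the hip abd./add. to the hip flex./ext. axis
    l2, l3  : thigh and shank lengths
    Returns (theta1, theta2, theta3) = hip abd./add., hip flex./ext., knee flex./ext.
    """
    r2 = y ** 2 + z ** 2
    d = l1  # assumption: the 'd' of Eq. (16) is the hip offset l1
    theta1 = np.pi / 2 + np.arctan2(y, z) - np.arctan2(np.sqrt(r2 - d ** 2), d)

    D = (r2 - l1 ** 2 + x ** 2 - l2 ** 2 - l3 ** 2) / (2 * l2 * l3)
    theta3 = np.arctan2(-D, np.sqrt(max(1 - D ** 2, 0.0)))

    theta2 = (np.arctan2(np.sqrt(r2 - d ** 2), x)
              - np.arctan2(l2 + l3 * np.cos(theta3), l3 * np.sin(theta3)))
    return theta1, theta2, theta3

# Illustrative call with hypothetical segment lengths (m) and foot target.
print(autonomy_leg_ik(0.10, 0.05, 0.75, l1=0.08, l2=0.40, l3=0.40))
```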
#### Iii-A5 Foot Trajectory Design
Once the optimization problem described in Sec. II-C is solved, the desired foot placement is obtained. From the initial point to the final point, a trajectory is required to perform the desired action. The foot trajectory is designed based on collected data on the human foot position on the vertical axis. The data is collected while walking on a treadmill with an IMU-based motion capture system, Xsens [46]. Based on this data, fifth-order splines are generated to follow these trajectories, with the peak occurring at \(40\%\) of the total duration and a peak foot height of \(0.07\,m\). The comparison of the collected data with the designed trajectory for a motion of \(1\,s\) is illustrated in Fig. 3.
The speed and acceleration of the start and end of each joint are taken as zero. For the in-plane motion, the initial position is taken as zero where the final step position is given by the CoP position as the result of the optimization problem.
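One plausible realization of such a vertical foot-lift profile is sketched below: two fifth-order segments with zero boundary velocity and acceleration, peaking at 40% of the step duration with a 0.07 m lift. The segment split and function names are assumptions made for illustration.

```python
import numpy as np

def quintic_segment(z0, z1, tau, t):
    """Fifth-order profile from z0 to z1 over duration tau with zero velocity and
    acceleration at both ends."""
    s = np.clip(t / tau, 0.0, 1.0)
    return z0 + (z1 - z0) * (10 * s**3 - 15 * s**4 + 6 * s**5)

def foot_height(t, total_time=1.0, peak_height=0.07, peak_ratio=0.4):
    """Vertical foot trajectory rising to peak_height at peak_ratio of the step
    duration and returning to the ground."""
    t_peak = peak_ratio * total_time
    if t <= t_peak:
        return quintic_segment(0.0, peak_height, t_peak, t)
    return quintic_segment(peak_height, 0.0, total_time - t_peak, t - t_peak)

print([round(foot_height(t), 3) for t in np.linspace(0.0, 1.0, 11)])
```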
#### Iii-A6 Impedance Controller
An impedance controller strategy is selected for the implementation of recovery stepping since it provides safe and intuitive assistance during balance recovery in cooperation with the user. As suggested in [23], open-loop assistance is provided, where the exoskeleton only assists when the balance loss is detected. To perform the recovery step, a virtual spring is rendered around the desired trajectory of the joint angles.
The controller architecture for the hip flex./ext. and knee flex./ext. is shown in Fig. 4, where the impedance control loop is followed by the inner P-torque control loop. The spring stiffness of the impedance controller is selected as [1.5, 0.4, 0.4] Nm/deg for the hip abd./add., hip flex./ext., and knee flex./ext. joints, respectively.
Fig. 4: The overview of the controller scheme. The step planner is activated when a recovery step is required based on the balance loss detector.
Fig. 5: The step durations and next step positions resulting from the tested optimization weight vectors (\(w_{1}\), \(w_{2}\), \(w_{3}\)).
Fig. 3: Comparison of foot trajectory data collected during walking to the designed foot trajectory
Since there is no torque sensor placed at the hip abd./add. joint, its torque is controlled via the motor current.
When there is no need to take a recovery step, or once the stepping is finished, the exoskeleton is in a zero-torque control mode, where the spring stiffness of the impedance controller is set to zero so as to interfere minimally with the movement of the subject.
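A minimal sketch of this switching impedance law is shown below; the stiffness values follow the text, while the joint-angle example and the function name are illustrative assumptions.

```python
import numpy as np

def impedance_torque(q_des, q, stiffness, assist_active):
    """Desired joint torques from a virtual spring rendered around the desired
    recovery-step trajectory; zero-torque (transparent) mode otherwise."""
    if not assist_active:
        return np.zeros_like(q)            # zero-torque mode: do not interfere
    return stiffness * (q_des - q)         # passed to the inner P-torque loop

# Stiffness [hip abd./add., hip flex./ext., knee flex./ext.] in Nm/deg, angles in deg
K = np.array([1.5, 0.4, 0.4])
print(impedance_torque(np.array([5.0, 20.0, -10.0]), np.array([4.0, 15.0, -8.0]), K, True))
```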
## III Experimental Results
### _Effect of Optimization Parameters on Recovery Step_
The constrained optimization problem from Sec. II-C is solved using the open-source software qpOASES [47]. To generate more natural recovery steps in terms of step duration and next CoP position, several distinct weight combinations \(w_{1}\), \(w_{2}\), and \(w_{3}\) are tested, as shown in Fig. 5. While still minimizing the DCM offset, the set \(w_{3}\), which resulted in shorter step lengths with longer durations and thus better cooperation with the exoskeleton user, is selected.
### _Real-World Experiments_
Three healthy subjects with the age of 27 \(\pm\) 2.65 years, a height of 168.3 \(\pm\) 3.78 cm, and a body mass of 67.67 \(\pm\) 5.51 kg were asked to perform the trial. The experimental scenario included two different tests: one in zero-torque mode and one in assistive mode. In the zero-torque mode, the exoskeleton can be perceived as transparent, where the users are ideally only limited by the passive dynamics of the system. In the assistive mode, the calculated torques are provided based on the controller scheme explained in Sec. II-D.
During the experiments, participants stood upright on a measurement grid, facing towards the large surround. We instructed subjects to maintain a standing balance throughout each trial. The subjects were pushed from various directions from their back, as depicted in Fig. 6-(A). If the subject took a step to recover from the external push, the step position was manually logged on the grid at the same time. If the subject used another recovery strategy, the step was not recorded. Also, if the computed solution was not in the direction of the external perturbation, due to incoherent CoM movements caused by the resistance of the user, the step was not considered.
Although the developed recovery stepping strategy is implemented omnidirectionally, it has been tested only in the forward direction, since fear of falling backward could potentially lead to inaccurate CoM changes [48]. Also, side steps were not tested, since with our push strategy the subjects were more prone to use the loading-unloading balance strategy.
#### Iii-B1 Experiments in Zero-torque Mode
Our push recovery experiments were conducted under zero-torque control mode, with the aim of evaluating the intuitiveness of the derived trajectories and analyzing the extent of symbiotic interaction between the exoskeleton and the user's intentions. For each subject, 5 recovery steps per side (left and right) are taken into account. During these recovery steps in response to perturbations, we compared the measured step positions with the computed step positions, employing angle differences as a primary metric. An illustrative example of this comparison is presented in Fig. 6-(B), with the swing leg serving as the reference point for the drawn lines. The results of this analysis are presented in Fig. 6-(C).
#### Iii-B2 Experiments with the Controller
We present the joint trajectories, in both the zero-torque control mode and the balance recovery assistance achieved through the impedance controller in Fig. 8. The desired torques for hip flex./ext. and knee flex./ext joints are depicted for the right side as an example of the contribution of the exoskeleton in Fig. 7.
## IV Discussion
In Fig. 5, we present the outcomes of our systematic exploration of weight parameter combinations within the optimization problem. These experiments highlight our
Fig. 6: (A) An illustration of the conducted experiment: on the left, the subject is standing straight, and on the right, the subject has landed on the recovery step position. (B) Example of the angle difference between the measured CoP position and the computed step position during the zero-torque control mode experiments. (C) The experimental results of three subjects, showing the computed angle differences between the actual recovery center of pressure positions and the computed next center of pressure positions during push recovery in zero-torque control mode, when the exoskeleton was not applying assistive torques along the desired trajectory.
Fig. 7: The desired torques are depicted for right hip flex./ext. and knee flex./ext joints after the impedance controller during the experiments |
2309.03497 | On the containment problem and sporadic simplicial line arrangements | In the paper we present two examples of inductively free sporadic simplicial
arrangements of 31 lines that are non-isomorphic, which allow us to answer
negatively questions on the containment problem recently formulated by Drabkin
and Seceleanu. | Marek Janasz | 2023-09-07T06:12:01Z | http://arxiv.org/abs/2309.03497v1 | # On the containment problem and sporadic simplicial line arrangements
###### Abstract
In the paper we present two examples of inductively free sporadic simplicial arrangements of 31 lines that are non-isomorphic, which allow us to answer negatively questions on the containment problem recently formulated by Drabkin and Seceleanu.
**Keywords**: simplicial arrangements, homogeneous ideals, symbolic powers
**Mathematics Subject Classification (2000)**: 14N20, 13A15, 52A20
## 1 Introduction
Study of the relation between symbolic and ordinary power of homogeneous ideals in the polynomial ring over a given field \(\mathbb{K}\) has a long history and is derived from many different problems in mathematics. In 1995, Eisenbud and Mazur in [17], referring to the proof of Fermat's Last Theorem, were investigating the so-called "fitting ideals" and some symbolic powers of certain associated ideals. What they proved, among other things, is that \(I^{(2)}\subset\mathfrak{m}I\) in the case of perfect ideals of codimension 2. They also showed that this kind of the containment holds for several other classes of ideals. In 2013, Harbourne and Huneke in [23] proposed a certain generalization and began to study the relation \(I^{(m)}\subset\mathfrak{m}^{k}I^{r}\), and their work was continued in [2, 7].
Another line of research, which in fact also leads to investigations on symbolic and ordinary powers of ideals, originates in the articles by Skoda [29] and Waldschmidt [33]. They focused on some estimates of the degree of hypersurfaces in \(\mathbb{P}^{N}_{\mathbb{K}}\) passing through fixed points with prescribed multiplicities. A paper by Chudnovsky [5] fits into these considerations. Using some complex analysis tools, Chudnovsky improved the results of Skoda and Waldschmidt in \(\mathbb{P}^{2}\) and he formulated a still open conjecture for the case of \(\mathbb{P}^{N}\). The generalization of this conjecture was also given by Demailly in [10], and the combination of these conjectures is the subject of intense research [2, 3, 14, 16, 20, 27].
Using the Nagata-Zariski theorem makes it possible to relate geometric questions to algebra. Therefore, the study of containment relations between ordinary and symbolic powers of homogeneous ideals of points is a connection between conjectures formulated by Chudnovsky and Demailly. This perspective led to an increased interest in the so-called containment problem, i.e., the determination of the exponents \((m,r)\) for which the \(m\)-th symbolic power of a homogeneous ideal is contained in the \(r\)-th ordinary power of that ideal. An initial work on this topic was begun by Hochster in 1973 [24], but the groundbreaking result was published only in 2001 by Ein, Lazarsfeld, and Smith [18], where they gave a lower bound on the exponent of the symbolic power, which depends on the dimension of the space \(\mathbb{P}^{N}\). Since then, the cases unsolved by the aforementioned theorem have become the subject of intensive study, in particular the smallest case from the perspective of the magnitude of the powers, namely the containment \(I^{(3)}\subset I^{2}\) for the ideals of reduced points in \(\mathbb{P}^{2}\). While at the beginning the researchers tried to prove that this particular containment holds for all homogeneous ideals, after the paper [15], where the first counterexample defined over \(\mathbb{C}\) has been presented, a lot of counterexamples defined over different fields have been published (see [4, 26]). Despite a growing number of counterexamples, the true nature of the relation between \(I^{(3)}\) and \(I^{2}\) is still unknown. In [11], Drabkin and Seceleanu study reflection arrangements given by
(irreducible) complex pseudoreflection groups. As a result, they give a complete description of the relation between the third symbolic power and the second ordinary power of radical ideal \(J(\mathcal{A})\), which defines the singular locus of the complex reflection arrangement \(\mathcal{A}\). The work on this problem motivates them to state some open questions, among which they ask ([11, Question 6.7.-6.8.]): _Are the containments \((J(\mathcal{A}))^{(2r-1)}\subseteq(J(\mathcal{A}))^{r}\) always satisfied for any \(r\geq 2\) and any hyperplane arrangement that is inductively/recursively free?_
In the present paper we give a negative answer to these questions, namely we prove the following.
**Theorem A**.: _There are two non-isomorphic inductively free simplicial arrangements consisting of \(31\) lines such that they have the same weak combinatorics, and having the property that for one arrangement the containment \((J(\mathcal{A}))^{(3)}\subseteq(J(\mathcal{A}))^{2}\) holds, but does not for the other._
The structure of the paper is as follows. In Section 2, we recall some basic definitions and tools that we will use in the rest of this paper concerning line arrangements and symbolic powers of homogeneous ideals. In Section 3 we give very detailed information about a family of line arrangement known as \(\mathcal{A}(12k+7)\), giving line equations and proving that some line arrangements from this family are inductively free. This result is used in Section 4, where we prove Main Theorem of this paper. At the end, we provide our SINGULAR code to let interested readers check the containment between \((J(\mathcal{A}))^{(3)}\) and \((J(\mathcal{A}))^{2}\).
## 2 Preliminaries
In this section we recall all necessary definitions regarding hyperplane arrangements that we will exploit in the paper. For more information regarding this subject, please consult [13, 28].
Let \(\mathbb{K}\) be a field of characteristic zero and let \(V\) be a fixed vector space of dimension \(\ell\) over \(\mathbb{K}\). Let \(\{x_{1},\ldots,x_{\ell}\}\) be the dual basis of \(V^{*}\) associated with \(V\), then the symmetric algebra \(S(V^{*})\) is isomorphic to the ring of polynomials \(S=\mathbb{K}[x_{1},\ldots,x_{\ell}]\).
A pair \((\mathcal{A},V)\) is called an \(\ell\)-arrangement of hyperplanes, i.e., an arrangement of \((\ell-1)\)-dimensional linear subspaces in \(V\). The symbol \(\Phi_{\ell}\) denotes the empty \(\ell\)-arrangement. If the dimension is clear from the context, we use the name arrangement for short. Each hyperplane \(H\in\mathcal{A}\) is the kernel (up to a constant) of a linear form \(l_{H}\in V^{*}\). The product of all linear forms
\[Q(\mathcal{A})=\prod_{H\in\mathcal{A}}l_{H}\]
is called the defining polynomial of \(\mathcal{A}\). In the case of empty arrangements, we put \(Q(\Phi_{l})=1\).
By \(L(\mathcal{A})\) we denote the intersection lattice of \(\mathcal{A}\), i.e., the set of all non-empty intersections of hyperplanes \(H_{i}\) in \(\mathcal{A}\). Taking any \(X\in L(\mathcal{A})\), a subarrangement \(\mathcal{A}_{X}\) of \(\mathcal{A}\) is called _localization_ and it is defined as
\[\mathcal{A}_{X}=\{H\in\mathcal{A}\mid X\subseteq H\}.\]
For a chosen \(X\), we define a subarrangement of \(\mathcal{A}\) by
\[\mathcal{A}^{X}=\{X\cap H\,:\,H\in\mathcal{A},\ X\not\subseteq H,\ \text{and}\ X\cap H\neq\emptyset\},\]
which we call the _restriction_ of \(\mathcal{A}\) to \(X\).
**Definition 2.1**.: A simplicial arrangement is a finite set \(\mathcal{A}=\{H_{1},\ldots,H_{n}\}\) of (central) hyperplanes in \(\mathbb{R}^{\ell}\) such that all connected components of the complement
\[M(\mathcal{A}):=\mathbb{R}^{\ell}\setminus\bigcup_{H\in\mathcal{A}}H\]
are simplicial cones.
Denote by \(Der_{\mathbb{K}}(S)\) the set of all \(\mathbb{K}\)-linear maps (derivations) \(\theta:S\longrightarrow S\) such that for all \(f,g\in S\) one has
\[\theta(fg)=f\theta(g)+g\theta(f).\]
It is known that the set \(\left\{\frac{\partial}{\partial x_{i}}\right\}_{i=1}^{\ell}\) forms a (canonical) basis for \(Der_{\mathbb{K}}(S)\), i.e.,
\[Der_{\mathbb{K}}(S)=\bigoplus_{i=1}^{\ell}S\cdot\frac{\partial}{\partial x_{i }}.\]
Any homogeneous element \(0\neq\theta\in Der_{\mathbb{K}}(S)\) can be expressed as \(\theta=\sum_{i=1}^{\ell}g_{i}\cdot\frac{\partial}{\partial x_{i}}\), where \(g_{i}\in S\) are homogeneous polynomials of degree \(d\). For such \(\theta\) we denote by \(pdeg\,\theta=d\) its polynomial degree.
For any \(f\in S\) being homogeneous, we define an \(S\)-submodule of \(Der_{\mathbb{K}}(S)\) as
\[D(f)=\{\theta\in Der_{\mathbb{K}}(S):\theta(f)\in f\cdot S\}.\]
In the case of arrangement \(\mathcal{A}\), we use the notation \(D(\mathcal{A})\) instead of \(D(Q(\mathcal{A}))\).
**Definition 2.2**.: If \(D(\mathcal{A})\) is a free \(S\)-module, then we say that \(\mathcal{A}\) is a _free arrangement_.
Let \(\mathcal{A}\) be a free arrangement for which \(\{\theta_{1},\ldots,\theta_{\ell}\}\) is a homogeneous basis of \(D(\mathcal{A})\). We say that the set of integers \(\exp(\mathcal{A})=\{pdeg\,\theta_{1},\ldots,pdeg\,\theta_{\ell}\}\) is the set of the exponents of \(\mathcal{A}\).
If we denote by \(\theta_{E}\in Der_{\mathbb{K}}(S)\) the Euler derivation, then we have the decomposition of \(D(\mathcal{A})\), namely
\[D(\mathcal{A})=S\cdot\theta_{E}\oplus D_{0}(\mathcal{A}).\]
Now for an arrangement \(\mathcal{A}\) and a fixed \(H\in\mathcal{A}\), it is convenient to study triples \((\mathcal{A},\mathcal{A}^{\prime},\mathcal{A}^{\prime\prime})\) of arrangements, where \(\mathcal{A}^{\prime}=\mathcal{A}\setminus\{H\}\) and \(\mathcal{A}^{\prime\prime}=\mathcal{A}^{H}\). The next theorem is very useful in all our considerations.
**Theorem 2.3**.: _(Addition-Deletion, see [28]) Suppose \(\mathcal{A}\neq\Phi_{\ell}\). Let \((\mathcal{A},\mathcal{A}^{{}^{\prime}},\mathcal{A}^{{}^{\prime\prime}})\) be a triple. Any two of the following statements imply the third:_
\[\mathcal{A}\text{ is free with }\exp(\mathcal{A}) =\{b_{1},\ldots,b_{\ell-1},b_{\ell}\},\] \[\mathcal{A}^{{}^{\prime}}\text{ is free with }\exp(\mathcal{A}^{{}^{\prime}}) =\{b_{1},\ldots,b_{\ell-1},b_{\ell}-1\},\] \[\mathcal{A}^{{}^{\prime\prime}}\text{ is free with }\exp(\mathcal{A}^{{}^{\prime\prime}}) =\{b_{1},\ldots,b_{\ell-1}\}.\]
In this paper we deal with line arrangements \(\mathcal{A}\) defined over the complex numbers, therefore we will use the following reformulation of Theorem 2.3.
**Theorem 2.4**.: _Let \(\mathcal{A}\) be a line arrangement in \(\mathbb{P}_{\mathbb{C}}^{2}\) and \(H\in\mathcal{A}\). Let \(\mathcal{A}^{\prime}:=\mathcal{A}\setminus\{H\}\). If the following conditions hold:_
1. \(\mathcal{A}^{\prime}\) _is free and has the exponents_ \(\exp(\mathcal{A}^{\prime})=\{1,a,b\}\)_,_
2. \(|Sing(\mathcal{A})\cap H|=b+1\) _(or_ \(a+1\)_, respectively),_
_then \(\mathcal{A}\) is free with the exponents \(\exp(\mathcal{A})=\{1,a+1,b\}\) (or \(\exp(\mathcal{A})=\{1,a,b+1\}\), respectively)._
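For explicit line arrangements, condition (2) of Theorem 2.4 amounts to counting the distinct points in which the remaining lines meet \(H\). A minimal Python sketch of this count is given below; it uses sympy for exact arithmetic (so that coefficients such as \(e=\sqrt{3}\) can be handled symbolically), and the function names are ours.

```python
from sympy import Matrix, simplify

def proj_point(L1, L2):
    """Projective intersection of two lines a*x + b*y + c*z = 0 given as triples."""
    p = Matrix(L1).cross(Matrix(L2))
    if all(simplify(c) == 0 for c in p):
        return None                          # identical lines
    for i in (2, 1, 0):                      # normalize by the last nonzero coordinate
        if simplify(p[i]) != 0:
            return tuple(simplify(c / p[i]) for c in p)

def sing_points_on(H, other_lines):
    """|Sing(A) on H|: the number of distinct points where the other lines meet H."""
    pts = {proj_point(H, L) for L in other_lines}
    pts.discard(None)
    return len(pts)

# Toy check on the near-pencil x, y, x - y, z:
A = [(1, 0, 0), (0, 1, 0), (1, -1, 0), (0, 0, 1)]
H = A[0]
print(sing_points_on(H, [L for L in A if L != H]))   # 2 points on H
```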
In the sequel, we focus on the following definition.
**Definition 2.5**.: ([28, Definition 4.53]). The class \(\mathcal{IF}\) of inductively free arrangements is the smallest class of arrangements which satisfies both conditions:
1. \(\Phi_{\ell}\in\mathcal{IF}\) for \(\ell\geq 0\),
2. if there exists \(H\in\mathcal{A}\) such that \(\mathcal{A}^{\prime\prime}\in\mathcal{IF}\), \(\mathcal{A}^{\prime}\in\mathcal{IF}\), and \(\exp(\mathcal{A}^{\prime\prime})\subset\exp(\mathcal{A}^{\prime})\), then \(\mathcal{A}\in\mathcal{IF}\).
## 3 Examples of inductively free arrangements
The main object of our considerations is a special family of line arrangements, denoted in the literature by \(\mathcal{A}(12k+7)\). This infinite family was originally described in the paper by Grunbaum [1]. Here, we recall this construction and its basic properties.
For fixed \(k\), each element of the family consists of exactly \(12k+7\) lines, including the line at infinity \(z=0\). The equations of these lines are given explicitly in Table 1.
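As a quick sanity check of Table 1, the short Python sketch below enumerates the listed coefficient triples (with \(e=\sqrt{3}\)) and confirms that the count equals \(12k+7\); the function name and the representation of lines as coefficient triples are ours, introduced only for illustration.

```python
from sympy import sqrt

def lines_A12k7(k):
    """Coefficient triples (a, b, c) of a*x + b*y + c*z = 0 for A(12k+7),
    following Table 1 and including the line at infinity z = 0."""
    e = sqrt(3)
    L = [(0, 0, 1)]                                    # line at infinity
    for i in range(-(k + 1), k + 2):
        L += [(2, 0, -e * i), (1, -e, e * i), (1, e, -e * i)]
    for j in range(-(k - 1), k):
        L += [(0, 2, -j), (e, -1, j), (e, 1, -j)]
    return L

for k in (1, 2):
    print(k, len(lines_A12k7(k)), 12 * k + 7)          # 19 and 31 lines, as expected
```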
The arrangements \(\mathcal{A}(19)\) and \(\mathcal{A}(31)\) are exactly the sporadic simplicial arrangements \(A(19,1)\) and \(A(31,2)\) listed in Grunbaum's catalogue [22]. From [12], we know that these are free arrangements with the exponents \(\exp(\mathcal{A}(19))=\{1,7,11\}\) and \(\exp(\mathcal{A}(31))=\{1,13,17\}\). We start with the following combinatorial observation that is crucial for our further considerations. According to our best knowledge, this observation is not known in the literature.
**Theorem 3.1**.: _The line arrangements \(\mathcal{A}(19,1)\) and \(\mathcal{A}(31,2)\) are inductively free._
Proof.: We will divide our proof of this theorem into two steps. In the first step, we will show that the arrangement \(\mathcal{A}(19,1)\) is inductively free. For this purpose, we present Table 2 below, where we provide the sequences of exponents \(\exp(\mathcal{A}^{\prime})\), the equation of each line that we add to the arrangement starting from \(\Phi_{3}\), and then the exponents \(\exp(\mathcal{A}^{\prime\prime})\). Each subsequent row of the table allows us to verify the conditions contained in Theorem 2.4.
Based on the last row in Table 2, we see that the arrangement \(\mathcal{A}(19)\) is free with the exponents \(\exp(\mathcal{A}(19))=\{1,7,11\}\).
For the second part of the proof, we apply Theorem 2.4 to arrangement \(\mathcal{A}(19)\) by adding suitably chosen lines that are indicated in Table 3 below.
\begin{table}
\begin{tabular}{l l} \hline & \(\mathcal{A}(12k+7)\) \\ \hline \hline \(2x-eiz\), & \\ \(x-ey+iez\), & for \(i\in\{-(k+1),-k,\ldots,-1,0,1,\ldots,k,k+1\}\) \\ \(x+ey-iez\), & \\ \hline \(2y-jz\), & \\ \(ex-y+jz\), & for \(j\in\{-(k-1),-(k-2),\ldots,-1,0,1,\ldots,k-2,k-1\}\) \\ \(ex+y-jz\), & \\ \hline \hline \end{tabular}
\end{table}
Table 1: Equations of lines of \(\mathcal{A}(12k+7)\).
\begin{table}
\begin{tabular}{l l l l l l} \hline \(\exp\ \mathcal{A}^{{}^{\prime}}\) & \(\ell_{i}\) & \(\exp\ \mathcal{A}^{{}^{\prime\prime}}\) & \(\exp\ \mathcal{A}^{{}^{\prime}}\) & \(\ell_{i}\) & \(\exp\ \mathcal{A}^{{}^{\prime\prime}}\) \\ \hline \(\{0,0,0\}\) & \(\Phi_{3}\) & \(\{0,0\}\) & \(\{1,4,5\}\) & \(\ell_{10}:x+ey+ez\) & \(\{1,5\}\) \\ \(\{0,0,1\}\) & \(\ell_{1}:z\) & \(\{0,1\}\) & \(\{1,5,5\}\) & \(\ell_{11}:x-ey-ez\) & \(\{1,5\}\) \\ \(\{0,1,1\}\) & \(\ell_{2}:ex+y\) & \(\{1,1\}\) & \(\{1,5,6\}\) & \(\ell_{12}:x+ey-ez\) & \(\{1,5\}\) \\ \(\{1,1,1\}\) & \(\ell_{3}:ex-y\) & \(\{1,1\}\) & \(\{1,5,7\}\) & \(\ell_{13}:x-ey+ez\) & \(\{1,7\}\) \\ \(\{1,1,2\}\) & \(\ell_{4}:y\) & \(\{1,1\}\) & \(\{1,6,7\}\) & \(\ell_{14}:2x-2ez\) & \(\{1,7\}\) \\ \(\{1,1,3\}\) & \(\ell_{5}:x\) & \(\{1,1\}\) & \(\{1,7,7\}\) & \(\ell_{15}:x+ez\) & \(\{1,7\}\) \\ \(\{1,1,4\}\) & \(\ell_{6}:x+ey\) & \(\{1,1\}\) & \(\{1,7,8\}\) & \(\ell_{16}:x+ey+2ez\) & \(\{1,7\}\) \\ \(\{1,1,5\}\) & \(\ell_{7}:x-ey\) & \(\{1,5\}\) & \(\{1,7,9\}\) & \(\ell_{17}:x-ey+2ez\) & \(\{1,7\}\) \\ \(\{1,2,5\}\) & \(\ell_{8}:2x-ez\) & \(\{1,5\}\) & \(\{1,7,10\}\) & \(\ell_{18}:x-ey-2ez\) & \(\{1,7\}\) \\ \(\{1,3,5\}\) & \(\ell_{9}:2x+ez\) & \(\{1,5\}\) & \(\{1,7,11\}\) & \(\ell_{19}:x+ey-2ez\) & \(\{1,7\}\) \\ \hline \end{tabular}
\end{table}
Table 2: List of the exponents for arrangements building \(\mathcal{A}(19)\), where \(e=\sqrt{3}\).
Attaching successively the lines \(\ell_{20},...,\ell_{31}\) to \(\mathcal{A}(19)\), and applying Theorem 2.4 at each step, we conclude that the obtained arrangements are free. All the details describing our procedure are presented in the diagram below:
\[\begin{array}{ll}
\mathcal{A}(19), & \exp(\mathcal{A})=\{1,7,11\},\\
\mathcal{A}(19)\cup\{\ell_{20}\}, & \exp(\mathcal{A})=\{1,8,11\},\\
\qquad\vdots & \\
\mathcal{A}(19)\cup\{\ell_{20},\ldots,\ell_{24}\}, & \exp(\mathcal{A})=\{1,11,11\},\\
\mathcal{A}(19)\cup\{\ell_{20},\ldots,\ell_{25}\}, & \exp(\mathcal{A})=\{1,11,12\},\\
\mathcal{A}(19)\cup\{\ell_{20},\ldots,\ell_{26}\}, & \exp(\mathcal{A})=\{1,11,13\},\\
\mathcal{A}(19)\cup\{\ell_{20},\ldots,\ell_{27}\}, & \exp(\mathcal{A})=\{1,12,13\},\\
\mathcal{A}(19)\cup\{\ell_{20},\ldots,\ell_{28}\}, & \exp(\mathcal{A})=\{1,13,14\},\\
\mathcal{A}(19)\cup\{\ell_{20},\ldots,\ell_{29}\}, & \exp(\mathcal{A})=\{1,13,14\},\\
\qquad\vdots & \\
\mathcal{A}(19)\cup\{\ell_{20},\ldots,\ell_{31}\}, & \exp(\mathcal{A})=\{1,13,17\}.
\end{array}\]
Thus we obtain that the arrangement
\[\mathcal{A}(31):=\mathcal{A}(19)\cup\{\ell_{20},\ldots,\ell_{31}\}\]
is inductively free with the exponents \(\exp(\mathcal{A}(31))=\{1,13,17\}\), which completes the proof.
## 4 Inductively free arrangements and a counterexample to the containment problem.
The line arrangement \(\mathcal{A}(31)\) from the previous part of our paper turns out to be very important in the context of an open problem, the so-called containment problem for symbolic powers of homogeneous ideals. Let us recall here the definition of symbolic powers.
**Definition 4.1**.: Let \(I\subseteq S\) be a homogeneous ideal. For a fixed positive integer \(m\), we define the \(m\)-th symbolic power of the ideal \(I\) as
\[I^{(m)}=S\cap\Big{(}\bigcap_{Q\in\mathrm{Ass}(I)}I_{Q}^{m}\Big{)},\]
where \(\mathrm{Ass}(I)\) denotes the set of all prime ideals associated with \(I\) and \(I_{Q}\) denotes the localization of \(I\) at \(Q\).
Readers unfamiliar with symbolic powers and the containment problem are referred to [30].
In [11], Drabkin and Seceleanu study arrangements of hyperplanes that come from irreducible complex reflection groups, proving that for some cases we have the failure of the containment
\[(J(\mathcal{A}))^{(3)}\not\subseteq(J(\mathcal{A}))^{2}, \tag{1}\]
where \(J(\mathcal{A})\) denotes the radical ideal associated with the configuration of all intersection points of a given arrangement \(\mathcal{A}\) and \((J(\mathcal{A}))^{(3)}\) denotes the third symbolic power of \(J(\mathcal{A})\). In the light of the results obtained in [11], it is natural to ask the following question.
**Question 4.2**.: ([11, Question 6.7.]) Are the containments \((J(\mathcal{A}))^{(2r-1)}\subseteq(J(\mathcal{A}))^{r}\) always satisfied for any \(r\geq 2\) and any hyperplane arrangement that is inductively free?
Here we answer this question in the **negative** for \(r=2\), by taking the whole singular locus of the arrangement \(\mathcal{A}(31)\) described in the previous section. In fact, we will show even more, namely we will present two inductively free arrangements of \(31\) lines such that for one arrangement the condition (1) holds, but for the second one it does not. In order to do so, let us briefly present a construction of the second arrangement of \(31\) lines. Surprisingly, this is a simplicial line arrangement which is denoted by \(\mathcal{A}(31,3)\) in Grunbaum's catalogue [22]. Here are the details.
Consider ten lines given by:
\[x\pm aez,\;2y\pm az,\;2x\pm bz,\]
where \(e=\sqrt{3}\), \(a\in\{0,1\}\), and \(b\in\{0,1,3\}\). The visualization of these ten lines in the affine part of the projective plane is presented in Figure 1.
The following statement shows that the answer to Question 4.2 is negative.
**Theorem 4.3**.: _The arrangement \(\mathcal{A}(31,3)\) is inductively free and one has \((J(\mathcal{A}(31,3)))^{(3)}\not\subseteq(J(\mathcal{A}(31,3)))^{2}\)._
Proof.: To prove that the configuration \(\mathcal{A}(31,3)\) is inductively free, we will create a table containing the exponents \(\exp(\mathcal{A}^{\prime})\), the equations of the lines that we add to the arrangement \(\mathcal{A}(19,1)\), and then the exponents \(\exp(\mathcal{A}^{\prime\prime})\).
Observe that each row in Table 4 allows us to verify condition (2) in Definition 2.5. Evidence of the noncontainment \((J(\mathcal{A}(31,3)))^{(3)}\not\subseteq(J(\mathcal{A}(31,3)))^{2}\) has been provided in [25], and for this reason we refer to that paper for details. The verification was performed using SINGULAR. The affine part \((z=1)\) of the element \(F\in(J(\mathcal{A}(31,3)))^{(3)}\setminus(J(\mathcal{A}(31,3)))^{2}\) is shown in Figure 3.
The next example illustrates the fact that being inductively free for an arrangement \(\mathcal{A}\) does not directly transfer into lack of containment \((J(\mathcal{A}))^{(3)}\subseteq(J(\mathcal{A}))^{2}\).
**Theorem 4.4**.: _There are two non-isomorphic inductively free simplicial arrangements consisting of \(31\) lines such that they have the same weak combinatorics, and having the property that for one arrangement the containment \((J(\mathcal{A}))^{(3)}\subseteq(J(\mathcal{A}))^{2}\) holds, but not for the other._
In other words, the weak combinatorics of line arrangements does not determine the property of being an example for the non-containment. For the clarity of the exposition, let us recall that for an arrangement \(\mathcal{A}\) of \(d\) lines in the plane, the weak combinatorics is the vector of the form \((d;t_{2},...,t_{d})\) with \(t_{i}\) being the number of \(i\)-fold intersection points in \(\mathcal{A}\).
It is worth noticing that Theorem 4.4 should be compared with a result from [19], where the authors observed a similar phenomenon, but in a different setting, namely in the case of real line arrangements possessing the maximal possible number of triple intersection points.
Let us present our proof of Theorem 4.4.
Proof.: The arrangement \(\mathcal{A}(31,3)\) is constructed from configuration \(\mathcal{A}(19,1)\) by adding \(12\) appropriately chosen lines, according to Table 4. The arrangements \(\mathcal{A}(31,2)\) and \(\mathcal{A}(31,3)\) are not isomorphic simplicial arrangements of lines, and this fact is proved in [22], even though they have the same weak combinatorics, namely:
\[t_{2}=54,\,t_{3}=42,\,t_{4}=21,\,t_{5}=6,\,t_{6}=1,\,t_{8}=3,\]
and \(t_{i}=0\) for others.
Let us pass to the containment question \((J(\mathcal{A}))^{(3)}\subseteq(J(\mathcal{A}))^{2}\). In Theorem 4.3, we explained the non-containment for the singular locus of \(\mathcal{A}(31,3)\), and this check was done using SINGULAR. In the case of \(\mathcal{A}(31,2)\), we can perform exactly the same computations showing that the containment
\[(J(\mathcal{A}(31,2)))^{(3)}\subseteq(J(\mathcal{A}(31,2)))^{2}\]
does hold, which completes the proof.
In the Appendix, you can find a script that can be run in SINGULAR, which allows one to verify the containment \((J(\mathcal{A}(31,2)))^{(3)}\subseteq(J(\mathcal{A}(31,2)))^{2}\).
**Remark 4.5**.: Let us point out here that there is a way to extend the class of inductively free arrangements \(\mathcal{IF}\), namely we can add the following condition ([28, Definition 4.60 (3)]):
if there exists \(H\in\mathcal{A}\) such that \(\mathcal{A}^{\prime\prime}\in\mathcal{IF}\), \(\mathcal{A}\in\mathcal{IF}\), and \(\exp(\mathcal{A}^{\prime\prime})\subset\exp(\mathcal{A})\), then \(\mathcal{A}^{\prime}\in\mathcal{IF}\),
then we come to the class of recursively free hyperplane arrangements.
It is known that we have the following relations (see [9, 31, 34])
\[\text{inductively free }\subsetneq\text{ recursively free }\subsetneq\text{ free}.\]
It turns out that our example of a pair of arrangements \(\mathcal{A}(31,2)\) and \(\mathcal{A}(31,3)\) allows us to answer a question posed by Drabkin and Seceleanu in the negative ([11, Question 6.8]). More precisely, our example shows that the containment \(J(\mathcal{A})^{(3)}\subset J(\mathcal{A})^{2}\) does not hold for recursively free line arrangements \(\mathcal{A}\). It is still an open question whether one can find a configuration of lines which is recursively free, but not inductively free, and which gives a negative answer.
Acknowledgments.I would like to thank Grzegorz Malara and Piotr Pokora for all their help, valuable comments and inspiring discussions.
**Appendix.**
proc PtsIdeal(poly p, poly q, poly r)
{
  matrix M[2][3] = p,q,r, x,y,z;
  ideal @I = minor(M,2);
  return(std(@I));
}
option(redSB);
ring R = (0,e),(x,y,z),dp;
minpoly = e2-3;
/* The list L contains the coordinates of the singular points of the arrangement A(31,2). */
"loading arrangement A(31,2)..."; list L= (-7/2e),-1/2,1,(7/2e),-1/2,1,(7/2e),1/2,1,(-7/2e),1/2,1, (-3/2e),-11/2,1,(2e),5,1,(3/2e),11/2,1,(-2e),-5,1, (3/2e),-11/2,1,(-2e),5,1, (-3/2e),11/2,1,(2e),-5,1, (3/4e),5/4,1,(1/4e),7/4,1,(1/4e),-7/4,1,(3/4e),-5/4,1, (-3/4e),5/4,1,(-1/4e),7/4,1,(-1/4e),-7/4,1,(-3/4e),-5/4,1, (-e),-1/2,1,(-e),1/2,1,(e),-1/2,1,(e),-1/2,1,(e),1/2,1,
(5/2e),-1/2,1,(-3/2e),7/2,1,(5/2e),1/2,1,(-3/2e),-7/2,1, (-5/2e),-1/2,1,(3/2e),7/2,1,(-5/2e),1/2,1,(3/2e),-7/2,1, (-e),-4,1,(-e),4,1,(e),4,1,(e),-4,1, (3/2e),5/2,1,(-2e),-1,1, (-3/2e),-5/2,1,(2e),1,1, (-1/2e),-7/2,1,(1/2e),7/2,1,(1/2e),-7/2,1, (-3/2e),-1/2,1,(e),2,1, (3/2e),-1/2,1,(-e),2,1, (3/2e),1/2,1,(-e),-2,1, (-1/2e),5/2,1,(-1/2e),-5/2,1, (3/2e),3/2,1,(-3/2e),-3/2,1, (3/2e),-3/2,1,(-3/2e),3/2,1, 0,3,1,0,-3,1,(-1/4e),-1/4,1,(1/4e),1/4,1, (1/4e),-1/4,1,0,-1/2,1,0,1/2,1, (-e),-1,1,(e),1,(-e),1,1,(e),-1,1,(1/2e),-1,1,(1/2e),1/2,1, (-1/2e),1/2,1,(1/2e),-1/2,1,0,1,1,0,-1, (3/2e),0,1,(-3/2e),0,1,(3/4e),9/4,1,(3/4e),-9/4,1, (3/4e),-9/4,1,(-3/4e),9/4,1,(3e),0,1,(-3e),0,1, (-3/2e),-9/2,1,(3/2e),9/2,1,(3/2e),-9/2,1, (-1/3e),0,1,(1/3e),0,1,(-1/6e),-1/2,1,(1/6e),1/2,1, (1/6e),-1/2,1,(-1/6e),1/2,1,(2e),0,1,(-2e),0,1, (-e),-3,1,(e),3,1,(-e),3,1, (-1/2e),0,1,(1/2e),0,1,(1/4e),3/4,1,(-1/4e),-3/4,1, (-1/4e),3/4,1,(1/4e),-3/4,1,(e),0,1,(-e),0,1, (-1/2e),-3/2,1,(1/2e),3/2,1,(-1/2e),3/2,1,(1/2e),-3/2,1, 0,0,1,(e),1,0,(-e),1,0,0,1,0,-1,0,0,1,(e),0,-1,(e),0;
"generating ideals I"(3) and I"2..."; ideal I=1; ideal I3=1; for(int i=1;i<=(size(L) div 3);i++){ I=intersect(I,PtsIdeal(L[3*i-2],L[3*i-1],L[3*i])); I3=intersect(I3,(PtsIdeal(L[3*i-2],L[3*i-1],L[3*i]))^3); if((i mod 10) == 0){ string(i)+" points of " +string(size(L) div 3)+" in total used";} I=std(I^2); I3=std(I3);
"number of generators of I"(3) not in I^2: "+string(size(NF(I3,I)));
|
2309.11093 | K-pop Lyric Translation: Dataset, Analysis, and Neural-Modelling | Lyric translation, a field studied for over a century, is now attracting
computational linguistics researchers. We identified two limitations in
previous studies. Firstly, lyric translation studies have predominantly focused
on Western genres and languages, with no previous study centering on K-pop
despite its popularity. Second, the field of lyric translation suffers from a
lack of publicly available datasets; to the best of our knowledge, no such
dataset exists. To broaden the scope of genres and languages in lyric
translation studies, we introduce a novel singable lyric translation dataset,
approximately 89\% of which consists of K-pop song lyrics. This dataset aligns
Korean and English lyrics line-by-line and section-by-section. We leveraged
this dataset to unveil unique characteristics of K-pop lyric translation,
distinguishing it from other extensively studied genres, and to construct a
neural lyric translation model, thereby underscoring the importance of a
dedicated dataset for singable lyric translations. | Haven Kim, Jongmin Jung, Dasaem Jeong, Juhan Nam | 2023-09-20T06:54:55Z | http://arxiv.org/abs/2309.11093v4 | # K-pop Lyric Translation: Dataset, Analysis, and Neural-Modelling
###### Abstract
Lyric translation, a field studied for over a century, is now attracting computational linguistics researchers. We identified two limitations in previous studies. Firstly, lyric translation studies have predominantly focused on Western genres and languages, with no previous study centering on K-pop despite its popularity. Second, the field of lyric translation suffers from a lack of publicly available datasets; to the best of our knowledge, no such dataset exists. To broaden the scope of genres and languages in lyric translation studies, we introduce a novel single lyric translation dataset, approximately 89% of which consists of K-pop song lyrics. This dataset aligns Korean and English lyrics line-by-line and section-by-section. We leveraged this dataset to unveil unique characteristics of K-pop lyric translation, distinguishing it from other extensively studied genres, and to construct a neural lyric translation model, thereby underscoring the importance of a dedicated dataset for singable lyric translations.
Lyric Translation, K-pop Translation, Lyrics Information Processing
## I Introduction
Singable lyric translation is a common practice to bolster the global resonance and appeal of music across diverse genres, from opera and animated musical songs (such as those from Disney) to children's songs and hymns [1]. With the continuous globalization of music, the importance and popularity of singable lyric translation are increasing [2], particularly on social media platforms like YouTube.
Despite its widespread appeal, singable lyric translation is acknowledged as a challenging discipline as it calls for a sufficient understanding of musicology and linguistics [2, 3]. Moreover, previous research emphasizes that lyric translation also requires solid cultural consideration, given the distinct poetic norms of each language [4, 5, 6]. Consequently, due to these inherent intricacies, the study of lyric translation remains largely unexplored. While some studies have endeavored to investigate singable lyric translations, their research has primarily centered on Western languages, predominantly English and German, and Western genres, such as opera and animated musical songs [4, 7, 8, 9, 10]. To our knowledge, there has been no comprehensive analysis of Korean pop (K-pop) translations, despite their substantial popularity on social platforms.
Another challenge in lyric translation studies is the absence of a publicly available dataset. As far as we can tell, no public singable lyric translation dataset currently exists, creating a barrier to fully deciphering the art of lyric translation. Hence, systematic analysis of singable lyric translation has primarily relied on individual case studies [6, 7, 10, 11]. Moreover, while automatic lyric translation has gained popularity, the development of neural lyric translation models has been largely dependent on semi-supervised methods [12, 13] or privately sourced datasets [14].
To address these issues, we have compiled a Korean-English lyric translation dataset, of which approximately 89% comprises lyrics for K-pop songs. This dataset, which contains lyrics for a thousand songs, has been meticulously aligned on a line-by-line and section-by-section basis by humans. The following section of this paper will delve into the construction methodology and details of the dataset. Moving further, we will uncover the unique characteristics of K-pop translation that differentiate it from previously extensively analyzed genres, and demonstrate the application of our dataset to the neural lyric translation task, revealing the necessity of singable
Fig. 1: An illustration of K-pop translation, featuring “ID Peace B” by BoA, with English singable lyrics, Korean singable lyrics, and their corresponding English translations.
lyrics dataset for the enhancement of model performance. We will conclude the paper by highlighting the insights acquired through our experiments using our dataset.
## II Dataset
In order to facilitate systematic analysis of lyric translation and advance the development of neural lyric translation models, we introduce a novel singable lyric translation dataset, which comprises pairs of Korean-English lyrics for a thousand songs: 886 K-pop songs, 62 animated musical songs, 34 theatre songs, etc. This dataset incorporates essential metadata such as the name of artist and track, and genre. In addition, it presents meticulous line-by-line and section-by-section alignments of English and Korean lyrics. As we will later show, these alignments, although requiring substantial manual effort as automation is not possible, play a pivotal role in the processes of analysis, neural model development, and evaluation. Although the lyrics cannot be directly downloaded due to copyright issues, they can be accessed via public APIs or URLs. Additionally, we provide an alignment code to aid in both line-wise and section-wise analysis. Our dataset primarily concentrates on K-pop, which constitutes about 89% of the total data. Nevertheless, we have intentionally included lyrics from other well-studied genres, like animated musical songs (e.g., songs for Disney animation) or theatre songs, to enable comparative analysis across genres. We believe that this dataset offers value as it reveals insights into K-pop lyric translation, a topic not sufficiently explored in prior research, and the translation of lyrics between Korean and English, languages with substantial grammatical differences. Despite the limited scope of genre and language, we aim to provide insights into the importance of a singable lyric dataset for a diverse array of academic pursuits. A snippet of the sample data is depicted in Table I. A more comprehensive view of the sample data is shown in Appendix -A and the statistical details of the dataset are presented in Appendix -B. This dataset is available for download via the provided link 1.
Footnote 1: [https://github.com/havenpersona/k-pop](https://github.com/havenpersona/k-pop)
### _Source Corpora Collection_
Our dataset includes both pairs of official lyrics for the same songs, that have been officially released in both Korean and English (e.g., a pair of Korean and English versions of "Cry for Me" by Twice) and pairs of official Korean lyrics and high-quality unofficial English singable translations (e.g., the official Korean lyrics of "Attention" by New Jeans and their singable translation by YouTube Emily Dimes). Including unofficial translations, which take up 65.2% of the entire dataset, significantly enhances the size of our dataset.
### _Human Alignment_
Owing to the subjective nature of lyric structure--with no universal agreement on what to call a line and what to call a section--the internet-sourced lyrics for the same song in English and Korean are not aligned on a line-by-line or section-by-section basis. Despite this, the neural model development, evaluation, and analysis of singable lyric translations require these alignments to identify the line-wise and section-wise correspondence and relationship. We observed, however, that automatically generating these alignments is currently unattainable. Some might propose syllable counting as a potential alignment method. However, as it is common to modify melodies to fit varying syllable counts, this method is not practical [4, 15]. Furthermore, the inclusion of non-lexical vocables, like "oohs" and "aahs," in one language's lyrics, and their absence in the other, creates additional inconsistency and challenges in auto-alignment. Consequently, we manually aligned lyrics line-by-line and section-by-section to ensure that lyrics on the same line share the same melodies and sections are divided by the same criteria. To demonstrate this alignment task, we provide Figure 2, which uses "Beautiful" by Amber as an example. This figure illustrates three main points: 1) the different ways sections and lines can be separated, 2) how the same line can consist of a varying number of syllables, and 3) how the same nonlexical sound can be represented in different ways (e.g., "yeah" in English lyrics and "yeah yeah" in Korean lyrics), which makes the auto-alignment unfeasible.
## III Unpacking K-pop Translation
In this section, we aim to quantitatively compare the attributes of K-pop translations with those of other extensively researched genres and identify the unique features of K-pop that set its translation process apart. We are basing our comparison on official translations of 234 K-pop songs, 62 animated musical songs, and 34 musical theatre songs in our dataset.
### _Semantic Pattern_
A unique characteristic of K-pop lies in its incorporation of both Korean and English within song lyrics. Upon analyzing the K-pop songs in our dataset, we found 30.2% of the lines are entirely in English, and a blend of English and Korean
| Sec. # | Line # | English | Korean (English translation) |
| --- | --- | --- | --- |
| 1 | 1 | You don't know me | You don't know me |
| 1 | 2 | L-O-V-E or hatred | L-O-V-E or hatred |
| 1 | 3 | Hi you with a smile, not goodbye | (Instead of a breakup) |
| 1 | 4 | All the while, I'll be sure to leave you wondering | (I only have an incorrect smile) (I'll be sure to leave you wondering) |
| 2 | 5 | Oh, on the outside I'll be all calm | (Pretending to not know anything) |
| 2 | 6 | Baby no more real love | Baby, no more real love |
| 2 | 7 | Imma pretend we're going strong | |

TABLE I: A snippet of the sample data, aligned line-by-line and section-by-section; English translations of the Korean lyrics are shown in parentheses.
is observed in 20.7% of lines. In K-pop song translations into English, English lyrics often remain untranslated, as illustrated in lines 1, 2, and 6 of the sample data provided in Table I. As a result, a superficial comparison between K-pop and other genres could lead to the misconception that K-pop has a high line-wise semantic similarity between the original and translated lyrics. However, we observed that this differs from real-world K-pop translation practices, which tend to focus on section-by-section relationships. Using sample data from Table I as an example, the English lyrics in line 5, "Oh, on the outside I'll be all calm," don't directly align semantically with the Korean lyrics, "Pretending to not know anything." However, when viewed at the section level, the English and Korean lyrics of section 2 show semantic relatedness by sharing a love theme and a playful mood.
For our analysis, we numerically assessed semantic textual similarity (_sts_) between English and Korean lyrics. In order to do this, we followed a method previously suggested [16]: calculating the cosine similarity between the embeddings of the original and translated lyrics, generated by a pre-trained sentence embedding model [17]. 2 Because this model was trained using English language data, Korean lyrics were automatically translated into English using Google Translator 3 before getting their embeddings. For instance, the semantic textual similarity between \(x_{i}\) = "
K-pop lyrics exists on a section-by-section basis rather than on a line-by-line basis.
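A minimal sketch of the line-level _sts_ computation described above is given below; the specific encoder checkpoint is only a stand-in for the sentence embedding model of [17], and the Korean line is assumed to have already been machine-translated into English.

```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")   # illustrative English encoder

def sts(english_line, korean_line_translated):
    """Cosine similarity between the embeddings of an English lyric line and the
    English machine translation of its Korean counterpart."""
    emb = model.encode([english_line, korean_line_translated], convert_to_tensor=True)
    return util.cos_sim(emb[0], emb[1]).item()

print(sts("Oh, on the outside I'll be all calm",
          "Pretending to not know anything"))
```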
### _Phoneme Repetition Pattern_
The highly repetitive nature of K-pop is reflected not only in melodies but also in the corresponding lyrics, as they are composed to complement music [19]. To quantify the degree of phoneme repetition, we leveraged the concept of _phoneme distinct_-\(2\) (\(pho\)), defined as the ratio of distinct phoneme bigrams to the total number of phoneme bigrams [16]. For example, consider a section with two lines \(X_{1}\) = {"On the winds can change", "I'll stray off the path I'm walking"}. First, the section is decomposed into phonemes and an <eos> token is added to each line: "AA", "N",...,"CH", "EY", "N", "JH", "<eos>", "AY",..., "W", "AO", "K", "IH", "NG", and "<eos>". Next, the decomposed components are grouped into bi-grams: "AAN",..., "CHEY", "EYN", "NJH", "JH<eos>",..., "WAO", "AOK", "KIH", "IHNG", "NG<eos>". Finally, the _pho_ of the section \(X_{1}\) (\(pho(X_{1})\)) is obtained by dividing the number of unique bigrams by the total number of bigrams. Given that the number of unique bigrams decreases with the repetition of phonemes, a low _pho_ value implies a higher degree of repetition, and a higher ratio suggests the opposite.
To numerically represent the degree of phoneme repetition (\(Pho_{deg}\)) for a single song, we averaged out the _pho_ value across all sections. To capture the variability of phoneme repetition degree (\(Pho_{var}\)) for a song, we computed the standard deviation of the _pho_ values for each section within the song. Table III presents the average degree and variability (\(Pho_{deg}\) and \(Pho_{var}\)) of phoneme repetition for K-pop, animated musical, and theatre songs in our dataset. K-pop displays the lowest average value, which suggests a high level of repetition inherent to the genre. Additionally, it is noteworthy that K-pop has the highest \(Pho_{var}\) value. This denotes significant variability, implying that a typical K-pop song features a mix of highly repetitive sections and others that are less so. As suggested by previous studies, the extent of phoneme repetition in original lyrics is mirrored in translated versions [4, 20]. This is evidenced by the similar \(Pho_{deg}\) and \(Pho_{var}\) values between the English and Korean versions in each genres.
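A short Python sketch of _pho_, \(Pho_{deg}\), and \(Pho_{var}\) following the definitions above is shown below; the hand-written phoneme strings stand in for the output of a grapheme-to-phoneme tool, and the function names are ours.

```python
import statistics

def pho(section_phonemes):
    """Phoneme distinct-2: distinct phoneme bigrams over all bigrams, with an
    <eos> token appended to every line of the section."""
    bigrams = []
    for line in section_phonemes:
        seq = list(line) + ["<eos>"]
        bigrams += [seq[i] + seq[i + 1] for i in range(len(seq) - 1)]
    return len(set(bigrams)) / len(bigrams)

def pho_deg_var(song_sections):
    """Song-level degree (mean) and variability (standard deviation) of pho."""
    vals = [pho(s) for s in song_sections]
    return statistics.mean(vals), statistics.pstdev(vals)

section = [["AA", "N", "DH", "AH", "W", "IH", "N", "D", "Z", "K", "AH", "N", "CH", "EY", "N", "JH"],
           ["AY", "L", "S", "T", "R", "EY", "AO", "F", "DH", "AH", "P", "AE", "TH",
            "AY", "M", "W", "AO", "K", "IH", "NG"]]
print(round(pho(section), 2))
```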
## IV Neural K-pop Translation
One of the research opportunities that our dataset provides is the development of a model that can automatically generate singable translations of lyrics, using only textual data. Although past studies have relied on semi-supervised approaches with human-translated non-singable lyrics due to the unavailability of a public singable lyric translation dataset [12, 13], we present an example of building a neural network model that automatically translates Korean pop lyrics into English, using our dataset. To underscore the potential role of a singable lyric translation dataset, we will further contrast the outcomes of a fully semi-supervised approach versus a fine-tuning approach. The usage examples are provided with two different approaches, line-wise and section-wise. The results of these approaches will be compared with those of a pre-trained English to Koren translation model which is not specifically designed for lyric translation but shares the same architecture as our models.
### _Training_
Due to the scarcity of both non-singable and singable lyric translation datasets, a previous study initially trained an English-to-Mandarin lyric translation model with a general Mandarin-English machine translation dataset [12]. Subsequently, the model was trained using non-singable human-translated lyrics, treating the original lyrics as the target and their translations as the source. In parallel to this previous approach, we began training a transformer-based model [21], which adopts the architecture of the Marian MT model [22], using a general translation dataset, tokenizing source and target lyrics with a pre-trained Korean-English tokenizer [23]. This was followed by the use of non-singable machine-translated lyrics (instead of non-singable human-translated lyrics, owing to the scarcity of aligned data pairs of English lyrics and human-translated non-singable Korean lyrics). Finally, we fine-tuned the model with our singable lyric translation dataset. Unlike the previous methodology that used melody information along with non-singable lyrics as input, we only used textual data. The training details are explained in Appendix -C.
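A minimal fine-tuning sketch in the Marian architecture is given below; the public Helsinki-NLP checkpoint, the added syllable-token vocabulary, and the toy sentence pair are illustrative assumptions and do not reproduce our actual training setup.

```python
import torch
from transformers import MarianMTModel, MarianTokenizer

name = "Helsinki-NLP/opus-mt-ko-en"                         # stand-in Marian checkpoint
tokenizer = MarianTokenizer.from_pretrained(name)
model = MarianMTModel.from_pretrained(name)
tokenizer.add_tokens([f"<SYL{n}>" for n in range(1, 31)])   # syllable tokens
model.resize_token_embeddings(len(tokenizer))
optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)

src = ["<SYL7> 나 혼자 걷는 이 길"]                          # Korean source line (toy)
tgt = ["<SYL7> Walking down this road alone"]               # singable English target (toy)

batch = tokenizer(src, text_target=tgt, return_tensors="pt", padding=True)
loss = model(**batch).loss                                  # one fine-tuning step
loss.backward()
optimizer.step()
```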
Fig. 3: Density plots showing the distribution of line-by-line semantic textual similarity (\(sts\)) between the English and Korean lyrics for K-pop songs, considering both instances where untranslated English lyrics are included and where they are excluded, animated musical songs, and theatre songs.
| Genre | \(Pho_{deg}\) (en) | \(Pho_{deg}\) (kr) | \(Pho_{var}\) (en) | \(Pho_{var}\) (kr) |
| --- | --- | --- | --- | --- |
| K-pop | 0.69 | 0.67 | 0.15 | 0.15 |
| Animation | 0.79 | 0.79 | 0.10 | 0.10 |
| Theatre | 0.79 | 0.77 | 0.09 | 0.09 |

TABLE III: The average values of \(Pho_{deg}\) and \(Pho_{var}\). Each signifies the degree and variability of phoneme repetition, respectively.
### _Data Preprocessing_
In order to match the syllable count, the previous study built an English-to-Mandarin lyric translation model that integrated syllable tokens, which represent the number of syllables, at the beginning of each source and target lyric line [12]. Because the efficacy of these tokens has been only proven in Mandarin text generation, where one character consistently corresponds to one syllable, we compare models with syllable tokens against those without them in order to study the impact of syllable tokens on English text generation, where the count of characters does not necessarily reflect the number of syllables.
We modified data in two different ways: line-wise and section-wise, as seen in Figure 4. Below are the details of data modification methods for models that incorporate syllable tokens (<SYL>). The methods to construct data for building models without <SYL> remain the same except for the omission of the syllable tokens, and that sections are split using <SEP> tokens instead of <SYL> tokens in the section-wise approach.
**General** We obtained 500,000 pairs of Korean sentences and their corresponding English translations [24]. For line-wise training, we simply incorporated syllable tokens <SYLs>, where the value of \(s\) represents the total syllable count of each target sentence. As an example, "annyeonghaseyo" and its English correspondence "Hello" would be presented as "<SYL2> annyeonghaseyo" and "<SYL2> Hello" because "Hello" consists of two syllables. Note that the \(s\) value does not have any relation with syllable counts for Korean segments. For section-wise training, we randomly divided both Korean and English sentences into \(n\) segments. Given that the syllable counts for each English segment are \(\{s_{1},...,s_{n}\}\), we inserted tokens \(\{\text{<SYL}s_{1}\text{>},...,\text{<SYL}s_{n}\text{>}\}\) prior to each segment, both in English and Korean.
**Non-singable Lyrics** We sourced 10,000 English lyrics randomly from the internet, which were then automatically translated into Korean using Google Translator. Here, the original English lyrics acted as the target, while the machine-translated Korean lyrics were used as the source. For line-wise training, syllable tokens representing the total syllable count of each line were inserted. For section-wise training, we determined the syllable counts of each target line \(\{s_{1},...,s_{n}\}\) within a section. Consequently, tokens \(\{\text{<SYL}s_{1}\text{>},...,\text{<SYL}s_{n}\text{>}\}\) were placed preceding each respective source and target line. The order of lines within a section was randomly shuffled, so that the model can learn to translate source sentences in varying order.
**Singable Lyrics** We employed our singable lyric translation dataset to fine-tune our models. For both line-wise and section-wise training, we inserted syllable tokens before each line. In the line-wise approach, each line was treated as individual data, while in the section-wise approach, the entire section was considered one data unit and the order of lines within a section was randomly shuffled.
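To make the preprocessing concrete, the sketch below illustrates how line-wise and section-wise training pairs with <SYL> tokens can be assembled for the lyric stages. It is a minimal illustration rather than the exact pipeline: `count_syllables` is a naive vowel-group stand-in for a real syllable counter, and the helper names are illustrative.

```python
import random
import re
from typing import List, Tuple

def count_syllables(text: str) -> int:
    # Naive stand-in: counts vowel groups; a real pipeline would use a proper counter.
    return max(1, len(re.findall(r"[aeiouy]+", text.lower())))

def line_wise_example(src_line: str, tgt_line: str) -> Tuple[str, str]:
    # One (source, target) pair per line; the <SYL> value is the TARGET syllable count.
    s = count_syllables(tgt_line)
    return f"<SYL{s}> {src_line}", f"<SYL{s}> {tgt_line}"

def section_wise_example(src_lines: List[str], tgt_lines: List[str],
                         shuffle: bool = True) -> Tuple[str, str]:
    # Whole section as one example; lines are optionally shuffled so the model
    # learns to translate them in varying order.
    pairs = list(zip(src_lines, tgt_lines))
    if shuffle:
        random.shuffle(pairs)
    src_out, tgt_out = [], []
    for src, tgt in pairs:
        s = count_syllables(tgt)
        src_out.append(f"<SYL{s}> {src}")
        tgt_out.append(f"<SYL{s}> {tgt}")
    return " ".join(src_out), " ".join(tgt_out)
```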
### _Evaluation Metrics_
Researchers have suggested that traditional methods for analyzing conventional text generation are not appropriate
Fig. 4: Data utilization order during the training phase
for lyrics, given their unique linguistic characteristics [25]. As a result, a prior study on automatically assessing lyric translations concentrated on comparing lyrical features of the original lyrics to those of the translated lyrics without using a reference [16], rather than using traditional metrics for machine translation evaluation. In alignment with these previously suggested methods, we compared the source lyrics to those automatically translated by our models, focusing on syllable count, semantics, and phoneme repetition.
To numerically assess the generated lyrics' syllable count, which is one of the most important factors that determine the singability, we used two metrics: the error rate and the syllable count distance (SCD). The error rate is defined as the rate at which the model generates lines with incorrect syllable counts. On the other hand, the SCD is defined in the following way [16]. Suppose that we have a pair of original lyrics \(\mathbf{X}\) and translated lyrics \(\tilde{\mathbf{X}}\), each consisting of \(n\) lines, where syllable counts for each line are represented as \(\{s_{1},...,s_{n}\}\) and \(\{\tilde{s_{1}},...,\tilde{s_{n}}\}\). For example, if "I'll stray off the path I'm walking" is the second line of the English lyrics, and "haneul-eul-pihae-sumji" is its equivalent line in the Korean lyrics, then \(s_{2}\) equals 8 and \(\tilde{s_{2}}\) equals 7. The SCD is calculated as shown below.
\[SCD(\mathbf{X},\tilde{\mathbf{X}})=\frac{1}{2n}\sum_{i=1}^{n}(\frac{|s_{i}- \tilde{s_{i}}|}{s_{i}}+\frac{|s_{i}-\tilde{s_{i}}|}{\tilde{s_{i}}}) \tag{3}\]
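Equation (3) translates directly into code. The following is a small sketch of the computation; the function and argument names are illustrative:

```python
from typing import Sequence

def syllable_count_distance(s: Sequence[int], s_tilde: Sequence[int]) -> float:
    # SCD of Equation (3): symmetric relative deviation of per-line syllable
    # counts, averaged over the n lines of a section or song.
    assert len(s) == len(s_tilde) and len(s) > 0
    n = len(s)
    return sum(abs(a - b) / a + abs(a - b) / b for a, b in zip(s, s_tilde)) / (2 * n)

# The example above: an 8-syllable line paired with a 7-syllable line
# contributes (|8-7|/8 + |8-7|/7) / 2, i.e. about 0.134.
print(syllable_count_distance([8], [7]))
```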
As for the semantics, we evaluated section-wise semantic similarity by using Equation 2 (\(Sem_{sec}\)) and semantic coherence between lines by using the BERT-based next sentence prediction (NSP) model [26]. While the NSP task was originally proposed to predict whether two given sentences are logically connected, we used the model to evaluate whether two consecutive lines are generated in a coherent manner. To achieve this, we fine-tuned a pre-trained NSP model, bert-base-uncased, using English lyrics from 7,103 songs that are not included in our training and evaluation data. Given all pairs of consecutively generated (translated) lines, we averaged out the predicted probability of whether two lines are consecutive or not, which we will call the NSP score in this paper.
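A minimal sketch of this NSP scoring, using the Hugging Face `transformers` implementation; the public `bert-base-uncased` checkpoint below stands in for the lyric-fine-tuned weights:

```python
import torch
from transformers import BertForNextSentencePrediction, BertTokenizer

# Public base checkpoint as a stand-in for the lyric-fine-tuned NSP model described above.
tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForNextSentencePrediction.from_pretrained("bert-base-uncased")
model.eval()

def nsp_score(lines):
    # Average probability that each consecutive pair of generated lines is a
    # continuation (label 0 of the NSP head).
    probs = []
    with torch.no_grad():
        for first, second in zip(lines[:-1], lines[1:]):
            inputs = tokenizer(first, second, return_tensors="pt", truncation=True)
            logits = model(**inputs).logits
            probs.append(torch.softmax(logits, dim=-1)[0, 0].item())
    return sum(probs) / len(probs)
```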
Finally, we quantitatively assessed the degree and variability of phoneme repetition by obtaining the average value and standard deviation of the _pho_ across all sections (\(Pho_{deg}\) and \(Pho_{var}\)) for each song as we did in the previous section.
### _Quantitative Results_
Given that none of our evaluation methods require a reference, we used external data: lyrics from 2,038 K-pop songs that are not accompanied by corresponding English translations. We ensured that these lyrics did not duplicate any songs present in our training data. When drawing inferences from models with <SYLs> tokens, the value of \(s\) during the inference phase was determined based on the syllable count of each line in the source lyrics, contrary to the training phase, where the \(s\) value was introduced based on the syllable count of the target lyrics. To generate inferences for a single section, composed of \(n\) lines, the line-wise approach models inferred on a line-by-line basis, thereby requiring \(n\) inference iterations, while the generation of section-wise approach models involved one iteration. The baseline model made inferences only on a line-by-line basis, as it does not have the ability to split a translated section into lines. To draw inferences, we used the beam search method, one of the most popular search methods in machine translation tasks [27], with four beams.
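For reference, a section-wise inference call with four beams can be sketched as follows; the checkpoint path is hypothetical and stands for one of the fine-tuned Marian-architecture models, and the <SYL> values are taken from the source lines as described above:

```python
from transformers import MarianMTModel, MarianTokenizer

CHECKPOINT = "path/to/finetuned-lyric-translation-model"  # hypothetical checkpoint path
tokenizer = MarianTokenizer.from_pretrained(CHECKPOINT)
model = MarianMTModel.from_pretrained(CHECKPOINT)

def translate_section(source_lines, source_syllable_counts):
    # Section-wise inference: the whole section is one input sequence and is
    # generated in a single call with beam search (four beams).
    text = " ".join(f"<SYL{s}> {line}"
                    for line, s in zip(source_lines, source_syllable_counts))
    batch = tokenizer([text], return_tensors="pt", truncation=True)
    generated = model.generate(**batch, num_beams=4, max_length=512)
    return tokenizer.batch_decode(generated, skip_special_tokens=True)[0]
```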
**Syllable Count and Semantics** When the models were fine-tuned with our dataset, there was a notable reduction in the average SCD and error rate (see Table IV). For instance, after fine-tuning the section-wise model using our dataset without <SYL>, the SCD dropped from 0.45 to 0.23, and the error rate decreased from 0.78 to 0.73. This suggests that the model could adapt to match syllable counts when learning from pairs of singable lyrics, even without explicit training for syllable count matching. The ability to match syllable counts was further enhanced when models were trained with <SYL> tokens, as evidenced by the significantly low average SCD and error rate values compared to those of the models without them and of the baseline model. This underscores the effectiveness of <SYL> tokens in matching syllable counts in English textual data.
These improvements were achieved at the expense of semantic similarity, as suggested by the decline in the \(Sem_{sec}\) score from the baseline model to the semi-supervised models and a further decline from the semi-supervised to the fine-tuned models. We interpret this as the models having learned to strike a balance between semantics and singability. This interpretation aligns with real-world lyric translation practices where semantics
\begin{table}
\begin{tabular}{c c c c c c} \hline \hline
\multirow{2}{*}{**Input/Output Form**} & \multirow{2}{*}{**Training Approach**} & \multicolumn{2}{c}{**Syllable Count**} & \multicolumn{2}{c}{**Semantics**} \\ \cline{3-6}
 & & SCD & Error Rate & \(Sem_{sec}\) & NSP \\ \hline
 & Baseline [22] & 0.34 (0.10) & 0.77 & 0.71 (0.08) & 0.47 (0.08) \\ \hline
\multirow{4}{*}{Line-wise} & Semi-supervised & 0.29 (0.31) & 0.75 & 0.64 & 0.51 \\ \cline{2-6}
 & Semi-supervised (+\(<\)SYL>) & 0.08 (0.20) & 0.48 & 0.68 (0.09) & 0.50 (0.07) \\ \cline{2-6}
 & Fine-tuned & 0.16 (0.16) & 0.69 & 0.66 (0.10) & 0.49 (0.07) \\ \cline{2-6}
 & Fine-tuned (+\(<\)SYL>) & 0.08 (0.38) & 0.42 & 0.62 (0.12) & 0.51 (0.07) \\ \hline
\multirow{4}{*}{Section-wise} & Semi-supervised & 0.45 (0.56) & 0.78 & 0.64 (0.12) & 0.58 (0.07) \\ \cline{2-6}
 & Semi-supervised (+\(<\)SYL>) & 0.18 (0.12) & 0.71 & 0.68 (0.09) & 0.54 (0.07) \\ \cline{2-6}
 & Fine-tuned & 0.23 (0.28) & 0.73 & 0.59 (0.12) & 0.57 (0.08) \\ \cline{2-6}
 & Fine-tuned (+\(<\)SYL>) & 0.09 (0.11) & 0.56 & 0.60 (0.11) & 0.54 (0.07) \\ \hline
\multicolumn{2}{c}{Dataset} & 0.10 (0.04) & -- & 0.59 (0.15) & 0.57 (0.07) \\ \hline \hline
\end{tabular}
\end{table} TABLE IV: Comparative evaluation of syllable count and semantics, presenting the average value and associated standard deviation for each metric.
are often sacrificed to enhance singability. This is evidenced by the statistical results of the fine-tuned models, especially those trained with <SYL>, which closely mirror those of K-pop songs in our dataset. Therefore, we conclude that the models most accurately emulate real-world translation patterns when trained with our dataset using <SYL>. (Note that while they achieved a degree of SCD comparable to that of the dataset, they still struggled to maintain the exact syllable counts, as indicated by significant error rate values. This is due to the nature of English, distinct from that of Mandarin, where identical tokens don't consistently equate to the same number of syllables and the same phrases aren't always perceived to have an equivalent syllable count.)
While achieving decent performance in terms of syllable count and semantic similarity, the line-wise models displayed low semantic coherence, as indicated by a smaller NSP score than the section-wise models. This is due to the inherent characteristic of the line-wise models, which must make inferences without considering preceding or subsequent lines. It is also noteworthy that the baseline model showed the lowest NSP score, presumably because the model failed to coherently capture the lyrical nuances of each line.
**Phonetic Pattern** The baseline model struggles to replicate the repetitive phonetic pattern, an important lyrical characteristic of K-pop, as suggested by a notably higher \(Pho_{deg}\) value compared to the source lyrics (see Table V). A similar trend is observed in line-wise models without <SYL> tokens. On the other hand, the section-wise models without <SYL> also displayed significant disparities from the source lyrics in \(Pho_{deg}\) value, but in a different manner: these models exhibited excessive repetition with a markedly high degree of variability because they occasionally generated the identical phrase too frequently within a section.
When trained with <SYL> tokens, the models adeptly emulated the repetitive nuances of K-pop lyrics with stability, as they know when to continue and when to halt generation. The ability of these models to mirror the repetitive patterns was further enhanced when fine-tuned with our dataset by learning from real-world examples. As a result, they produced English lyrics with \(Pho_{deg}\) values aligning closely with the source Korean lyrics in both line-wise and section-wise approaches. Based on our observation that the \(Pho_{deg}\) and \(Pho_{var}\) values of the original lyrics are reflected in the translated lyrics, we can infer that the model learns the unique phonetic pattern of K-pop when fine-tuned with our dataset using <SYL> tokens.
### _Qualitative Results_
We present inference examples drawn by the baseline model as well as semi-supervised and fine-tuned models trained with <SYL> for a section from the K-pop song "In & Out" by Red Velvet in Figure 5. For simplicity, we will denote line-wise and section-wise semi-supervised models as SS-Line and SS-Sect, and line-wise and section-wise fine-tuned models as FT-Line and FT-Sect, respectively.
**Syllable Counts and Semantics** The baseline model's generated lyrics showed a significant difference in syllable counts with the original lyrics, failing to maintain singability. For instance, the fifth bar contains six notes and therefore, the original lyrics corresponding to this part have six syllables (Jo-a-ha-neun-geo-ya). However, the baseline model produced lyrics with only three syllables, making them unsingable, even with minor melody adjustments. On the other hand, the translations of semi-supervised and fine-tuned models generated lyrics with syllable counts comparable to those of the source lyrics. 4
Footnote 4: Refer to [https://github.com/havenpersona/k-pop](https://github.com/havenpersona/k-pop) for the vocalized inference examples.
The original lyrics in the second bar, whose meaning is "I want to make you mad", were translated by the baseline and semi-supervised section-wise model (SS-Sect) into "I wanna spins you off", a direct translation of the original meaning. Despite successfully reflecting the original meaning, it contains 6 syllables, while the original lyrics consist of 7 syllables. Conversely, the FT-Sect model successfully generated a 7-syllable line, "I think I'm crazy for you", which is not an accurate translation of the corresponding line. However, given that the original lyrics express deep affection for someone, we interpret that the FT-Sect model effectively captured the overall mood and topic of the song, yielding a decent (though not higher than SS-Sect) section-wise semantic similarity (\(Sem_{sec}\)). Moreover, the output from the FT-Sect model employs more lyrical expressions typically found in English lyrics about love, whereas the SS-Sect model, while achieving literal accuracy, fails to express love naturally in English. A similar tendency is observed in the line-wise models. The semi-supervised line-wise model (SS-Line)'s generated lyrics, "I wanna make you mad", failed to have accurate syllable counts while achieving semantic accuracy. Conversely, the lyrics translated by the fine-tuned line-wise model (FT-Line), "I want you to stay with me", maintained the number of syllables but not semantic accuracy, while successfully capturing the
\begin{table}
\begin{tabular}{c c c c} \hline
**Input/Output Form** & **Training Approach** & \(Pho_{deg}\) & \(Pho_{var}\) \\ \hline & Baseline [22] & 0.70 & 0.14 \\ \hline \multirow{4}{*}{Line-wise} & Semi-supervised & 0.70 & 0.14 \\ \cline{2-4} & Semi-supervised (+\(<\)SYL>) & 0.68 & 0.13 \\ \cline{2-4} & Fine-tuned & 0.66 & 0.13 \\ \cline{2-4} & Fine-tuned (+\(<\)SYL>) & 0.64 & 0.13 \\ \hline \multirow{4}{*}{Section-wise} & Semi-supervised & 0.61 & 0.18 \\ \cline{2-4} & Semi-supervised (+\(<\)SYL>) & 0.66 & 0.14 \\ \cline{2-4} & Fine-tuned & 0.53 & 0.18 \\ \cline{2-4} & Fine-tuned (+\(<\)SYL>) & 0.63 & 0.13 \\ \hline \multicolumn{2}{c}{Source} & 0.64 & 0.13 \\ \hline \end{tabular}
\end{table} TABLE V: Comparative evaluation of phonetic patterns.
topic, mood, and lyrical expression. These examples further suggest that the models, when fine-tuned with a singable lyrics translation dataset, have learned to prioritize singability over semantic accuracy, reflecting real-world lyrics translation practices.
This example further illustrates the semantic incoherence of line-wise models, particularly the semi-supervised model. For example, the consecutive lines, "He was like this I was " and "I don't know my mind, yeah", lack not only sensibility but also a logical connection. Conversely, the FT-Sect model consistently focuses on expressing love for someone, without performing a direct word-for-word translation. This results in lower semantic accuracy but a reasonably good level of semantic coherence.
**Phonetic Pattern** The original (source) lyrics and melody lines in Figure 5 feature highly repetitive characteristics of K-pop. Similarly, both semi-supervised and fine-tuned models show the repetitive phonetic pattern. However, the ability of the line-wise model to create a sense of repetition is naturally limited to line-wise repetition, as seen in phrases like "I miss you, I miss you" in bar 6. Conversely, the section-wise model can generate a sense of repetition on a section-wise basis, as demonstrated in phrases like "Oh baby I want you" repeated in bars 1, 5, and 6. This capability results in the FT-sect model having a lower \(Ph_{deg}\) value than the FT-line model, as it captures the repeating patterns across the section.
## V Conclusions
In this paper, we introduced a novel singable lyrics dataset that precisely aligns Korean and English lyrics for a thousand songs on a line-by-line and section-by-section basis. As we demonstrated, this alignment is pivotal for analyzing and evaluating lyric translations. Unlike previous translation studies that primarily focused on Western languages and genres, our study targets Korean pop. We utilized this dataset to analyze the unique characteristics of K-pop translations in terms of semantic and phonetic patterns. Additionally, we first suggested that a singable lyrics dataset can be used to build a neural model that translates lyrics into singable forms, even without musical information given, as the model draws inferences from lyrics that are already singable. We compared two approaches to construct a neural lyric translation model, line-wise and section-wise, along with observing the effectiveness of <SYL> for these approaches, offering insights into the development of neural models capable of translating text akin to lyrics with structured line-by-line and section-by-section characteristics, such as poetry. We hope that this paper will expand the boundaries of singable lyric translation studies and offer valuable insights into this field.
Fig. 5: Automatic translations of “In & Out” by Red Velvet, generated by the baseline model [22] as well as the semi-supervised and fine-tuned models, shown together with the original lyrics, their pronunciation, and meanings for comparison. When the syllable count of the generated lyrics exceeds the target count, two or more syllables are placed under one note, which is considered easy for a music expert to arrange. When the syllable count of the generated lyrics is less than the target count, one or more notes, considered “musically removable”, are not accompanied by lyrics in the score.
2309.07829 | Minimality of the $\mathcal D$-groupoid of symmetries of a projective
structure | In this article we study Kummer's $\mathcal D$-groupoid, which is the
groupoid of symmetries of a meromorphic projective structure. We give necessary
and sufficient conditions for its minimality, in the sense of not having
infinite sub-$\mathcal D$-groupoids. The condition that we find turns out to be
equivalent to the strong minimality of the non-linear Schwarzian equation and
the non-integrability by means of Liouvillian functions of the linear
Schwarzian equation. | Alejandro Arenas Tirado, David Blázquez-Sanz, Guy Casale | 2023-09-14T16:19:20Z | http://arxiv.org/abs/2309.07829v1 | # Minimality of the \(\mathcal{D}\)-groupoid of symmetries of a projective structure
###### Abstract.
In this article we study Kummer's \(\mathcal{D}\)-groupoid, which is the groupoid of symmetries of a meromorphic projective structure. We give necessary and sufficient conditions for its minimality, in the sense of not having infinite sub-\(\mathcal{D}\)-groupoids. The condition that we find turns out to be equivalent to the strong minimality of the non-linear Schwarzian equation and the non-integrability by means of Liouvillian functions of the linear Schwarzian equation.
**keywords**: \(\mathcal{D}\)-groupoid, Schwarzian equation, Schwarzian derivative, Strong minimality, Symmetric Power, Lie groupoid.
###### Contents
* 1 Introduction
* 1.1 The Schwarzian equation as a \(\mathrm{PSL}_{2}\)-connection
* 1.2 Some relevant facts of Picard-Vessiot theory
* 2 \(\mathcal{D}\)-Groupoids
* 2.1 Jets of biholomorphisms
* 2.2 Zariski Topology of \(\mathrm{Aut}(X)\)
* 2.3 Kolchin Topology and definition of \(\mathcal{D}\)-groupoid
* 2.4 \(\mathcal{D}\)-algebra of a \(\mathcal{D}\)-groupoid
* 3 Kummer's groupoid
* 3.1 Linearization of the Kummer's equation
* 3.2 Symmetric Power
* 3.3 Strong Minimality
## 1. Introduction
The _Schwarzian derivative_ of a meromorphic function \(f\in\mathcal{K}\) (the field of meromorphic functions over an open subset of a Riemann surface) with respect to a coordinate \(z\) is defined by the expression:
\[S_{z}(f)=\left(\frac{f^{\prime\prime}}{f^{\prime}}\right)^{\prime}-\frac{1}{2} \left(\frac{f^{\prime\prime}}{f^{\prime}}\right)^{2}\ \ \text{where}\ \ ^{\prime}=\frac{d}{dz}\]
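A quick symbolic sanity check of this definition can be done with sympy (the helper below is illustrative, not part of the argument). It verifies two classical facts used repeatedly in what follows: the Schwarzian derivative of a Möbius transformation vanishes, and post-composition with a Möbius transformation leaves the Schwarzian unchanged.

```python
import sympy as sp

z, a, b, c, d = sp.symbols('z a b c d')
f = sp.Function('f')

def schwarzian(expr, x):
    d1, d2, d3 = (sp.diff(expr, x, k) for k in (1, 2, 3))
    return sp.simplify(d3 / d1 - sp.Rational(3, 2) * (d2 / d1) ** 2)

# The Schwarzian of a Moebius transformation is identically zero.
moebius = (a * z + b) / (c * z + d)
assert schwarzian(moebius, z) == 0

# Post-composing an arbitrary function with a Moebius transformation
# leaves the Schwarzian unchanged: S_z((a f + b)/(c f + d)) = S_z(f).
lhs = schwarzian((a * f(z) + b) / (c * f(z) + d), z)
rhs = schwarzian(f(z), z)
assert sp.simplify(lhs - rhs) == 0
```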
In subsection 1.1 we explain our take on Schwarzian equations using the jet spaces and invariant connections. The interested reader may find additional information in [5, 3]. Subsection 1.2 contains some well known tools of Picard-Vessiot theory that we will use in the proof of our result; the main sources are [9, 6, 17]. Section 2 is devoted to the definition of \(\mathcal{D}\)-groupoid and its linearization. Here we give a synthetic exposition with the purpose of just giving the necessary tools to the reader. The main reference is [13] and there is also a general presentation of the subject in [2]. Finally, in section 3 we go into the exploration of Kummer's groupoid. The computation of the equation of Kummer's groupoid and its linearization is classical; we then prove some preliminary results and finally our main result.
### The Schwarzian equation as a \(\mathrm{PSL}_{2}\)-connection
We consider an algebraic curve \(X\) as a suitable ramified covering of \(\mathbb{C}\) so that the algebraic function \(R(\lambda)\in\mathbb{C}(X)\) is seen as a rational function on \(X\). We also remove from \(X\) all the zeroes and poles of \(d\lambda\) and poles of \(R(\lambda)\) so that the vector field \(\frac{d}{d\lambda}\) has neither zeroes nor poles on \(X\) and \(R(\lambda)\) is a regular function on \(X\).
We see equations (1.1) and (1.2) geometrically as foliations in the jet space. Thus, we consider \(J=J_{*}^{2}(\overline{\mathbb{C}},X)\) the variety of 2-jets of local biholomorphisms from \(\overline{\mathbb{C}}\) to \(X\). An affine open subset of this jet space is \(\mathbb{C}\times X\times\mathbb{C}^{*}\times\mathbb{C}\) where the 4-tuple \((\tau,\lambda,\lambda_{\tau},\lambda_{\tau\tau})\) represents the order 2 development of a biholomorphism sending \(\tau\) to \(\lambda\) with derivatives \(\lambda_{\tau}\) and \(\lambda_{\tau\tau}\). Differential equation (1.1) is seen geometrically as the equation of the integral curves of the foliation \(\mathcal{F}=\langle D_{\tau}\rangle\) generated by the vector field \(D_{\tau}\) in the jet space \(J\):
\[D_{\tau}=\frac{\partial}{\partial\tau}+\lambda_{\tau}\frac{\partial}{\partial \lambda}+\lambda_{\tau\tau}\frac{\partial}{\partial\lambda_{\tau}}+\left( \frac{3}{2}\frac{\lambda_{\tau\tau}^{2}}{\lambda_{\tau}}+\lambda_{\tau}^{3}R( \lambda)\right)\frac{\partial}{\partial\lambda_{\tau\tau}}.\]
The integral curves of \(\mathcal{F}\) are the graphs of the 2-jet prolongations
\[j^{2}\hat{\lambda}\colon\overline{\mathbb{C}}\dashrightarrow J,\quad\tau \mapsto(\tau,\hat{\lambda}(\tau),\hat{\lambda}_{\tau}(\tau),\hat{\lambda}_{ \tau\tau}(\tau))\]
of solutions \(\hat{\lambda}(\tau)\) of (1.1).
On the other hand, note that the inversion of biholomorphisms gives an isomorphism \(J\simeq J_{*}^{2}(X,\overline{\mathbb{C}})\) that corresponds just to the interchange between the dependent and independent variables \(\lambda\) and \(\tau\). We consider this isomorphism just as a change of coordinates in \(J\), giving rise to new coordinates \(\lambda,\tau,\tau_{\lambda},\tau_{\lambda\lambda}\) for which the same foliation \(\mathcal{F}\) represents equation (1.2).
A projective structure on \(X\) is a maximal atlas \(\{(\mathcal{U}_{i},\tau_{i})\}_{i\in I}\) of coordinates \(\tau_{i}\colon\mathcal{U}_{i}\to\overline{\mathbb{C}}\) with the property that transition functions in \(\overline{\mathbb{C}}\) are elements of \(\mathrm{PSL}_{2}(\mathbb{C})\). An important feature about equation (1.2) is that it defines a projective structure on \(X\). Note that, if \(\tau(\lambda)\) is a solution of (1.2) then any other solution with the same domain of definition is of the form \(g(\tau(\lambda))\) for some \(g\in\mathrm{PSL}_{2}(\mathbb{C})\).
This structure of the solution space of (1.2) is reflected in the jet space. The action of \(\mathrm{PSL}_{2}\) on \(\overline{\mathbb{C}}\) lifts to an action on \(J\simeq J_{*}^{2}(X,\overline{\mathbb{C}})\). The natural projection \(\pi\colon J\to X\) is a principal bundle with structure group \(\mathrm{PSL}_{2}\).
The foliation \(\langle D_{\tau}\rangle\) turns out to be a \(\mathrm{PSL}_{2}\)-invariant connection. Therefore, the theory of strongly normal extensions can be applied to the equation (1.2); its Galois group will be an algebraic subgroup of \(\mathrm{PSL}_{2}\).
Indeed, there is a well known explicit relation between the linear-Schwarzian equation (1.2) and the second order linear differential equation,
\[\frac{d^{2}\psi}{d\lambda^{2}}=-\frac{1}{2}R(\lambda)\psi. \tag{1.4}\]
Namely, the quotient \(\tau=\psi_{1}/\psi_{2}\) between any two linearly independent solutions \(\psi_{1}\), \(\psi_{2}\) of (1.4) is a solution of (1.2). This relation can be seen geometrically as an equivariant 2-cover of principal bundles,
\[\mathrm{SL}_{2}(\mathbb{C})\times X\to J\] \[\left(\left[\begin{array}{cc}a&b\\ c&e\end{array}\right],\lambda\right)\mapsto j_{\lambda}^{2}\left(\frac{a\tau+ b}{c\tau+e}\right)\]
that maps the companion system
\[\frac{d}{d\lambda}\left[\begin{array}{cc}\psi_{1}&\psi_{2}\\ \psi_{1}^{\prime}&\psi_{2}^{\prime}\end{array}\right]=\left[\begin{array}{ cc}0&1\\ -\frac{1}{2}R(\lambda)&0\end{array}\right]\left[\begin{array}{cc}\psi_{1}& \psi_{2}\\ \psi_{1}^{\prime}&\psi_{2}^{\prime}\end{array}\right] \tag{1.5}\]
of equation (1.4) to the linear-Schwarzian equation (1.2). Note that this map is well defined except on the singularities of \(d\lambda\) that we already removed from our algebraic curve \(X\). In [3] and [5] it is shown that the equation (1.1) is strongly minimal if and only if the Galois group of the equation (1.4) is exactly \(\mathrm{SL}_{2}\). This last condition is equivalent to the non-integrability of either (1.2) or (1.4) by means of Liouvillian functions.
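The relation between (1.2) and (1.4) via the quotient of solutions can also be checked symbolically. In the sketch below (with sympy; the helper is illustrative) the second and third derivatives of \(\psi_{1}\) and \(\psi_{2}\) are rewritten using (1.4), and the Schwarzian of \(\psi_{1}/\psi_{2}\) simplifies to \(R(\lambda)\), so the quotient solves (1.2):

```python
import sympy as sp

lam = sp.symbols('lambda')
R = sp.Function('R')(lam)
psi1, psi2 = (sp.Function(name)(lam) for name in ('psi1', 'psi2'))

def schwarzian(expr, x):
    d1, d2, d3 = (sp.diff(expr, x, k) for k in (1, 2, 3))
    return d3 / d1 - sp.Rational(3, 2) * (d2 / d1) ** 2

S = schwarzian(psi1 / psi2, lam)

# impose psi'' = -(1/2) R psi and the derived relation for psi'''
for psi in (psi1, psi2):
    second = -sp.Rational(1, 2) * R * psi
    third = sp.diff(second, lam).subs(sp.diff(psi, lam, 2), second)
    S = S.subs(sp.diff(psi, lam, 3), third).subs(sp.diff(psi, lam, 2), second)

assert sp.simplify(S - R) == 0
```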
### Some relevant facts of Picard-Vessiot theory
Let us discuss here some facts around the integrability of equations (1.2) and (1.4) in the context of Picard-Vessiot theory, which is the part of differential Galois theory that deals with linear differential equations. The interested reader may consult section 7.3 of [6] or [9] for the original source. There is also a slightly more extended exposition of these facts in [3]. In our case the field of coefficients is \(\mathcal{K}=\mathbb{C}(X)\) endowed with the derivation \(\frac{d}{d\lambda}\). All the facts explained in this section hold for any differential field of characteristic zero with algebraically closed field of constants.
Let us consider the differential field extension \(\mathcal{K}\subseteq\mathcal{K}\langle\psi_{1},\psi_{2}\rangle\) spanned by two linearly independent solutions of (1.4). The differential Galois group of equation (1.4) is the group of differential field automorphisms \(G=\mathrm{Aut}(\mathcal{K}\langle\psi_{1},\psi_{2}\rangle/\mathcal{K})\). This group is naturally represented as an algebraic subgroup of \(\mathrm{SL}_{2}(\mathbb{C})\). For each \(\sigma\in G\) we have,
\[\left[\begin{array}{cc}\sigma(\psi_{1})&\sigma(\psi_{2})\end{array}\right]= \left[\begin{array}{cc}\psi_{1}&\psi_{2}\end{array}\right]\cdot\left[\begin{array} []{cc}a_{\sigma}&c_{\sigma}\\ b_{\sigma}&e_{\sigma}\end{array}\right] \tag{1.6}\]
The fundamental theory of integrability by Liouvillian functions in differential Galois theory says that a linear differential equation such as (1.2) is integrable by means of Liouvillian functions if and only if the Lie algebra of \(G\) is solvable. On the other hand, all proper algebraic subgroups of \(\mathrm{SL}_{2}(\mathbb{C})\) have solvable Lie algebra. Eigenvectors of the action of the Lie algebra of \(G\) on the vector space of solutions are related to algebraic solutions of the auxiliary Riccati equation,
\[u^{\prime}+u^{2}+\frac{1}{2}R(\lambda)=0 \tag{1.7}\]
satisfied by \(u=\frac{d\log\psi}{d\lambda}\), the logarithmic derivative of a solution of (1.4). The following proposition accounts for the preliminary considerations before Kovacic's algorithm in [9]; these are well known facts in the context of Picard-Vessiot theory.
**Proposition 1.1**.: _Let us consider the differential equation (1.4). There are the following four mutually exclusive possibilities for the solutions:_
**(Case 1):**: _The Riccati equation (_1.7_) has at least one solution_ \(u\in\mathcal{K}\)_._ \(\psi=e^{\int u}\) _is a Liouvillian solution of (_1.4_) and the Galois group_ \(G\) _is conjugated to a group of triangular matrices._
**(Case 2):**: _The Riccati equation (_1.7_) has a pair of conjugated solutions_ \(u_{\pm}\) _that are algebraic of degree_ \(2\) _over_ \(\mathcal{K}\)_._ \(\psi_{\pm}=e^{\int u_{\pm}}\) _are algebraically independent Liouvillian solutions of (_1.4_) and the Galois group_ \(G\) _is conjugated to a subgroup of the infinite dihedral group._
**(Case 3):**: _All solutions of (_1.4_) are algebraic over_ \(\mathcal{K}\) _and the Galois group_ \(G\) _is conjugated to a finite crystallographic group._
**(Case 4):**: _Equation (_1.4_) has no Liouvillian solution and the Galois group is_ \(G=\mathrm{SL}_{2}(\mathbb{C})\)_._
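As a quick symbolic check of the logarithmic-derivative substitution mentioned above (a sympy sketch; the names are illustrative): if \(\psi\) solves (1.4), then \(u=\psi^{\prime}/\psi\) solves the Riccati equation (1.7).

```python
import sympy as sp

lam = sp.symbols('lambda')
R = sp.Function('R')(lam)
psi = sp.Function('psi')(lam)

u = sp.diff(psi, lam) / psi                 # u = psi'/psi
riccati = sp.diff(u, lam) + u**2 + R / 2    # left-hand side of (1.7)

# impose psi'' = -(1/2) R psi, i.e. equation (1.4)
riccati = riccati.subs(sp.diff(psi, lam, 2), -sp.Rational(1, 2) * R * psi)
assert sp.simplify(riccati) == 0
```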
Let us consider now \(\tau=\psi_{1}/\psi_{2}\), which is a solution of (1.2), and the tower of differential fields,
\[\mathcal{K}\subseteq\mathcal{K}\langle\tau\rangle\subseteq\mathcal{K}\langle\psi_{1},\psi_{2}\rangle.\]
From equation (1.6) we have that,
\[\sigma(\tau)=\frac{a_{\sigma}\tau+b_{\sigma}}{c_{\sigma}\tau+e_{\sigma}}\in \mathbb{C}\langle\tau\rangle\]
It follows that \(\mathcal{K}\subset\mathcal{K}\langle\tau\rangle\) is a Picard-Vessiot extension whose differential Galois group is the quotient of \(G\) by the stabilizer of \(\tau\),
\[\mathrm{Aut}(\mathcal{K}\langle\tau\rangle/\mathcal{K})=\overline{G}=G/(G \cap\{\mathrm{I},-\mathrm{I}\}).\]
Note that cases 1, 2, 3 of Proposition 1.1 will lead us to a Liouvillian extension \(\mathcal{K}\subseteq\mathcal{K}(\tau)\) and \(\overline{G}\) a proper subgroup of \(\mathrm{PSL}_{2}(\mathbb{C})\) and case 4 implies the non-existence of Liouvillian solutions for (1.2) and \(\overline{G}=\mathrm{PSL}_{2}(\mathbb{C})\).
## 2. \(\mathcal{D}\)-Groupoids
The notion of \(\mathcal{D}\)-groupoid was introduced by B. Malgrange in [12] in the context of non-linear differential Galois theory. We may also call them algebraic Lie pseudogroups, as they are systems of algebraic differential equations whose solutions form a Lie pseudogroup. The original proposal allowed differential equations that were analytic in the base and algebraic in the derivatives, but later formulations restricted the definition to algebraic differential equations. Here, we will give a definition that is equivalent to definition 5.2 in [13] (Definition 2.2); the equivalence between these definitions can be found in appendix A of [2].
From this point on, in order to keep the notation simple, the symbol \(f^{\prime}\) will denote the derivative with respect to \(\lambda\).
### Jets of biholomorphisms
The jet space \(J^{k}(X,X)\) is defined as the set of equivalence classes of contact of order \(\geq k\) at points of \(X\) of local biholomorphisms from open subsets of \(X\) to open subsets of \(X\).1 This space \(J^{k}(X,X)\) has a natural structure of algebraic variety (see, for instance [2]), and it contains the open subset of jets of biholomorphism:
Footnote 1: To be more precise, in general, given a submersion \(\pi\colon E\to M\) between varieties, two local sections \(\varphi_{1}\), \(\varphi_{2}\) defined around \(p\in M\) have contact of order \(\geq k\) if \(d\varphi_{1}\) and \(d\varphi_{2}\) have contact of order \(\geq k-1\), that is, \(\varphi_{1}(p)=\varphi_{2}(p),\cdots,d_{p}(d^{k-1}\varphi_{1})=d_{p}(d^{k-1} \varphi_{2})\). A local biholomorphism is seen as a local section of the trivial bundle \(\pi_{1}\colon X\times X\to X\).
\[\mathrm{Aut}_{k}(X):=J^{k}_{*}(X,X)\]
Jets of biholomorphisms may be composed and inverted taking into account their sources and targets, so that \(\mathrm{Aut}_{k}(X)\) is an algebraic groupoid over \(X\); the groupoid of \(k\)-jets of invertible local biholomorphisms on \(X\).
In order to clarify the algebraic structure of \(\mathrm{Aut}_{k}(X)\) let us examine the case in which \(X\subseteq\mathbb{C}\) is an affine subset of the complex numbers, with coordinate \(\lambda\). The general case is recovered by gluing coverings of that case.
A \((k+2)\)-tuple \((\lambda_{0},\varphi_{0},\varphi_{0}^{\prime},...,\varphi_{0}^{(k)})\) corresponds to the \(k\)-jet in \(\lambda_{0}\) of any biholomorphism of the form:
\[\varphi:\lambda\mapsto\varphi_{0}+\varphi_{0}^{\prime}(\lambda-\lambda_{0})+ \frac{\varphi_{0}^{\prime\prime}}{2}(\lambda-\lambda_{0})^{2}+\cdots+\frac{ \varphi_{0}^{(k)}}{k!}(\lambda-\lambda_{0})^{k}+o(\lambda-\lambda_{0})^{k+1}\]
From now on we do not mention the subindex zero, so \(\lambda\), \(\varphi\), \(\varphi^{\prime}\), \(\ldots\), \(\varphi^{(k)}\) is a system of coordinates in \(\mathrm{Aut}_{k}(X)\simeq X\times X\times\mathbb{C}^{*}\times\mathbb{C}^{k-1}\). The same direct product decomposition is possible for an affine algebraic curve \(X\), provided that the vector field \(\frac{d}{d\lambda}\) has neither zeros nor poles in \(X\). The composition law in \(\mathrm{Aut}_{k}(X)\) is given by Faa di Bruno formulae, and thus, it is polynomial in
the coordinates of \(\operatorname{Aut}_{k}(X)\), which turns out to be an algebraic groupoid over \(X\). Truncations \(\operatorname{Aut}_{k}(X)\to\operatorname{Aut}_{k-1}(X)\) are compatible with composition and inversion. Hence, the projective limit \(\operatorname{Aut}(X):=\varprojlim\operatorname{Aut}_{k}(X)\) inherits the groupoid structure. Elements of \(\operatorname{Aut}(X)\) are formal local biholomorphisms, not necessarily convergent.
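To illustrate the claim, the first derivatives of a composition are visibly polynomial in the jet coordinates of the two factors; a small sympy sketch (purely illustrative):

```python
import sympy as sp

lam = sp.symbols('lambda')
phi = sp.Function('phi')   # inner local biholomorphism
psi = sp.Function('psi')   # outer local biholomorphism

comp = psi(phi(lam))
# Faa di Bruno: (psi o phi)'' = psi''(phi) phi'^2 + psi'(phi) phi'', and so on;
# every derivative of the composition is polynomial in the derivatives of psi and phi.
print(sp.diff(comp, lam, 2))
print(sp.diff(comp, lam, 3))
```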
**Definition 2.1**.: The following types of algebraic subvarieties of \(\operatorname{Aut}_{k}(X)\) will be considered.
1. A strict algebraic subgroupoid of \(\operatorname{Aut}_{k}(X)\) is a Zariski closed subset \(Z\subset\operatorname{Aut}_{k}(X)\) which is also a smooth subgroupoid.
2. A rational subgroupoid \(\mathcal{G}_{k}\subset\operatorname{Aut}_{k}(X)\) is a Zariski closed subset such that on an open \(U\subset X\), \(\mathcal{G}_{k}|_{U}\) is dense in \(\mathcal{G}_{k}\) and is a strict algebraic subgroupoid of \(\operatorname{Aut}_{k}(U)\).
### Zariski Topology of \(\operatorname{Aut}(X)\)
Let us consider the case in which \(X\) is an affine curve and the vector field \(\frac{d}{d\lambda}\) has neither zeroes nor poles, so that \(\operatorname{Aut}_{k}(X)\simeq X\times X\times\mathbb{C}^{*}\times\mathbb{C} ^{k-1}\) is an affine algebraic manifold with ring of regular functions \(\mathcal{O}_{\operatorname{Aut}_{k}(X)}\). Taking the limit \(k\to\infty\) we obtain the proalgebraic variety \(\operatorname{Aut}(X)\) with ring of regular functions,
\[\mathcal{O}_{\operatorname{Aut}(X)}=\bigcup_{k}\mathcal{O}_{\operatorname{ Aut}_{k}(X)}.\]
The general case is similar but instead of ideals of the ring of regular functions \(\mathcal{O}_{\operatorname{Aut}_{k}(X)}\) we would be forced to use coherent sheaves of ideals of its structural sheaf.
A _Zariski closed_ subset \(Z\) of \(\operatorname{Aut}(X)\) is defined by a radical ideal \(\mathcal{E}\subset\mathcal{O}_{\operatorname{Aut}(X)}\). The \(k\)-order truncation of \(Z\) is the Zariski closed subset \(Z_{k}\) of \(\operatorname{Aut}_{k}(X)\), defined by the ideal \(\mathcal{E}_{k}=\mathcal{E}\cap\mathcal{O}_{\operatorname{Aut}_{k}(X)}\), \(Z_{k}:=V(\mathcal{E}_{k})\). In this way, \(Z\) can be viewed as a sequence of Zariski closed sets \(\{Z_{k}\}_{k\in\mathbb{N}}\) such that each projection \(Z_{k}\to Z_{k-1}\) is dominant, that is,
\[\cdots\longrightarrow Z_{k}\longrightarrow Z_{k-1}\longrightarrow\cdots \longrightarrow Z_{0}\subset X\times X\]
is a dominant sequence.
### Kolchin Topology and definition of \(\mathcal{D}\)-groupoid
The total derivative is a canonical way of extending derivations in \(\mathcal{O}_{X}\) to derivations of \(\mathcal{O}_{\operatorname{Aut}(X)}\). Given a vector field \(\vec{w}\) in \(X\), we define its total derivative,
\[\vec{w}^{tot}\colon\mathcal{O}_{\operatorname{Aut}_{k}(X)}\to\mathcal{O}_{ \operatorname{Aut}_{k+1}(X)},\quad(\vec{w}^{tot}f)(j_{x}^{k+1}\varphi)=\vec{w} _{x}(f\circ j^{k}\varphi).\]
Let us recall that \(\frac{d}{d\lambda}\) is a vector field without zeros or poles in \(X\). Through the total derivation mechanism it extends to a derivation of \(\mathcal{O}_{\operatorname{Aut}(X)}\) that we denote by the same symbol (without the \({}^{tot}\) superscript), so that \(\mathcal{O}_{\operatorname{Aut}(X)}\), endowed with
\[\frac{d}{d\lambda}=\frac{\partial}{\partial\lambda}+\varphi^{\prime}\frac{ \partial}{\partial\varphi}+\varphi^{\prime\prime}\frac{\partial}{\partial \varphi^{\prime}}+\ldots\]
is a differential ring.
An ideal \(\mathcal{J}\) of \(\mathcal{O}_{\mathrm{Aut}(X)}\) is a _\(\mathcal{D}\)-ideal_ if for every differential function \(f\) in \(\mathcal{J}\) the total derivatives of \(f\) are also in \(\mathcal{J}\). A subset \(Y\subset\mathrm{Aut}(X)\) is _Kolchin closed_ if it is Zariski closed and its ideal is a \(\mathcal{D}\)-ideal. That is, a closed set in the Kolchin topology is given by the zeros of a radical \(\mathcal{D}\)-ideal.
**Definition 2.2**.: A \(\mathcal{D}\)-groupoid \(\mathcal{G}=\{\mathcal{G}_{k}\}_{k\in\mathbb{N}}\subset\mathrm{Aut}(X)\) is a Kolchin closed set such that for all \(k\), \(\mathcal{G}_{k}\) is a rational subgroupoid. The smallest \(k\) such that \(\mathcal{G}_{k}\subsetneq\mathrm{Aut}_{k}(X)\) is called the order of \(\mathcal{G}\).
* Given a \(\mathcal{D}\)-groupoid \(\mathcal{G}\) then there is an open set \(\mathcal{U}\subset X\) such that \(\mathcal{G}|_{\mathcal{U}}\) is a groupoid over \(\mathcal{U}\), see [2] appendix A.
* Solutions of \(\mathcal{G}\), meaning local biholomorphisms \(f\colon X\dashrightarrow X\) such that \(j_{x}f\in\mathcal{G}\) for all \(x\) in the domain of \(f\), form a pseudogroup of transformations of \(X\).
* In the general setting \(\mathcal{G}\) is defined by a system of algebraic PDE with as many independent variables and unknowns as the dimension of the base variety. As \(X\) is one-dimensional, \(\mathcal{G}\) is defined by ODEs. Let us consider \(\mathcal{G}\) of order \(k\). As \(\mathcal{G}_{k}\) dominates \(\mathrm{Aut}_{k-1}(X)\) it must be a hypersurface of \(\mathrm{Aut}_{k}(X)\). As \(\mathrm{Aut}_{k}(X)\) is affine, the ideal of \(\mathcal{G}_{k}\) is spanned by a single element \(F(\lambda,\varphi,\dots,\varphi^{(k)})\in\mathcal{O}_{\mathrm{Aut}_{k}(X)}\). From the differential equation \(F=0\) we deduce, \[\frac{d}{d\lambda}F=\varphi^{(k+1)}\frac{\partial F}{\partial\varphi^{(k)}}+Q =0;\quad\varphi^{(k+1)}=-\frac{Q}{F_{\varphi^{(k)}}}\] a \((k+1)\)-order differential equation where the \((k+1)\)-th derivative is expressed as a rational function of the lower order derivatives, and the same holds for higher order derivatives. Summarizing, a \(\mathcal{D}\)-groupoid of order \(k\) on an affine algebraic curve \(X\) with a non vanishing vector field is always determined by a single \(k\)-th order differential equation.
### \(\mathcal{D}\)-algebra of a \(\mathcal{D}\)-groupoid
Consider the tangent bundle \(TX\to X\), and define \(\mathrm{aut}_{k}(X)=J^{k}(TX/X)\) as the bundle of \(k\)-jets of sections of the tangent bundle. The vector bundle \(\mathrm{aut}_{k}(X)\to X\) with its anchor \(\mathrm{aut}_{k}(X)\to\mathrm{aut}_{0}(X)=TX\) is the Lie algebroid of \(\mathrm{Aut}_{k}(X)\). Let us explore how elements of \(\mathrm{aut}_{k}(X)\) are identified with vectors tangent to the identity in \(\mathrm{Aut}_{k}(X)\). For this, consider \(j_{x}^{k}\vec{w}\in\mathrm{aut}_{k}(X)\), where \(\vec{w}=f(\lambda)\frac{\partial}{\partial\lambda}\) and \(f\) has Taylor development
\[f(\lambda)=f(x)+f^{\prime}(x)(\lambda-\lambda(x))+f^{\prime\prime}(x)\frac{( \lambda-\lambda(x))^{2}}{2}+\cdots\]
For \(\varepsilon\) varying in \(\mathbb{C}\), \(j_{x}^{k}(\varepsilon\vec{w})\) is a curve in \(\mathrm{aut}_{k}(X)\). If \(\varepsilon\) is sufficiently small, the existence theorem for ordinary differential equations implies that the exponential
\[\exp(\varepsilon\vec{w})\colon(X,x)\to(X,y(\varepsilon))\]
is an analytic map, where \(y(\varepsilon)=\exp(\varepsilon\vec{w})x\). Thus, by taking the \(k\)-jet of exponential \(j_{x}^{k}\!\exp(\varepsilon\vec{w})\) as a curve in \(\operatorname{Aut}_{k}(X)\),
\[\Phi_{k}\colon\operatorname{aut}_{k}(X)\hookrightarrow T(\operatorname{Aut}_{k }(X)),\quad\vec{w}\mapsto\left.\frac{d}{d\varepsilon}\right|_{\varepsilon=0}j_ {x}^{k}(\exp(\epsilon\vec{w})),\]
so that each element of \(\operatorname{aut}_{k}(X)\) is seen as a vector tangent to the identity in \(\operatorname{Aut}_{k}(X)\) preserving the source map. Taking the projective limit \(k\to\infty\) we obtain,
\[\Phi\colon\operatorname{aut}(\operatorname{X})\hookrightarrow T( \operatorname{Aut}(X))\]
whose expression in coordinates is
\[j_{x}\left(f(\lambda)\frac{\partial}{\partial\lambda}\right)\mapsto f(x)\frac{ \partial}{\partial\varphi}+f^{\prime}(x)\frac{\partial}{\partial\varphi_{ \lambda}}+f^{\prime\prime}(x)\frac{\partial}{\partial\varphi_{\lambda\lambda}}+\ldots\]
Now, we are studying a certain \(\mathcal{D}\)-groupoid \(\mathcal{G}\subset\operatorname{Aut}(X)\). We may ask for which germs of vector fields \(j_{x}\vec{w}\in\operatorname{aut}(X)\) the exponentials \(j_{x}\exp(\varepsilon\vec{w})\) lie in \(\mathcal{G}\). This happens if \(\Phi(j_{x}\vec{w})\) is a vector tangent to \(\mathcal{G}\). This is the problem of linearizing the groupoid \(\mathcal{G}\).
Consider some differential equation
\[F(\lambda,\varphi,\varphi^{\prime},\varphi^{\prime\prime},\ldots,\varphi^{(k)} )=0\]
which vanishes on \(\mathcal{G}\). Here \(F\in\mathcal{O}_{\operatorname{Aut}(X)}\). We linearize \(F\) by taking,
\[\ell F\left(j_{x}^{k}f(\lambda)\frac{\partial}{\partial\lambda}\right)=dF \left(\Phi_{k}\left(j_{x}^{k}f(\lambda)\frac{\partial}{\partial\lambda}\right)\right)\]
so that \(\ell F\in\mathcal{O}_{\operatorname{aut}(X)}\) is in fact a linear form in the coefficients of the power series development of \(f(\lambda)\) and thus a linear differential equation for the unknown \(f\).
We can now define \(\operatorname{Lie}(\mathcal{G}_{k})\subset\operatorname{aut}_{k}(X)\) as the rational linear bundle defined by all the linearizations \(\ell F\) of functions vanishing on \(\mathcal{G}_{k}\), and the \(\mathcal{D}\)-Lie algebra of \(\mathcal{G}\) as the projective limit of this system \(\operatorname{Lie}(\mathcal{G})\subset\operatorname{aut}(X)\). It is a system of linear differential equations whose solutions are vector fields in \(X\). Without discussing the structure of these objects, let us take note of the two following relevant facts:
* The Lie bracket of two solutions of \(\operatorname{Lie}(\mathcal{G})\), when defined, is also a solution of \(\operatorname{Lie}(\mathcal{G})\).
* If \(\mathcal{G}\) is determined by single differential equation of order \(k\) then \(\operatorname{Lie}(\mathcal{G})\) is determined by a single linear differential equation of order \(k\).
More theoretical results on \(\mathcal{D}\)-Lie algebras in relation with \(\mathcal{D}\)-groupoids can be found in [13].
## 3. Kummer's groupoid
Let us consider a local biholomorphism \(\varphi\colon X\dashrightarrow X\). We want to check whether \(\varphi\) is compatible with the projective structure induced by equation (1.2), in the sense that composition with \(\varphi\) sends local projective charts to local projective charts of the same structure. In other words, for each solution \(\tau\) of (1.2) the composition \(\tau\circ\varphi\) is also a solution. From the chain rule for the Schwarzian derivative we obtain:
\[S_{\lambda}(\tau\circ\varphi)=(S_{\lambda}(\tau)\circ\varphi)\varphi_{\lambda} ^{2}+S_{\lambda}(\varphi)\]
Now, from equation (1.2) after composition with \(\varphi\) we obtain \(S_{\lambda}(\tau)\circ\varphi=R(\varphi)\). Finally we obtain a differential equation for \(\varphi\) as function of \(\lambda\):
\[S_{\lambda}(\varphi)=R(\lambda)-R(\varphi)\varphi_{\lambda}^{2}. \tag{3.1}\]
The above differential equation characterizes the symmetries of (1.1) and thus it defines a \(\mathcal{D}\)-groupoid over \(X\), which is the Kolchin closed subset \(\mathcal{G}\subset\mathrm{Aut}(X)\) determined by the radical \(\mathcal{D}\)-ideal generated by \(S_{\lambda}(\varphi)-R(\lambda)+R(\varphi)\varphi_{\lambda}^{2}\). As this equation was first presented by Kummer in [10], we refer to \(\mathcal{G}\) as Kummer's groupoid. This equation also appears in various works during the last century: in Ritt's work on the hypertranscendency of Koenigs's linearisations [16], in Ecalle's synthesis of binary parabolic diffeomorphisms [7], and in the classification of rational transformations of \(\mathbb{CP}_{1}\) preserving a rational geometric structure [4].
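The chain rule for the Schwarzian derivative used in this derivation can itself be verified symbolically; the sketch below (with sympy, purely illustrative) checks the identity on a pair of arbitrary polynomial test functions.

```python
import sympy as sp

lam, mu = sp.symbols('lambda mu')

def schwarzian(expr, x):
    d1, d2, d3 = (sp.diff(expr, x, k) for k in (1, 2, 3))
    return sp.simplify(d3 / d1 - sp.Rational(3, 2) * (d2 / d1) ** 2)

tau = mu**2 + mu           # test function tau(mu)
phi = lam**3 + 2*lam       # test function phi(lambda)

lhs = schwarzian(tau.subs(mu, phi), lam)                     # S_lambda(tau o phi)
rhs = schwarzian(tau, mu).subs(mu, phi) * sp.diff(phi, lam)**2 \
      + schwarzian(phi, lam)                                 # (S(tau) o phi) phi'^2 + S(phi)
assert sp.simplify(lhs - rhs) == 0
```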
In \(\mathrm{Aut}_{3}(X)\), the rational subgroupoid \(\mathcal{G}_{3}\) consisting of 3-jets of solutions of the Kummer equation is:
\[\mathcal{G}_{3}:=\{j_{\lambda}^{3}\varphi|\varphi\colon\lambda\mapsto\varphi( \lambda),\ R(\varphi)\varphi_{\lambda}^{2}+S_{\lambda}(\varphi)=R(\lambda)\}\]
It corresponds to the 3-jets of local biholomorphisms of \(X\) that are compatible with the projective structure. Note that \(\mathcal{G}\) is defined by a single differential equation of order 3, which is seen as a function on \(\mathrm{Aut}_{3}(X)\). By applying the total derivative with respect to \(\lambda\), we obtain that all derivatives of order higher than two can be written as functions of \(\lambda,\varphi,\varphi_{\lambda},\varphi_{\lambda\lambda}\), and therefore:
\[\mathcal{G}\simeq\mathcal{G}_{k}\simeq\ldots\simeq\mathcal{G}_{3}\simeq\mathcal{G}_{2}=\mathrm{Aut}_{2}(X).\]
as algebraic varieties. Hence \(\mathcal{G}\) is an algebraic Lie groupoid of complex dimension 4 and it is isomorphic, as Lie groupoid, to \(\mathrm{Aut}_{2}(X)\).
### Linearization of the Kummer's equation
Let us write the Kummer equation (3.1)
\[R(\varphi)\varphi^{\prime 2}+S-R(\lambda)=0\]
where the Schwarzian derivative is now seen as a function of the coordinates in \(\mathrm{Aut}_{3}(X)\)
\[S(\lambda,\varphi,\varphi^{\prime},\varphi^{\prime\prime},\varphi^{\prime\prime \prime})=\frac{\varphi^{\prime\prime\prime}}{\varphi^{\prime}}-\frac{3}{2}\left( \frac{\varphi^{\prime\prime}}{\varphi^{\prime}}\right)^{2}.\]
Let us consider \(\vec{w}=f(\lambda)\frac{\partial}{\partial\lambda}\). We obtain by direct computation:
\[\frac{\partial S}{\partial\varphi} =0;\] \[\frac{\partial S}{\partial\varphi^{\prime}} =-\frac{\varphi^{\prime\prime\prime}}{\varphi^{\prime 2}}+3\frac{\varphi^{\prime\prime}}{\varphi^{\prime}}\frac{\varphi^{\prime\prime}}{\varphi^{\prime 2}}\equiv 0\quad\text{(along $\mathcal{G}$)};\] \[\frac{\partial S}{\partial\varphi^{\prime\prime}} =-3\frac{\varphi^{\prime\prime}}{\varphi^{\prime 2}}\equiv 0\quad\text{(along $\mathcal{G}$)};\] \[\frac{\partial S}{\partial\varphi^{\prime\prime\prime}} =\frac{1}{\varphi^{\prime}}\equiv 1\quad\text{(along $\mathcal{G}$)};\] \[\Phi(j_{x}\vec{w})S =f^{\prime\prime\prime}(x);\] \[\Phi(j_{x}\vec{w})R(\lambda) =0;\] \[\Phi(j_{x}\vec{w})R(\varphi)\varphi^{\prime 2} =f(x)R^{\prime}(\varphi)\varphi^{\prime 2}+2R(\varphi)\varphi^{\prime}f^{\prime}(x)\] \[\equiv f(x)R^{\prime}(\lambda(x))+2R(\lambda(x))f^{\prime}(x)\quad\text{(along $\mathcal{G}$)}\]
And thus, the linearized differential equation is
\[f^{\prime\prime\prime}+2R(\lambda)f^{\prime}+R^{\prime}(\lambda)f=0. \tag{3.2}\]
It provides the necessary and sufficient conditions for the vector field \(f(\lambda)\frac{\partial}{\partial\lambda}\) to be tangent to \(\mathcal{G}\). This linear differential equation, defines a linear and closed Kolchin sub-bundle \(\operatorname{Lie}(\mathcal{G})\) of \(\operatorname{aut}(X)\), known as \(\mathcal{D}\)_-Lie algebra of the Kummer's groupoid_\(\mathcal{G}\).
### Symmetric Power
In this section we show the relation between the equation (3.2) of \(\operatorname{Lie}(\mathcal{G})\) and the Riccati equation (1.7). For this purpose, we need to introduce the symmetric power of the second order linear differential equation (1.4). As before, let \(\mathcal{K}\) be the differential field consisting of \(\mathbb{C}(X)\) endowed with \(\frac{d}{d\lambda}\), and let \(\psi_{1}\), \(\psi_{2}\) be a pair of linearly independent solutions, so that we have the tower of differential field extensions \(\mathcal{K}\subseteq\mathcal{K}\langle\psi_{1}^{2},\psi_{2}^{2},\psi_{1}\psi_{2}\rangle\subseteq\mathcal{K}\langle\psi_{1},\psi_{2}\rangle\).
The three functions \(\psi_{i}\psi_{j}\) for \(i,j=1,2\) generate the solution space of a third order equation, which is known as the _second symmetric power_. To derive it, consider \(f=\psi^{2}\), where \(\psi=a\psi_{1}+b\psi_{2}\) is any solution of (1.4); we have:
\[f^{\prime} =2\psi\psi^{\prime}\] \[f^{\prime\prime} =2\psi^{\prime 2}+2\psi\psi^{\prime\prime}=2\psi^{\prime 2}-R(\lambda)f\] \[f^{\prime\prime\prime} =4\psi^{\prime}\psi^{\prime\prime}-R^{\prime}(\lambda)f-R(\lambda)f^{\prime}\] \[=4\psi^{\prime}\left(-\frac{1}{2}R(\lambda)\psi\right)-R^{\prime}(\lambda)f-R(\lambda)f^{\prime}\] \[=-2\psi\psi^{\prime}R(\lambda)-R^{\prime}(\lambda)f-R(\lambda)f^{\prime}\] \[=-2R(\lambda)f^{\prime}-R^{\prime}(\lambda)f\]
Rearranging, we obtain
\[f^{\prime\prime\prime}+2R(\lambda)f^{\prime}+R^{\prime}(\lambda)f=0 \tag{3.3}\]
which is the same third order differential equation as (3.2). In this way, we say that the third order linear differential equation (3.2) is the second symmetric power of the second order linear equation (1.4). From the above diagram, we have that \(\mathcal{K}\subset\mathcal{K}\langle\psi_{1}^{2},\psi_{2}^{2},\psi_{1}\psi_{2}\rangle\) is the Picard-Vessiot extension of the third order equation (3.3).
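As a quick symbolic sanity check of this computation (not part of the original argument), the sketch below assumes that (1.4) reads \(\psi^{\prime\prime}+\frac{1}{2}R(\lambda)\psi=0\), which is the form used in the substitution above, and verifies with SymPy that \(f=\psi^{2}\) satisfies \(f^{\prime\prime\prime}+2R(\lambda)f^{\prime}+R^{\prime}(\lambda)f=0\).

```python
import sympy as sp

lam = sp.symbols('lambda')
R = sp.Function('R')(lam)
psi = sp.Function('psi')(lam)

# Assumed form of (1.4): psi'' + (1/2) R(lambda) psi = 0.
rule = {sp.Derivative(psi, (lam, 2)): -sp.Rational(1, 2) * R * psi}

def d(expr):
    # Differentiate w.r.t. lambda and eliminate psi'' via the assumed equation (1.4).
    return sp.expand(sp.diff(expr, lam).subs(rule))

f = psi**2                      # candidate solution of the second symmetric power
f1 = d(f)
f3 = d(d(d(f)))
check = sp.simplify(f3 + 2*R*f1 + sp.diff(R, lam)*f)
print(check)                    # prints 0
```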
**Proposition 3.1**.: _The Galois groups \(\operatorname{Aut}(\mathcal{K}\langle\psi_{1}^{2},\psi_{2}^{2},\psi_{1}\psi_ {2}\rangle/\mathcal{K})\) and \(\operatorname{Aut}(\mathcal{K}\langle\psi_{1},\psi_{2}\rangle/\mathcal{K})\) have isomorphic Lie algebras._
Proof.: Let us examine the diagram of extensions of differential fields, with \(G=\operatorname{Aut}(\mathcal{K}\langle\psi_{1},\psi_{2}\rangle/\mathcal{K})\) and \(H=\operatorname{Aut}(\mathcal{K}\langle\psi_{1},\psi_{2}\rangle/\mathcal{K}\langle\psi_{1}^{2},\psi_{2}^{2},\psi_{1}\psi_{2}\rangle)\), so that, by the Galois correspondence, \(\operatorname{Aut}(\mathcal{K}\langle\psi_{1}^{2},\psi_{2}^{2},\psi_{1}\psi_{2}\rangle/\mathcal{K})\simeq G/H\).
It suffices to observe that the second inclusion is either an equality or an algebraic extension of degree two. By the Galois correspondence, the Galois group of the smaller extension is a quotient of that of the larger extension by a finite normal subgroup and therefore has the same Lie algebra.
**Proposition 3.2**.: _The following statements are equivalent:_
1. \(\operatorname{Aut}(\mathcal{K}\langle\psi_{1},\psi_{2}\rangle/\mathcal{K})= \operatorname{SL}_{2}\)_._
2. \(\operatorname{Aut}(\mathcal{K}\langle\psi^{2}\rangle/\mathcal{K})= \operatorname{PSL}_{2}\)_._
3. _The Riccati equation (1.7) has no algebraic solution over_ \(\mathcal{K}\)_._
Proof.: The equivalence (1)\(\Longleftrightarrow\)(2) follows from the Galois correspondence [17, Proposition 1.34, p. 25] applied to the above diagram, since \(\operatorname{PSL}_{2}\) is the only possible quotient of \(\operatorname{SL}_{2}\) by a finite group. The equivalence (1)\(\Longleftrightarrow\)(3) is a consequence of Proposition 1.1 (see [17, Exercise 1.36, p. 28]).
We can now prove the main result of this article.
**Theorem 3.3**.: _If the Riccati equation (1.7) has no algebraic solutions, then the Kummer \(\mathcal{D}\)-groupoid \(\mathcal{G}\) has no proper sub-\(\mathcal{D}\)-groupoids of order greater than \(0\)._
Proof.: Assume that \(\mathcal{H}\subseteq\mathcal{G}\) is a proper sub-\(\mathcal{D}\)-groupoid of order greater than \(0\). Then the order of \(\mathcal{H}\) is \(1\), \(2\) or \(3\). Let us examine these cases separately.
* Let us assume that \(\mathcal{H}\) is of order \(3\). The equation (3.1) of \(\mathcal{G}\) allows us to express \(\varphi^{\prime\prime\prime}\) as a rational function of \(\lambda,\varphi,\varphi^{\prime},\varphi^{\prime\prime}\). Therefore \(\mathcal{G}_{3}\) is the graph of a section from \(\mathrm{Aut}_{2}(X)\) to \(\mathrm{Aut}_{3}(X)\), and hence an irreducible hypersurface of \(\mathrm{Aut}_{3}(X)\). As \(\mathcal{H}_{3}\) is also a hypersurface of \(\mathrm{Aut}_{3}(X)\) contained in \(\mathcal{G}_{3}\), we have \(\mathcal{H}_{3}=\mathcal{G}_{3}\) and then \(\mathcal{H}=\mathcal{G}\), contradicting the assumption that \(\mathcal{H}\) is proper.
* Let us assume that \(\mathcal{H}\) is of order \(2\). Then \(\mathrm{Lie}(\mathcal{H})\) is determined by a second order differential equation with coefficients in \(\mathcal{K}=\mathbb{C}(X)\).
\[f^{\prime\prime}+\alpha(\lambda)f^{\prime}+\beta(\lambda)f=0 \tag{3.4}\]
Since \(\mathrm{Lie}(\mathcal{H})\subset\mathrm{Lie}(\mathcal{G})\), the \(3\)-dimensional solution space of the equation (3.2) contains two linearly independent solutions of (3.4). Note that this \(3\)-dimensional space is spanned by \(\psi_{1}^{2}\), \(\psi_{2}^{2}\) and \(\psi_{1}\psi_{2}\), where \(\psi_{1}\) and \(\psi_{2}\) are linearly independent solutions of (1.4). Inside this \(3\)-dimensional space, the elements of the form \(\psi^{2}\), with \(\psi\) a solution of (1.4), form a cone. In a complex \(3\)-dimensional vector space every plane intersects such a cone in \(2\) lines or a double line. Thus, there is a solution \(f=\psi^{2}\) of (3.4) which is the square of a solution of (1.4). Substituting this into (3.4) and using the original equation (1.4), we obtain
\[(\beta(\lambda)-R(\lambda))\psi^{2}+2\alpha(\lambda)\psi\psi^{\prime}+2\psi^{ \prime 2}=0\]
* If \(\beta(\lambda)=R(\lambda)\), we have \(2\psi^{\prime}(\alpha(\lambda)\psi+\psi^{\prime})=0\). In that case, either \(\psi\) is constant or \(\psi^{\prime}/\psi=-\alpha(\lambda)\) is an algebraic solution of the Riccati equation (1.7). Both cases contradict the hypothesis.
* If \(\beta(\lambda)\neq R(\lambda)\), there are algebraic functions \(\gamma_{1},\gamma_{2}\) of degree one or two over \(\mathcal{K}\) such that: \[(\beta(\lambda)-R(\lambda))(\psi-\gamma_{1}(\lambda)\psi^{\prime})(\psi-\gamma_{2}(\lambda)\psi^{\prime})=0\] This implies that \(\psi\) satisfies a linear first order equation with a coefficient in an algebraic extension of \(\mathcal{K}\), so that it is a Liouvillian function, which contradicts the hypothesis (by Proposition 1.1, there are no Liouvillian solutions of (1.4)).
* Let us assume that \(\mathcal{H}\) is of order \(1\). Then \(\mathrm{Lie}(\mathcal{H}_{1})\) is determined by a linear differential equation of order one whose solution space is contained in the solution space of (3.2). Let us then consider a solution \(f\) of this equation inside the Picard-Vessiot extension \(\mathcal{K}\langle\psi_{1}^{2},\psi_{2}^{2},\psi_{1}\psi_{2}\rangle\) of equation (3.2). We have a tower of Picard-Vessiot extensions, \[\mathcal{K}\subseteq\mathcal{K}\langle f\rangle\subset\mathcal{K}\langle\psi_{1}^{2},\psi_{2}^{2},\psi_{1}\psi_{2}\rangle.\]
As \(\mathcal{K}\subseteq\mathcal{K}\langle f\rangle\) is Picard-Vessiot, the group of automorphisms of \(\mathcal{K}\langle\psi_{1}^{2},\psi_{2}^{2},\psi_{1}\psi_{2}\rangle\) fixing \(\mathcal{K}\langle f\rangle\) is a normal subgroup of \(\mathrm{PSL}_{2}(\mathbb{C})\). Since \(\mathrm{PSL}_{2}(\mathbb{C})\) is simple, this subgroup is either the identity or \(\mathrm{PSL}_{2}(\mathbb{C})\). c.i) If it is the identity, then by the Galois correspondence the group of automorphisms of \(\mathcal{K}\langle f\rangle\) over \(\mathcal{K}\) should be isomorphic to \(\mathrm{PSL}_{2}(\mathbb{C})\). But this is impossible, as the dimension of the Galois group, which is \(3\), must coincide with the transcendence degree of the extension, which is \(1\). c.ii) If it is \(\mathrm{PSL}_{2}(\mathbb{C})\), then by the Galois correspondence \(f\in\mathcal{K}\); but then \(f\) would be a non-vanishing Liouvillian solution of (3.2), which has no Liouvillian solutions.
Therefore \(\mathcal{G}\) does not have any proper sub-\(\mathcal{D}\)-groupoid of order greater than \(0\).
### Strong Minimality
The notion of strong minimality comes from model theory and, in the particular case of differential equations, generalizes and unifies some classical notions of irreducibility. We say that an algebraic differential equation of third order (1.1) is _strongly minimal_ if and only if for every differential field \(\mathcal{L}\) (assumed to contain the algebraic numbers) and every solution \(j\), the transcendence degree \(\mathrm{tr.deg.}_{\mathcal{L}}\mathcal{L}\langle j\rangle\) is necessarily \(0\) or \(3\). This means that no solution can satisfy a lower order equation over \(\mathcal{L}\) unless it is an algebraic solution. A differential field \(\mathcal{L}\) for which there exists a solution \(j\) of (1.1) such that the transcendence degree of \(\mathcal{L}\langle j\rangle\) over \(\mathcal{L}\) is \(1\) or \(2\) is called a _witness of non-strong minimality_.
The proof of strong minimality of the Schwarzian equation (1.1) with \(R(\lambda)\in\mathbb{C}(\lambda)\) can be found in [5].
**Proposition 3.4** (cf. [5] Th. 3.2).: _Assume \(R(\lambda)\in\mathbb{C}(\lambda)\). If the Riccati equation (1.7) has no algebraic solutions, then the equation (1.1) is strongly minimal._
Furthermore, this criterion is in fact a necessary and sufficient condition.
**Proposition 3.5**.: _If the Riccati equation (1.7) has some algebraic solution, then the non-linear Schwarzian equation (1.1) is not strongly minimal._
Proof.: Suppose that the Riccati equation (1.7) has an algebraic solution \(u(\lambda)\) in \(\mathbb{C}(\lambda)^{\mathrm{alg}}\). In that case, some solutions of the Schwarzian equation automatically also satisfy:
\[\frac{\lambda_{\tau\tau}}{(\lambda_{\tau})^{2}}=2u(\lambda)\]
which is a second-order equation. Hence, \(\mathbb{C}(\lambda)^{\mathrm{alg}}\) is a witness of non-strong minimality of the equation.
Putting these results together with our main Theorem 3.3 and Proposition 1.1, we obtain the following.
**Proposition 3.6**.: _Assume \(R(\lambda)\in\mathbb{C}(\lambda)\). The following statements are equivalent:_
1. _The Riccati equation (1.7) has no algebraic solutions._
2. _The Kummer groupoid_ \(\mathcal{G}\)_, defined by the equation (3.1), has no proper sub-_\(\mathcal{D}\)_-groupoids of order greater than_ \(0\)_._
3. _The linear equation (_1.4_) has Galois group_ \(\operatorname{SL}_{2}(\mathbb{C})\) _over_ \(\mathbb{C}(X)\)_._
4. _The Schwarzian equation (_1.1_) is strongly minimal._
5. _The linear-Schwarzian equation (_1.2_) has no Liouvillian solutions._
Proof.: The only point we still need to check is the following: if there is an algebraic solution \(u\in\mathbb{C}(X)^{\operatorname{alg}}\) of the Riccati equation (1.7), then \(\mathcal{G}\) has some nontrivial proper sub-\(\mathcal{D}\)-groupoid of order greater than zero. In order to clarify this point, let us see how solutions of the Riccati equation (1.7) allow us to reduce the linear-Schwarzian equation (1.2). If \(u(\lambda)\) is a solution of the Riccati equation, then let us look for a solution \(\tau\) of the differential equation:
\[\frac{\tau_{\lambda\lambda}}{\tau_{\lambda}}=-2u(\lambda) \tag{3.5}\]
We compute the Schwarzian derivative of \(\tau\) obtaining:
\[S_{\lambda}(\tau)=(-2u(\lambda))_{\lambda}-\frac{1}{2}(-2u(\lambda))^{2}=-2(u^{\prime}(\lambda)+u(\lambda)^{2})=R(\lambda),\]
so that any solution of (3.5) is also a solution of (1.2). Any two solutions of (3.5) with a common domain of definition are related by an affine transformation of \(\overline{\mathbb{C}}\). Thus, (3.5) is the differential equation of an affine structure inside the projective structure determined by (1.2). The inverses of the affine charts satisfy the differential equation,
\[\frac{\lambda_{\tau\tau}}{\lambda_{\tau}^{2}}=2u(\lambda). \tag{3.6}\]
If we look for symmetries \(\varphi\) such that, for any solution \(\lambda\) of (3.6), the composition \(\varphi\circ\lambda\) is also a solution, we arrive at the differential equation
\[2u(\varphi)\varphi_{\lambda}=2u(\lambda)+\frac{\varphi_{\lambda\lambda}}{ \varphi_{\lambda}}. \tag{3.7}\]
As \(u\) is, in general, a transcendental function, equation (3.7) does not define a Zariski closed subset of \(\operatorname{Aut}_{2}(X)\). However, if \(u\in\mathbb{C}(X)^{\operatorname{alg}}\), then equation (3.7) defines a \(\mathcal{D}\)-groupoid of order \(2\) contained in \(\mathcal{G}\).
It is interesting that the criterion of strong minimality for (1.1) also coincides with the simplicity of the \(\mathcal{D}\)-groupoid of its symmetries. A direct relation between the simplicity of the \(\mathcal{D}\)-groupoid and the strong minimality of the equation remains unclear to us. It would be useful to know if the \(\mathcal{D}\)-groupoid can be used as a tool to detect strong minimality for some other differential equations.
2309.16924 | Incremental Rotation Averaging Revisited and More: A New Rotation
Averaging Benchmark | In order to further advance the accuracy and robustness of the incremental
parameter estimation-based rotation averaging methods, in this paper, a new
member of the Incremental Rotation Averaging (IRA) family is introduced, which
is termed as IRAv4. As the most significant feature of the IRAv4, a
task-specific connected dominating set is extracted to serve as a more reliable
and accurate reference for rotation global alignment. In addition, to further
address the limitations of the existing rotation averaging benchmark of relying
on the slightly outdated Bundler camera calibration results as ground truths
and focusing solely on rotation estimation accuracy, this paper presents a new
COLMAP-based rotation averaging benchmark that incorporates a cross check
between COLMAP and Bundler, and employ the accuracy of both rotation and
downstream location estimation as evaluation metrics, which is desired to
provide a more reliable and comprehensive evaluation tool for the rotation
averaging research. Comprehensive comparisons between the proposed IRAv4 and
other mainstream rotation averaging methods on this new benchmark demonstrate
the effectiveness of our proposed approach. | Xiang Gao, Hainan Cui, Shuhan Shen | 2023-09-29T01:51:04Z | http://arxiv.org/abs/2309.16924v3 | # Incremental Rotation Averaging Revisited and More: A New Rotation Averaging Benchmark
###### Abstract
In order to further advance the accuracy and robustness of the incremental parameter estimation-based rotation averaging methods, in this paper, a new member of the Incremental Rotation Averaging (IRA) family is introduced, which is termed as IRAv4. As the most significant feature of the IRAv4, a task-specific connected dominating set is extracted to serve as a more reliable and accurate reference for rotation global alignment. In addition, to further address the limitations of the existing rotation averaging benchmark of relying on the slightly outdated Bundler camera calibration results as ground truths and focusing solely on rotation estimation accuracy, this paper presents a new COLMAP-based rotation averaging benchmark that incorporates a cross check between COLMAP and Bundler, and employs the accuracy of both rotation and downstream location estimation as evaluation metrics, which is desired to provide a more reliable and comprehensive evaluation tool for the rotation averaging research. Comprehensive comparisons between the proposed IRAv4 and other mainstream rotation averaging methods on this new benchmark demonstrate the effectiveness of our proposed approach.
Large-scale rotation averaging, task-specific connected dominating set, rotation averaging benchmarking.
## I Introduction
Image-based large-scale scene 3D reconstruction is a fundamental task in the computer vision community, which has been widely investigated in recent years [1, 2, 3, 4]. As its core step, Structure from Motion (SfM) [5, 6, 7, 8, 9] aims to simultaneously recover camera pose and scene structure given pair-wise image feature matches. According to the camera pose initialization scheme, SfM methods could be roughly divided into incremental and global ones. The camera poses are sequentially estimated in incremental SfM via an iterative optimization pipeline [6, 8], but are simultaneously solved in global SfM by the motion averaging methodology [5, 9]. Compared with incremental SfM methods, global ones feature fewer optimization iterations and estimation parameters, which gives them a theoretical advantage and greater application potential when dealing with increasingly large reconstruction scene scales.
Motion averaging, which takes relative camera motions (rotations and translations) as input and produces absolute camera poses (orientations and locations), is the primary technique used for camera pose recovery in global SfM. Due to the scale ambiguity of the relative translation estimated via essential matrix decomposition [10], motion averaging is conducted in the manner of first rotation and then translation averaging in most cases. As the former of the above two phases, rotation averaging [11] directly influences the effect of the subsequent translation averaging phase, and even the entire remaining 3D reconstruction procedure, _e.g._ multi-view triangulation for scene recovery and global Bundle Adjustment (BA) for parameter optimization. Though increasing attention has been drawn in recent years [12, 13, 14, 15, 16, 17, 18, 19, 20], the problem of rotation averaging is still far from being solved due to the large scale, imbalanced connectivity, and high noise levels in the Epipolar-geometry Graph (EG).
Recently, a series of rotation averaging methods [21, 22, 23, 24] has been proposed based on the idea of incremental parameter estimation stemming from incremental SfM. As the primitive and primary method, Incremental Rotation Averaging (IRA) [21] performs incremental absolute rotation computation and relative rotation outlier filtering simultaneously, by which the rotation estimation accuracy and robustness are both guaranteed. To further enhance the efficiency and scalability, IRA++ [22] is proposed, where the input EG is clustered to construct several low-level intra-sub-EGs and a high-level inter-one. Then IRA is performed on all the (intra- and inter-) sub-EGs to achieve local estimation and global alignment of absolute rotations, respectively. In order to achieve a task-specific EG clustering for better rotation averaging performance, IRAv3 [23] is presented, where the cluster affiliation of each camera is dynamically determined with its absolute rotation (in the local coordinate system of the intra-sub-EG) simultaneously estimated. To accomplish a better global alignment of the local rotation estimates, inspired by Jiang _et al._[25], IRAv3+ [24] is introduced. Instead of performing a cluster-level rotation averaging, which is done in both IRA++ and IRAv3, multiple Connected Dominating Sets (CDSs) are randomly extracted to serve as the reference for rotation local-to-global alignment.
However, in this paper we argue that the accuracy and robustness of the IRA series described above could be advanced one step further. Though the effectiveness of the cluster-based pipeline with task-independent CDS serving as global reference has been demonstrated in both IRAv3+ [24] and Jiang _et al._[25], we believe that with a task-specific CDS extracted, more reliable global reference construction together with more accurate camera pose globalization would be further achieved. Based on the above analysis, a novel rotation averaging method termed as IRAv4 is proposed in this paper, which is built upon IRAv3+. The major difference between them lies in that instead of first extracting multiple CDSs and then
estimating the absolute rotations in the CDSs' local coordinate system by leveraging IRA, which is done in IRAv3+, the (task-specific) CDS is extracted by incrementally selecting the Next-Best Vertex (NBV) to maximize the supports from the currently extracted ones in the CDS, together with its absolute rotation (in the local coordinate system of CDS) simultaneously estimated.
Moreover, we find two issues that need to be addressed when evaluating the performance of most existing rotation averaging methods [26, 27, 28, 29, 16, 21, 30, 19, 31, 22, 23, 24]. Firstly, the evaluation is mostly performed based on the 1DSfM [32] dataset with the camera calibration results of the slightly outdated Bundler [33] serving as ground truth. And secondly, the evaluation mostly focuses on the camera rotation estimation accuracy only while ignoring the influence of the estimated rotations to the downstream task, _i.e._ translation averaging. To deal with these, a new COLMAP-based [6] rotation averaging benchmark is rebuilt upon the 1DSfM dataset in this paper, where the EG of each test data in the 1DSfM dataset1 is regenerated by leveraging the COLMAP2 and OpenCV3 libraries. To provide a more reliable ground-truth source, only the camera poses that pass the cross check between COLMAP and Bundler are employed in this new benchmark. In order to additionally evaluate the effectiveness of the estimated rotations for the downstream translation averaging task, in our new benchmark, the rotations estimated by different rotation averaging methods, together with the relative translations in the regenerated EG, are fed into a well-established translation averaging method, BATA [34], to obtain absolute camera locations for location estimation accuracy evaluation. Based on the new benchmark, a comprehensive evaluation is performed on the proposed IRAv4 and several currently mainstream rotation averaging methods, including the existing IRA series [21, 22, 23, 24], and some other methods [35, 26, 27, 30, 31, 28]. Among all these methods, the proposed IRAv4 achieves state-of-the-art performance in both rotation estimation and downstream location estimation on the new benchmark, by which its effectiveness is demonstrated.
Footnote 1: [https://www.cs.cornell.edu/projects/1dsfm/](https://www.cs.cornell.edu/projects/1dsfm/)
Footnote 2: [https://demuc.de/collmap/](https://demuc.de/collmap/)
Footnote 3: [https://opencv.org/](https://opencv.org/)
The main contributions of this paper are threefold:
1) A novel cluster-based rotation averaging pipeline is proposed, where a task-specific CDS is extracted to serve as the global reference for rotation local-to-global alignment.

2) A new rotation averaging benchmark based on the 1DSfM dataset and the COLMAP library is presented, where both the ground-truth source and the performance evaluation metrics are redefined for a more reliable and comprehensive evaluation.

3) A comprehensive evaluation is conducted between the proposed IRAv4 and several mainstream rotation averaging methods on the new benchmark, where IRAv4 achieves state-of-the-art performance, demonstrating its effectiveness.
## II Brief Description of the IRA Series
Before introducing the proposed IRAv4 method, brief descriptions of the existing IRA series, including IRA [21], IRA++ [22], IRAv3 [23], and IRAv3+ [24], are provided for better understanding. More details on them can be found in the original papers.
**IRA**[21] mainly has two steps: 1) The camera triplet with minimum cyclic rotation deviation after local optimization is selected as the initial seed, and its optimized absolute rotations (in the camera triplet's local coordinate system) serve as the seed estimation. 2) The camera with the most supporting EG edges during chaining-based absolute rotation pre-computation is selected as the NBV and the pre-computed absolute rotation serves as its initialization; and then, either local (on the newest estimated absolute rotation only) or global optimization (on all the currently estimated absolute rotations) is performed. Note that inlier/outlier relative rotation measurements related to the absolute rotations to be optimized could be distinguished based on their current estimates, and only the inliers are involved in the above optimization. The NBV selection, initialization, and optimization are iteratively performed until all the absolute rotations have been estimated.
**IRA++**[22] contains five steps: 1) Community detection-based EG clustering [36] is carried out on the input EG to obtain several intra-sub-EGs. 2) IRA is performed on each low-level intra-sub-EG to estimate the cameras' absolute rotations in its local coordinate system. 3) Voting-based single rotation averaging [29] is conducted to estimate the relative rotation between the local coordinate systems of each intra-sub-EG pair. 4) IRA is performed again on the high-level inter-sub-EG to estimate the absolute rotation of each intra-sub-EG's local coordinate system. 5) And finally, rotation global alignment and optimization is conducted to first globally align the absolute rotations of all the vertices in the input EG to a uniform coordinate system, and then globally optimize them to produce the final rotation averaging result.
**IRAv3**[23] comprises five steps as well, with its last three steps similar to those of IRA++. And for the first two: 1) Community detection-based seed construction is performed to construct several cluster seeds for the follow-up on-the-fly procedures in its second step. 2) On-the-fly EG clustering and intra-sub-EG rotation estimation are conducted to dynamically assign unregistered vertices to certain EG clusters and iteratively estimate their absolute rotations in their assigned clusters' local coordinate systems. The second step is the core one of IRAv3 and contains three sub-steps, including potential cluster and vertex pre-selection, NBV selection and cluster affiliation, and incremental absolute rotation computation, with the last two iteratively performed for the on-the-fly procedures.
**IRAv3+**[24] consists of five steps once again, sharing with IRAv3 the same first two steps and the last one, namely 1) and 2) on-the-fly EG clustering and intra-sub-EG rotation estimation, and 5) rotation global alignment and optimization, respectively. After executing the first two steps for dynamic vertex cluster affiliation and rotation estimation, the third and fourth ones are sequentially conducted: 3) Multiple CDSs are randomly extracted and IRA is performed on the CDSs-based sub-EG for global alignment reference construction. 4) Cluster-wise relative rotation for rotation local-to-global alignment is estimated by leveraging the guidance of the cluster-to-reference common vertices. Subsequently, similar to both IRA++ and IRAv3, the
local absolute rotations in each cluster are firstly globally aligned to a uniform coordinate system (that of the CDSs-based sub-EG for IRAv3+), and then globally optimized.
## III The Proposed IRAv4 Method
The proposed IRAv4 in this paper is directly built upon IRAv3+ [24], and the major difference between them is the manner of global reference construction. It should be noted that the process of global reference construction involves two sub-steps: 1) reference-based sub-EG extraction and 2) sub-EG rotation estimation. Based on the description in the last section, the above two sub-steps in IRAv3+ are conducted sequentially, resulting in a task-independent global reference. In contrast, our proposed IRAv4 follows the incremental parameter estimation pipeline during the global reference construction, where the vertices in the reference-based sub-EG are incrementally selected together with their absolute rotations simultaneously estimated. In this way, a task-specific reference for rotation local-to-global alignment is constructed. The process of the task-specific reference construction is detailed in the following. Before that, the rotation averaging problem is briefly formulated for better understanding: the input EG is denoted as \(\mathcal{G}=(\mathcal{V},\mathcal{E})\), where \(\mathcal{V}\) and \(\mathcal{E}\) denote the sets of cameras and of camera pairs with sufficient image local feature match inliers for essential matrix estimation, and the rotation averaging problem is defined as follows: given the relative rotation measurements \(\{\mathbf{R}_{i,j}|e_{i,j}\in\mathcal{E}\}\), estimate the absolute camera orientations \(\{\mathbf{R}_{i}|v_{i}\in\mathcal{V}\}\).
### _Task-Specific CDS-Based Global Reference Construction_
Similar to Jiang _et al._[25] and IRAv3+ [24], a CDS-based sub-EG is also used to serve as a global reference for camera pose alignment in IRAv4. The CDS problem in graph theory asks for a _minimum-size_ and _connected_ subset of vertices with the following property, which accounts for the concept of _dominating_: each vertex is required to either be in the CDS, or adjacent to some vertex of the CDS. The CDS problem is usually solved based on an approximation algorithm [37], which roughly proceeds as follows:
**Init:** Mark all vertices in \(\mathcal{V}\) white. Select the vertex \(v^{*}\) with most neighbours in a weighted or unweighted manner. Mark \(v^{*}\) and its neighbours black and gray, respectively.
**Iter:** For all the gray vertices, select the vertex \(v^{*}\) with most white neighbours in a weighted or unweighted manner. Mark \(v^{*}\) and its white neighbours black and gray, respectively. Iterate the above step until there is no white vertex in \(\mathcal{V}\), and the black vertex set constitutes the CDS.
Fig. 1(a) gives a toy example on the traditional (or task-independent) CDS extraction procedure described above.
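For concreteness, a minimal Python sketch of the unweighted variant of this greedy procedure is given below; the toy graph, function and variable names are ours and are for illustration only.

```python
def greedy_cds(adjacency):
    """Task-independent CDS extraction (unweighted greedy sketch).
    `adjacency` maps each vertex to the set of its neighbours; the graph
    is assumed to be connected."""
    WHITE, GRAY, BLACK = 0, 1, 2
    color = {v: WHITE for v in adjacency}

    def white_neighbours(v):
        return sum(1 for u in adjacency[v] if color[u] == WHITE)

    # Init: the vertex with most neighbours becomes black, its neighbours gray.
    v_star = max(adjacency, key=lambda v: len(adjacency[v]))
    color[v_star] = BLACK
    for u in adjacency[v_star]:
        color[u] = GRAY

    # Iter: repeatedly blacken the gray vertex dominating most white vertices.
    while any(c == WHITE for c in color.values()):
        grays = [v for v in adjacency if color[v] == GRAY]
        v_star = max(grays, key=white_neighbours)
        color[v_star] = BLACK
        for u in adjacency[v_star]:
            if color[u] == WHITE:
                color[u] = GRAY

    return {v for v, c in color.items() if c == BLACK}

# Toy usage on a small (made-up) graph: the result is connected and dominating.
graph = {0: {1, 2}, 1: {0, 2, 3}, 2: {0, 1, 4}, 3: {1, 5}, 4: {2}, 5: {3}}
print(greedy_cds(graph))   # e.g. {1, 2, 3}
```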
However, in this paper we claim that, compared with the task-independent CDS extracted based on graph theory, a task-specific CDS would provide a more reliable global reference for camera rotation alignment, by which more accurate and robust rotation estimation could be achieved. The procedure of the task-specific CDS extraction in this paper stems from the pipeline of IRA [21], and the major difference between them is the termination condition. As described in the last section, IRA's iteration of NBV selection, initialization, and optimization stops only when all the absolute rotations of the vertices in \(\mathcal{V}\) have been estimated and optimized. Nevertheless, the task-specific CDS extraction in IRAv4 shares the termination condition of the traditional CDS extraction to guarantee that the extracted set exhibits the properties of being _connected_ and _dominating_, _i.e._ the vertices in the CDS are connected and every vertex in \(\mathcal{V}\) is either in or adjacent to some vertex of the CDS. Note that during the extraction of the task-specific CDS in IRAv4, the absolute rotations of the extracted vertices are simultaneously estimated, and the property of _minimum_ size is no longer guaranteed. In fact, as the vertex with higher reliability instead of larger coverage is selected with priority, the number of vertices in the task-specific CDS of IRAv4 is always larger than that of the task-independent one extracted by the traditional method [37]. The task-specific CDS extraction procedure for global reference construction, which is illustrated by a toy example in Fig. 1(b), is detailed in the following.
For the _initialization_ step, the camera triplet set in \(\mathcal{G}\) is obtained and denoted as \(\mathcal{T}\), and for each triplet \(t_{i,j,k}\in\mathcal{T}\), similar to IRA, the absolute rotations of \(\{v_{i},v_{j},v_{k}\}\) in the triplet's local coordinate system are initialized and optimized by measurement chaining and residual minimization of the relative rotations, by which the CDS's initial triplet is selected together with the absolute rotations estimated. Specifically, the absolute rotations in \(t_{i,j,k}\) are firstly initialized as:
\[\mathbf{R}_{i}=\mathbf{I},\mathbf{R}_{j}=\mathbf{R}_{i,j},\mathbf{R}_{k}=\mathbf{R}_{i,k}, \tag{1}\]
and then fed into the following triplet-based chaining check:
\[d_{\mathbf{R}}(\mathbf{R}_{j,k},\mathbf{R}_{i,k}\mathbf{R}_{i,j}^{\top})<\theta_{\rm th}, \tag{2}\]
where \(d_{\mathbf{R}}(\mathbf{R}_{1},\mathbf{R}_{2})=\arccos\frac{\mathrm{tr}(\mathbf{R}_{2}\mathbf{R}_{1}^{\top})-1}{2}\) is the angular distance between two rotation matrices, \(\mathbf{R}_{1}\) and \(\mathbf{R}_{2}\), and \(\theta_{\rm th}=3^{\circ}\) is the outlier threshold in this paper.

Fig. 1: Toy examples of the traditional method (a) based on an approximation algorithm [37] and the task-specific method (b) proposed in IRAv4 for Connected Dominating Set (CDS) extraction. For (a), the red edges denote those between the currently selected vertex (\(v^{*}\) or \(v^{*}_{i}\), where \(v^{*}_{i}\) denotes the selected vertex in the \(i\)-th iteration) and its adjacent unselected ones. And for (b), the red edges denote those of the selected triplet \(t^{*}_{i,j,k}\) and the edge supporting set for the selected Next-Best Vertex (NBV) \(v^{*}_{i}\) in the initialization step and the \(i\)-th iteration step, respectively. It could be observed from the figure that the number of vertices in the task-specific CDS is usually larger than that of the CDS extracted in the traditional way (5 _vs_. 3 in this toy example). Please refer to the main text for more details.

The triplet set that passes the above chaining check, \(\mathcal{T}^{\prime}\), is involved in the following optimization for initial seed selection and estimation:
\[\mathbf{R}_{i}^{*},\mathbf{R}_{j}^{*},\mathbf{R}_{k}^{*}=\arg\min\sum_{\begin{subarray}{c}v_{ i},v_{j}\in\mathcal{V}_{i_{i},j,k}\\ e_{i,j}\in\mathcal{E}_{i_{i},j,k}\end{subarray}}d_{\mathbf{R}}^{2}(\mathbf{R}_{i,j}, \mathbf{R}_{j}\mathbf{R}_{i}^{\top}), \tag{3}\]
the above optimization problem and the other ones in the rest of this paper are solved with the Ceres Solver library4, and the triplet \(t_{i,j,k}^{*}=\{v_{i^{*}},v_{j^{*}},v_{k^{*}}\}\) with the largest selection reward defined in the following, together with the optimized rotations \(\{\mathbf{R}_{i^{*}}^{*},\mathbf{R}_{j^{*}}^{*},\mathbf{R}_{k^{*}}^{*}\}\), serves as the initial seed construction:
Footnote 4: [http://www.ceres-solver.org/](http://www.ceres-solver.org/)
\[t_{i,j,k}^{*}=\arg\max_{t_{i,j,k}\in\mathcal{T}^{\prime}}\sum_{ \begin{subarray}{c}v_{i},v_{j}\in\mathcal{V}_{i_{i},j,k}\\ e_{i,j}\in\mathcal{E}_{i_{i},j,k}\end{subarray}}\cos(d_{\mathbf{R}}(\mathbf{R}_{i,j}, \mathbf{R}_{j}^{*}\mathbf{R}_{i}^{*\top})). \tag{4}\]
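The following numpy sketch illustrates the chaining-based initialization of Eq. (1), the check of Eq. (2), and the selection reward of Eq. (4); the Ceres-based refinement of Eq. (3) is omitted, and the dictionary-based data layout is an assumption made only for this illustration.

```python
import numpy as np

def angular_distance(R1, R2):
    """d_R(R1, R2) = arccos((tr(R2 R1^T) - 1) / 2), returned in degrees."""
    c = (np.trace(R2 @ R1.T) - 1.0) / 2.0
    return np.degrees(np.arccos(np.clip(c, -1.0, 1.0)))

def init_triplet(R_ij, R_ik, R_jk, theta_th=3.0):
    """Eq. (1) initialization and Eq. (2) chaining check for one triplet.
    Returns the three absolute rotations (in the triplet's local frame)
    or None if the chaining check fails."""
    Ri, Rj, Rk = np.eye(3), R_ij, R_ik
    if angular_distance(R_jk, R_ik @ R_ij.T) >= theta_th:
        return None
    return Ri, Rj, Rk

def triplet_reward(abs_rot, rel_rot):
    """Eq. (4) selection reward: sum of cos(d_R) over the triplet's edges.
    abs_rot: dict vertex -> 3x3 absolute rotation; rel_rot: dict (i, j) -> R_{i,j}."""
    return sum(np.cos(np.radians(angular_distance(R_ij, abs_rot[j] @ abs_rot[i].T)))
               for (i, j), R_ij in rel_rot.items())
```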
For the _iteration_ step, the sets of vertices currently selected and not yet selected into the global reference are denoted as \(\mathcal{V}^{s}\) and \(\mathcal{V}^{t}\), respectively. Note that the absolute rotations of all the vertices in \(\mathcal{V}^{s}\) have been estimated in \(\mathcal{V}^{s}\)'s local coordinate system, which are denoted as \(\{\mathbf{R}_{m}|v_{m}\in\mathcal{V}^{s}\}\). The vertex \(v_{p^{*}}\) in \(\mathcal{V}^{t}\) that receives the largest support from \(\mathcal{V}^{s}\) is moved from \(\mathcal{V}^{t}\) to \(\mathcal{V}^{s}\), with its rotation first initialized and then optimized. This iteration of selection, initialization, and optimization stops when the (connectivity and domination) termination conditions of the task-specific CDS construction are reached, and then the current \(\mathcal{V}^{s}\), together with the estimated absolute rotations of the vertices in \(\mathcal{V}^{s}\), serves as the global reference constructed by IRAv4. Specifically, for each vertex \(v_{p}\) in \(\mathcal{V}^{t}\), the edge set between it and \(\mathcal{V}^{s}\) is obtained and denoted as \(\mathcal{E}_{p}\), and for each edge \(e_{m,p}\) in \(\mathcal{E}_{p}\) connecting \(v_{m}\) in \(\mathcal{V}^{s}\) and \(v_{p}\) in \(\mathcal{V}^{t}\), the pre-computing set of \(v_{p}\)'s absolute rotation is obtained by:
\[\left\{\mathbf{R}_{p}^{m}=\mathbf{R}_{m,p}\mathbf{R}_{m}\big{|}v_{m}\in\mathcal{V}^{s},v_ {p}\in\mathcal{V}^{t},e_{m,p}\in\mathcal{E}_{p}\right\}. \tag{5}\]
Though the items in \(\{\mathbf{R}_{p}^{m}\}\) represent the absolute rotation pre-computations of the same vertex \(v_{p}\), the ideal identity is hard to hold practically due to the inevitable errors in both \(\mathbf{R}_{m,p}\) and \(\mathbf{R}_{m}\). To deal with this, similar to IRA, the supporting set of each item in the pre-computing set defined in Eq. 5 for each vertex \(v_{p}\) in \(\mathcal{V}^{t}\) is leveraged for the iteration step of the task-specific CDS extraction. Specifically, by leveraging the pre-computation of \(\mathbf{R}_{p}^{m}\) and the rotation estimations in \(\mathcal{V}^{s}\) connected by \(\mathcal{E}_{p}\), the relative rotations on \(\mathcal{E}_{p}\) could be re-computed and compared with the corresponding measurements for edge supporting set \(\mathcal{E}_{p}^{m}\) acquisition:
\[\mathcal{E}_{p}^{m}=\left\{d_{\mathbf{R}}(\mathbf{R}_{n,p},\mathbf{R}_{p}^{m}\mathbf{R}_{n}^{ \top})<\theta_{\mathrm{th}}\big{|}v_{n}\in\mathcal{V}^{s},e_{n,p}\in\mathcal{ E}_{p}\right\}. \tag{6}\]
Then, the selection reward for edge \(e_{m,p}\) is computed by:
\[\mathrm{rwd}(m,p)=\sum_{\begin{subarray}{c}v_{n}\in\mathcal{V}^{s}\\ e_{n,p}\in\mathcal{E}_{p}^{m}\end{subarray}}\cos\left(d_{\mathbf{R}}(\mathbf{R}_{n,p}, \mathbf{R}_{p}^{m}\mathbf{R}_{n}^{\top})\right), \tag{7}\]
and the edges(vertices) \(e_{m^{*},p}\)(\(v_{m^{*}}\)) and \(e_{m^{*},p^{*}}\)(\(v_{p^{*}}\)) for the vertex \(v_{p}\)'s rotation pre-computation and the iteration step's NBV determination are selected by:
\[\begin{cases}v_{m^{*}}=\arg\max_{v_{m}\in\mathcal{V}^{s}}\mathrm{rwd}(m,p), \\ v_{p^{*}}=\arg\max_{v_{p}\in\mathcal{V}^{t}}\mathrm{rwd}(m^{*},p).\end{cases} \tag{8}\]
Then, the absolute rotation of the selected NBV \(v_{p^{*}}\) could be initialized by \(\mathbf{R}_{m^{*},p^{*}}\mathbf{R}_{m^{*}}\). After NBV selection and initialization, optimization is further performed to improve the accuracy and robustness of the task-specific CDS construction. During the optimization procedure, local optimization on each newly selected NBV \(v_{p^{*}}\) is continuously performed, and global optimization on all the currently estimated vertices \(\{v_{p^{*}}\}\cup\mathcal{V}^{s}\) is intermittently carried out once the size \(|\{v_{p^{*}}\}\cup\mathcal{V}^{s}|\) has grown by a certain rate, say \(5\%\). Specifically, for the local optimization, the inlier edge set for providing optimization constraints is first obtained by:
\[\mathcal{E}_{p^{*}}^{m^{*}}=\left\{d_{\mathbf{R}}(\mathbf{R}_{n,p^{*}},\mathbf{R}_{p^{*}}^{ m^{*}}\mathbf{R}_{n}^{\top})<\theta_{\mathrm{th}}\Big{|}v_{n}\in\mathcal{V}^{s},e_{n,p^{*}} \in\mathcal{E}_{p^{*}}\right\}, \tag{9}\]
where \(\mathbf{R}_{p^{*}}^{m^{*}}=\mathbf{R}_{m^{*},p^{*}}\mathbf{R}_{m^{*}}\) is the rotation initialization of the vertex \(v_{p^{*}}\). Then, the absolute rotation of \(v_{p^{*}}\) is locally optimized by:
\[\mathbf{R}_{p^{*}}^{*}=\arg\min\sum_{\begin{subarray}{c}v_{n}\in\mathcal{V}^{s} \\ e_{n,p^{*}}\in\mathcal{E}_{p}^{m^{*}}\end{subarray}}d_{\mathbf{R}}^{2}(\mathbf{R}_{n,p^ {*}},\mathbf{R}_{p^{*}}\mathbf{R}_{n}^{\top}). \tag{10}\]
For the global optimization, the inlier edge set is also obtained:
\[\begin{split}&\left(\mathcal{E}_{p^{*}}\cup\mathcal{E}^{s} \right)^{*}=\left\{d_{\mathbf{R}}(\mathbf{R}_{m,n},\mathbf{R}_{m}^{\top})<\theta_{\mathrm{ th}}\right\}\\ &\mathrm{for}\ v_{m},v_{n}\in\{v_{p^{*}}\}\cup\mathcal{V}^{s},e_{m,n} \in\mathcal{E}_{p^{*}}\cup\mathcal{E}^{s},\end{split} \tag{11}\]
where \(\mathcal{E}^{s}\) is the edge set of \(\mathcal{V}^{s}\). Then, the absolute rotations of the vertices in \(\{v_{p^{*}}\}\cup\mathcal{V}^{s}\) are globally optimized by:
\[\{\mathbf{R}_{m}^{*}\}=\arg\min\sum_{\begin{subarray}{c}v_{m},v_{n}\in\{v_{p^{*}} \}\cup\mathcal{V}^{s}\\ e_{m,n}\in\left(\mathcal{E}_{p^{*}}\cup\mathcal{E}^{s}\right)^{*}\end{subarray}}d_{ \mathbf{R}}^{2}(\mathbf{R}_{m,n},\mathbf{R}_{n}\mathbf{R}_{m}^{\top}). \tag{12}\]
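To make the iteration step concrete, the sketch below implements the pre-computation of Eq. (5), the supporting set and reward of Eqs. (6)-(7), and the selection of Eq. (8) in plain numpy; the local and global optimizations of Eqs. (9)-(12), performed with Ceres in this paper, are omitted, and the data layout is again an illustrative assumption.

```python
import numpy as np

def angular_distance(R1, R2):
    c = (np.trace(R2 @ R1.T) - 1.0) / 2.0
    return np.degrees(np.arccos(np.clip(c, -1.0, 1.0)))

def select_nbv(est, rel, candidates, theta_th=3.0):
    """One NBV-selection step of the task-specific CDS construction.

    est:        dict v -> 3x3 absolute rotation of the selected vertices V^s
    rel:        dict (m, p) -> 3x3 relative rotation measurement R_{m,p}
    candidates: iterable of unselected vertices V^t
    Returns (p*, initial rotation of p*, reward), or (None, None, -inf)."""
    best_p, best_R, best_reward = None, None, -np.inf
    for p in candidates:
        neighbours = [m for m in est if (m, p) in rel]   # edges E_p towards V^s
        for m in neighbours:
            R_pre = rel[(m, p)] @ est[m]                 # Eq. (5): R_p^m
            reward = 0.0
            for n in neighbours:                         # Eqs. (6)-(7)
                d = angular_distance(rel[(n, p)], R_pre @ est[n].T)
                if d < theta_th:
                    reward += np.cos(np.radians(d))
            if reward > best_reward:                     # Eq. (8)
                best_p, best_R, best_reward = p, R_pre, reward
    return best_p, best_R, best_reward
```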
### _Other Key Steps Stemming from the IRAv3+_
As the main difference between IRAv3+ [24] and IRAv4 proposed in this paper is the manner of global reference construction, the other key steps of IRAv4 stemming from IRAv3+ are described in this sub-section for better understanding.
**Community detection-based cluster seed construction:** With the objective of performing the following on-the-fly EG clustering and local rotation averaging, several cluster seeds should be constructed in advance for the cluster growing procedure. In both IRAv3 [23] and IRAv3+, they are constructed based on the community detection method [36]. Specifically, the above community detection method is performed on the input EG to generate community-based structure, and for each sub-EG of the community, camera triplet is selected, together with their absolute rotations estimated, which is similar to the initialization step of the task-specific CDS-based global reference construction in IRAv4 (_cf._ Eq. 1 to Eq. 4).
**On-the-fly EG clustering and local rotation averaging:** This step is first adopted in IRAv3, which is similar in principle to the iteration step of the task-specific CDS-based
global reference construction in IRAv4 (_cf._ Eq. 5 to Eq. 12). To achieve this, for each unaffiliated vertex, the edge set between it and the currently affiliated vertex set of each cluster is first obtained. Then, by leveraging these edge sets, the supporting edge set of each unaffiliated vertex within each cluster could be obtained by absolute rotation pre-computation (_cf._ Eq. 5) and relative rotation re-computation (_cf._ Eq. 6). On this basis, the next-best affiliated vertex, together with its affiliated cluster and initialized rotation, are simultaneously determined by supporting set global maximization (_cf._ Eq. 7 and Eq. 8). Then, the rotation estimates are locally (only on the next-best affiliated vertex) or globally (on all the vertices of the affiliated cluster) optimized (_cf._ Eq. 9 to Eq. 12). The above procedure is iteratively performed until all the vertices are affiliated, together with their rotations estimated.
**Common vertices-guided alignment rotation estimation:** After reference construction and EG clustering, the cluster-to-reference rotation is estimated for local absolute rotation alignment. As it is observed in IRAv3+ that the rotation averaging-based absolute rotation estimates are more accurate and reliable than the essential matrix decomposition-based relative ones, a common vertices-guided alignment rotation estimation method is proposed. For a particular cluster, either one common vertex or one shared edge between it and the reference could induce an estimate of the cluster-to-reference alignment rotation. As the common vertex-induced ones have higher priority, they serve as guidance for alignment rotation estimation. Specifically, for each vertex-induced estimate, one could obtain its supporters from the edge-induced ones, and the one with most supporters is used as the initialization of the alignment rotation, which is further optimized by leveraging the constraints provided by its supporters. Readers may refer to the original paper of IRAv3+ [24] for more details.
**Local absolute rotation global alignment and optimization:** Given the alignment rotation of each cluster, the absolute rotations in the cluster's local coordinate system are globally aligned to the reference's coordinate system. After that, inlier edge set of the original EG is firstly obtained based on the aligned rotations and then used for providing constraints for globally optimizing them (_cf._ Eq. 11 and Eq. 12).
## IV New Rotation Averaging Benchmark Creation
It could be observed that most of the existing rotation averaging methods [26, 27, 28, 29, 16, 21, 30, 19, 31, 22, 23, 24] evaluate their performance based on the 1DSfM [32] dataset, with the camera calibration results of the slightly outdated Bundler [33], which was presented more than ten years ago, serving as the ground-truth source of camera rotations. In order to give a more comprehensive evaluation on the existing mainstream rotation averaging methods, a new 1DSfM-derived benchmark based on COLMAP [6] is rebuilt here. COLMAP is a general-purpose, end-to-end image-based 3D reconstruction pipeline with a graphical and command-line interface, which was proposed more recently and is updated more frequently. In recent years, COLMAP has become the most widely recognized and commonly used open-source SfM toolbox with various downstream applications, such as novel view synthesis [38, 39] and neural surface reconstruction [40, 41]. As a result, we believe that it is time for COLMAP to take the place of Bundler for rotation averaging benchmarking. The new benchmark creation process mainly contains three parts: epipolar-geometry graph regeneration, ground-truth data acquisition, and evaluation metrics definition, which are thoroughly described in the following, respectively.
### _Epipolar-Geometry Graph Regeneration_
In order to present a new rotation averaging benchmark by making full use of COLMAP, the Epipolar-geometry Graph (EG) of each test data in the 1DSfM dataset is regenerated at first. It should be noted that only the images in the Maximum Connected Component (MCC) are considered during the EG regeneration procedure here for efficiency considerations, and this image connectivity information is provided by the \(\mathrm{cc.txt}\) file contained in the original 1DSfM dataset. Then, the images to be processed are fed into the COLMAP toolbox for local feature extraction and matching with default parameters. Given the pair-wise feature matches produced by COLMAP, the OpenCV library is employed for essential matrix estimation and decomposition to produce relative motions (rotations and translations), where the feature-level point-to-epipolar-line distance threshold and the desirable level of estimation confidence are set to \(1\) pixel and \(99.9\%\), respectively, during the RANdom SAmple Consensus (RANSAC) [42] procedure of the essential matrix estimation [10] process. After that, each image pair with more than \(16\) pairs of inlier feature matches for RANSAC-based essential matrix estimation is added as a graph edge to the regenerated EG. It should be further noted that the camera intrinsic parameters are required during essential matrix estimation, which are provided here by the \(\mathrm{coords.txt}\) file contained in the original 1DSfM dataset.
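As an illustration of this per-pair step, the OpenCV-based sketch below estimates and decomposes the essential matrix of one image pair; the function and variable names are ours, and the exact invocation used to build the benchmark may differ.

```python
import cv2
import numpy as np

def relative_motion(pts1, pts2, K, thresh_px=1.0, confidence=0.999, min_inliers=16):
    """Relative motion of one image pair for EG regeneration.

    pts1, pts2: Nx2 arrays of matched feature coordinates (from COLMAP matching)
    K:          3x3 intrinsic matrix built from the 1DSfM coords.txt data
    Returns (R, t, n_inliers), or None if the pair does not become an EG edge."""
    pts1 = np.asarray(pts1, dtype=np.float64)
    pts2 = np.asarray(pts2, dtype=np.float64)
    E, mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC,
                                   prob=confidence, threshold=thresh_px)
    if E is None or E.shape != (3, 3):
        return None
    n_inliers, R, t, _ = cv2.recoverPose(E, pts1, pts2, K, mask=mask)
    if n_inliers <= min_inliers:       # keep only pairs with more than 16 inliers
        return None
    return R, t, n_inliers
```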
### _Ground-Truth Data Acquisition_
After EG regeneration, COLMAP is employed again for the ground-truth data acquisition of camera absolute rotations (and locations) for rotation averaging evaluation. Specifically, for each test data of the 1DSfM dataset, given the image matching result of COLMAP itself obtained in the last subsection, the SfM procedure integrated in COLMAP is conducted with default parameters, and the camera calibration result is denoted as \(\left\{\mathbf{R}_{i}^{\mathrm{C}},\mathbf{c}_{i}^{\mathrm{C}}|v_{i}\in\mathcal{V}_{\mathrm{C}}\right\}\), where \(\mathcal{V}_{\mathrm{C}}\) is the set of cameras successfully registered by COLMAP. In addition, the camera calibration result of Bundler, provided by the \(\mathrm{gt\_bundle.out}\) file contained in the original 1DSfM dataset, is denoted as \(\left\{\mathbf{R}_{j}^{\mathrm{B}},\mathbf{c}_{j}^{\mathrm{B}}|v_{j}\in\mathcal{V}_{\mathrm{B}}\right\}\). It should be noted that \(\left\{\mathbf{R}_{j}^{\mathrm{B}},\mathbf{c}_{j}^{\mathrm{B}}\right\}\) has been aligned to the GPS coordinate system and has real physical scale, while \(\left\{\mathbf{R}_{i}^{\mathrm{C}},\mathbf{c}_{i}^{\mathrm{C}}\right\}\) suffers from the issue of scale ambiguity due to the up-to-scale nature of SfM reconstruction. As a result, in order to use the camera calibration result of COLMAP as ground-truth camera poses for rotation (and translation) averaging evaluation, it should be aligned to the coordinate system of Bundler for scale recovery. The alignment is achieved by estimating a similarity transformation \(\mathbf{T}_{\mathrm{C,B}}=\left(\begin{smallmatrix}s_{\mathrm{C,B}}\mathbf{R}_{\mathrm{C,B}}&\mathbf{t}_{\mathrm{C,B}}\\ \mathbf{0}^{\top}&1\end{smallmatrix}\right)\)[43] based on RANSAC with the cameras co-calibrated by COLMAP and Bundler, \(\mathcal{V}_{\mathrm{C\cap B}}\), where
the locational distance threshold and the desirable estimation confidence are set to \(1\mathrm{m}\) and \(99.9\%\), respectively. Then, based on the estimated transformation \(\mathbf{T}_{\mathrm{C,B}}\), \(\left\{\mathbf{R}_{i}^{\mathrm{C}},\mathbf{c}_{i}^{\mathrm{C}}\right\}\) could be aligned to \(\left\{\mathbf{R}_{j}^{\mathrm{B}},\mathbf{c}_{j}^{\mathrm{B}}\right\}\) by:
\[\left\{\mathbf{R}_{i}^{\mathrm{C}^{*}}=\mathbf{R}_{i}^{\mathrm{C}}\mathbf{R}_{\mathrm{C,B} }^{\top},\mathbf{c}_{i}^{\mathrm{C}^{*}}=s_{\mathrm{C,B}}\mathbf{R}_{\mathrm{C,B}}\mathbf{ c}_{i}^{\mathrm{C}}+\mathbf{t}_{\mathrm{C,B}}\Big{|}v_{i}\in\mathcal{V}_{\mathrm{C}} \right\}. \tag{13}\]
Before performing rotation averaging method benchmarking, it should be further noted that the basic assumption behind using the camera calibration results of incremental SfM methods (Bundler or COLMAP) as the ground-truth source for motion (rotation and translation) averaging evaluation is that the camera orientation and localization results of incremental SfM, with iterative RANSAC-based outlier filtering and BA-based parameter optimization [44], are acknowledged to be of much higher accuracy than those of motion averaging before performing the final global BA. Nonetheless, the incremental SfM-based calibration result inevitably contains estimation errors and still does not represent the real ground-truth camera poses. To make the evaluation more reliable, inspired by the work of long-term visual localization benchmarking [45], the camera poses calibrated by COLMAP that pass a Bundler-based cross check at a certain pose accuracy level are used as the final camera pose ground-truth data. Its basic idea is that only the camera poses calibrated by both COLMAP and Bundler with enough closeness in the 6-dimensional camera pose space are used for the following rotation averaging evaluation. Specifically, given the rotational (\(\theta_{\mathrm{th}}^{\mathrm{h|m|c}}\)) and locational (\(d_{\mathrm{th}}^{\mathrm{h|m|c}}\)) distance thresholds corresponding to different camera pose accuracy levels, namely high-accuracy \((2^{\circ},0.25\mathrm{m})\), medium-accuracy \((5^{\circ},0.5\mathrm{m})\), and coarse-accuracy \((10^{\circ},5\mathrm{m})\), the camera subsets for rotation averaging evaluation with respect to the different distance thresholds are obtained by:
\[\begin{split}&\mathcal{V}_{\mathrm{C\cap B}}^{\mathrm{h|m|c}}= \left\{d_{\mathbf{R}}(\mathbf{R}_{k}^{\mathrm{C}^{*}},\mathbf{R}_{k}^{\mathrm{B}})<\theta_{ \mathrm{th}}^{\mathrm{h|m|c}}\cap d_{\mathbf{c}}(\mathbf{c}_{k}^{\mathrm{C}^{*}},\mathbf{c} _{k}^{\mathrm{B}})<d_{\mathrm{th}}^{\mathrm{h|m|c}}\right\}\\ &\mathrm{for}\;v_{k}\in\mathcal{V}_{\mathrm{C\cap B}},\;\mathrm{ where}\;d_{\mathbf{c}}(\mathbf{c}_{1},\mathbf{c}_{2})=\|\mathbf{c}_{1}-\mathbf{c}_{2}\|_{2}.\end{split} \tag{14}\]
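A minimal numpy sketch of the alignment of Eq. (13) and the cross check of Eq. (14) is given below; the RANSAC estimation of the similarity transformation itself is omitted, and the function and variable names are illustrative assumptions.

```python
import numpy as np

def angular_distance(R1, R2):
    c = (np.trace(R2 @ R1.T) - 1.0) / 2.0
    return np.degrees(np.arccos(np.clip(c, -1.0, 1.0)))

def align_and_cross_check(colmap, bundler, s, R_cb, t_cb, theta_th=5.0, d_th=0.5):
    """Eq. (13) alignment of COLMAP poses and Eq. (14) COLMAP-Bundler cross check.

    colmap, bundler: dicts v -> (R, c) with R a 3x3 rotation and c a camera centre
    s, R_cb, t_cb:   scale, rotation and translation of the similarity T_{C,B}
    theta_th, d_th:  medium-accuracy thresholds (5 degrees, 0.5 m)."""
    aligned, passed = {}, set()
    for v, (R_c, c_c) in colmap.items():
        R_a = R_c @ R_cb.T                    # Eq. (13), rotation part
        c_a = s * (R_cb @ c_c) + t_cb         # Eq. (13), location part
        aligned[v] = (R_a, c_a)
        if v in bundler:
            R_b, c_b = bundler[v]
            if (angular_distance(R_a, R_b) < theta_th
                    and np.linalg.norm(c_a - c_b) < d_th):
                passed.add(v)                 # Eq. (14): medium-accuracy subset
    return aligned, passed
```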
The sizes of the camera subsets \(\mathcal{V}_{\mathrm{C\cap B}}^{\mathrm{h|m|c}}\), together with some other meta-data about the new rotation averaging benchmark, are listed in Table I, and in order to make a trade-off between ground-truth data magnitude and reliability, the camera subset with medium accuracy, _i.e._\(\mathcal{V}_{\mathrm{C\cap B}}^{\mathrm{m}}\), is finally employed for rotation averaging benchmarking. The COLMAP-to-Bundler alignment median errors of rotation and location in Table I are defined as:
\[\begin{cases}e_{\mathbf{R}}^{\mathrm{C\cap B}}=\mathrm{med}\left(\{d_{\mathbf{R}}(\bm {R}_{k}^{\mathrm{C}^{*}},\mathbf{R}_{k}^{\mathrm{B}})|v_{k}\in\mathcal{V}_{\mathrm{ C\cap B}}\}\right),\\ e_{\mathbf{c}}^{\mathrm{C\cap B}}=\mathrm{med}\left(\{d_{\mathbf{c}}(\mathbf{c}_{k}^{ \mathrm{C}^{*}},\mathbf{c}_{k}^{\mathrm{B}})|v_{k}\in\mathcal{V}_{\mathrm{C\cap B}} \}\right),\end{cases} \tag{15}\]
where \(\mathrm{med}(\{x_{i}\})\) returns the median value of the set \(\{x_{i}\}\). It could be further observed from the table that: 1) in most cases, except for MDR, TOL, and YKM, more cameras could be registered by COLMAP than by Bundler; 2) the disparity in camera calibration accuracy between COLMAP and Bundler is insignificant, as both \(e_{\mathbf{R}}^{\mathrm{C\cap B}}\) and \(e_{\mathbf{c}}^{\mathrm{C\cap B}}\) are relatively small; 3) different test data in the new benchmark exhibit different degrees of pose conformity (\(\frac{|\mathcal{V}_{\mathrm{C\cap B}}^{\mathrm{h|m|c}}|}{|\mathcal{V}_{\mathrm{C\cap B}}|}\)) between COLMAP and Bundler; taking \(\frac{|\mathcal{V}_{\mathrm{C\cap B}}^{\mathrm{m}}|}{|\mathcal{V}_{\mathrm{C\cap B}}|}\) as an example, \(76.23\%\) of the co-calibrated cameras pass the medium-accuracy cross check for MND, while for TFG that value decreases to only \(15.24\%\).
### _Evaluation Metrics Definition_
For a particular rotation averaging method X (what X could be is described in the next section), it is firstly performed on the relative rotations from the regenerated EG in this paper to obtain the absolute camera rotations. Then, based on the rotation averaging result and the relative translation measurements, translation averaging is performed with the mainstream method, BATA5[34], to get the absolute camera locations, and the X-based motion averaging result is denoted as \(\left\{\mathbf{R}_{i}^{\mathrm{X}},\mathbf{c}_{i}^{\mathrm{X}}|v_{i}\in\mathcal{V}_{ \mathrm{X}}\right\}\). Then, the rotation \(\mathbf{R}_{\mathrm{X,C}^{*}}\) and similarity transformation \(\mathbf{T}_{\mathrm{X,C}^{*}}\) for aligning \(\left\{\mathbf{R}_{i}^{\mathrm{X}},\mathbf{c}_{i}^{\mathrm{X}}\right\}\) to the coordinate system of \(\left\{\mathbf{R}_{i}^{\mathrm{C}^{*}},\mathbf{c}_{i}^{\mathrm{C}^{*}}\right\}\) are estimated by the
methods of voting-based single rotation averaging [29] and RANSAC-based similarity estimation, respectively. The pose alignment is conducted in the same way as Eq. 13, and the aligned poses are denoted as \(\left\{\mathbf{R}_{i}^{\mathrm{X}^{\ast}},\mathbf{c}_{i}^{\mathrm{X}^{\ast}}\right\}\). Finally, the median rotational and locational errors on the camera subset of \(\mathcal{V}_{\mathrm{C\cap B}}^{\mathrm{m}}\) serve as the evaluation metrics of the new rotation averaging benchmark, which are defined as:
following advantages: 1) It can tolerate corruption as high as the information-theoretic bound; 2) It does not require a good initialization for the estimates of group elements; 3) It has a simple interpretation; And 4) under some mild conditions its global minimum exactly recovers the corruption levels.
**HARA**[30] presents a hierarchical initialization scheme that constructs a spanning tree of a rotation graph by propagating most reliable constraints first and less reliable ones later. In HARA, the hierarchy of reliability based on the number of consistent triplet constraints, as well as their level of consistency, is established. That is, a constraint is considered to be more reliable if it is strongly supported by many other constraints and less reliable if it has weaker or fewer supports. In addition, the number of valid 2D-2D correspondences could also be optionally incorporated into the hierarchy.
**NeuRoRA**[28] is built to learn patterns from the data and predict/regress the model parameters from the noisy relative rotation measurements. NeuRoRA is a two-step approach: In the first step, a graph-based network, CleanNet, is utilized to clean the EG by removing outliers and rectifying noisy measurements; And in the second step, an initialization from the cleaned EG, instantiated from a shortest path tree, is then further fine-tuned using a separate graph-based network, FineNet, to produce the final rotation averaging results.
### _Rotation Estimation Accuracy Benchmarking_
For the methods of IRLS-GM, IRLS-\(\ell_{\frac{1}{2}}\), MPLS, DESC, and HARA, we locally run the official source codes (with default parameters) released by the authors on the new 1DSfM-derived rotation averaging benchmark to produce their absolute rotation estimation results. For the method of NeuRoRA, following the authors' instructions, the CleanNet and FineNet of NeuRoRA are both first pretrained on synthetic data and then finetuned and tested on the real-world data (with default parameters) in round-robin fashion (leave one out). For the existing IRA series (IRA, IRA++, IRAv3, and IRAv3+) and the proposed IRAv4, locally implemented source codes are employed for absolute rotation estimation. The absolute rotation estimation results of all the methods for comparison on the new benchmark are shown in Table II. The top-four results on each test data among all the comparative methods are highlighted, and in order to perform a comprehensive comparison, the rankings of all the comparative methods on each test data are averaged and shown in the last row of the table, with the top-four methods highlighted as well. It could be seen from the table that: 1) Among all the rotation averaging methods for comparison, the proposed IRAv4 in this paper achieves overall the best performance in rotation estimation accuracy on the new 1DSfM-derived rotation averaging benchmark; 2) The deep learning-based rotation averaging method NeuRoRA is rather data/domain-sensitive and performs relatively poorly in rotation estimation accuracy on the new benchmark, especially for the large-scale (PIC, ROF, and TFG) and high-noise (GDM, MDR, and USQ) test data; And 3) Compared with the results of IRA++ and IRAv3+, better performance is achieved by IRAv3 and IRAv4 in rotation estimation accuracy on the new benchmark, which demonstrates the effectiveness of the _task-specific_ epipolar-geometry graph clustering and alignment reference construction proposed in IRAv3 and IRAv4, respectively.
In addition, in order to further demonstrate the effectiveness of the proposed IRAv4, comparative experiments on rotation estimation accuracy on the original 1DSfM dataset are also conducted, and the results are shown in Table III. It can be observed that, compared with the state-of-the-art method on this dataset, RAGO [19], and with the existing IRA series, IRAv4 achieves overall the best performance as well. The reasons for the inconsistency of different methods in rotation estimation accuracy across the two benchmarks (_cf._ Table II and Table III) mainly lie in the differences in the ground-truth rotation source (different numbers of cameras involved in the evaluation and different ways of obtaining their ground-truth rotations).
locally run the official source code of BATA (with default parameters) released by the authors to produce the absolute location estimation results of these methods. The absolute location estimation results of all the methods for comparison on the new benchmark are shown in Table IV. Similar to Table II, the top-four methods on each test data and in terms of overall ranking are highlighted. From the above two tables, conclusions similar to the three items discussed in the last subsection can be drawn. It can further be observed that, although rotation and location estimation accuracy are moderately correlated across methods, an advantage in rotation estimation accuracy is not strictly maintained during location estimation: the third- and fourth-best methods in rotation estimation accuracy on the new benchmark are IRAv3 and IRLS-\(\ell_{\frac{1}{2}}\), while those in location estimation accuracy are IRA and IRLS-GM.
### _Incorporation of the Global Bundle Adjustment_
In order to demonstrate the performance of different rotation averaging methods, coupled with a certain translation averaging method (BATA), in providing accurate enough camera pose initializations for the final BA, multi-view triangulation-based scene computation and global BA-based pose optimization are performed. Specifically, the pair-wise feature matches that pass the RANSAC-based essential matrix estimation are linked into feature tracks with the union-find algorithm implemented in OpenMVG11. Then, a RANSAC-based multi-view triangulation algorithm implemented in COLMAP is performed, with the reprojection error threshold, the viewing angle threshold, and the desired level of estimation confidence set to \(4\,\mathrm{pixels}\), \(2^{\circ}\), and \(99.9\%\), respectively, to compute the initial values of the feature tracks' spatial coordinates. During the multi-view triangulation, the intrinsic parameters provided by the original 1DSfM dataset and the camera poses estimated by different rotation averaging methods and BATA are used to formulate the camera projection matrices. Finally, given the initial values of camera parameters and scene points, global BA is performed to further improve the camera pose accuracy; it is conducted by leveraging Ceres Solver with a Huber loss. The rotation and location estimation accuracy of the optimized camera poses of the top-2 methods in both rotation and location estimation accuracy before global BA (IRAv4 and MPLS) on the new benchmark are shown in the last two columns of Table II and Table IV, respectively. Note that their results are re-ranked independently in the tables to maintain comparative fairness. It can be observed from the two tables that, with the global BA, the camera pose estimation accuracy of both IRAv4 and MPLS is largely improved, and IRAv4 achieves slightly better location estimation accuracy than MPLS, which demonstrates that accurate enough camera pose initializations could be provided by both IRAv4 and MPLS coupled with BATA, as well as the advantage of IRAv4 over MPLS.
Footnote 11: [https://github.com/openMVG/openMVG](https://github.com/openMVG/openMVG)
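For concreteness, the following sketch illustrates the kind of Huber-robustified reprojection objective minimized in this final global BA step. It is a simplified, self-contained example (toy camera, fixed poses, point-only refinement, with SciPy standing in for the Ceres Solver setup described above), not the actual pipeline code.

```python
# Minimal robust reprojection-error refinement: optimize 3D points under a
# Huber loss with fixed camera poses (real global BA also refines the cameras).
import numpy as np
from scipy.optimize import least_squares

def project(K, R, t, X):
    """Pinhole projection of a 3D point X into a camera with pose (R, t)."""
    x_cam = R @ X + t
    return (K @ (x_cam / x_cam[2]))[:2]

def residuals(points_flat, cameras, observations):
    """Stacked 2D reprojection residuals over all (camera, point, pixel) observations."""
    points = points_flat.reshape(-1, 3)
    res = [project(*cameras[c], points[p]) - uv for c, p, uv in observations]
    return np.concatenate(res)

# toy setup: one camera at the origin observing one triangulated point
K = np.array([[1000.0, 0.0, 320.0], [0.0, 1000.0, 240.0], [0.0, 0.0, 1.0]])
cameras = [(K, np.eye(3), np.zeros(3))]
observations = [(0, 0, np.array([330.0, 250.0]))]
X0 = np.array([0.05, 0.05, 5.0])                 # initial point from triangulation

sol = least_squares(residuals, X0, args=(cameras, observations),
                    loss='huber', f_scale=4.0)   # f_scale ~ pixel-level threshold
print(sol.x)
```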
Furthermore, to provide a more intuitive comparison of the robustness of all the rotation averaging methods under comparison, their sparse scene reconstruction results after global BA on the GDM test data of the new rotation averaging benchmark are shown in Fig. 2. As shown in the upper-left of this figure, there are two buildings with left-right symmetry located in the scene of GDM, which significantly increases the difficulty of camera pose estimation and of the subsequent scene structure recovery. It can be observed from the figure that among all \(11\) rotation averaging methods involved in the benchmarking, only MPLS, DESC, and the IRA series succeed in recovering the correct scene structure of GDM, which demonstrates their robustness.
## VI Conclusion
In this paper, to better exploit the global reference for local-to-global rotation alignment, an incremental parameter estimation-based, task-specific connected dominating set extraction method is proposed, by which the accuracy and
Fig. 2: Sparse scene reconstruction results of all rotation averaging methods for comparison on the test data of GDM of the new rotation averaging benchmark.
robustness of the Incremental Rotation Averaging (IRA) series are further advanced. Furthermore, a new 1DSfM-derived rotation averaging benchmark, together with a new ground-truth data acquisition approach and new evaluation tasks and metrics based on the camera calibration results of COLMAP, is presented, in order to provide a more comprehensive and reasonable comparison among different rotation averaging methods. On the new benchmark, the proposed method of this paper, IRAv4, achieves overall the best performance in both rotation and downstream location estimation, which demonstrates its effectiveness in terms of accuracy and robustness when dealing with large-scale and high-noise rotation averaging problems.
|
2309.07197 | Mitigating Adversarial Attacks in Federated Learning with Trusted
Execution Environments | The main premise of federated learning (FL) is that machine learning model
updates are computed locally to preserve user data privacy. This approach
avoids by design user data to ever leave the perimeter of their device. Once
the updates aggregated, the model is broadcast to all nodes in the federation.
However, without proper defenses, compromised nodes can probe the model inside
their local memory in search for adversarial examples, which can lead to
dangerous real-world scenarios. For instance, in image-based applications,
adversarial examples consist of images slightly perturbed to the human eye
getting misclassified by the local model. These adversarial images are then
later presented to a victim node's counterpart model to replay the attack.
Typical examples harness dissemination strategies such as altered traffic signs
(patch attacks) no longer recognized by autonomous vehicles or seemingly
unaltered samples that poison the local dataset of the FL scheme to undermine
its robustness. Pelta is a novel shielding mechanism leveraging Trusted
Execution Environments (TEEs) that reduce the ability of attackers to craft
adversarial samples. Pelta masks inside the TEE the first part of the
back-propagation chain rule, typically exploited by attackers to craft the
malicious samples. We evaluate Pelta on state-of-the-art accurate models using
three well-established datasets: CIFAR-10, CIFAR-100 and ImageNet. We show the
effectiveness of Pelta in mitigating six white-box state-of-the-art adversarial
attacks, such as Projected Gradient Descent, Momentum Iterative Method, Auto
Projected Gradient Descent, the Carlini & Wagner attack. In particular, Pelta
constitutes the first attempt at defending an ensemble model against the
Self-Attention Gradient attack to the best of our knowledge. Our code is
available to the research community at https://github.com/queyrusi/Pelta. | Simon Queyrut, Valerio Schiavoni, Pascal Felber | 2023-09-13T14:19:29Z | http://arxiv.org/abs/2309.07197v1 | # Mitigating Adversarial Attacks in Federated Learning with Trusted Execution Environments
###### Abstract
The main premise of federated learning (FL) is that machine learning model updates are computed locally to preserve user data privacy. This approach avoids by design that user data ever leave the perimeter of their device. Once the updates are aggregated, the model is broadcast to all nodes in the federation. However, without proper defenses, compromised nodes can probe the model inside their local memory in search for adversarial examples, which can lead to dangerous real-world scenarios. For instance, in image-based applications, adversarial examples consist of images, slightly perturbed to the human eye, that get misclassified by the local model. These adversarial images are then later presented to a victim node's counterpart model to replay the attack. Typical examples harness dissemination strategies such as altered traffic signs (patch attacks) no longer recognized by autonomous vehicles or seemingly unaltered samples that poison the local dataset of the FL scheme to undermine its robustness. Pelta is a novel shielding mechanism leveraging Trusted Execution Environments (TEEs) that reduces the ability of attackers to craft adversarial samples. Pelta masks inside the TEE the first part of the back-propagation chain rule, typically exploited by attackers to craft the malicious samples. We evaluate Pelta on state-of-the-art accurate models using three well-established datasets: CIFAR-10, CIFAR-100 and ImageNet. We show the effectiveness of Pelta in mitigating six white-box state-of-the-art adversarial attacks, such as Projected Gradient Descent, Momentum Iterative Method, Auto Projected Gradient Descent, and the Carlini & Wagner attack. In particular, Pelta constitutes, to the best of our knowledge, the first attempt at defending an ensemble model against the Self-Attention Gradient attack. Our code is available to the research community at [https://github.com/queyrusi/Pelta](https://github.com/queyrusi/Pelta).
## I Introduction
The proliferation of edge devices and small-scale local servers available off-the-shelf nowadays has generated an astonishing trove of data, to be used in several areas, including smart homes, e-health, _etc_. In several of these scenarios, the data being generated is highly sensitive. While the deployment of data-driven machine learning (ML) algorithms to train models over such data is becoming prevalent, one must take special care to prevent privacy leaks. In fact, it has been shown how, without proper mitigation mechanisms, sensitive data (_i.e._, the data used by such ML models during training) can be reconstructed. To overcome this problem, an increasingly popular approach is federated learning (FL) [1, 2]. FL is a decentralized machine learning paradigm, where clients share with a trusted server only their local individual updates, rather than the data used to train them, hence protecting by design the privacy of user data. The trusted FL server is known by all nodes. Its role is to build a global model by aggregating the updates sent by the nodes. Once aggregated, the server broadcasts back the updated model to all clients. The nodes then update their models locally and use them on fresh batches of local data (_i.e._, for inference purposes). This approach prevents user data from leaving the user devices, as only the local model updates are sent outside the device.
A popular model trained in FL is the transformer [3], a widespread multi-purpose deep learning (DL) architecture. Transformers create a rich and high-performing continuous representation of the whole input (_i.e._, text or images [4]), effectively used across a diverse range of tasks, yielding state-of-the-art results in several areas, _e.g._, language modeling [5, 6], translation [7], audio classification [8], computer vision tasks such as image classification with vision transformer models (ViT) [9], object detection [10], semantic segmentation [11], _etc_. They harness the _self-attention mechanism_[3], which weights the relative positions of tokens inside a single input sequence to compute a representation of that given sequence.
In the FL training context, both the fine tuning of transformer-based models pre-trained on large-scale data and the training of their lightweight counterparts such as Mobile-ViT [12] have sparked interest in industry and academia [13]. Because this family of models demands considerable efforts in design and computing resources, protecting its integrity during a collaborative training requires special attention.
Consider now the following conjectured scenario. A server broadcasts its expensive model to collaborating clients to have it fine-tuned over their local private data. One of the clients taps into its RAM, copies the model and computes malicious patches, specifically designed to trick the model. Without ever altering the model, it can now run a patch attack [14]: it puts adversarial stickers on objects (road signs, for instance) that are subject to regular inferences by the FL model. The objects are then misclassified by unaware agents running the collaboratively learned model, and an accident may ensue.
Alternatively, the malicious agent initiates a poisoning attack that can break a model's robustness by sending the central server updates that stem from inference on samples engineered with a trojan trigger to create an unsuspected backdoor [15] that can be activated at an inconvenient time by unaware users of the FL model. Similarly, malicious clients can have the model purposefully and repeatedly misclassify their newfound adversarial examples to severely undermine the quality of the
aggregated updates [16]. In all these scenarios, access to its own physical copy of the model allows the malicious client to generate poisonous data that effectively compromises the FL model's reliability. Safeguarding against the creation of these adversaries is paramount in a framework where scaling amplifies the effectiveness of attacks to alarming levels.
In this work, we focus on a client crafting adversarial examples. Fig. 1 depicts our considered FL scenario with a compromised node which tries to craft adversarial samples (_i.e._, launching an evasion attack). Because it is the hardest to defend against, we consider the _white-box_ setting, _i.e._, the model's characteristics are completely known, and an attacker can leverage gradient computation inside the model to craft adversarial examples. For instance, in the case of vision classification, an attacker leverages the model's gradients to craft images designed to fool a classifier while appearing seemingly unaltered to humans, launching a so-called gradient-based _adversarial_ (or _evasion_) _attack_[17, 18, 19].1 By design of the FL paradigm, a large number of compromised clients can probe their own device memory to launch an adversarial attack against the FL model. In a white-box scenario, these attacks exploit the model in clear at inference time, by perturbing the input and having it misclassified. Such perturbations are typically an additive mask crafted by following the collected gradients of the loss _w.r.t._ the input pixels and applying it to the input signal. Such gradients are either directly obtained by tapping into the device's memory, when they are actually computed for local usage, or they can be calculated from the model weights and activations when the device is instructed not to produce them (typically the case when running inferences after deployment).
Footnote 1: While we use Vision Transformers for illustration and description purposes, the mitigation mechanisms presented and implemented later are directly applicable to other classes of models including DNNs or Transformers, such as those for NLP, audio processing, _etc_.
In this paper, we propose Pelta, a defense that mitigates gradient-based adversarial attacks launched by a client node by leveraging hardware obfuscation at inference time to hide a few in-memory values, _i.e._, those close to the input and produced by the model during each pass. This leaves the attackers unable to complete the chain rule of the back-propagation algorithm used by gradient-based attacks and to compute the perturbation needed to update their adversarial sample. To this end, we rely on hardware-enabled trusted execution environments (TEE), by means of enclaves, secure areas of a processor offering privacy and integrity guarantees. Notable examples include Intel SGX [20] or Arm TrustZone [21]. TEEs can lead to a restricted white-box scenario, _i.e._, a stricter setting in which the attacker cannot access absolutely everything from the model he seeks to defeat, hence impairing his white-box attack protocols designed for the looser hypothesis. In our context, we deal specifically with TrustZone, given its vast adoption, performance [21] and support for attestation [22]. However, TrustZone enclaves have limited memory (up to 30 MB in some scenarios), making it challenging to completely shield state-of-the-art Transformer architectures, often larger than 500 MB. This constraint therefore calls for such a hardware defense to be a light, partial obfuscation of the model.
Our main contributions are as follows:
1. We show that it is possible to mitigate evasion attacks during inference in an FL context through hardware shielding.
2. We describe the operating principles of Pelta, our lightweight gradient masking defense scheme.
3. We apply Pelta to shield layers of individual and ensemble state-of-the-art models against several white-box attacks, including the Self-Attention Gradient Attack, to show that the scheme effectively provides high protection in this demanding white-box setting.
4. To the best of our knowledge, Pelta is the first applied defense against the Self-Attention Gradient Attack.
The rest of the paper is organized as follows. §II surveys related work. §III presents our threat model. §IV describes the principles of the Pelta shielding defense. We evaluate it on an ensemble model against a gradient-based attack and discuss the results in §V. §VI discusses general system implications of Pelta. Finally, we conclude and hint at future work in §VII.
## II Related Work
A significant body of work exists towards defending against adversarial attacks in a white-box context [23]. Because this setting makes few assumptions about the attacker, particular effort is directed towards refining adversarial training (AT) methods [24, 25]. However, recent studies show a trade-off between a model's generalization capabilities (_i.e._, its standard test accuracy) and its robust accuracy [26, 27, 28, 29]. AT can also expose the model to new threats [30] and, perhaps even more remarkably, increase robust error at times [31]. It is possible to use AT at training time as a defense in the federated learning context [32], but it creates its own sensitive exposure to a potentially malicious server. The latter can restore a tightly approximated feature vector when the probability assigned to the ground-truth class is low (which is the case for an adversarial example) [33]. Thus, the server may reconstruct adversarial samples of its client nodes, successfully allowing for a privacy breach through an _inversion attack_, _i.e._, reconstructing elements of private data in other nodes' devices. Surprisingly, [32] also uses randomisation at inference time [34] to defend against iterative gradient-based adversarial attacks, even though much earlier work expresses serious reservations about such practices [35].
Fig. 1: Federated learning under trusted and compromised nodes. Pelta shields against evasion attacks.
In FL, where the privacy of users is paramount, defending against inversion is not on par with the security of the model itself, and attempts at bridging the two are ongoing [36]. These attempts focus on defending the model at training time or against _poisoning_, _i.e._, altering the model's parameters to have it underperform in its primary task or overperform in a secondary task unbeknownst to the server or the nodes. On the other hand, defenses against inversion attacks have seen a surge since the introduction of FL. DarkneTZ [37], PPFL [38] and GradSec [39] mitigate these attacks with the use of a TEE. By protecting sensitive parameters, activations and gradients inside the enclave memory, the risk of gradient leakage is alleviated and the white-box setting is effectively limited, thus weakening the threat. However, Pelta is conceptually different from these methods, as it does not consider the gradient of the loss with respect to the _parameters_, but with respect to the _input image_ instead. The former can reveal private training data samples in the course of an inversion attack, while the latter only ever directs the perturbation applied to an input when conducting an adversarial attack. Other enclave-based defenses do not focus on adversarial attacks, are based on SGX (meaning looser enclave constraints) and do not deal with ML computational graphs in general [40, 41], yet present defense architectures of layers that fit our case [42]. Where those were tested only on CNN-based architectures or simple DNNs, Pelta can protect a larger, more accurate Transformer-based architecture. The robustness of the two types against adversarial attacks was compared in prior studies [43, 44, 45].
In [46], authors mitigate inference-time evasion attacks by pairing the distributed model with a node-specific _attractor_, which detects adversarial perturbations, before distributing it. However, [46] assumes the nodes only have black-box access to their local model. Given the current limitations on the size of encrypted memory, in particular for TrustZone enclaves [21], it is currently unfeasible to completely shield models such as VGG-16 or larger. This is why Pelta aims at exploring a more reasonable use case of the hardware obfuscation features of TrustZone by shielding only a subset of the total layers, as we detail later. It could thus be used as an overlay to the aforementioned study. More generally, our proposed defense scheme does not interfere with existing software solutions for train-time or inference-time defenses such as randomization, quantization or encoding techniques [47]. As a result, Pelta should not be regarded as a competitor algorithm when it comes to obfuscating sensitive gradients, but rather as a supplementary hardware-reliant aid to existing protocols.
Overall, gradient obfuscation as a defense mechanism against evasion attacks has been studied in the past [48, 35]. The authors discuss, from a theoretical perspective, the fragility of relying on masking techniques. Instead, we show that it fares well in protecting a state-of-the-art architecture against inference-time gradient-based evasion attacks, even when using off-the-shelf hardware with limited resources. We further elaborate on the strong ties to the Backward Pass Differentiable Approximation (BPDA) attack [35] in §IV. While we show that Pelta unambiguously weakens a malicious agent in the white-box setting by restricting their means of action, its design provides no defense capabilities against black-box attacks [49], since those operate in a setting that already assumes complete obfuscation of the model's quantities when crafting the adversarial samples.
## III Threat Model
We assume an honest-but-curious client attacker, which does not tamper with the FL process and message flow. The attacker's device is assumed to be computing the gradients of the model at inference time, which is typically the case at each inference during the training rounds of an FL scheme. The case where no gradients are produced/stored by the device is a subcase of this setting. The normal message exchanges defined by the protocol are not altered. The attacker has access to the model to run inferences, but a subset of the layers are _shielded_, under a restricted white-box setting ensured by an impregnable TEE enclave (side-channel attacks are out of scope for the rest of this paper). In practice, secure communication channels are established so that only privileged users can allow data recovery from within the TEE. A layer \(l\) is shielded (as opposed to its normal _clear_ state) if some variables and operations that directly lead to computing gradient values from this layer are obfuscated, _i.e._, hidden from an attacker. In a typical deep neural network (DNN) \(f\) such that
\[f=\mathrm{softmax}\circ f^{n}\circ f^{n-1}\circ\cdots\circ f^{1}\]
with layer \(f^{i}=\sigma^{i}(W^{i}\cdot x+b^{i})\) (\(\sigma^{i}\) are activation functions), this implies, from shallow (close to the input) to deep (close to loss computation): the input of the layer \(a^{l-1}\); its weight \(W^{l}\) and bias \(b^{l}\); its output \(z^{l}\) and parametric transform \(a^{l}\). In general terms, a layer encompasses at least one or two transformations. The attacker probes its own local copy of the model to search for adversarial examples, ultimately to present those to victim nodes and replicate the misclassification by their own copy of the model.
## IV Design
We first explain the design of the Pelta shielding of a model, with details on the general mechanisms and the shielding approach in §IV-B. Then, we confront Pelta with the BPDA [35] attack in §IV-C.
### _Back-Propagation Principles and Limits_
Recall the back-propagation mechanism; consider a computational graph node \(x\), its immediate (_i.e._, generation \(1\)) children nodes \(\{u_{j}^{1}\}\), and the gradients of the loss function with respect to the children nodes \(\{d\mathcal{L}/du_{j}^{1}\}\). The back-propagation algorithm uses the chain rule to calculate the gradient of the loss function \(\mathcal{L}\) with respect to \(x\) as in
\[\frac{d\mathcal{L}}{dx}=\sum_{j}\left(\frac{\partial f_{j}^{1}}{\partial x} \right)^{T}\frac{d\mathcal{L}}{du_{j}^{1}} \tag{1}\]
where \(f_{j}^{1}\) denotes the function computing node \(u_{j}^{1}\) from its parents \(\alpha^{1}\), _i.e._, \(u_{j}^{1}=f_{j}^{1}\left(\alpha^{1}\right)\). In the case of Transformer
encoders, \(f_{j}^{i}\) can be a convolution, a feedforward layer, an attention layer, a layer-normalization step, _etc._ Existing shielded white-box scenarios [39] protect components of the gradient of the loss _w.r.t._ the parameters \(\nabla_{\theta}\mathcal{L}\), to prevent leakage otherwise enabling an attacker to conduct inversion attacks. Instead, we seek to mask gradients that serve the calculation of the gradient of the loss _w.r.t._ the _input_\(\nabla_{x}\mathcal{L}\) to prevent leakage enabling gradient-based adversarial attacks, as those exploit the backward pass quantities (_i.e._, the gradients) of the input to maximise the model's loss.
Because the model shared between nodes in the FL group still needs to back-propagate correct gradients at each node of the computational graph, masking only an intermediate layer is useless, since it does not prevent the attacker from accessing the correct in-memory gradients on the clear left-hand (shallow) side of the shielded layer. We thus always perform the gradient obfuscation on the first trainable parameters of the model, _i.e._, its shallowest successive layers: it is the lightest way to alter \(\nabla_{x}\mathcal{L}\) through physical masking. Specifically, using Eq. 1, where \(x\) denotes the input image (_i.e._, the adversarial instance \(x_{adv}\)), an attacker could perform any gradient-based evasion attack (a special case where a layer is not differentiable is discussed in §IV-C). Pelta renders the chain rule incomplete, by forcibly and partially masking the left-hand side terms inside the sum, \(\{\partial f_{j}^{1}/\partial x\}\), hence relying on the lightest possible obfuscation to prevent collecting these gradients and launching the attack.
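As a concrete illustration, the following PyTorch sketch (a toy model assumed for illustration, not the paper's code) shows the quantity at stake: obtaining \(\nabla_{x}\mathcal{L}\) requires back-propagating through the very first layer, i.e., through the factors \(\{\partial f_{j}^{1}/\partial x\}\) that Pelta keeps inside the enclave.

```python
# Minimal sketch (toy model, assumed shapes): computing the input gradient an
# attacker needs for a gradient-based evasion attack. If the first layer's
# local jacobian is shielded, this backward pass cannot be completed in clear.
import torch
import torch.nn as nn

model = nn.Sequential(                      # the first conv is the layer Pelta would shield
    nn.Conv2d(3, 8, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.Flatten(),
    nn.Linear(8 * 32 * 32, 10),
)
x = torch.rand(1, 3, 32, 32, requires_grad=True)   # adversarial candidate x_adv
y = torch.tensor([3])

loss = nn.CrossEntropyLoss()(model(x), y)
loss.backward()
grad_wrt_input = x.grad                     # = dL/dx, the quantity Pelta denies
print(grad_wrt_input.shape)
```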
### Pelta _shielding_
We consider the computational graph of an ML model:
\[\mathcal{G}=\left\langle n,l,E,u^{1}\dots u^{n},f^{l+1}\dots f^{n}\right\rangle\]
where \(n\) and \(l\) respectively represent the number of nodes, _i.e._, transformations, and the number of leaf nodes (inputs and parameters) _s.t._\(1\leq l<n\). \(E\) denotes the set of edges in the graph. For each \((j,i)\) of \(E\), we have that \(j<i\) with \(j\in\{1...(n-1)\}\) and \(i\in\{(l+1)...n\}\). The \(u^{i}\) are the variables associated with every numbered vertex \(i\) inside \(\mathcal{G}\). They can be scalars or vector values of any dimension. The \(f^{i}\) are the differentiable functions associated with each non-leaf vertex. We also assume a TEE enclave \(\mathcal{E}\) that can physically and unequivocally hide quantities stored inside (_e.g._, side-channel to TEEs are out of scope).
To mitigate the adversarial attack when gradients are stored in memory, the Pelta shielding scheme presented in Alg. 1 stores in the TEE enclave \(\mathcal{E}\) all the children jacobian matrices of the first layer, \(\{\partial f_{j}^{1}/\partial x\}\). Note that, given the summation in Eq. 1, obfuscating only some of the children jacobians \(\partial f_{1}^{1}/\partial x\), \(\partial f_{2}^{1}/\partial x\)... would already constitute an alteration of \(\nabla_{x}\mathcal{L}\). We however chose to have Pelta mask all the partial factors of this summation, so as not to take any chances. This obfuscation implies that intermediate gradients should be masked as well. Indeed, because they lead to \(\partial f_{j}^{1}/\partial x\) through the chain rule, the intermediate gradients (or _local jacobians_) \(J^{0\to 1}=\partial f^{1}\left(\alpha^{1}\right)/\partial u^{0}\) between the input \(x=u^{0}\) of the ML model and its first transformation must be masked (Alg. 1-line 9). Because one layer may carry several transforms on its local input (_e.g._, linear then \(\mathrm{ReLU}\)), local jacobians should be masked for at least as many subsequent generations of children nodes as the number of layers to shield. The number of generations is directly determined at the arbitrary selection step (Alg. 1-line 1), where the defender chooses how far the model should be shielded, _i.e._, the deepest masked nodes. In practice, selecting the first couple of nodes likely induces enough alteration to mitigate the attack and prevent immediate reconstruction of the hidden parameters through either direct calculus (input-output comparison) or inference through repeated queries [50]. From this deep frontier, the defender may recursively mask the parent jacobians (Alg. 1-line 9, 11). Note that this step is skipped in practice when the device does not store any gradients.
```
Data: G = ⟨n, l, E, u^1 … u^n, f^{l+1} … f^n⟩;  enclave E

Algorithm Pelta(G)
 1   S ← Select(u^{l+1} … u^n)
 2   for u in S do
 3       Shield(u, E)
     return

Shield(u^i, E)
 4   E ← E + {u^i}
 5   α^i ← ⟨u^j | (j,i) ∈ E⟩                  // get parent vertices
 6   for u^j in α^i do
 7       if u^j is input then
 8           J^{j→i} ← ∂f^i(α^i) / ∂u^j       // local jacobian
 9           E ← E + {J^{j→i}}
10       end
11       Shield(u^j, E)                        // move on to parent
     return
```
**Algorithm 1** Pelta shielding
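For readability, the following Python sketch (an illustrative re-implementation over a toy graph, not the authors' code) mirrors the recursion of Alg. 1: the forward quantity of every visited vertex is placed in the enclave, and local jacobians are additionally masked when the parent is an input leaf.

```python
# Illustrative sketch of Alg. 1 on a toy computational graph. `parents` maps a
# vertex to its parent vertices and `is_input` marks trainable input leaves.
def pelta(selected_nodes, parents, is_input):
    enclave = set()

    def shield(u):
        enclave.add(('value', u))                # mask the forward quantity u^i
        for p in parents.get(u, []):             # parent vertices of u^i
            if is_input(p):
                enclave.add(('jacobian', p, u))  # mask the local jacobian df^i/du^j
            shield(p)                            # move on to the parent

    for u in selected_nodes:                     # deepest nodes chosen by Select
        shield(u)
    return enclave

# toy graph: input 0 and weight leaf 1 feed node 2, which feeds node 3
parents = {2: [0, 1], 3: [2]}
print(pelta([3], parents, is_input=lambda v: v == 0))
```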
In adversarial attacks, the malicious user keeps the model intact and treats only the input \(x_{adv}\) as a trainable parameter to maximize the prediction error of the sample. We note that the local jacobians (Alg. 1-line 8) between a child node and its non-input parents need not be hidden, because these parents are not trainable (they would have to be inputs of the model to meet the condition of Alg. 1-line 7). Such quantities are, in effect, simply not present in the back-propagation graph and could not constitute a leak of the sensitive gradient. Similarly, notice how Select supposes the deepest masked nodes to be chosen _s.t._ they come _after_ every input leaf node (_i.e._, from subsequent generations). This condition, \(u^{i}\in S\Rightarrow i>l\), ensures no information leaks towards trainable input leaf nodes. After all sensitive partials are masked by Alg. 1, such a set of obfuscated gradients is a subset of the total forward partials that would lead to a complete chain rule; it is denoted \(\left\{\partial f/\partial x\right\}^{L}\), assuming \(L\leq n\) is the deepest vertex number that denies the attacker the information needed to complete the chain rule. Since all the forward partials down to \(L\) are masked, \(\partial f^{L}/\partial x\) is protected and the resulting under-factored gradient vector (_i.e._, the _adjoint_ of \(f^{L+1}\)) is a vector in the shape of the shallowest clear layer \(f^{L+1}\), denoted \(\delta_{L+1}=d\mathcal{L}/du^{L+1}\).
A white-box setting assumes an attacker has knowledge of both the subset \(\{\partial f/\partial x\}^{L}\) of all forward partials and \(\delta_{L+1}\) to perform a regular gradient-based update on his \(x_{adv}\). However, with Pelta, the attacker is only left with the adjoint \(\delta_{L+1}\).
Finally, the forward pass quantities \(u^{i}\) that may lead to the unambiguous recovery of the hidden set \(\{\partial f/\partial x\}^{L}\) are masked (Alg. 1-line 4). This ensures that the arguments \(\alpha^{i}\) of the transformations \(f^{i}\) cannot be further exploited by the attacker. This could happen when one node is a linear transform of the other, as in \(u^{i+1}=f^{i+1}((W,u^{i}))=W\times u^{i}\). In this case, the local jacobian \(J^{i\to i+1}\) is known to be exactly equal to \(W\). Notice that the weights and biases of a DNN would be effectively masked, as they are regarded as leaf vertices of the model's computational graph for the \(f^{i}((u^{i},u^{i-1},u^{i-2}))=u^{i-1}\cdot u^{i}+u^{i-2}=Wx+b\) operation. Similarly, the outputs of the transformations, \(u^{i}=f^{i}(\alpha^{i})\), are masked by Alg. 1-line 4. As an example, for a DNN, the exact quantities enumerated in §III are stored in the enclave \(\mathcal{E}\).
Overall, Pelta should be construed as simply the general principle of unequivocally hiding enough parameters and gradients that are directly next to the input so that they cannot be maliciously exploited.
### _Relation with BPDA_
When training a DL model, a straight-through estimator allows a simple approximation of the gradient of a non-differentiable function (_e.g._, a threshold) [51]. In a setting discussed in [35], this idea is generalized to a so-called Backward Pass Differentiable Approximation (BPDA) to illustrate how preventing a layer from being differentiable in hopes of defeating a gradient-based attack actually constitutes a fragile security measure. In BPDA, the non-differentiable layer \(f^{l}\) of a neural network \(f^{1\ldots L}(\cdot)\) would be approximated by a neural network \(g\)_s.t._\(g(x)\approx f^{l}(x)\), and back-propagation would go through \(g\) instead of the non-differentiable transform. This method is what a malicious node adopts against Pelta, by upsampling the adjoint of the last clear layer \(\delta_{L+1}\) to bypass the shielded layers. An illustrative case for a DNN is shown in Fig. 2. However, the attacker operates with two limiting factors: _(i)_ in a real-world scenario, the attacker possesses a limited time and number of passes before the broadcast model becomes obsolete; _(ii)_ we hypothesize the attacker does not have priors on the parameters of the first layers of the model, therefore the adversary is effectively left without options for computing the gradient-based update other than training a BPDA of the layer. Although this attack exposes a fundamental pitfall of gradient masking techniques, it is worth noting that this step becomes increasingly difficult for the attacker as larger parts of the model are hidden from him, since it would suppose he has training resources equivalent to those of the FL system. As a side note, we mention that there exist recent defenses against BPDA [52].
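A minimal sketch of this BPDA strategy against shielded shallow layers could look as follows (toy shapes and a randomly initialized stand-in \(g\) are assumptions made here): the forward pass goes through the real, opaque block, while the backward pass is routed through \(g\) with a straight-through trick.

```python
# BPDA sketch: forward value comes from the true (shielded) block, gradients
# flow through the attacker's differentiable approximation g.
import torch
import torch.nn as nn

shielded_block = nn.Conv2d(3, 8, 3, padding=1)   # stands for the opaque f^1..f^L
g = nn.Conv2d(3, 8, 3, padding=1)                # attacker's approximation of that block
rest_of_model = nn.Sequential(nn.Flatten(), nn.Linear(8 * 32 * 32, 10))

x = torch.rand(1, 3, 32, 32, requires_grad=True)
y = torch.tensor([3])

with torch.no_grad():
    h_true = shielded_block(x)                   # query the real block, no gradients
h = g(x)
h = h + (h_true - h).detach()                    # value = h_true, gradient path = g
loss = nn.CrossEntropyLoss()(rest_of_model(h), y)
loss.backward()
print(x.grad.abs().mean())                       # surrogate input gradient for the attack
```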
In §V, we investigate to what extent an adversarial attack can be mitigated by Pelta, and whether upsampling the under-factored gradient (which is merely a linear transformation of the correct gradient in the case of the first transformation of many vision models) as a substitute for the masked backward process constitutes a possible last resort for the malicious node.
## V Evaluation
_Does_ Pelta _mitigate white-box attacks?_ To answer this question, we conduct attacks on several models with shielded layers.
### _Evaluation Metric and Ensemble Defense Setup_
In a common classification task, the _clean accuracy_ of a model refers to its standard test accuracy, to differentiate it from the context of an adversarial attack where the goal of the defender is to score high on _astuteness_ (_i.e._, _robust_ accuracy) against a set of correctly classified samples to which adversarial perturbations were added. This means that a perfectly _astute_ model would almost always correctly classify a perturbed sample if it classified it correctly when it was clean. Against adversarial attacks, we chose as defending models several state-of-the-art high clean accuracy models trained on three heavily benchmarked image datasets: CIFAR-10, CIFAR-100 [53] and the ImageNet-21K dataset [54].
#### V-A1 Individual defenders
Because of the widespread use of the attention mechanism in a large variety of machine learning tasks, we included three size variants of the Vision Transformer in our experiments. Specifically: ViT-L/16, ViT-B/16 and ViT-B/32 [9]. We also included two conventional CNNs, namely ResNet-56, ResNet-164 [55] and two Big Transfer models: BiT-M-R101x3 and BiT-M-R152x4 [56] which stem from the CNN-based ResNet-v2 architecture [55].
#### V-A2 Ensemble defender
In addition to these individual defending models, we also study the astuteness of an _ensemble_ of a ViT and a BiT. An ensemble of models consists of two or more models that determine the correct output through a decision policy. We chose an ensemble because, generally, when dealing with the image classification task, adversarial examples do not _transfer_ well between attention-based and CNN-based models [44]. This means that an example crafted to
Fig. 2: Overview of the Pelta defense scheme over the first layers of a machine learning model against an iterative gradient-based adversarial attack. Because the attacker cannot access operations in the shallow layers, he resorts to upsample his under-factored gradient \(\delta_{L+1}\) to compute the adversarial update. The figure depicts activations as transforms for a DNN.
fool one type of model in particular will rarely defeat the other, thus highly benefiting the astuteness of the ensemble. This allows for mitigating attacks that target either one of the two specifically, by exploiting a combination of the model outputs to maximize chances of correct prediction. In this paper, we use random selection [57] as a decision policy, where, for each sample, one of the two models is selected at random to evaluate the input at test time.
**Pelta Shielding defense of the white-box.** To simulate the shielding inside the TEE enclave, we deprive the attacker of the aforementioned quantities (§IV-B) during the individual attacks (V-A1) and during the attack against the ensemble (V-A2). To the best of our knowledge, this is the first ever attempt at mitigating the recent Self-Attention Gradient Attack (see V-B). Against the ensemble, the attacker is deprived of the sensitive quantities of the two models separately, then jointly. For the ViT models, all transforms up to the position embedding [9] are included. This means that the following steps occur inside the enclave: separation of the input into patches \(x_{p}^{n}\), projection onto the embedding space with the embedding matrix \(E\), concatenation with the learnable token \(x_{\text{class}}\) and summation with the position embedding matrix \(E_{\text{pos}}\):
\[z_{0}=\left[x_{\text{class}}\ ;x_{p}^{1}E;x_{p}^{2}E;\cdots;x_{p}^{N}E\right]+E_{ \text{pos}}\]
For the Big Transfer (BiT) models, the scheme includes the first weight-standardized convolution [56] and its following padding operation. For the ResNets, the first convolution, batch normalization and ReLU activation are masked. Notice that, for all three model types, we obfuscate either two learnable transformations or a non-invertible parametric transformation like weight-standardization, ReLU or MaxPool [58], so the attacker cannot retrieve the obfuscated quantities without uncertainty. Table I reports the estimated overheads of the shield for each setting: the theoretical memory footprints of each secured intermediate activation, weight and gradient as single-precision floating-point numbers were summed and are shown for the ImageNet dataset variants of the models in the worst case where intermediate activations and gradients inside the shield are not flushed after the back-propagation algorithm uses them to complete the pass. Assuming the most resource-intensive case where gradients are produced, the shielding of the ensemble requires less than 16 MB of TEE memory at the very worst, consistent with what typical TrustZone-enabled devices allow [21]. Notice that, because it only ever obfuscates the shallowest parts of a model, Pelta is barely ever affected by the scale of larger and more complex variants, which makes it suitable for a wide variety of state-of-the-art models.
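As a back-of-the-envelope illustration of how such budgets are obtained, the sketch below sums the float32 footprint of a set of shielded tensors; the shapes are assumptions for a ViT-style front end (384×384 input, 16×16 patches, hidden size 1024), not the exact inventory behind Table I.

```python
# Estimate a worst-case enclave footprint: 4 bytes per float32 element for
# every shielded weight and activation, doubled to account for their gradients.
from math import prod

def float32_mb(shapes):
    """Total float32 footprint, in MB, of a collection of tensor shapes."""
    return sum(prod(s) for s in shapes) * 4 / 2**20

weights = [(768, 1024), (577, 1024), (1, 1024)]   # patch embedding E, E_pos, class token
activations = [(577, 1024)]                       # the embedded sequence z_0
worst_case_mb = 2 * float32_mb(weights + activations)
print(f"~{worst_case_mb:.1f} MB kept inside the enclave")
```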
### _Attacker Setup_
Against the individual models (V-A1), we launch four iterative maximum allowable attacks and one regularization-based attack. Against the ensemble model (V-A2), we launch one iterative maximum allowable attack. We briefly introduce all six in their non-targeted version here, _i.e._, the specific class into which the altered sample is misclassified has no importance. For the sake of conciseness, we omitted some descriptive equations that can be found in the original papers of the considered attacks.
Iterative maximum allowable attacks (Fig. 3) start at an initial point \(x^{(0)}\) which can be chosen as the origin sample \(x_{0}\) or sometimes as a randomly sampled \(x_{rand}\). The attack then iterates adversarial candidates \(x_{adv}^{(i)}\) by following an overall ascendant path of the loss function of the model within a norm constraint. This implies adversarial samples are required to stay inside an \(l_{2}\) or \(l_{\infty}\) ball centered on \(x_{0}\), _i.e._, \(||x_{adv}^{(i)}-x_{0}||_{2}\) or \(||x_{adv}^{(i)}-x_{0}||_{\infty}\leq\epsilon,\forall i\geq 0\). In the case of \(l_{\infty}\), this means that the features (in this paper, the individual pixels of the image) cannot vary more than \(\pm\epsilon\) in magnitude. Specifically, we use the following attacks in this paper:
**Fast Gradient Sign Method** - The Fast Gradient Sign Method (FGSM) [17] is a one-step gradient-based approach that finds an adversarial example \(x_{adv}\) by adding a single \(\epsilon\)-perturbation that maximizes the loss function \(\mathcal{L}\) to the original sample of label \(y\) such that \(x_{adv}=x_{0}+\epsilon\cdot\operatorname{sign}\left(\nabla_{x}\mathcal{L}(x_{ 0},y)\right)\).
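A minimal FGSM sketch (toy model and labels assumed, not tied to the models evaluated here) makes the single-step update explicit:

```python
# One-step FGSM: perturb x_0 by eps in the direction of the loss-gradient sign.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
x0 = torch.rand(1, 3, 32, 32)
y = torch.tensor([3])
eps = 0.031

x = x0.clone().requires_grad_(True)
loss = nn.CrossEntropyLoss()(model(x), y)
loss.backward()
x_adv = (x0 + eps * x.grad.sign()).clamp(0, 1)   # keep pixels in a valid range
```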
**Projected Gradient Descent** - The Projected Gradient Descent (PGD) [59] is the natural multi-step variant of the FGSM algorithm that ensures the resulting adversarial example \(x_{adv}\) stays within the bounds of a maximum allowable \(\epsilon\)-perturbation. This is done through a \(P\) operator that projects out-of-bound values back into the \(\epsilon\)-ball, as illustrated by
| **Model** | **Shielded portion** | **TEE mem. used** |
| --- | --- | --- |
| ViT-L/16 | 1.34% | 15.16 MB |
| ViT-B/16 | 3.61% | 11.97 MB |
| BiT-M-R101x3 | 4.50e-3% | 65.20 KB |
| BiT-M-R152x4 | 9.23e-3% | 322.14 KB |

TABLE I: Estimated enclave memory cost and model portion shielded in each setting. The ensemble value sums both models in the worst case where enclaves are not flushed between evaluation of either of the two models.
Fig. 3: Schematic diagram of three gradient-based maximum allowable adversarial methods. Within a norm constraint, the attacker computes an additive perturbation of input \(x_{0}\) to cross a decision boundary of a victim model. Only PGD (red) was able to craft an adversarial example \(x_{PGD}\) here.
the last PGD step of Fig. 3. The \(i^{\text{th}}\) step is computed as \(x^{(i)}=P\left(x^{(i-1)}+\epsilon_{\text{step}}\cdot\operatorname{sign}\left(\nabla_{x}\mathcal{L}(x^{(i-1)},y)\right)\right)\), \(\epsilon_{\text{step}}\) being the step size.
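The corresponding iterative loop, with the projection \(P\) implemented as a clipping onto the \(l_{\infty}\) ball, can be sketched as follows (same toy-model assumptions as the FGSM example; the gradient is taken at the current iterate):

```python
# PGD sketch: repeated signed-gradient steps, each projected back into the
# eps-ball around x_0 and into the valid pixel range.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
x0 = torch.rand(1, 3, 32, 32)
y = torch.tensor([3])
eps, eps_step, steps = 0.031, 0.00155, 20

x = x0.clone()
for _ in range(steps):
    x = x.detach().requires_grad_(True)
    loss = nn.CrossEntropyLoss()(model(x), y)
    loss.backward()
    x = x + eps_step * x.grad.sign()
    x = x0 + (x - x0).clamp(-eps, eps)    # projection P onto the l_inf eps-ball
    x = x.clamp(0, 1)
print((x - x0).abs().max())               # never exceeds eps
```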
**Momentum Iterative Method** - Inspired by the popular momentum method for accelerating gradient descent algorithms, the Momentum Iterative Method (MIM) [60] applies a velocity vector in the direction of the gradient of the loss function across iterations and at any step \(i>0\) takes into account previous gradients (\(g_{\mu}^{(i)}\) relies on \(g_{\mu}^{(i-1)}\)) with a decay factor \(\mu\). The additive step is reminiscent of FGSM, as the \(i^{\text{th}}\) step is computed by \(x^{(i)}=x^{(i-1)}+\epsilon_{\text{step}}\cdot\operatorname{sign}(g_{\mu}^{(i)})\).
**Auto Projected Gradient Descent** - The Auto Projected Gradient Descent (APGD) [61] proposes adaptive changes of the step size in PGD. This automated scheme allows for an exploration phase and an exploitation phase where an objective function (commonly the cross-entropy loss) is maximized. This attack includes other mechanisms like the ability to restart at a best point along the search. In the benchmark of the individual models, APGD is the most recent attack and it would also be considered the most sophisticated.
We also use a so-called _regularization-based_ attack on our individual defending models V-A1:
**Carlini and Wagner Attack** - The Carlini and Wagner Attack (C&W) [62] iteratively minimizes (typically through the gradient descent algorithm) an objective sum of two competing terms. Through various variable substitutions, one term indirectly measures the norm (typically, \(l_{2}\)) of the added perturbation. On the other hand, a so-called _regularized_ term evaluates the wrongness of the classification for an adversarial candidate \(x_{adv}\).
Lastly, we present an iterative gradient-based method against our ensemble defense:
**Self-Attention Gradient Attack** - To circumvent the ensemble defense V-A2, the attacker uses a gradient-sign attack: the Self-Attention Gradient Attack (SAGA) [44]. It iteratively crafts the adversarial example by following the sign of the gradient of the losses as follows:
\[x^{(i+1)}=x^{(i)}+\epsilon_{\text{step}}*\operatorname{sign}\left(G_{blend} \left(x^{(i)}\right)\right) \tag{2}\]
where we initialize \(x^{(0)}=x_{0}\) the initial image and \(\epsilon_{\text{step}}\) is a given set attack step size (chosen experimentally). Additionally, \(G_{blend}\) is defined as a weighted sum, each of the two terms working towards computing an adversarial example against either one of the two models:
\[G_{blend}\left(x^{(i)}\right)=\alpha_{k}\frac{\partial\mathcal{L}_{k}}{ \partial x^{(i)}}+\alpha_{v}\phi_{v}\odot\frac{\partial\mathcal{L}_{v}}{ \partial x^{(i)}} \tag{3}\]
\(\partial\mathcal{L}_{k}/\partial x^{(i)}\) is the partial derivative of the loss of the CNN-based architecture BiT-M-R101x3, and \(\partial L_{v}/\partial x^{(i)}\) is the partial derivative of the loss of the Transformer-based architecture ViT-L/16. Their prominence is controlled by the attacker through two manually set weighting factors, \(\alpha_{k}\) and \(\alpha_{v}=1-\alpha_{k}\). For the ViT gradient, an additional factor is involved: the self-attention map term \(\phi_{v}\), defined by a sum-product:
\[\phi_{v}=\left(\prod_{l=1}^{n_{l}}\left[\sum_{i=1}^{n_{h}}\left(0.5W_{l,i}^{( att)}+0.5I\right)\right]\right)\odot x^{(i)} \tag{4}\]
where \(n_{h}\) is the number of attention heads per encoder block of the ViT model, \(n_{l}\) the number of encoder blocks in the ViT model. In the ViT-L/16 for instance, \((n_{h},n_{l})\) = \((16,24)\). \(W_{l,i}^{(att)}\) is the attention weight matrix in each attention head and \(I\) the identity matrix. \(\odot\) is the element wise product.
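For clarity, the blended update of Eq. 2-3 can be sketched as below; the gradients and the self-attention map are placeholders here, since producing them requires the two actual ensemble members (illustrative code, not the attack's reference implementation):

```python
# SAGA step sketch: blend the two members' input gradients (the ViT one being
# weighted by the self-attention map phi_v) and follow the sign of the result.
import torch

def saga_step(x, bit_loss_grad, vit_loss_grad, phi_v,
              alpha_k=2.0e-4, eps_step=3.1e-3):
    alpha_v = 1.0 - alpha_k
    g_blend = alpha_k * bit_loss_grad + alpha_v * phi_v * vit_loss_grad
    return x + eps_step * g_blend.sign()

x = torch.rand(1, 3, 224, 224)
g_bit = torch.randn_like(x)    # placeholder for dL_k/dx from the BiT member
g_vit = torch.randn_like(x)    # placeholder for dL_v/dx from the ViT member
phi_v = torch.ones_like(x)     # placeholder for the self-attention map of Eq. 4
x_next = saga_step(x, g_bit, g_vit, phi_v)
```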
Facing the Pelta-shielded setting, the attacker carries out the SAGA without the masked set \(\left\{\partial f/\partial x\right\}^{L}\) (and the adjacent quantities otherwise enabling its unambiguous reconstruction). He tries to exploit the adjoint \(\delta_{L+1}\) of the last clear layer by applying to it a random-uniform initialized upsampling kernel. This process, called _transposed convolution_[63], is essentially a geometrical transformation applied to the gradient vector at the backward pass of a convolutional layer. Although this method does not offer any guarantee of convergence towards a successful adversarial example, it allows us to understand whether the adjoint can still serve as a last resort when no priors on the shielded parts are available under a limited resource constraint. Individual models are attacked in a similar manner, replacing the gradient terms of the defender's shielded computations by the gradients of a substitute transposed convolution. We will therefore ask: in the absence of the shielded quantities, _does upsampling constitute a last resort for the attacker?_
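A sketch of this fallback (shapes assumed, not taken from the evaluated models): the adjoint of the shallowest clear layer is pushed back to input resolution by a randomly initialized transposed convolution and used in place of the masked input gradient.

```python
# Upsampling the adjoint delta_{L+1} as a surrogate for the shielded grad_x L.
import torch
import torch.nn as nn

delta = torch.randn(1, 8, 16, 16)                      # adjoint dL/du^{L+1} read in clear
upsample = nn.ConvTranspose2d(8, 3, kernel_size=2, stride=2)  # randomly initialized kernel
surrogate_grad = upsample(delta)                       # stands in for the masked input gradient
print(surrogate_grad.shape)                            # back to input resolution (1, 3, 32, 32)
```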
### _Benchmarks and results_
We select 1000 correctly classified random samples from CIFAR-10, CIFAR-100 and the ImageNet Large Scale Visual Recognition Challenge (ILSVRC) [54], a subset of ImageNet-21K. This means that the robust accuracy over these samples is 100% if no attack is run. We evaluate the average robust accuracy of individual models against five attacks in a setting where the model is not shielded and a setting where the model is shielded. The ensemble model is evaluated against
**CIFAR-10 and CIFAR-100**

| **Attack** | **Parameters** |
| --- | --- |
| FGSM | \(\epsilon\) = 0.031 |
| PGD | \(\epsilon\) = 0.031, \(\epsilon_{\text{step}}\) = 0.00155, steps = 20 |
| MIM | \(\epsilon\) = 0.031, \(\epsilon_{\text{step}}\) = 0.00155, \(\mu\) = 1.0 |
| APGD | \(\epsilon\) = 0.031, \(N_{\text{restarts}}\) = 1, \(\rho\) = 0.75, \(n_{\text{queries}}\) = 5e3 |
| C&W | confidence = 50, \(\epsilon_{\text{step}}\) = 0.00155, steps = 30 |
| SAGA | \(\alpha_{k}\) = 2.0e-4 and 0.0015, \(\epsilon_{\text{step}}\) = 3.1e-3 |

**ImageNet**

| **Attack** | **Parameters** |
| --- | --- |
| FGSM | \(\epsilon\) = 0.062 |
| PGD | \(\epsilon\) = 0.062, \(\epsilon_{\text{step}}\) = 0.0031, steps = 20 |
| MIM | \(\epsilon\) = 0.062, \(\epsilon_{\text{step}}\) = 0.0031, \(\mu\) = 1.0 |
| APGD | \(\epsilon\) = 0.062, \(N_{\text{restarts}}\) = 1, \(\rho\) = 0.75, \(n_{\text{queries}}\) = 5e3 |
| C&W | confidence = 50, \(\epsilon_{\text{step}}\) = 0.0031, steps = 30 |
| SAGA | \(\alpha_{k}\) = 0.001 and 0.0015, \(\epsilon_{\text{step}}\) = 0.0031 |

TABLE II: Attack parameters.
the SAGA in four settings: no model is shielded, only the BiT model is shielded, only the ViT model is shielded, both models are shielded (maximum protection for the ensemble). For reference, the clean accuracy of the models over 1000 random samples from the validation set of each dataset is provided. Attack parameters are provided in Table II.
Table III shows our results for the individual models against the five attacks and Table IV shows our results for the ensemble model against the SAGA. For illustrative purposes, Fig.4 shows the generated perturbation on one sample in the four settings of the ensemble model against the SAGA.
_Does_ Pelta _mitigate white-box attacks?_ We observe that the shielding greatly preserves the astuteness of individual models and of the ensemble, with up to \(98.8\%\) robust accuracy for the ensemble (\(1.2\%\) attack success rate), comparable to random uniform pixel modifications, and up to \(99.3\%\) robust accuracy for individual models. It can be noted that, generally, the size of the model has a positive influence on the astuteness after shielding across attacks; however, this effect seems largely overwhelmed by the advantage variants of ViT have over CNN-based models before and after shielding. Additionally, we see that for the ensemble defense, individual
**CIFAR-10**

| Model Acc. | Clean | Random | None | ViT-L/16 | BiT-M-R101x3 | Ensemble |
| --- | --- | --- | --- | --- | --- | --- |
| ViT-L/16 | 99.4% | 99.5% | 28.1% | 99.2% | 14.1% | 99.5% |
| BiT-M-R101x3 | 98.8% | 98.8% | 25.2% | 0.3% | 78.9% | 98.5% |
| Ensemble | 99.1% | 98.9% | 27.2% | 49.7% | 46.4% | 98.8% |

**CIFAR-100**

| Model Acc. | Clean | Random | None | ViT-L/16 | BiT-M-R101x3 | Ensemble |
| --- | --- | --- | --- | --- | --- | --- |
| ViT-L/16 | 94.0% | 99.4% | 5.2% | 99.6% | 4.2% | 99.8% |
| BiT-M-R101x3 | 89.9% | 98.3% | 18.3% | 0.2% | 50.0% | 82.2% |
| Ensemble | 92.0% | 98.9% | 12.2% | 49.5% | 27.5% | 90.8% |

**ImageNet**

| Model Acc. | Clean | Random | None | ViT-L/16 | BiT-M-R101x3 | Ensemble |
| --- | --- | --- | --- | --- | --- | --- |
| ViT-L/16 | 82.6% | 100.0% | 6.3% | 99.2% | 6.1% | 97.5% |
| BiT-M-R152x4 | 85.6% | 100.0% | 15.2% | 0.6% | 45.5% | 76.2% |
| Ensemble | 84.3% | 100.0% | 10.8% | 49.9% | 25.8% | 87.0% |

TABLE IV: Robust accuracy of a shielded ensemble against SAGA on 1000 correctly classified CIFAR-10, CIFAR-100 and ImageNet samples (higher values favor the defender). The first two columns (Clean, Random) are the Baseline values: clean accuracy and astuteness against a random uniform attack on the \(l_{\infty}\) ball. The remaining columns (Applied Shield) show per-model robust accuracy against different shielding setups.
**CIFAR-10**

| Model | FGSM | PGD | MIM | C&W | APGD | Clean |
| --- | --- | --- | --- | --- | --- | --- |
| ViT-L/16 | 57.2% / 99.3% | 1.2% / 99.2% | 6.2% / 98.7% | 0.0% / 99.1% | 0.0% / 90.6% | 99.4% |
| ViT-B/16 | 38.6% / 91.2% | 0.0% / 96.2% | 0.3% / 97.3% | 0.0% / 95.8% | 0.0% / 88.9% | 98.5% |
| ViT-B/32 | 33.5% / 92.8% | 2.1% / 98.0% | 3.3% / 97.9% | 0.0% / 96.9% | 0.0% / 89.9% | 98.0% |
| ResNet-56 | 23.9% / 75.6% | 0.0% / 81.3% | 0.0% / 95.9% | 0.0% / 95.2% | 0.0% / 57.1% | 93.0% |
| ResNet-164 | 30.0% / 78.7% | 0.0% / 82.7% | 0.0% / 96.3% | 0.0% / 96.0% | 0.0% / 60.0% | 93.9% |
| BiT-M-R101x3 | 84.8% / 90.9% | 0.0% / 74.1% | 0.0% / 83.1% | 0.0% / 98.0% | 0.0% / 57.3% | 98.9% |

**CIFAR-100**

| Model | FGSM | PGD | MIM | C&W | APGD | Clean |
| --- | --- | --- | --- | --- | --- | --- |
| ViT-L/16 | 30.1% / 99.2% | 1.0% / 98.9% | 4.0% / 98.0% | 0.0% / 99.0% | 0.0% / 90.0% | 93.9% |
| ViT-B/16 | 19.3% / 91.1% | 2.7% / 93.2% | 0.9% / 96.9% | 0.0% / 97.9% | 0.0% / 88.3% | 92.5% |
| ViT-B/32 | 20.0% / 92.9% | 2.9% / 92.0% | 1.9% / 98.4% | 0.0% / 96.2% | 0.0% / 89.0% | 93.5% |
| ResNet-56 | 5.2% / 81.5% | 0.1% / 82.6% | 3.3% / 95.1% | 0.0% / 96.0% | 0.0% / 60.8% | 70.0% |
| ResNet-164 | 7.9% / 83.8% | 0.2% / 83.7% | 3.9% / 97.6% | 0.0% / 94.2% | 0.0% / 62.1% | 76.1% |
| BiT-M-R101x3 | 3.3% / 85.4% | 0.0% / 75.7% | 0.0% / 82.9% | 0.0% / 98.8% | 0.0% / 59.8% | 90.2% |

**ImageNet**

| Model | FGSM | PGD | MIM | C&W | APGD | Clean |
| --- | --- | --- | --- | --- | --- | --- |
| ViT-L/16 | 27.5% / 94.0% | 0.0% / 93.9% | 0.0% / 97.4% | 0.0% / 99.3% | 0.0% / 86.5% | 82.7% |
| ViT-B/16 | 22.1% / 92.1% | 0.0% / 92.0% | 0.0% / 97.4% | 0.0% / 99.1% | 0.0% / 87.1% | 80.0% |
| BiT-M-R101x3 | 24.2% / 83.8% | 0.0% / 76.8% | 0.0% / 83.2% | 0.0% / 98.2% | 0.0% / 73.4% | 79.3% |
| BiT-M-R152x4 | 67.0% / 85.8% | 0.0% / 87.1% | 0.0% / 93.7% | 0.0% / 98.0% | 0.0% / 67.1% | 85.1% |

TABLE III: Robust accuracy of non-shielded versus shielded individual models (shown as non-shielded / shielded) against a benchmark of five white-box attacks on CIFAR-10, CIFAR-100 and ImageNet (higher values favor the defender). Clean accuracy over 1000 random validation samples is provided for illustrative purpose.
model robust accuracies are not equally protected: the ViT model benefits more from Pelta when the shield is applied only to it than the BiT does. We explain these results by a general advantage in robustness of Transformer-based architectures over CNNs [43], with BiT being more sensitive to targeted attacks than its counterpart, and also more sensitive to adversarial examples crafted against the ViT. We further note that in the ensemble, individual robust accuracies worsen compared to the non-shielded setting when only their counterpart is shielded. The reason is that SAGA solely directs the sample towards increasing the non-shielded loss while disregarding the shielded one, consequently producing adversarial samples that exclusively aim to exploit vulnerabilities of the non-shielded model.
_Does upsampling constitute a last resort for the attacker?_ Interestingly, for the ensemble, the attack success rate of the upsampling against the Pelta defense scheme sometimes surpasses that of the random uniform attack against BiT when the shield is applied only to it. We explain this behaviour as follows: contrary to the shielded layer in ViT that projects the input onto an embedding space, the last clear layer adjoint \(\delta_{L+1}\) in the BiT still carries spatial information that could be recovered _e.g._, through average upsampling. These results suggest shielding both models for optimal security, given sufficient encrypted memory available in a target TEE.
A similar remark can be made about the behavior of CNN-based models under the individual attacks. While ViT variants generally fare very well against most attacks, with the exception of the most sophisticated one (APGD), ResNets and BiTs have overall low astuteness, indicating that the attacks perform substantially better than random. This suggests that larger parts of the model should be included in the enclave of the Pelta scheme to mitigate the effectiveness of the attacker's upsampling.
## VI System Implications
As previously mentioned, TEEs typically operate in a secure mode that is designed to provide protection against external attacks. This secure mode can add overhead and complexity to communication between the secure world and the rest of the system, which can in turn impact data throughput. For example, when data is transferred between the TEE and the non-secure world, it may need to be encrypted and decrypted, which can slow down communication. Data transferred between the secure world and the TEE traditionally requires secure communication protocols to prevent unauthorized access or tampering, thus hindering the velocity of the data transfer process, as encryption and decryption may be necessary to protect the data. Moreover, a context switch is typically required to move from one execution context to another. This context switching can introduce additional overhead and slow down the data transfer process.
Because Pelta is designed to hide sensitive data at inference time through the use of a TEE, a throughput bottleneck could be felt at two stages of the federated endeavour. The first case is the most self-explanatory: even after deployment following FL rounds, inference with Pelta still supposes parts of the model are inside a TEE and sensitive operations are carried inside the enclave. This requires context switches and establishing a secure communication channel between worlds to either feed the first computation nodes of the model with the input data or extracting the output of the last shielded layer to carry on subsequent operations with the clear and deeper segment of the model. Fortunately, these elementary TEE methods usually range from microseconds up to milliseconds at most for either SGX or TrustZone [64, 65], which is commensurate with the real-time usage of most current edge ML applications that fit into FL (text prediction, sentiment analysis, health monitoring, speech recognition etc.) as recently demonstrated [66]. As a rule, it should be kept in mind that minimizing context switches is an important optimization technique in the design of TEE applications.
The second case is the training phase of the FL scheme. During this phase, the use of an optimization algorithm puts more strain on the TEE and its communication channels. For instance, inside the enclave, gradients which were not generated during regular end-user inference are now being computed: these gradients seldom need to be read from within the enclave in order to be sent for aggregation, which adds in bandwidth overhead. However, many of these limitations are taken into account when tuning the parameters of the protocol for each FL round. For example, the frequency at which the weight updates are pulled out of the enclave to be sent to the aggregating server could be lowered to allow averaging hidden gradients over larger batches on the client nodes. Overall, training protocols of an FL scheme are expected to harness the idle state of edge devices to handle intermittent compute node availability [67]. The extra bandwidth overhead of the second case should therefore not impact user experience as much as it does the strategy of the FL training rounds.
Fig. 4: SAGA adversarial samples in four different shielding settings from a correctly classified sample.
## VII Conclusion and Future Work
In federated learning, adversarial attacks are the basis of several trojaning and poisoning attacks. However, they are difficult to defend against at inference time under the white-box hypothesis, which is the default setting in FL. We described how to mitigate such attacks by using hardware obfuscation to sidestep the white-box assumption. We introduced a novel defense, Pelta: a lightweight shielding method which relies on TrustZone enclaves to mask a few carefully chosen parameters and gradients against iterative gradient-based attacks. Our evaluation shows promising results, sensibly mitigating the effectiveness of state-of-the-art attacks. To the best of our knowledge, Pelta also constitutes the first attempt at defending against the Self-Attention Gradient Attack.
We intend to extend this work along the following directions. Because the use of enclaves calls for somewhat costly normal-world to secure-world communication mechanisms, properly evaluating the speed of each collaborating device under our shielding scheme is needed, on top of considering the aforementioned memory overheads (Table I), in order to assess the influence of Pelta on the practical performance of FL training. Additionally, a natural extension of this work is to apply Pelta along with existing software defenses [47] to assess their combined benefits against a sophisticated attacker.
As mentioned in Section IV-C, it should also be explored whether an attacker can _(i)_ exploit commonly used embedding matrices and subsequent parameters shared across existing models as a prior on the shielded layers (a case the defender can circumvent by training its own first parameters) or _(ii)_ train on their own premises the aforementioned \(g\) backward approximation (which need not be of the same architecture as the shielded layers), although recent work shows the limitations of such a practice [68].
|
2309.17384 | Toward Universal Speech Enhancement for Diverse Input Conditions | The past decade has witnessed substantial growth of data-driven speech
enhancement (SE) techniques thanks to deep learning. While existing approaches
have shown impressive performance in some common datasets, most of them are
designed only for a single condition (e.g., single-channel, multi-channel, or a
fixed sampling frequency) or only consider a single task (e.g., denoising or
dereverberation). Currently, there is no universal SE approach that can
effectively handle diverse input conditions with a single model. In this paper,
we make the first attempt to investigate this line of research. First, we
devise a single SE model that is independent of microphone channels, signal
lengths, and sampling frequencies. Second, we design a universal SE benchmark
by combining existing public corpora with multiple conditions. Our experiments
on a wide range of datasets show that the proposed single model can
successfully handle diverse conditions with strong performance. | Wangyou Zhang, Kohei Saijo, Zhong-Qiu Wang, Shinji Watanabe, Yanmin Qian | 2023-09-29T16:41:49Z | http://arxiv.org/abs/2309.17384v2 | # Toward Universal Speech Enhancement for Diverse Input Conditions
###### Abstract
The past decade has witnessed substantial growth of data-driven speech enhancement (SE) techniques thanks to deep learning. While existing approaches have shown impressive performance in some common datasets, most of them are designed only for a single condition (e.g., single-channel, multi-channel, or a fixed sampling frequency) or only consider a single task (e.g., denoising or dereverberation). Currently, there is no universal SE approach that can effectively handle diverse input conditions with a single model. In this paper, we make the first attempt to investigate this line of research. First, we devise a single SE model that is independent of microphone channels, signal lengths, and sampling frequencies. Second, we design a universal SE benchmark by combining existing public corpora with multiple conditions. Our experiments on a wide range of datasets show that the proposed single model can successfully handle diverse conditions with strong performance.
Wangyou Zhang\({}^{1,3}\), Kohei Saijo\({}^{2,3}\), Zhong-Qiu Wang\({}^{3}\), Shinji Watanabe\({}^{3}\), Yanmin Qian\({}^{1}\)\({}^{1}\)Shanghai Jiao Tong University, China \({}^{2}\)Waseda University, Japan \({}^{3}\)Carnegie Mellon University, USA Universal speech enhancement, sampling-frequency-independent, microphone-number-invariant +
Footnote †: 979-8-3503-0689-7/23/$31.00 ©2023 IEEE
## 1 Introduction
Speech enhancement (SE) is a task of improving the quality and intelligibility of the speech signal in a noisy and potentially reverberant environment. Broadly speaking, speech enhancement can be divided into several subtasks such as denoising, dereverberation, echo cancellation, and speech separation [1]. The first three mainly focus on single-source conditions, while speech separation tries to separate each speaker's speech from the multi-source mixture recording. In this paper, we are primarily interested in the first two subtasks. Therefore, in the remainder of the paper, "speech enhancement" refers to denoising and dereverberation.
In recent years, deep learning-based SE techniques have achieved promising performance in various scenarios. These technique can be roughly classified into three categories: masking- [2, 3, 4], mapping- [5, 6, 7, 8], and generation-based methods [9, 10, 11, 12, 13, 14]. Masking-based methods estimate a mask either in the time-frequency domain or in the time domain for eliminating noise and reverberation, while mapping-based methods directly estimate the clean-speech representation in the corresponding domain. Generation-based methods try to reconstruct the clean speech using generation techniques such as generative adversarial networks (GANs) [9, 11], diffusion models [12, 13], and resynthesis-based models [14, 10]. These approaches can provide impressive performance in a condition similar to the training setup. However, most of the existing approaches are designed only for a single input condition, such as single-channel input, multi-channel input, or input with a fixed sampling frequency. Recently, there are some attempts to address multiple input conditions with a single model. For example, the Transform-Average-Concatenate (TAC) [15, 16] method and a triple-path model [17] are proposed to handle multi-channel signals with a variable number of microphones configured in diverse array geometries. In [18], a continuous speech separation (CSS) approach is proposed to handle arbitrarily-long input with a fixed-length sliding window. Sampling-frequency-independent models are proposed in [19, 20, 21] to handle single-channel input with different sampling frequencies. Nevertheless, these approaches only consider a limited range of input conditions. To the best of our knowledge, there does not exist a _single-model_ approach proposed to handle speech enhancement for single-channel, multi-channel, and arbitrarily long speech signals with different sampling frequencies altogether.
As a step towards universal SE which can handle arbitrary input, in this paper, we aim to devise a single SE model that can handle the aforementioned input conditions without compromising the performance. We propose an unconstrained speech enhancement and separation network (USES) by carefully integrating several techniques.1 Here, "unconstrained" means the model is not constrained to be used only in a fixed input condition. This single model can accept various forms of input, including 1) single-channel, 2) multi-channel with 3) different array geometries, 4) variable lengths, and 5) variable sampling frequencies. We also empirically show that the proposed model can be trained on 8 kHz data alone and then tested on data with much higher sampling frequencies (e.g., 48 kHz). The versatility of this model further inspires us to build a universal SE benchmark to test the performance on various input conditions. We combine five commonly-used corpora (VoiceBank+DEMAND [22], DNS1 [23], CHiME-4 [24], REVER [25], and WHAMR! [26] to train a single SE model that covers a wide range of acoustic scenarios. The model is then tested on the corresponding test sets with five metrics to comprehensively demonstrate its capability of handling diverse conditions. Our experiments on various datasets show that the proposed model can successfully cope with different input conditions with strong performance. The proposed model will be released2 in the ESPnet toolkit [27]. We expect this work to attract more attention toward building universal SE models, which can also benefit many downstream speech tasks such as automatic speech recognition (ASR) and speech translation.
Footnote 1: We validate the speech separation and enhancement abilities separately.
Footnote 2: [https://github.com/espnet/espnet](https://github.com/espnet/espnet)
## 2 Proposed Model
In this section, we first describe the overall architecture of the proposed model. Then, we introduce the key components for handling each of the variable conditions that make our model versatile.
### Overview
The overall architecture of the proposed model is illustrated in Fig. 1. We base our proposed approach on a recently proposed dual-path network called time-frequency domain path scanning network (TFPSNet) [28]. It is one of the top-performing speech separation models in the time-frequency (T-F) domain, and we believe that it can achieve strong performance in speech enhancement as well. As will be shown in Section 2.2, this model is a natural fit for handling different sampling frequencies. Without loss of generality, we assume that the input signal contains \(C\) microphone channels, where \(C\) can be 1 or more. The encoder consists of a short-time Fourier transform (STFT) module and a subsequent 2D convolutional layer 1. The former converts each input channel into a complex spectrum with shape \(2\times F\times T\), where 2 denotes the real and imaginary parts, \(F\) is the number of frequencies, and \(T\) the number of frames. The latter processes each microphone channel independently and projects each T-F bin into a \(D\)-dimensional embedding for multi-path modeling. The encoded representations are then processed by channel-wise layer normalization 2 and projected to a bottleneck dimension \(N\) by a point-wise convolutional layer 3. The bottleneck features are processed by \(K\) stacked multi-path blocks 4, which outputs a single-channel representation of the same shape. The parametric rectification linear unit (PReLU) activation 5 is applied to the output, which is later projected back to \(D\)-dimensional by a point-wise convolutional layer 5. Finally, the output is converted to the complex-valued spectrum via 2D transposed convolution (TfConv, 7) and then to waveform via inverse STFT (iSTFT). We call the proposed method unconstrained speech enhancement and separation (USES) as it can be used in diverse input conditions.3
Footnote 3: While we mainly focus on speech enhancement in this paper, we also show in Section 3.3 that this model works well for speech separation.
Compared to TFPSNet, we **make modifications to the encoder and decoder**, following the observations in a recent paper [29]. Specifically, we adopt the complex spectral mapping method instead of complex-valued masking in TFPSNet, as it is shown to produce better performance [29]. Therefore, the original convolutional layers for mask estimation are replaced with a single 2D convolutional layer 6. The projection layers in the encoder and decoder are also replaced with 2D convolutional 1 and transposed convolutional 7 layers, respectively. The multi-path block 4 is mostly the same as that in TFPSNet, containing a transformer layer for frequency sequence modeling and another for temporal sequence modeling, as shown in Fig. 3 (b). The transformer layers are the same as those in [28, 30]. **The main differences include** 1) we do not include any T-F path modeling (along the anti-diagonal direction) as we found it not so helpful in the preliminary experiments; 2) we additionally insert a TAC module for channel modeling (Sec 2.3).
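To make the data flow of Fig. 1 concrete, the following minimal PyTorch sketch mirrors the encoder/decoder chain described above (STFT, 2D convolution to \(D=256\) embeddings, channel-wise normalization, point-wise bottleneck to \(N=64\), \(K\) multi-path blocks, PReLU, point-wise expansion, transposed convolution and iSTFT). It is only an illustration of the shapes involved: the multi-path blocks are left as placeholders, and the kernel sizes, normalization choice and class names are our assumptions rather than the ESPnet implementation.

```python
# Minimal sketch of the USES encoder/decoder data flow (D = 256, N = 64 as in the paper).
# The multi-path blocks are placeholders; all layer choices here are illustrative.
import torch
import torch.nn as nn

class USESSkeleton(nn.Module):
    def __init__(self, n_fft=256, hop=128, D=256, N=64, K=6):
        super().__init__()
        self.n_fft, self.hop = n_fft, hop
        self.embed = nn.Conv2d(2, D, kernel_size=3, padding=1)            # encoder conv
        self.norm = nn.GroupNorm(1, D)                                    # channel-wise norm
        self.bottleneck = nn.Conv2d(D, N, kernel_size=1)                  # point-wise conv
        self.blocks = nn.ModuleList([nn.Identity() for _ in range(K)])    # multi-path blocks
        self.prelu = nn.PReLU()
        self.expand = nn.Conv2d(N, D, kernel_size=1)                      # point-wise conv
        self.decode = nn.ConvTranspose2d(D, 2, kernel_size=3, padding=1)  # to complex spectrum

    def forward(self, wav):                       # wav: (batch, samples), single channel
        window = torch.hann_window(self.n_fft, device=wav.device)
        spec = torch.stft(wav, self.n_fft, self.hop, window=window,
                          return_complex=True)    # (batch, F, T)
        x = torch.stack([spec.real, spec.imag], dim=1)                    # (batch, 2, F, T)
        x = self.bottleneck(self.norm(self.embed(x)))
        for blk in self.blocks:
            x = blk(x)                            # frequency / temporal sequence modeling
        x = self.decode(self.expand(self.prelu(x)))
        est = torch.complex(x[:, 0], x[:, 1])     # complex spectral mapping
        return torch.istft(est, self.n_fft, self.hop, window=window,
                           length=wav.shape[-1])

out = USESSkeleton()(torch.randn(1, 16000))       # e.g., 1 s of 16 kHz audio
print(out.shape)                                  # torch.Size([1, 16000])
```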
### Sampling-Frequency-Independent design
We follow the basic idea in [20] for sampling-frequency-independent (SFI) model design. Namely, we rely on the STFT/iSTFT to obtain consistent T-F representations across different sampling frequencies (SFs). Since the frequency response of STFT filterbanks shifts linearly for all center frequencies [31], it can be easily extended to handle different SFs. As shown in Fig. 2, if we use fixed-duration STFT window and hop sizes (e.g., 32 and 16 ms) for different SFs, the resultant spectra will have constant T-F resolution. As a result, the STFT spectra of the same signal sampled at different SFs will have the same number of frames and different numbers of frequency bins, while the resolution is always consistent. We can leverage this property to build an SFI model easily as long as the model is capable of handling inputs with two variable dimensions, time and frequency.
Interestingly, the time-frequency domain dual-path models such as TFPSNet4 and the proposed USES model _inherently satisfy this requirement_ and can be directly used for SFI modeling without any
Fig. 1: Overview of the proposed versatile SE model. The kernel size and feature maps of convolutional layers are annotated in gray.
Fig. 2: STFT with fixed-duration window and hop sizes (e.g., 32 ms and 16 ms) will generate spectra with the same frequency and temporal resolution for different sampling frequencies.
modification. This is because these models treat the SE process as decoupled frequency sequence modeling 2 and temporal sequence modeling 3, as illustrated in Fig. 1 (b), and the transformer layers can naturally process variable-length frequency sequences when different SFs are processed. In summary, the proposed model is inherently capable of SFI modeling, and all we need is to adaptively adjust the STFT/iSTFT window and hop sizes (to have fixed duration) according to the input SF.
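The following small sketch illustrates this point numerically: with a fixed 32 ms window and 16 ms hop, the frame count of a 1-second signal is the same at 8, 16 and 48 kHz, only the number of frequency bins changes, and the frequency resolution per bin stays constant. The helper function and its defaults are illustrative only.

```python
# Fixed-duration STFT parameters (32 ms window, 16 ms hop) at different sampling
# frequencies: frame count stays constant, only the number of frequency bins grows.
def stft_shape(fs, duration_s=1.0, win_ms=32, hop_ms=16):
    win = int(fs * win_ms / 1000)              # window length in samples
    hop = int(fs * hop_ms / 1000)              # hop length in samples
    n_frames = 1 + (int(fs * duration_s) - win) // hop
    n_freqs = win // 2 + 1                     # one-sided spectrum
    return n_freqs, n_frames, fs / win         # bins, frames, frequency resolution (Hz)

for fs in (8000, 16000, 48000):
    bins, frames, df = stft_shape(fs)
    print(f"{fs:>5} Hz: {bins} bins x {frames} frames, {df:.2f} Hz per bin")
```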
Compared to our method, the SFI convolutional encoder/decoder design in [19] is constrained by the maximum frequency range defined in the latent analog filter, which thus limits the highest sampling frequency it can handle5. Another recently proposed SFI method in [21] requires hand-crafted subband division and always resamples the input signal to a pre-defined sampling frequency (e.g., 48 kHz). Both methods have to be trained with data of different SFs to cover the whole frequency range the model is designed for. In contrast, our proposed model can be trained with 8 kHz data alone, and then applied to much higher SFs such as 48 kHz. This also greatly speeds up the training process and reduces the memory consumption during training.
Footnote 5: Our preliminary trial also shows it is less generalizable than STFT.
### Microphone-Channel-Independent design
We adopt the well-developed TAC technique [15] to achieve channel-independent modeling. As shown in Fig. 1 (c), the basic idea of TAC is to project representations of each channel to a hidden dimension \(H\) separately 4, concatenate the channel-averaged representation with each channel's representation 5 and finally project them back to the original dimension 1 The channel number invariance is learned _implicitly_ during training. Similarly to [16], we insert the TAC module in the first \(K_{s}\) multi-path blocks for spatial modeling, and then merge the multi-channel representations into single-channel for the rest \((K-K_{s})\) multi-path blocks. Instead of averaging the intermediate representations from all channels after the first \(K_{s}\) blocks as in [16], we only take the representation at the reference microphone channel and discard the rest. This is based on the intuition that the information from different channels should be already fused together after the first \(K_{s}\) blocks. In addition, taking the reference channel allows the model to learn to produce estimates time-aligned with the reference channel, which is often preferable in practice.
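A minimal sketch of one TAC step, following the description above (per-channel transform to a hidden size \(H=192\), concatenation with the channel average, projection back to the feature size), is given below. The activation functions, the residual connection and the exact layer arrangement are assumptions; the original TAC module in [15] should be consulted for the reference design.

```python
# Minimal sketch of a Transform-Average-Concatenate (TAC) step; layer choices are assumed.
import torch
import torch.nn as nn

class TAC(nn.Module):
    def __init__(self, feat_dim=64, hidden=192):
        super().__init__()
        self.transform = nn.Sequential(nn.Linear(feat_dim, hidden), nn.PReLU())
        self.average = nn.Sequential(nn.Linear(hidden, hidden), nn.PReLU())
        self.concat = nn.Sequential(nn.Linear(2 * hidden, feat_dim), nn.PReLU())

    def forward(self, x):                 # x: (batch, channels, frames, feat_dim)
        y = self.transform(x)             # per-channel transform
        mean = self.average(y.mean(dim=1, keepdim=True))   # channel average
        mean = mean.expand_as(y)                           # broadcast back to all channels
        out = self.concat(torch.cat([y, mean], dim=-1))    # concatenate and project back
        return x + out                    # residual connection (assumed)

tac = TAC()
for C in (1, 2, 4):                       # invariant to the number of microphones
    print(tac(torch.randn(2, C, 50, 64)).shape)
```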
### Signal-Length-Independent design
Inspired by the success of the memory transformer [32] in natural language processing for long sequences, we extend the proposed model to handle arbitrarily long input signals following a similar design. As shown in Fig. 3, we only make a minimal modification to the proposed model by **adding a group of memory tokens** [mem] of dimension \(1\times N\times 1\times G\), where \(G\) is the group size. These learnable memory tokens are simply concatenated as a prefix with the feature sequence (output of 3 in Fig. 1) along the temporal dimension via shape broadcasting. The concatenated feature is then fed into \(K\) multi-path blocks for enhancement, in which the transformer layers could _implicitly_ learn to utilize such information via sequence modeling. The first \(G\) frames in the output representation correspond to the processed memory tokens, which are regarded as a summary of the information contained in the current input signal. These new memory tokens can be then used as the prefix for processing the subsequent input segment. Thus, we can segment the long-form input into non-overlapping short segments and process each one-by-one without suffering from significant computation and memory costs. Different from CSS [18], we do not need an overlapped sliding window here, as the history information can be retrieved from the output memory tokens from the previous segment.
Furthermore, the learnable memory tokens can be extended to serve as an indicator of different input conditions, similar to the role of prompts in various recent studies [33, 34]. To verify this possibility, we design two independent groups of memory tokens ([mem] 1 and [mem2] ) for indicating denoising with and without dereverberation. As shown in Fig. 3, we apply them accordingly to the reverberant and anechoic data in the extensive SE experiments in Section 3.5.
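The segment-wise processing with memory tokens can be sketched as follows: \(G=20\) learnable tokens are prepended to each non-overlapping segment of about 64 frames, and the processed tokens are carried over as the prefix of the next segment. The placeholder "blocks" module stands in for the \(K\) multi-path blocks, and all names and defaults are illustrative.

```python
# Sketch of memory-token based long-form processing; "blocks" is a stand-in for the
# multi-path blocks, and the module/parameter names are illustrative.
import torch
import torch.nn as nn

class MemorySegmentProcessor(nn.Module):
    def __init__(self, feat_dim=64, n_mem=20, seg_len=64, blocks=None):
        super().__init__()
        self.mem0 = nn.Parameter(torch.randn(1, n_mem, feat_dim))   # learnable [mem] tokens
        self.n_mem, self.seg_len = n_mem, seg_len
        self.blocks = blocks or nn.Identity()

    def forward(self, feats):                       # feats: (batch, frames, feat_dim)
        mem = self.mem0.expand(feats.shape[0], -1, -1)
        outputs = []
        for seg in torch.split(feats, self.seg_len, dim=1):
            x = torch.cat([mem, seg], dim=1)        # prepend memory tokens
            x = self.blocks(x)                      # enhancement (placeholder here)
            mem = x[:, :self.n_mem]                 # summary carried to the next segment
            outputs.append(x[:, self.n_mem:])       # enhanced frames of this segment
        return torch.cat(outputs, dim=1)

proc = MemorySegmentProcessor()
print(proc(torch.randn(2, 640, 64)).shape)          # (2, 640, 64): 10 segments of 64 frames
```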
## 3 Experiments
### Data
**Speech separation:** We evaluate the speech separation performance on the commonly-used WSJ0-2mix benchmark [35] and its spatialized (anechoic) version [36] (min mode). Each dataset consists of a 30-hour training set, a 10-hour development set, and a 5-hour test set of 2-speaker clean speech mixtures sampled at 8 kHz. The signal-to-interference ratio (SIR) ranges from -10 to 10 dB. The spatialized version contains 8 microphone channels, with the microphone arrangement sampled randomly.
**Speech enhancement in a single condition:** We train our proposed model on the 16 kHz DNS1 data [23] alone to show its capability in a single condition. Following existing SE works [3, 4], we simulate
\begin{table}
\begin{tabular}{l|c c c|c c c c} \hline
**Dataset** & **Train (hr)** & **Dev (hr)** & **Test (hr)** & **Sampling Freq.** & **\#Ch** & **T60 (ms)** & **Train. SNR (dB)** \\ \hline VoiceBank+DEMAND [22] & 8.8 & 0.6 & 0.6 & 48kHz & 1 & - & - \\ \hline DNS1 (v1) [23] & (A) 90 & (A) 10 & (R) 0.42 & 16kHz & 1 & (A) - & 0-40 \\ DNS1 (v2) [23] & (A) 2700 & (A) 300 & Same as above & 16kHz & 1 & Same as above & -5–15 \\ CHiME-4 [24] & (Simu) 14.7 & (Simu) 2.9 & (Sinu) 2.3 & 16kHz & 5 & - & \(\sim\)5 \\ & & & & & & (Simu) 250, 500, 700 & 20 \\ REVERB [25] & (Simu) 15.5 & (Simu) 3.2 & (Real) 0.7 & 16kHz & 8 & (Simu) 250, 500, 700 & 20 \\ WHAMR! [26] & (A) 58.0 & (A) 14.7 & (A) 9.0 & 16kHz & 2 & (A) - & 6-3 \\ & & (R) 58.0 & (R) 14.7 & (R) 9.0 & (R) 100–1000 & & \\ \hline \end{tabular}
\end{table}
Table 1: Detailed information of the corpora used in our SE experiments. “#Ch” denotes the number of microphone channels in the data. “T60” denotes the reverberation time. “Train. SNR” represents signal-to-noise ratio in the training data. “(Simu)” and “(Real)” denote the synthetic and recorded data, while “A” and “R” in parentheses represent anechoic and reverberant, respectively.
Figure 3: Memory token-based long sequence modeling.
\(3000\) hours of non-reverberant data in total, with \(2700\) and \(300\) hours for training and development, respectively. The SE performance is then evaluated on the non-blind test set without reverberation, which is around \(0.42\) hours. The detailed information can be found in Table 1 (2nd row).
**Speech enhancement in diverse conditions:** To better show the capability of the proposed SE model, we build a comprehensive dataset that can serve as a universal SE benchmark. The new dataset combines data from five widely-used corpora, as shown in Table 1, where DNS1 (v2) is not used here to mitigate the data imbalance problem. The total amount of training data is \(\sim\)245 hours. This dataset covers a wide range of conditions, including single-channel, multi-channel (2ch-8ch), wide-band (16kHz), full-band (48kHz), anechoic, reverberant, and variable-length input in both simulated and real-recorded scenarios.
### Model and training configurations
In all our experiments, the proposed USES model consists of \(K=6\) multi-path blocks, with a TAC module in the first \(K_{s}=3\) blocks for spatial modeling. The STFT/iSTFT window and hop sizes are always 32 and 16 ms, respectively. Following TFPSNet [28], the embedding dimension \(D\) is set to \(256\) and the bottleneck dimension \(N\) to \(64\). The transformer layers in the multi-path blocks have the same configuration as in [28]. The hidden dimension \(H\) in each TAC module is \(192\). When processing multi-channel data, we always take the first channel as the reference channel. When the memory tokens in Section 2.4 are applied, we empirically set the number of memory tokens \(G\) to 20, and divide the input signal into non-overlapping segments of \(\sim\)1s long (64 frames). The total number of model parameters is around 3.1 million. The pre-trained models and configurations will be released later in ESPnet [27] for reproducibility.
Our experiments are done based on the ESPnet toolkit [27]. The models are trained using the Adam optimizer, and the learning rate increases linearly to 4e-4 in the first \(X\) steps and then decreases by half when the validation performance does not improve for two consecutive epochs. We set \(X\) to 4000 and 25000 for speech separation and enhancement experiments, respectively. During training, we divide the samples into 4-second chunks to reduce memory costs. The batch size of all experiments is 4. We also limit the number of samples for each epoch to 8000. When training on multi-channel data, we shuffle the channel permutation of each sample and randomly select the number of channels (up to 4 channels) to increase diversity. We always apply variance normalization to the input signal and revert the variance in the model's output. For speech separation, all models are trained until convergence (up to 150 epochs) using the SI-SNR loss [37]; and for speech enhancement, we train all models for up to 20 epochs6 using the loss function proposed in [38]. The loss function is a scale-invariant multi-resolution \(L_{1}\) loss in the frequency domain plus a time-domain \(L_{1}\) loss term. We set the STFT window sizes of the multi-resolution \(L_{1}\) loss to [256, 512, 768, 1024] and the time-domain loss weight to 0.5. In each experiment, the model with the best validation performance is selected for evaluation.
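A rough sketch of this training objective is shown below: a multi-resolution L1 term on STFT magnitudes with the window sizes listed above, plus a time-domain L1 term with weight 0.5, after removing the global scale of the estimate with a least-squares optimal scalar. The exact scale-invariant formulation and the choice of spectral features in [38] may differ; this is only one plausible reading.

```python
# Sketch of a scale-invariant multi-resolution L1 loss plus a weighted time-domain L1 term.
# The rescaling and the use of magnitudes are assumptions, not necessarily the form in [38].
import torch

def si_multires_l1(est, ref, fft_sizes=(256, 512, 768, 1024), time_weight=0.5):
    # est, ref: (batch, samples)
    alpha = (est * ref).sum(-1, keepdim=True) / (est.pow(2).sum(-1, keepdim=True) + 1e-8)
    est = alpha * est                                # remove the global scale of the estimate
    loss = time_weight * (est - ref).abs().mean()    # time-domain L1 term
    for n_fft in fft_sizes:
        win = torch.hann_window(n_fft, device=est.device)
        E = torch.stft(est, n_fft, n_fft // 2, window=win, return_complex=True)
        R = torch.stft(ref, n_fft, n_fft // 2, window=win, return_complex=True)
        loss = loss + (E.abs() - R.abs()).abs().mean()
    return loss

print(si_multires_l1(torch.randn(2, 16000), torch.randn(2, 16000)))
```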
Footnote 6: For DNS1 data alone, we only train for up to 5 epochs due to the large amount of data, which are enough for the model to converge.
Footnote 7: [https://github.com/microsoft/DNS-Challenge/blob/master/DNSMOS/DNSMOS/sig_bak_ovr.onnx](https://github.com/microsoft/DNS-Challenge/blob/master/DNSMOS/DNSMOS/sig_bak_ovr.onnx)
We evaluate the SE models with the metrics below: wide-band PESQ (PESQ-WB) [39], STOI [40], scale-invariant signal-to-noise ratio (SI-SNR) [37], signal-to-distortion ratio (SDR) [41], DNS-MOS (OVRL) [42], and word error rate (WER). Except for WER, a higher value indicates better performance for all metrics. The Whisper Large v2 model8[43] is used for WER evaluation.
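Among these metrics, SI-SNR is simple enough to state explicitly; a minimal reference implementation of the standard definition [37] (ours, not the one of any particular toolkit) is given below.

```python
# Minimal reference implementation of scale-invariant SNR (SI-SNR).
import numpy as np

def si_snr(est, ref, eps=1e-8):
    est = est - est.mean()
    ref = ref - ref.mean()
    target = np.dot(est, ref) / (np.dot(ref, ref) + eps) * ref   # projection on the reference
    noise = est - target
    return 10 * np.log10((target ** 2).sum() / ((noise ** 2).sum() + eps))

rng = np.random.default_rng(0)
clean = rng.standard_normal(16000)
print(si_snr(clean + 0.1 * rng.standard_normal(16000), clean))   # roughly 20 dB
```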
Footnote 8: [https://huggingface.co/openai/whisper-large-v2](https://huggingface.co/openai/whisper-large-v2)
Footnote 9: Different from [28], our reproduction replaces all T-F path scanning transformers with the time-path scanning transformer.
### Evaluation of speech separation performance
We first examine the effectiveness of the three components proposed in Section 2 by evaluating the speech separation performance on WSJ0-2mix, which makes the comparison with the top-performing TFPSNet [28] convenient. In addition, the datasets are not large, making it easy to investigate different setups. We train the model on 8 kHz mixture data, and evaluate the performance on both 8 and 16 kHz test data. Table 2 reports the SI-SNR improvement (SI-SNRi) of models trained on a single channel (denoted as 1ch) and on a variable number of channels (denoted as 1-2ch and 1-6ch). From the first section in Table 2, we observe that the proposed SFI approach can successfully preserve strong separation performance for the reproduced TFPSNet[9] on 16 kHz data. In comparison, first downsampling 16 kHz mixtures to 8 kHz, then applying TFPSNet trained at 8 kHz for separation, and finally upsampling the separation results to 16 kHz (denoted as "resampling" in the second row) suffer from severe SI-SNR degradation. The proposed USES model also obtains similar performance to TFPSNet on WSJ0-2mix, as our major modifications focus on the invariance to sampling frequencies, microphone channels, and signal lengths. While applying memory tokens does not change the performance significantly, it enables the model to process variable-length inputs with a constant memory cost during inference.
For experiments on the spatialized WSJ0-2mix data, we can see that the model achieves very similar performance on both 8 and 16 kHz data. Although the single-channel SI-SNR performance degrades \(\sim\)1.4 dB, the multi-channel performance becomes much better, even with only 2 input channels. Further applying the memory tokens and increasing the input channels during training can improve the performance, especially for the multi-channel case.
### Evaluation of SE performance in a single condition
Given the success of the universal properties of USES in a controlled experimental condition in Section 3.3, this subsection compares the SE performance of the proposed model with memory tokens to existing methods on the DNS1 non-blind test data. Our models are respectively trained on the simulated DNS1 v1 and v2 data without reverberation, as described in Table 1. Due to the large amount of training data and limited time, we only train the proposed model for \(\sim\)0.5 epochs on DNS1 v2 data, covering around 45% of the entire
\begin{table}
\begin{tabular}{l c c c} \hline \hline
**Model** & **SI-SNRi (8 kHz )** & / & **16 kHz )** \\ \hline \multicolumn{4}{c}{**WSJ0-2mix test data**} \\ TFPSNet [28] & **21.1** & / & - \\ TFPSNet (reproduced) & 21.0 & / & 12.0 (resampling) \\ + SFI STFT/iSTFT & - & / & 19.7 \\ USES (1ch) & 20.3 & / & **19.8** \\ + mem tokens (1ch) & 20.9 & / & 19.3 \\ \hline \multicolumn{4}{c}{**1ch spatialized test data**} \\ USES (1-2ch) & 18.9 & / & **18.4** \\ + mem tokens (1-6ch) & **19.9** & / & 18.3 \\ \hline \multicolumn{4}{c}{**2ch spatialized test data**} \\ USES (1-2ch) & 24.6 & / & 24.2 \\ + mem tokens (1-6ch) & **36.1** & / & **35.0** \\ \hline \hline \end{tabular}
\end{table}
Table 2: Speech separation performance on WSJ0-2mix and its spatialized version (min mode). All models are trained only on 8 kHz data, and tested on 8 kHz and 16 kHz data.
training set. For DNS1 v1 data, we train the model for 5 epochs, which is around 3.7 passes of the entire training set10. The SE performance is presented in Table 3, where we compare our proposed model with the top-performing methods on DNS1 data. We can see that although our models are only trained for very limited steps, they can still achieve very competitive performance compared to the existing SE methods. The model trained on DNS1 v2 data achieves a new state-of-the-art performance while only trained for \(\sim\)0.5 epochs. However, it should be noted that our proposed model and MFNet [8] are non-causal, while the other listed models are causal. Therefore, we cannot make a fair comparison here. Nevertheless, this result at least demonstrates the effectiveness of the proposed method in the standard SE task.
Footnote 10: The training of both models is well converged with our setup.
### Evaluation of SE performance in diverse conditions
Finally, we present the SE performance across different conditions. Here, we adopt the same architecture as in Section 3.4 as the base model, and train two groups of memory tokens as mentioned in Section 2.4 to control whether dereverberation is applied or not. During training, we always resample the data to 8 kHz to reduce memory costs, and evaluate the performance on the original data. For the VoiceBank+DEMAND 48 kHz test data, the original and enhanced audios are resampled to 16 kHz before evaluating STOI, DNSMOS, and WER. We can see in Table 4 that, in all conditions, the proposed single SE model (**USES** columns) achieves strong enhancement performance that is on par with or better than the corresponding corpus-exclusive SE model (**excl** columns). Note that the DNS1 (reverb) test data represents an unseen noisy-reverberant condition during training. In this condition, the proposed SE model achieves much better performance, which shows the benefit of training on diverse data using the proposed model. However, we can see that, on the 48 kHz VoiceBank+DEMAND test set, both SE models suffer from SDR degradation. Our investigation implies that it is caused by the greatly increased frequency bins in 48 kHz (129\(\rightarrow\)769), as the model is only trained on 8 kHz. To mitigate this mismatch, we continue training the proposed SE model for 5 more epochs (**USES\({}^{+}\)** columns) with variable SFs on the same data. We can see that increasing the SF diversity consistently improves the SE performance in different conditions, which shows the capacity of our model. The enhanced audios are available on our demo page: [https://Emrys365.github.io/Universal-SE-demo/](https://Emrys365.github.io/Universal-SE-demo/).
For ASR performance evaluation on different datasets11, we use the same Whisper Large model without external language models. The same text normalization [43] is applied to both reference transcripts and Whisper outputs. As shown in Table 4, the proposed SE model achieves similar ASR performance to the corpus-exclusive SE model in most conditions. We further evaluate the performance on two real-recorded datasets in Table 5. On the REVERB (Real) data, the SE models perform well in terms of both enhancement and ASR. On the CHiME-4 (Real) data, we notice that some microphone channels contain much noisier signals, which cannot be well processed by the TAC module inherently, as it simply averages all channels for fusion. Therefore, we only conduct single-channel SE on the reference channel (CH5) in this case. The proposed SE model achieves much better performance than the corpus-exclusive model, coinciding with the observation in Table 4. Note that on DNS1 (reverb) and CHiME-4 (Real), the SE model does not improve the ASR performance, which is a commonly observed phenomenon [44, 45, 46] due to the introduced artifacts by enhancement in mismatched conditions.
Footnote 11: For DNS1 test data, we prepare the transcription manually as the reference, which is available at github.com/Emrys365/DNS_text.
## 4 Conclusion
In this paper, we have devised a single speech enhancement model USES that can handle denoising and dereverberation in diverse input conditions altogether, including variable microphone channels, sampling frequencies, signal lengths, and different environments. Experiments on a wide range of datasets show that the proposed model can achieve very competitive performance for both speech separation and speech enhancement tasks. We further design a benchmark for evaluating the universal SE performance across various conditions, which also reveals some less-explored aspects in the SE literature such as the generalizability across different domains. We hope this contribution can attract more efforts toward building universal SE models for real-world speech applications.
2309.06289 | Quantum measurements and delays in scattering by zero-range potentials | Eisenbud-Wigner-Smith delay and the Larmor time give different estimates for
the duration of a quantum scattering event. The difference is most pronounced
in the case where de-Broglie wavelength is large compared to the size of the
scatterer. We use the methods of quantum measurement theory to analyse both
approaches, and to decide which one of them, if any, describes the duration a
particle spends in the region which contains the scattering potential. The
cases of transmission, reflection and three-dimensional elastic scattering are
discussed in some detail. | X. Gutiérrez de la Cal, M. Pons, D. Sokolovski | 2023-09-12T14:50:34Z | http://arxiv.org/abs/2309.06289v1 | # Quantum measurements and delays in scattering by zero-range potentials
###### Abstract
Eisenbud-Wigner-Smith delay and the Larmor time give different estimates for the duration of a quantum scattering event. The difference is most pronounced in the case where de-Broglie wavelength is large compared to the size of the scatterer. We use the methods of quantum measurement theory to analyse both approaches, and to decide which one of them, if any, describes the duration a particle spends in the region which contains the scattering potential. The cases of transmission, reflection and three-dimensional elastic scattering are discussed in some detail.
## I Introduction
It is only natural to expect a quantum scattering process, be it a collision between two particles, or tunnelling across a potential barrier, to be characterised by a particular duration. Discussion about what this duration should be, and how it ought to be measured, continues to date [1]. Traditionally, there were two schools of thought. An approach, originally due to Eisenbud and Wigner [2], and later extended by Smith to multichannel scattering [3], relies on propagation of wave packet states, and leads to time parameters expressed as energy derivatives of the phase of a scattering amplitude. An alternative method, first proposed by Baz' [4], and later developed in [5], employed a spin, precessing in a small magnetic field introduced in the region of interest. The Larmor times, obtained in this manner, involved variations of scattering amplitudes in response to a small constant potential, added in the region [29; 30]. The authors of [5] concluded that the approach [3] is in general incorrect, since both methods often lead to similar, yet not identical results [5].
It is reasonable to ask whether the Eisenbud-Wigner-Smith approach is merely wrong, or whether, perhaps, one deals with two different yet equally valid methods. Similar questions have been asked, e.g., in [6; 7], and more recently in [8]. The reader may also be interested in Refs. [9]-[12].
The appearance of both Eisenbud-Wigner-Smith and Larmor times may look strange to anyone used to the averages obtained with the help of a probability distribution, since neither of the two parameters look like conventional averages. The problem is most easily understood in terms of quantum measurement theory. In both cases the particle is pre-an post-selected in its initial and final (transmitted) states. In both cases one evaluates, in the standard way, an average of a variable expected to contain information about the duration spent in the barrier, a spin's component, or the final particle's position. A connection with quantum measurements is established once one notes that the probabilities used for the averaging are given by a convolution of an amplitude distribution with a kind of "apparatus function". In the Larmor case, the amplitude distribution refers to the duration of a Feynman's path spent in the barrier region, and the apparatus function is determined by the initial state of the clock (see, e.g., [13]). The EWS case is less obvious, but similar. The EWS amplitude distribution describes the range of spacial shifts with which a particle with a known momentum may emerge from the barrier, and the apparatus function is the envelope of the initial wave packet state (see, e.g., [14]). At this point one notes an important role played by the Uncertainty Principle (UP) [15]. Since tunnelling can be seen as a result of destructive interference between alternatives, the presence of the apparatus function destroys this interference and, with it, the studied transition. The only way to preserve the transition is to make the apparatus function very broad, but then the UP would forbid one to distinguish between the durations, or the shifts involved [15]. This is, indeed the case, since if the perturbation is minimised, the measured average is expressed in terms of the first moment of an _amplitude_ (and not a _probability_) distribution (also known as the "weak value" [16; 17]). In addition to being complex valued, the distribution may change sign, and the "weak value" does not faithfully represents the the range of values available to the transition [13]. For example, EWS
time, measured in this manner, can turn out to be anomalously short, even though the barrier provides only for delays, compared to free propagation [14].
In this paper we use the measurement theory techniques in order to analyse the similarities and the differences between both methods for determining the "tunnelling time" [18]. We will also show that the same approach can be applied to reflected particles, as well as in the case of potential scattering, which was the subject originally studied in [2]. As an illustration, we consider particles scattered by a zero-range potential [19], chosen for two main reasons. Firstly, the disagreement between the Larmor and the Eisenbud-Wigner-Smith approaches is is most pronounced in the ultra-quantum case where the particle's de Broglie wavelength exceeds the size of the scatterer. Secondly, in both cases the amplitudes distributions, on which our analysis is based, have a particularly simple form, and the narrative can be abbreviated accordingly.
The rest of the paper is organised as follows. In Sect.II we describe two methods for measuring the duration \(\tau\) a classical particle spends in a region containing the scattering potential. In Sect.III we briefly review the Larmor clock method and show that it predicts a zero delay whenever the size of the scatterer vanishes. In Sect.IV we show that following the centre of mass of the scattered state leads to a different kind of "quantum measurement". An inaccurate measurement of this kind determines a Eisenbud-Wigner-Smith time delay, which does not vanish for a zero-range potential. Sections V and VI analyse the centre-of-mass delay in transmission across zero-range barrier or well. In Sect. VII we extend the analysis to reflected particles. In Sect. VIII we consider elastic scattering by a zero-range spherically symmetric potential. Section IX contains our conclusions.
## II Two ways to measure a classical duration
In classical mechanics, a particle always moves along a trajectory \(x^{cl}(t)\), and the amount of time \(\tau\) it spends in a region \([a,b]\), containing a potential barrier (or a well) \(V(x)\) (see Fig. 1), is a well defined and useful concept. One way to measure it is to couple the particle to a clock, which runs only while the particle is in the region. This can be achieved by equipping the particle with a magnetic moment, introducing a magnetic field in \([a,b]\), and dividing the angle of the precession \(\phi\) by the Larmor frequency \(\omega_{L}\). A simpler version of the Larmor clock is a pointer with position \(f\) and momentum \(\lambda\), coupled to the particle while it is in the region. The full Hamiltonian of the system, therefore, is
\[H(x,p,f,\lambda)=p^{2}/2m+V(x)+\lambda\Theta_{[a,b]}(x), \tag{1}\]
where \(x\) and \(p\) are the particle's position and momentum, respectively, and \(\Theta_{[a,b]}(x)\) is unity if \(x\) lies inside the interval \([a,b]\), and 0 otherwise. Solving Hamilton's equations for \(\lambda=0\) one easily finds the final pointer's reading \(f\) equal to the sought duration,
\[f(t)-f(0)=\int_{0}^{t}\Theta_{[a,b]}(x^{cl}(t^{\prime}))dt^{ \prime}= \tag{2}\] \[\int_{a}^{b}m^{1/2}dx/\sqrt{2[E-V(x)]}\equiv\tau(E),\]
where \(E>V(x)\) is the particle's energy.
The same quantity can be determined without the help of a clock, simply by comparing the current positions of the particle moving with and without the potential, and defining (we reserve the subscript 0 for free motion)
\[\delta x^{cl}=x^{cl}(t)-x_{0}^{cl}(t) \tag{3}\]
where \(x_{0}^{cl}(t)\) is the trajectory with \(V(x)=0\) (see Fig. 2). Indeed, if both particles are launched simultaneously with equal initial momenta \(p=\sqrt{2mE}\) from the same initial position, \(x_{I}<a\), we have \(x_{0}^{cl}(t)=vt+x_{I}\), where \(v=p/m\). Both particles cross the region \([a,b]\) since \(E>V(x)\). By the time the particle slowed down by a barrier reaches \(x=b\), its faster free moving counterpart will lie ahead by \(|\delta x^{cl}|=v(\tau-\tau_{0})\), where \(\tau_{0}=(b-a)/v\) is the duration the free particle spends in the region. The difference in positions can, therefore, be used to evaluate the \(\tau\) defined earlier in Eq.(2),
\[\tau(E)=\tau_{0}(E)-\delta x/v, \tag{4}\]
Figure 1: A classical particle is coupled to a clock, Eq. (1), which runs only while it is inside the potential. The duration the particle has spent in \([a,b]\) can be read off the pointer's position [cf. Eq.(2)].
A particle crossing a potential well, \(V(x)<0\), spends in \([a,b]\) a shorter duration, \(\delta x^{cl}>0\), and \(\tau(E)<\tau_{0}(E)\). Note that since both particles move freely for \(x>b\), the distance between them remains the same once the region containing the potential is crossed. Note also that this is also a kind of a measurement, where the role of the pointer is now played by the particle itself. Both approaches can be generalised to quantum scattering, albeit with different results, and we will consider them separately.
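As a small numerical illustration (with \(m=1\) and a rectangular barrier; the parameter values are chosen arbitrarily), the quadrature of Eq. (2) and the position-difference prescription of Eq. (4) give the same traversal time:

```python
# Classical traversal time of a rectangular barrier of height U on [a, b], computed
# both by the quadrature of Eq. (2) and from the position difference of Eq. (4).
import numpy as np

m, E, U, a, b = 1.0, 2.0, 1.0, 0.0, 1.0
v = np.sqrt(2 * E / m)                       # speed outside the barrier
v_in = np.sqrt(2 * (E - U) / m)              # speed inside the barrier (E > U)

# Eq. (2): traversal time by direct quadrature of sqrt(m / (2 (E - V(x)))) dx
xs = np.linspace(a, b, 10001)
integrand = np.sqrt(m / (2 * (E - U * np.ones_like(xs))))
tau = np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(xs))

# Eq. (4): compare positions of the slowed-down and the free particle at a time t
# chosen after both have left [a, b], starting from the same x_I < a
x_I, t = -5.0, 10.0
x_free = x_I + v * t
x_slow = b + v * (t - (a - x_I) / v - (b - a) / v_in)
delta_x = x_slow - x_free                    # Eq. (3), negative for a barrier

print("tau from Eq. (2):", tau)              # ~0.7071
print("tau from Eq. (4):", (b - a) / v - delta_x / v)
```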
## III The quantum Larmor time
The quantum analogue of Eq.(2) has often been discussed before (see, e.g., [20]), and we will describe it here only briefly. For a quantum particle, the transition amplitude between the states \(|\psi\rangle\) and \(|\phi\rangle\) is given by Feynman path integral (we are using \(\hbar=1\)) \(A(\phi,\psi,t)=\sum_{paths}\phi^{*}(x^{\prime})\exp[iS(x,x^{\prime},t)]\psi(x)\), where \(S\) is the classical action functional, and \(\sum_{paths}=\int dx^{\prime}\int dx\int_{(x,0)}^{(x^{\prime},t)}Dx(t)\) includes summation over all virtual paths connecting \((x,0)\) with \((x^{\prime},t)\), as well as integration over the initial and final position. The classical expression in Eq.(2) can be promoted to a functional on the Feynman's paths, \(t_{[a,b]}[x(t)]=\int_{0}^{t}\Theta_{[a,b]}(x(t^{\prime}))dt^{\prime}\), and
\[A(\phi,\psi,t|\tau)=\sum_{paths}\delta\left(t_{[a,b]}[x(t)]- \tau\right)\times \tag{5}\] \[\phi^{*}(x^{\prime})\exp(iS(x,x^{\prime},t)\psi(x)\]
yields the probability amplitude to complete the transition, while spending \(\tau\) seconds in the chosen region \([a,b]\). Expressing the Dirac delta \(\delta(t_{[a,b]}[x(t)]-\tau)\) as a Fourier integral allows one to rewrite (5) as
\[A(\phi,\psi,t|\tau)=(2\pi)^{-1}\int_{-\infty}^{\infty}d\lambda \exp(i\lambda\tau)\times \tag{6}\] \[A_{V(x)+\lambda\Theta_{[a,b]}(x)}(\phi,\psi,t),\]
where \(A_{V(x)+\lambda\Theta_{[a,b]}(x)}(\phi,\psi,t)\) is the same transition amplitude, but in the modified potential \(V(x)+\lambda\Theta_{[a,b]}(x)\). Note that there is a kind of uncertainty relation: to know \(\tau\) one needs to make the potential in \([a,b]\) uncertain. In the case of transmission (T) across the barrier [cf. Fig.1] the transition is between the same positive momentum states, \(|\psi\rangle=|\phi\rangle=|p\rangle\), over a long time, \(t\rightarrow\infty\), and \(A(\phi,\psi,t)\) is just the barrier's transmission amplitude, \(T(p,V)\). Thus, quantum transmission is characterised by a range of durations, each endowed with an amplitude
\[A_{T}(p,\tau)=(2\pi)^{-1}\int_{-\infty}^{\infty}d\lambda\exp(i \lambda\tau)\times \tag{7}\] \[T(p,V+\lambda\Theta_{[a,b]}).\]
In its quantum version, the clock (1) may be prepared in an initial state \(|G\rangle\), e.g., a Gaussian \(G(f)=\langle f|G\rangle=C\exp(-f^{2}/\Delta f^{2})\), centred at \(f=0\). The pointer would be displaced by \(\tau\), \(G(f)\to G(f-\tau)\), if the value of \(\tau\) were unique. With many such values, the final state of the clock is given by a sum over all displacements,
\[\Phi(f)=\int_{0}^{\infty}G(f-\tau)A_{T}(p,\tau)d\tau. \tag{8}\]
* Equation (8) defines the measurement briefly discussed in the Introduction.
* Equation (2) defines \(\tau\) as the _net duration_ spent by the particle in the barrier region.
* Since \(T(p,V)=\int A_{T}(p,\tau)d\tau\), different durations interfere and cannot be told apart without a clock.
* The amplitude of finding the particle in a final state \(|p\rangle\), and the clock's pointer in \(|f\rangle\) is the same as that of finding the particle in \(|p\rangle\) provided the durations spent in the barrier were restricted to a range \(f-\Delta f\lesssim\tau\lesssim f+\Delta f\). One can say that \(\tau\) has been measured to an accuracy \(\Delta f\).
The clock is the more accurate the smaller is \(\Delta f\). It is also more perturbing, and sending \(\Delta f\to 0\) would quench transmission, causing the particle to be reflected. In the opposite limit \(\Delta f\rightarrow\infty\), an individual clock reading \(f\) provides little information. However, using (8) one can calculate the mean pointer reading (see Appendix A)
\[\langle f\rangle\equiv\frac{\int f|\Phi(f)|^{2}df}{\int|\Phi(f)|^{2}df}\xrightarrow[\Delta f\to\infty]{}{\rm Re}[\overline{\tau}_{[a,b]}(p)]. \tag{9}\]
Figure 2: The same duration can be evaluated by launching two classical particles, one with and one without the potential, and comparing their positions once they have crossed the region \([a,b]\) [cf. Eq.(4)].
where \(\overline{\tau}_{[a,b]}(p)\) is the "complex time" of Ref.[21],
\[\overline{\tau}_{[a,b]}(p)=\frac{\int_{0}^{\infty}\tau A_{T}(p,\tau)d \tau}{\int_{0}^{\infty}A_{T}(p,\tau)d\tau}= \tag{10}\] \[-i\partial_{\lambda}\left[\ln T(p,V+\lambda\Theta_{[a,b]})\right] |_{\lambda=0}.\]
Evaluated with an alternating complex valued distribution, the "weak value" [17]\(\overline{\tau}_{[a,b]}(p)\) does not have the properties of a physical time interval, in agreement with the Uncertainty Principle [15] which forbids knowing the duration \(\tau\) in precisely the same sense it forbids knowing the slit chosen by the particle in a double-slit experiment [13].
Our main interest here is in the delay (if any), experienced by a particle scattered by a zero-range potential,
\[V(x)=U\Theta_{[a,b]}(x),\quad(b-a)\to 0, \tag{11}\] \[U\rightarrow\infty,\quad U(b-a)=\Omega=const.\]
As the region becomes ever more narrow, \(a\to b\), \(t_{[a,b]}[x(t)]\) can only tend to zero for any smooth path \(x(t)\). However, Feynman's paths are notoriously irregular [22], and a more rigorous justification will be given next. If the amplitude distribution for a free particle, \(V(x)=0\), \(A_{0}(p,\tau)\) is known, the distribution for a rectangular potential \(V(x)=U\Theta_{[a,b]}(x)\), barrier or well, takes a particularly simple form,
\[A_{T}(p,\tau)=\exp(-iU\tau)A_{0}(p,\tau). \tag{12}\]
\(A_{0}(p,\tau)\) can be computed by closing the integration contour in Eq.(7) (with \(V=0\)) in the upper half of the complex \(\lambda\)-plane, where \(T(p,\lambda\Theta_{[a,b]})\) has poles [30]. Only one pole contribution survives in the limit \((b-a)\to 0\), and using (12) one finds [30]
\[A_{T}(p,\tau)\underset{a\to b}{\longrightarrow}\tau_{0}^{-1} \exp[-i\Omega\tau/(b-a)]\times \tag{13}\] \[\exp(-\tau/\tau_{0}),\]
where \(\tau_{0}=m(b-a)/p=(b-a)/v\). Since \(|A_{T}(p,\tau)|\) tends to \(\delta(\tau)\), a measurement by a Larmor clock will always yield a zero duration for a very narrow potential [cf. Eqs. (7)-(10)]. Next we ask whether the same is true if one tries to deduce the same duration from the final position of a transmitted particle.
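This conclusion can be checked directly from Eq. (13): the first moment of \(A_{T}(p,\tau)\), normalised by its integral, equals \((b-a)/(v+i\Omega)\), and therefore vanishes with the width of the region. A short numerical verification (arbitrary parameter values, \(\hbar=m=1\)) is given below.

```python
# Complex Larmor time from the distribution of Eq. (13): it tends to zero as the
# width (b - a) of the region shrinks (units hbar = m = 1, parameters arbitrary).
import numpy as np

p, Omega = 1.0, 2.0
v = p                                          # m = 1, so v = p

for width in (1.0, 0.1, 0.01):                 # width = b - a
    tau0 = width / v
    tau = np.linspace(0.0, 50.0 * tau0, 200001)
    A = np.exp(-1j * Omega * tau / width - tau / tau0) / tau0      # Eq. (13)
    dt = tau[1] - tau[0]
    num = np.sum(0.5 * (tau[1:] * A[1:] + tau[:-1] * A[:-1])) * dt
    den = np.sum(0.5 * (A[1:] + A[:-1])) * dt
    # analytic value of the complex time is width / (v + i*Omega), which -> 0
    print(width, abs(num / den), abs(width / (v + 1j * Omega)))
```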
## IV The Eisenbud-Wigner-Smith (phase) time
To obtain a quantum analogue of the procedure, described by Eq.(4), one can replace classical particles by wave packets (WP), and evaluate the distance between their centres of masses (COMs) as shown in Fig.3.
At \(t=0\), both particles are prepared in the same state with a mean momentum \(p\), a width \(\Delta x\), and the COM at some \(x_{I}<0\), \(|x_{I}|>>\Delta x\). After scattering, the transmitted and the freely propagating state both lie to the right of the barrier. They are given by
\[\psi_{T}(x,t)=\int T(k,V)A(k,p)\times \tag{14}\] \[\exp(ikx-iE_{k}t)dk,\quad E_{k}=k^{2}/2m,\]
and
\[\psi_{0}(x,t)=\int A(k,p)\exp(ikx-iE_{k}t)dk= \tag{15}\] \[\exp(ipx-iE_{p}t)G_{0}(x,t),\]
where \(G_{0}(x,t)\) is an envelope of a width \(\Delta x\), initially peaked around \(x=x_{I}\). For the separation between their COMs, we have
\[\delta x_{com}^{T}\equiv\left\langle x(t)\right\rangle_{T}-\left \langle x(t)\right\rangle_{0}, \tag{16}\] \[\left\langle x(t)\right\rangle_{T,0}\equiv\frac{\int x|\psi_{T,0}| ^{2}dx}{\int|\psi_{T,0}|^{2}dx},\]
where, by Ehrenfest's theorem [23],
\[\left\langle x(t)\right\rangle_{0}=pt/m+x_{I}\equiv vt+x_{I}. \tag{17}\]
Throughout the paper we will consider what the authors of [6] called "completed events", i.e., the situation where both wave packets move freely to the right of the barrier, and the integration limits in Eq.(16) can be extended to \(\pm\infty\). There is no simple way to use Feynman path integral, as it was done
Figure 3: A quantum analogue of Fig.2: classical particles are replaced by wave packets, whose centres of mass are used as the reference points. The advancement, or delay, of the transmitted particle results from the interference between all virtual spatial shifts provided by the potential [cf. Eq.(18)].
in Eq.(5). However, expressions similar to Eqs.(6)-(10) are readily obtained by rewriting Eq.(14) as a convolution
\[\psi_{T}(x,t)=e^{ipx-iE_{p}t}\times \tag{18}\] \[\int_{-\infty}^{\infty}G_{0}(x-x^{\prime},t)\eta_{T}(x^{\prime},p )dx^{\prime},\]
\[\eta_{T}(x^{\prime},p)=\frac{e^{-ipx^{\prime}}}{2\pi}\int_{-\infty}^{\infty}T( k,V)e^{ikx^{\prime}}dk, \tag{19}\]
which conveniently separates the information of free motion, contained in \(G_{0}\), from the properties of the scattering potential which determine \(\eta_{T}(x^{\prime},p)\). If the spreading of the wave packet can be neglected, \(G_{0}(x-x^{\prime},t)\approx G_{0}(x-x^{\prime},t=0)\) and Eq. (4) defines the measurement different from the one described by Eq.(8).
* Noting that \(\exp(ipx)\eta_{T}(x^{\prime},p)\sim\exp[ip(x-x^{\prime})]\) allows one to _define_\(x^{\prime}\) as the spatial shift with which a particle with momentum \(p\) emerges from the barrier (whatever this might mean).
* Since \(T(p,V)=\int\eta_{T}(x^{\prime},p)dx^{\prime}\) different shifts interfere and cannot be told apart _a priori_.
* However, the amplitude of finding a particle, prepared at \(t=0\) in a wave packet state (15), at a location \(x\) is the same as that of finding a particle, prepared in a state \(|p\rangle\), in the same state \(|p\rangle\), provided the shifts imposed by the barrier were restricted to a range \(x-\Delta x\lesssim x^{\prime}\lesssim x+\Delta x\). Replacing the plane wave \(|p\rangle\) with a wave packet (15) of a width \(\Delta x\) and a mean momentum \(p\) allows one to measure \(x^{\prime}\) to accuracy \(\Delta x\).
If the measurement is accurate, \(\Delta f\to 0\), one recovers the result for the free motion, \(\psi_{T}(x,t)\approx\psi_{0}(x,t)\), since the high momenta which dominate the transmission are unaffected by the presence of the scattering potential. Tunnelling, as one may expect, is thereby destroyed. In the classical limit \(\hbar\to 0\), \(E(p)>V(x)\), the rapidly oscillating \(\eta_{T}(x^{\prime},p)\) develops a stationary region around \(x^{\prime}=\delta x^{cl}=x^{cl}(t)-x_{0}^{cl}(t)\) [cf. Eq.(3)] and the classical result is recovered [25].
The benefits of converting a transmission problem into a quantum measurement one are most evident when discussing the properties of the so-called phase time (see, e.g., [26]). If the \(G_{0}(x,0)\) is very broad (the spreading can be neglected), interference between different shifts is not destroyed, and the Uncertainty Principle allows one to determine only a "complex shift", \(\overline{x^{\prime}}\), similar to the "complex time", Eq. (10). Like \(\tau_{[a,b]}(p)\) in Eq.(10), it is obtained by averaging \(x^{\prime}\) with an alternating complex valued distribution (19),
\[\overline{x^{\prime}}_{T}(p)=\frac{\int_{-\infty}^{\infty}x^{\prime}\eta_{T}(x^{\prime},p)dx^{\prime}}{\int_{-\infty}^{\infty}\eta_{T}(x^{\prime},p)dx^{\prime}}=i\partial_{p}\left[\ln T(p,V)\right]. \tag{20}\]
Using Eq.(16) one finds (see Appendix A)
\[\delta x_{com}^{T} \xrightarrow[\Delta x\to\infty]{}\ \ \mbox{Re}\left[\overline{x^{\prime}}_{T}(p) \right]. \tag{21}\]
where \(\mbox{Re}\left[\overline{x^{\prime}}_{T}(p)\right]\) does not even have to lie in the region where \(\eta_{T}(x^{\prime},p)\neq 0\). One may be tempted to convert the spatial delays into temporal ones using \(\delta\tau=-x^{\prime}/v\). This is justified in a classically allowed case [cf. Eq.(3)] but is unwarranted in general. Replacing \(\delta x\) in the classical Eq.(4) by its quantum analogue (16) yields a "phase time" estimate for the duration spent in the barrier region,
\[\tau^{phase}(p)=(b-a)/v-\mbox{Re}\left[\overline{x^{\prime}}_{T}( p)\right]/v= \tag{22}\] \[\frac{b-a}{v}+\frac{1}{v}\frac{\partial\varphi_{T}(p,V)}{\partial p},\]
where \(\varphi_{T}(p,V)\) is the phase of the transmission amplitude, \(T(p,V)=|T(p,V)|\exp[i\varphi_{T}(p,V)]\), and \(v^{-1}\partial\varphi_{T}(p,V)/\partial p=\partial\varphi_{T}(p,V)/\partial E\) is the Eisenbud-Wigner-Smith time delay.
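For a given transmission amplitude, the derivative entering Eq.(22) is easily evaluated numerically. The short sketch below is purely illustrative (with \(\hbar=m=1\) and arbitrary parameter values): it compares a finite-difference estimate of \(\partial\varphi_{T}/\partial p\) for the zero-range barrier of Section V with the closed form \(\Omega/(p^{2}+\Omega^{2})\).

```python
import numpy as np

# Finite-difference check of the Eisenbud-Wigner-Smith term in Eq. (22) for the
# zero-range barrier of Section V, T(k) = 1 - i*Omega/(k + i*Omega).  Illustrative
# values only; hbar = m = 1, so v = p.

Omega, p, dp = 1.0, 1.5, 1e-6

def phase_T(k):
    return np.angle(1.0 - 1j * Omega / (k + 1j * Omega))

dphi_dp = (phase_T(p + dp) - phase_T(p - dp)) / (2 * dp)       # central difference
print("finite difference :", dphi_dp)
print("closed form       :", Omega / (p**2 + Omega**2))
print("time delay (1/v) dphi/dp :", dphi_dp / p)
```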
The phase time (22), related to the "weak value" of the spatial shift \(x^{\prime}\) in Eq.(20), has the same problem as its Larmor counterpart (9). It can be anomalously short in tunnelling, even though the barrier only delays the particle relative to free propagation [25]. It does not grow as expected if the barrier width is increased [27].
It is, however, different in one important aspect. While the Larmor time vanishes for a zero-range potential, \(\tau^{phase}(p)\) remains finite even as \((b-a)\to 0\). Next we study how and why this happens.
## V Gaussian wave packets in a zero-range potential
The transmission amplitude for a zero-range potential \(V(x)=\Omega\delta(x)\) (11) is well known to be (below we set the mass \(m\) to unity)
\[T(k,\Omega)=1-\frac{i\Omega}{k+i\Omega}. \tag{23}\]
For \(\Omega<0\) its single pole in the complex plane of the momentum lies on the positive imaginary \(k\)-axis, where it corresponds to a bound state. For \(\Omega>0\) the pole moves to the negative imaginary \(k\)-axis, and closing in Eq. (19) the contour of integration in the upper or lower half-plane, we obtain
\[\eta_{T}(x^{\prime},p)=\begin{cases}\delta(x^{\prime})-\Theta(-x^{\prime})| \Omega|\exp(-ipx^{\prime}+|\Omega|x^{\prime}),\\ \mbox{if}\quad\Omega>0\\ \delta(x^{\prime})-\Theta(x^{\prime})|\Omega|\exp(-ipx^{\prime}-|\Omega|x^{ \prime}),\\ \mbox{if}\quad\Omega<0,\end{cases} \tag{24}\]
where \(\Theta(x)\equiv 1\) for \(x>0\), and \(0\) otherwise.
Equations (24) provide a useful insight into how a potential acts on the incident particle. An incoming plane wave is multiplied by the transmission amplitude \(T(p,\Omega)\) and, for a barrier, \(\Omega>0\), we have
\[\exp(ipx)\to T(p,\Omega)\exp(ipx)= \tag{25}\] \[\exp(ipx)-\Omega\int_{-\infty}^{0}\exp(\Omega x^{\prime})\times\] \[\exp[ip(x-x^{\prime})]dx^{\prime}.\]
Instead of providing a _temporal_ delay for the transmitted particle (it could do so, e.g., by changing \(\exp(ipx-iE_{p}t)\) into \(\exp[ipx-iE_{p}(t-\tau_{p})]\)), a barrier acts as an "interferometer", which splits the incoming plane wave into components with different phase shifts, corresponding to possible _spatial_ delays, \(x^{\prime}<0\). These delays are still present when the width of the barrier goes to zero, provided the strength of the potential increases accordingly [cf. Eq.(11)]. For a well, a similar expression contains additional plane waves, spatially _advanced_ relative to free propagation. One classical feature survives in this purely quantum case. In some sense, a barrier tends to "delay" the particle, whereas a well tends to "speed it up".
To quantify these effects one can look at the motion of wave packets. As always, it is convenient to consider Gaussian states with a mean momentum \(p\) and a coordinate width \(\Delta x\), (\(\Delta k=2/\Delta x\))
\[A(k,p)=2^{-1/4}\pi^{-3/4}\Delta k^{-1/2}\times \tag{26}\] \[\exp\left[-(k-p)^{2}/\Delta k^{2}-i(k-p)x_{I}\right],\] \[G_{0}(x,t)=[2\Delta x^{2}/\pi\sigma_{t}^{4}]^{1/4}\exp[-(x-vt-x_{ I})^{2}/\sigma_{t}^{2}],\] \[\sigma_{t}\equiv(\Delta x^{2}+2it/m)^{1/2},\] \[|G_{0}(x,t)|=[2/\pi\Delta x_{t}^{2}]^{1/4}\exp[-(x-vt-x_{I})^{2}/ \Delta x_{t}^{2}],\] \[\Delta x_{t}\equiv(\Delta x^{2}+\Delta k^{2}t^{2}/m^{2})^{1/2}.\]
The analysis is even simpler in the dispersionless case, \(E_{k}=ck\), where the free amplitude undergoes no spreading,
\[\tilde{G}_{0}(x,t)=[2/\pi\Delta x^{2}]^{1/4} \tag{27}\] \[\times\exp[-(x-ct-x_{I})^{2}/\Delta x^{2}],\]
and the similarity between Eqs.(18) and (8) is yet more evident. The Larmor clock's pointer state is displaced as a whole without spreading [cf. Eq.(8)] because the kinetic energy, \(\lambda^{2}/2\mu\), is omitted both in the classical Hamiltonian (1) and in its quantum counterpart (usually, by assuming the pointer's mass \(\mu\) to be large). A WP with no kinetic energy would not propagate at all, but making \(E_{k}\) linear, rather than quadratic, in \(k\) has a similar effect.
## VI Centre-of-mass delay for transmission
Consider again Eq.(16). According to Heisenberg's Uncertainty Principle, a particle can have either a well defined position, or a well defined momentum. Therefore, much depends on the coordinate width \(\Delta x\) of the initial Gaussian WP, as well as on the dispersion law.
The dispersionless case (18) is simpler, since the envelope (27), however narrow, is displaced without distortion (see Appendix B). As \(\Delta x\to 0\), only the \(\delta\)-term in Eq.(24) needs to be taken into account, \(\psi_{T}(x,t)\approx\psi_{0}(x,t)\), and \(\delta x_{com}\to 0\). The contribution from the smooth part of \(\eta_{T}\) vanishes, because \(\int|\tilde{G}_{0}(x)|^{2}dx=1\) for any \(\Delta x\), and \(\int\tilde{G}_{0}(x)dx\sim(2\pi\Delta x^{2})^{1/4}\to 0\). (A similar situation occurs in quantum measurements, where a singular \(\delta\)-term in an amplitude distribution is responsible for Zeno effect [28].) This is an expected result, since for \(|k|\to\infty\), \(T(k,V)\to 1\), and the potential has no effect on most of the momenta contained in the initial wave packet. In the opposite limit, \(\Delta x\to\infty\), \(\Delta k\to 0\) the singular term in Eq.(24) can be neglected, and the "complex shift" in (20) is determined only by the smooth part of \(\eta_{T}(x^{\prime},p)\),
\[\overline{x^{\prime}}_{T}(p)=\frac{-\Omega}{p^{2}+\Omega^{2}}+i \frac{\Omega^{2}}{p(p^{2}+\Omega^{2})}\equiv \tag{28}\] \[\mbox{Re}\left[\overline{x^{\prime}}_{T}\right]+i\mbox{Im}\left[ \overline{x^{\prime}}_{T}\right].\]
Its real part, \(-\Omega/(p^{2}+\Omega^{2})\), yields the distance between the COMs of the two WPs. The imaginary part of \(\overline{x^{\prime}}_{T}(p)\) is related to the so-called "momentum filtering" effect, whereby the mean momentum of the transmitted WP is increased because higher momenta are transmitted more easily. Its significance is best illustrated in the case where dispersion
is present, as we will discuss next.
For \(E_{k}=k^{2}/2m\) we can consider scattering "completed" when the broadened transmitted WP lies sufficiently far to the right of the potential. The time needed for this can be estimated by following the motion of a free wave packet. We want the initial Gaussian WP placed as close as possible to the potential, e.g., at \(x_{I}=-K\Delta x\), \(K>1\). We also want to measure the COMs as soon as the scattering is completed, e.g., when the COM of the freely propagating WP lies several widths away on the other side of the barrier, e.g., at \(x_{F}=K\Delta x_{t}\), where \(\Delta x_{t}\) is defined in the last of Eqs.(26). For a mean momentum \(p\) the required time is easily found to be \(t(p,\Delta k,K)=2mp\Delta xK/(p^{2}-K^{2}\Delta k^{2})\). (Note that we cannot simply send \(\Delta x\to 0\), \(\Delta k\rightarrow\infty\), as was possible for \(E_{k}=ck\)). In the limit \(\Delta x\rightarrow\infty\) one finds (see Appendix C)
\[\delta x_{com}^{T}\approx\mbox{Re}\left[\overline{x^{\prime}}_{T}(p)\right]+\frac{\mbox{Im}\left[\overline{x^{\prime}}_{T}(p)\right]\Delta k^{2}}{2m}t(p,\Delta k,K). \tag{29}\]
The last fraction is clearly the excess mean velocity obtained through the momentum filtering, absent for \(E_{k}=ck\) where all momenta propagate with the same velocity \(c\). The last term in Eq.(29) behaves as \(\sim\Delta k\sim 1/\Delta x\), and was omitted in Eq.(21). It may need to be retained for not-too-broad WPs, as is shown in Fig.4 for \(K=3\).
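A direct numerical evaluation of the centre-of-mass shift, Eqs.(16)-(17), reproduces this behaviour. The Python sketch below is illustrative only (with \(\hbar=m=1\) and arbitrarily chosen parameters): it uses the momentum-space representation of Appendix C [Eq.(50)] to evaluate \(\delta x_{com}^{T}\) for the zero-range barrier and compares it with the broad-packet estimate of Eq.(29); the agreement improves as \(\Delta x\) grows, in line with Fig.4.

```python
import numpy as np

# Centre-of-mass shift of the transmitted packet, Eqs. (16)-(17), for the zero-range barrier,
# evaluated with the momentum-space representation of Appendix C [Eq. (50)] and compared with
# the broad-packet estimate, Eq. (29).  Illustrative sketch, hbar = m = 1, parameters arbitrary.

Omega, p, K = 1.0, 1.0, 3.0

def delta_x_com(dx0):
    dk0 = 2.0 / dx0                                        # Delta_k, cf. Eq. (26)
    xI = -K * dx0
    t = 2 * p * dx0 * K / (p**2 - K**2 * dk0**2)           # time of "completed" scattering
    k = np.linspace(p - 4.5 * dk0, p + 4.5 * dk0, 4001)
    T = 1.0 - 1j * Omega / (k + 1j * Omega)                # Eq. (23)
    w = np.abs(T)**2 * np.exp(-2 * (k - p)**2 / dk0**2)    # |T|^2 |A|^2, cf. Eq. (26)
    avg = lambda f: np.trapz(f * w, k) / np.trapz(w, k)
    phi = np.unwrap(np.angle(T))                           # phase of T(k)
    x_T = xI + avg(k) * t - avg(np.gradient(phi, k))       # Eq. (50), with v(k) = k
    return x_T - (p * t + xI)                              # subtract the free COM, Eq. (17)

re_xp = -Omega / (p**2 + Omega**2)                         # Re part of Eq. (28)
im_xp = Omega**2 / (p * (p**2 + Omega**2))                 # Im part of Eq. (28)
for dx0 in (10.0, 20.0, 40.0, 80.0):
    dk0 = 2.0 / dx0
    t = 2 * p * dx0 * K / (p**2 - K**2 * dk0**2)
    est = re_xp + im_xp * dk0**2 * t / 2                   # Eq. (29)
    print(f"Delta x = {dx0:5.1f}:  exact {delta_x_com(dx0):+.4f}   Eq. (29) {est:+.4f}")
```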
Finally, we find the COM of the transmitted WP delayed by a zero-range barrier (\(\delta x_{T}\approx\mbox{Re}\left[\overline{x^{\prime}}_{T}\right]<0\) if \(\Omega>0\)), and advanced by a zero-range well (\(\delta x_{T}\approx\mbox{Re}\left[\overline{x^{\prime}}_{T}\right]>0\) if \(\Omega<0\)). This is not what happens in general, e.g., for a rectangular [14] or an Eckart barrier [25], where the COM of the greatly reduced transmitted WP is actually advanced. The reason for this is that in all such cases \(T(k,V)\) has a large (infinite) number of poles in the complex \(k\)-plane, there are many exponential terms in the r.h.s. of Eqs.(24), and the resulting \(\eta_{T}(x^{\prime},p)\) has a complicated form [25]. Although it vanishes for \(x^{\prime}>0\), Gaussian envelopes in Eq.(16) may interfere constructively in a small region of \(x>x_{0}+vt\), and cancel each other elsewhere. This behaviour cannot, of course, be reproduced in the much simpler case studied here.
## VII Centre-of-mass delay for reflection
A similar analysis can be applied in the case of a particle, reflected (R) by a potential \(V(x)\), contained between \(x=a\) and \(x=b\). The reflected WP is given by
\[\psi_{R}(x,t)=\int R(k,V)A(k,p)\times \tag{30}\] \[\exp(-ikx-iE_{k}t)dk,\]
where \(R(k,V)\) is the reflection amplitude, satisfying \(|T(p,V)|^{2}+|R(p,V)|^{2}=1\). It can also be written in a form similar to (18)
\[\psi_{R}(x,t)=e^{-ipx-iE_{p}t}\times \tag{31}\] \[\int_{-\infty}^{\infty}G_{0}(-x-x^{\prime},t)\eta_{R}(x^{\prime}, p)dx^{\prime},\]
where
\[\eta_{R}(x^{\prime},p)=\frac{e^{-ipx^{\prime}}}{2\pi}\int_{-\infty}^{\infty}R( k,V)e^{ikx^{\prime}}dk, \tag{32}\]
and \(G_{0}(-x-x^{\prime},t)\) (note \(x\rightarrow-x\)) is the envelope of the mirror image of the free WP with respect to the origin, which is the same (except for a minus sign) as the envelope of a WP reflected by an infinite potential wall at \(x=0\). One can still compare positions of the centres of mass, with and without the potential, by defining
\[\delta x_{com}^{R}=\left\langle x(t)\right\rangle_{R}+vt+x_{I}, \tag{33}\] \[\left\langle x(t)\right\rangle_{R}\equiv\frac{\int x|\psi_{R}|^{2 }dx}{\int|\psi_{R}|^{2}dx},\]
Figure 4: Centre-of-mass delay for transmission by a zero-range barrier, \(\Omega>0\), with and without dispersion vs. WP’s width \(\Delta x\). Shown by the dashed line is the delay in the limit \(\Delta x\rightarrow\infty\) [cf. Eq.(29)]. The inset shows the initial and final wave packets for \(m\Omega\Delta x=50\) and \(E_{k}=k^{2}/2m\). The parameters used are: \(\Omega/c=mc/p=1\), \(x_{I}/\Delta x=3\), and \(m\Omega^{2}t(p,\Delta k,3)=6m\Omega\Delta x/[1-36(m\Omega\Delta x)^{2}]\).
There is one complication not encountered in the case of transmission. Consider a potential \(V_{s}(x)=V(x-s)\), obtained by displacing the original barrier or well by a distance \(s\). Such a displacement has no effect on the transmission amplitude, \(T(p,V_{s})=T(p,V)\), but the reflection amplitude acquires an extra phase, \(R(k,V_{s})=\exp(2iks)R(k,V)\), and \(\eta_{R}(x^{\prime},p)\) changes into \(\eta_{R}(x^{\prime}-2s,p)\). In other words, one needs to decide where to put the potential before making the comparison with free propagation. The ambiguity can be resolved by always placing the left edge of the potential at the origin, \(a=0\) (see Fig.5), and considering the reflected particle _delayed_ by the potential if \(\delta x_{com}^{R}>0\), or _advanced_ by it, if \(\delta x_{com}^{R}<0\).
With this agreed, a classical reflected particle can only be delayed, since it either bounces off the edge of the potential at \(x=0\), or has to travel further to the right before making a U-turn. In the quantum case this is not always so, as we will show next. The reflection amplitude of a zero-range potential \(V(x,\Omega)=\Omega\delta(x)\) is given by
\[R(k,\Omega)=\frac{-i\Omega}{k+i\Omega}, \tag{34}\]
and
\[\eta_{R}(x^{\prime},p)=\begin{cases}-\Theta(-x^{\prime})|\Omega|\exp(-ipx^{ \prime}+|\Omega|x^{\prime}),\\ \text{if}\quad\Omega>0\\ -\Theta(x^{\prime})|\Omega|\exp(-ipx^{\prime}-|\Omega|x^{\prime}),\\ \text{if}\quad\Omega<0.\end{cases} \tag{35}\]
[Note the absence of a \(\delta\)-term, since \(R(k,\Omega)\to 0\) as \(|k|\rightarrow\infty\).] According to our convention, a reflected particle with a momentum \(p\) would be delayed by a zero-range barrier (as it would be in a classical case), and advanced by a zero-range well (a purely quantum effect, since there is no reflection from a well in the classical limit).
As was shown in the previous Section, without spreading [cf. Eq.(27)], a narrow wave packet crosses a barrier or a well almost without reflection. Inserting (27) and (35) into Eq.(31) for the small reflected part we find (the upper sign is for a barrier) \(|\psi_{R}(x,t)|^{2}\sim\Theta(\pm x\mp ct\mp x_{I})\Delta x\exp[-2|\Omega|(\pm x \mp ct\mp x_{I})]\), and
\[\delta x_{com}^{R}\xrightarrow[\Delta x\to 0]{}1/2\Omega, \tag{36}\]
as shown in Fig. 6.
In the opposite limit of a broad WP we find
\[\delta x_{com}^{R}\xrightarrow[\Delta x\rightarrow\infty]{}\text{Re}\left[\overline{x^{\prime}}_{R}(p)\right]=\frac{\Omega}{p^{2}+\Omega^{2}}, \tag{37}\]
which is valid both with and without dispersion [cf. Eqs.(26) and (27)]. In all cases the reflected particle is delayed if \(\Omega>0\), and advanced if \(\Omega<0\).
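The broad-packet limit in Eq.(37) can be checked numerically: with the convention of Eq.(33) and Eq.(52) of Appendix C, the shift reduces in this limit to the momentum derivative of the phase of \(R(k,\Omega)\). The sketch below (illustrative values only, \(\hbar=m=1\)) confirms the closed form \(\Omega/(p^{2}+\Omega^{2})\) for both signs of \(\Omega\).

```python
import numpy as np

# Check of the broad-packet limit, Eq. (37): with the convention of Eq. (33) and Eq. (52)
# of Appendix C, the reflection shift reduces to d(phi_R)/dp, evaluated here by a central
# difference for the zero-range amplitude R(k, Omega) of Eq. (34).  Illustrative values only.

p, dp = 1.0, 1e-6
for Omega in (+1.0, -1.0):                             # barrier / well
    phi_R = lambda k: np.angle(-1j * Omega / (k + 1j * Omega))
    shift = (phi_R(p + dp) - phi_R(p - dp)) / (2 * dp)
    print(f"Omega = {Omega:+.1f}:  dphi_R/dp = {shift:+.4f}   "
          f"Omega/(p^2 + Omega^2) = {Omega / (p**2 + Omega**2):+.4f}")
```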
Figure 5: The position of a quantum particle, reflected by a potential \(V(x)\), is compared with that of a free particle launched in the opposite direction from \(-x_{I}>0\). The particle is said to be _delayed_ by the potential if its COM lies to the right of the COM of the freely propagating WP, or _advanced_ if the opposite is true [cf. Eq.(33)].
Figure 6: Centre-of-mass delay for reflection by a zero-range barrier, \(\Omega>0\), in the absence of dispersion, \(E_{k}=ck\), [cf. Eq.(27)] vs. WP’s width \(\Delta x\). Shown by the dashed line is the delay in the limit \(\Delta x\rightarrow\infty\) [cf. Eq.(37)]. The inset shows the small reflected WP (solid) and its limiting form for \(\Delta x\to 0\) (dashed). The parameters used are: \(\Omega/c=mc/p=1\), \(m\Omega x_{I}=-50\), and \(m\Omega ct=100\).
## VIII Centre-of-mass delay in elastic scattering
Before concluding we revisit the case of a particle scattered by a short-range spherically symmetric potential \(V(r)\) contained between \(r=0\) and \(r=b\). (One can also think of a collision between two particles interacting via \(V(r)\), \(r=|\vec{r}_{1}-\vec{r}_{2}|\)). For zero angular momentum, \(L=0\), one can prepare a spherically symmetric wave packet \(\psi(r,t=0)=\int A(k)\exp[-ik(r-r_{I})]dk=\exp(-ipr)G_{0}(r,t=0)\), which converges on the scattering potential. The state \(\psi(r,t)\) satisfies a radial Schrödinger equation with a boundary condition \(\psi(r=0,t)=0\), and one has the previously studied case of reflection, with an additional infinite wall added at \(r=0\) (see Fig.7).
Proceeding as before, we write
\[\psi(r,t)=e^{ipr-iE_{p}t}\times \tag{38}\] \[\int_{-\infty}^{\infty}G_{0}(r-r^{\prime},t)\eta(r^{\prime},p)dr^{ \prime},\] \[\eta(r^{\prime},p)=\frac{e^{-ipr^{\prime}}}{2\pi}\int_{-\infty}^ {\infty}S(k,V)e^{ikr^{\prime}}dk,\]
where \(S(k,V)\), \(|S(k,V)|=1\), is the scattering matrix element. For a zero-range potential, obtained in the limit \(V(r)=U\Theta_{[0,b]}\), \(b\to 0\), \(U\rightarrow\infty\), \(Ub^{2}\to const\)[5], \(S(k,V)\) is given by
\[S(k,\alpha)=-\frac{k+i\alpha^{-1}}{k-i\alpha^{-1}}=-\left[1+\frac{2i\alpha^{-1 }}{k-i\alpha^{-1}}\right], \tag{39}\]
where \(\alpha\) is the scattering length [5], positive for a well, \(U<0\), and negative for a barrier, \(U>0\). The amplitude distribution \(\eta(r^{\prime},p)\) becomes
\[\eta(r^{\prime},p)=\begin{cases}-\delta(r^{\prime})+2\Theta(-r^{\prime})| \alpha|^{-1}\times\\ \exp(-ipr^{\prime}+r^{\prime}/|\alpha|),\text{if}\quad\alpha<0\\ -\delta(r^{\prime})+2\Theta(r^{\prime})|\alpha|^{-1}\times\\ \exp(-ipr^{\prime}-r^{\prime}/|\alpha|),\text{if}\quad\alpha>0,\end{cases} \tag{40}\]
In the limit of a broad nearly monochromatic WP we find
\[\delta r_{com}\xrightarrow[\Delta x\to\infty]{}\text{Re}\left[\overline{r^{\prime}}\right]=\frac{2\alpha}{1+k^{2}\alpha^{2}} \tag{41}\]
where the real valued "complex shift"
\[\overline{r^{\prime}}=\frac{\int r^{\prime}\eta(r^{\prime},p)dr^{\prime}}{\int\eta(r^{\prime},p)dr^{\prime}}=-\frac{\partial\varphi(p,\alpha)}{\partial p} \tag{42}\]
equals the Eisenbud-Wigner-Smith time delay, multiplied by the particle's velocity \(v\). There is no momentum filtering [cf. Eqs.(28) and (29)], since all momenta are perfectly reflected at the origin. As in the previous examples, a zero-range well (\(\alpha>0\)) advances the scattered particle, while a zero-range barrier delays it.
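The same finite-difference check applies to Eqs.(41)-(42): differentiating the phase of the zero-range \(S\)-matrix, Eq.(39), must return \(2\alpha/(1+p^{2}\alpha^{2})\). The sketch below (illustrative values only) verifies this for a well (\(\alpha>0\)) and a barrier (\(\alpha<0\)).

```python
import numpy as np

# Check of Eqs. (41)-(42) for the zero-range S-matrix of Eq. (39): minus the momentum
# derivative of the scattering phase equals 2*alpha/(1 + p^2 alpha^2).  Illustrative values.

p, dp = 1.0, 1e-6
for alpha in (+0.7, -0.7):                             # well / barrier (scattering length)
    phi_S = lambda k: np.angle(-(k + 1j / alpha) / (k - 1j / alpha))   # phase of Eq. (39)
    shift = -(phi_S(p + dp) - phi_S(p - dp)) / (2 * dp)                # Eq. (42)
    print(f"alpha = {alpha:+.1f}:  -dphi/dp = {shift:+.4f}   "
          f"2 alpha/(1 + p^2 alpha^2) = {2 * alpha / (1 + p**2 * alpha**2):+.4f}")
```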
## IX Conclusions
Classically, one can evaluate the duration spent by a particle in a given region of space either by measuring it directly by means of a clock, or by measuring the spatial shift, the distance between the particle and its freely moving counterpart. Quantally, both quantities are distributed, and each value is endowed with a complex valued probability amplitude, rather than with a probability itself. One faces the usual dilemma. An accurate measurement perturbs the particle, and destroys the studied transition. An inaccurate one may leave the interference almost intact, but the sought value must remain indeterminate due to the Uncertainty Principle [15]. The authors of [5] are correct in saying that the Larmor clock measurements are related to the duration spent in the region [cf. Eq.(5)]. Measurements relying on the transmitted particle's position determine something else, best described in terms of the virtual shifts imposed on the particle by the potential [cf. Eq.(25)]. Indeed, the range of possible durations shrinks as the region becomes smaller [cf. Eq.(13)], while the range of shifts does not [cf. Eqs.(24), (35) and (40)]. What the authors of [5] appeared to have
Figure 7: A quantum particle with zero angular momentum, \(L=0\), is scattered by a spherically symmetric potential \(V(r)\). The boundary condition at the origin is equivalent to putting an infinite wall at \(r=0\).
failed to notice is that a weakly perturbing Larmor clock can only measure a "complex time" [21], essentially the first moment of an alternating amplitude distribution [cf. Eq.(10)]. The Uncertainty Principle warns against treating this quantity, or its parts, as physical time intervals [13]. The case of a zero-range potential is, however, somewhat special. There is a single available duration, \(\tau=0\) [cf. Eq.(13)], and no interference to destroy, no matter how accurate the clock is. Both quantum and classical particles cannot spend a finite duration in an infinitely small volume.
Something different occurs if one uses the centre of mass of a broad wave packet as a "pointer" set up to measure the shift experienced by the particle in a potential. The measured quantity is, indeed, the said shift, yet the measurement is highly inaccurate, and there is a range of possible values even in the zero-range limit (21). The observed displacement of the "pointer", \(\delta x_{com}\) coincides with the real part of a "complex shift" [cf. Eq.(20)]. One cannot know which of the possible shifts actually occurred, just as one never knows which of the two holes was chosen by an electron in the double-slit experiment [15].
There is a difference between observing a phenomenon and explaining it. An experimentalist is perfectly entitled to say "I see that with a very narrow high barrier in place, a particle arrives at the detector on average \(\left|\delta x_{com}\right|\)/\(v\) seconds later than it would without the barrier." He/she cannot, however, say that this happens because the particle has spent extra time in the barrier region. Another experimentalist may want to check this claim by using a Larmor clock, and find this time to be zero.
In brief, we have used the techniques of measurement theory to explain what makes the two approaches to the tunnelling time problem so different. Our analysis can also be applied to other problems, such as reflection of particles, and three dimensional elastic scattering.
## Acknowledgement
We thank Grant PID2021-126273NB-I00 funded by MCIN/AEI/ 10.13039/501100011033 and by "ERDF A way of making Europe". We acknowledge financial support from the Basque Government Grant No. IT1470-22. MP acknowledges support from the Spanish Agencia Estatal de Investigacion, Grant No. PID2019-107609GB-I00.
## Appendix A
Consider, in general, complex valued functions \(\eta(x^{\prime})\) and \(G(x^{\prime})\), and evaluate an average
\[\left<x\right>\equiv\frac{\int dxx\left|\int dx^{\prime}G(x-x^{\prime})\eta(x ^{\prime})\right|^{2}}{\int dx\left|\int dx^{\prime}G(x-x^{\prime})\eta(x^{ \prime})\right|^{2}} \tag{16}\]
in the limit where \(G(x)\) is very broad, and can be approximated by
\[G(x-x^{\prime})\approx G(x)-G^{\prime}(x)x^{\prime} \tag{17}\]
for all relevant \(x^{\prime}\). To the leading order in \(G^{\prime}(x)=\partial_{x}G(x)\) one finds
\[\left<x\right>\approx\int x|G(x)|^{2}dx+\mbox{Re}[\overline{x^{ \prime}}]+2\mbox{Im}[\overline{x^{\prime}}] \tag{18}\] \[\times\int x\mbox{Im}\left[G^{*}(x)G^{\prime}(x)\right]dx\]
where the last term vanishes if \(G(x)\) is real. In particular, if \(G(x)\) is replaced by \(\tilde{G}_{0}(x,t)\), and \(\eta(x^{\prime})=\eta_{T}(x^{\prime},p)\) is given by Eq.(19), in the limit \(\Delta x\rightarrow\infty\) we have [the spreading of the initial WP can be neglected if the WP is very broad, \(\Delta k=2/\Delta x\to 0\), \(G_{0}(x,t)\approx G_{0}(x-vt-x_{I})\)]
\[\left<x\right>_{T}\approx vt+x_{I}+\mbox{Re}[\overline{x^{\prime}}_{T}]. \tag{19}\]
Subtracting \(\left<x\right>_{0}\) given by Eq.(17) yields Eq.(21).
## Appendix B
Consider, for simplicity, a symmetric short range potential, \(V(x)=V(-x)\), \(V(x)=0\) for \(|x|>a\). Prepare an initial wave packet to the left of the potential
\[\left<x|\psi(0)\right>=\int_{-\infty}^{\infty}A(k)\exp(ikx)dk \tag{20}\]
Define the scattering states \(|\phi_{L}(k)\rangle\) where the particle is incident from the left,
\[\left<x|\phi_{L}(k)\right>=\begin{cases}(2\pi)^{-1/2}[\exp(ikx)+R(k)\exp(-ikx)] \\ \mbox{for}\quad x<-a,\\ (2\pi)^{-1/2}T(k)\exp(ikx)\\ \mbox{for}\quad x>a.\end{cases} \tag{21}\]
Define the scattering states \(|\phi_{R}(k)\rangle\) where the particle is incident from the right,
\[\left<x|\phi_{R}(k)\right>=\left<-x|\phi_{L}(k)\right> \tag{22}\]
Expand \(|\psi_{0}\rangle\) in the complete orthogonal set \(\{|\phi_{L}(k)\rangle,|\phi_{R}(k)\rangle\}\),
\[|\psi_{0}\rangle=\int_{0}^{\infty}[\langle\phi_{L}(k)|\psi_{0}\rangle|\phi_{L}(k)\rangle+\langle\phi_{R}(k)|\psi_{0}\rangle|\phi_{R}(k)\rangle]dk. \tag{47}\]
Using \(T(-k)=T^{*}(k)\), \(R(-k)=R^{*}(k)\), and \(|T(k)|^{2}+|R(k)|^{2}=1\), we find that, as time progresses, the evolved state \(|\psi(t)\rangle\) takes the form
\[\langle x|\psi(t)\rangle = \int_{-\infty}^{\infty}T(k)A(k)\exp(ikx)\] \[\times \exp(-iE_{k}t)dk,\quad x>a,\]
and
\[\langle x|\psi(t)\rangle=\int_{-\infty}^{\infty}R(k)A(k)\times \tag{48}\] \[\exp(-ikx)\exp(-iE_{k}t)dk+\] \[\int_{-\infty}^{\infty}A(k)\exp(ikx)\times\] \[\exp(-iE_{k}t)dk,\quad x<-a.\]
The last term in (48) is the freely propagating initial WP. It will eventually leave the \(x<-a\) region, and can be neglected. The Fourier transforms \((k\to x)\) of the two expressions above can be rewritten as Eqs.(18) and (31), which are valid even if the initial WP has components with \(k<0\).
## Appendix C
Suppose \(\psi(x)=\int F(k)\exp(ikx)dk\), and we want to evaluate \(\langle x\rangle\equiv\int x|\psi(x)|^{2}dx/\int|\psi(x)|^{2}dx\). Since \(x\psi(x)=-i\int F(k)\partial_{k}[\exp(ikx)]dk=i\int\exp(ikx)\partial_{k}F(k)dk\) we find
\[\langle x\rangle=-\frac{\int\mbox{Im}[F^{*}(k)\partial_{k}F(k)]dk}{\int|F(k) |^{2}dk} \tag{49}\]
For the completed transmission we have \(F(k)=T(k)A(k)\exp(-iE_{k}t)\), and we obtain
\[\langle x\rangle_{T}=x_{I}+\langle v(k)\rangle_{T}t-\left\langle\frac{ \partial\varphi_{T}(k,V)}{\partial k}\right\rangle_{T}, \tag{50}\]
where \(\left\langle f(k)\right\rangle_{T}\equiv\frac{\int f(k)|T(k)|^{2}|A(k)|^{2} dk}{\int|T(k)|^{2}|A(k)|^{2}dk}\), \(v(k)=\partial_{k}E_{k}\), and \(T(k,V)=|T(k,V)|\exp(i\varphi_{T}(k,V))\). Furthermore, evaluating \(\left\langle v(k)-p/m\right\rangle_{T}\) for \(E_{k}=k^{2}/2m\) and a Gaussian WP (26) yields
\[\left\langle v(k)-p/m\right\rangle_{T}=\left\langle\frac{\partial\ln|T(k,V)|}{\partial k}\right\rangle_{T}\Delta k^{2}/2m=\left\langle\mbox{Im}\left[\overline{x^{\prime}}_{T}(k)\right]\right\rangle_{T}\Delta k^{2}/2m. \tag{51}\]
It is readily seen that in the limit \(\Delta x\rightarrow\infty\), \(|A(k)|^{2}\rightarrow\delta(k-p)\) and Eq.(50) agrees with Eq.(51).
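Equation (49) is easy to test numerically: for the freely moving Gaussian packet of Eq.(26) it must reproduce \(\left\langle x(t)\right\rangle_{0}=pt/m+x_{I}\) of Eq.(17). A minimal sketch, with \(\hbar=m=1\) and arbitrary illustrative values:

```python
import numpy as np

# Consistency check of Eq. (49): for the freely moving Gaussian packet of Eq. (26) the
# momentum-space formula must reproduce <x(t)>_0 = p t/m + x_I of Eq. (17).
# Illustrative sketch with hbar = m = 1 and arbitrary parameter values.

p, dk0, xI, t = 1.0, 0.2, -10.0, 15.0
k = np.linspace(p - 10 * dk0, p + 10 * dk0, 8001)

F = np.exp(-(k - p)**2 / dk0**2 - 1j * (k - p) * xI) * np.exp(-1j * 0.5 * k**2 * t)
num = -np.trapz(np.imag(np.conj(F) * np.gradient(F, k)), k)
den = np.trapz(np.abs(F)**2, k)

print("Eq. (49):", num / den)      # numerically close to the exact value below
print("Eq. (17):", p * t + xI)     # = 5.0 for these parameters
```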
For reflection we have \(\exp(ikx)\rightarrow\exp(-ikx)\), and \(T(k,V)\to R(k,V)=|R(k,V)|\exp(i\varphi_{R}(k,V))\). A similar calculation yields
\[\left\langle x\right\rangle_{R}=-x_{I}-\left\langle v(k)\right\rangle_{R}t+ \left\langle\frac{\partial\varphi_{R}(k,V)}{\partial k}\right\rangle_{R}, \tag{52}\]
where \(\left\langle f(k)\right\rangle_{R}\equiv\frac{\int f(k)|R(k)|^{2}|A(k)|^{2}dk}{ \int|R(k)|^{2}|A(k)|^{2}dk}\). We note that similar but not identical equations were obtained, e.g., in [6].
|
2309.15593 | Exciton-Polariton Condensates: A Fourier Neural Operator Approach | Advancements in semiconductor fabrication over the past decade have catalyzed
extensive research into all-optical devices driven by exciton-polariton
condensates. Preliminary validations of such devices, including transistors,
have shown encouraging results even under ambient conditions. A significant
challenge still remains for large scale application however: the lack of a
robust solver that can be used to simulate complex nonlinear systems which
require an extended period of time to stabilize. Addressing this need, we
propose the application of a machine-learning-based Fourier Neural Operator
approach to find the solution to the Gross-Pitaevskii equations coupled with
extra exciton rate equations. This work marks the first direct application of
Neural Operators to an exciton-polariton condensate system. Our findings show
that the proposed method can predict final-state solutions to a high degree of
accuracy almost 1000 times faster than CUDA-based GPU solvers. Moreover, this
paves the way for potential all-optical chip design workflows by integrating
experimental data. | Surya T. Sathujoda, Yuan Wang, Kanishk Gandhi | 2023-09-27T11:47:26Z | http://arxiv.org/abs/2309.15593v2 | # Exciton-Polariton Condensates: A Fourier Neural Operator Approach
###### Abstract
Advancements in semiconductor fabrication over the past decade have catalyzed extensive research into all-optical devices driven by exciton-polariton condensates. Preliminary validations of such devices, including transistors, have shown encouraging results even under ambient conditions. A significant challenge still remains for large scale application however: the lack of a robust solver that can be used to simulate complex nonlinear systems which require an extended period of time to stabilize. Addressing this need, we propose the application of a machine-learning-based Fourier Neural Operator approach to find the solution to the Gross-Pitaevskii equations coupled with extra exciton rate equations. This work marks the first direct application of Neural Operators to an exciton-polariton condensate system. Our findings show that the proposed method can predict final-state solutions to a high degree of accuracy almost 1000 times faster than CUDA-based GPU solvers. Moreover, this paves the way for potential all-optical chip design workflows by integrating experimental data.
## 1 Introduction
The rapid advancement of exciton-polariton condensates [1] has led to the emergence of a wide range of all-optical devices, such as switches [2; 3; 4; 5; 6; 7; 8], analogue simulators [9; 10], neuromorphic computing [11; 12; 13; 14; 15], transistors [16; 17; 18], etc. The microcavity exciton-polariton (hereafter polariton) [19] system consists of two main strongly coupled components: quantum well (QW) excitons and photons trapped in the microcavity. The former are electron-hole pairs bound by Coulomb interactions, frequently observed in semiconductor QWs, while the latter are confined by the distributed Bragg reflectors (DBRs), which create a stopband in the refractive spectrum of the microcavity structure [19], as illustrated in Figure 1. In this system, the exciton lifetime significantly surpasses that of the photons in the microcavity. Thus, these high-quality DBRs, engineered to extend the photon lifetime, ensure the preservation of the strong-coupling condition, where the light-matter coupling strength surpasses the decay rate of any component within the polaritonic system [20].
Polaritons, often described as quasiparticles with characteristics that are half-light and half-matter, have an impressively low effective mass due to their photonic components. This mass is approximately five orders of magnitude less than that of a bare electron. Thus, the necessary temperature for condensation is approximately \(10\,\mathrm{K}\) for inorganic semiconductor materials [1], which contrasts
significantly with atomic condensates such as Rubidium-87, which require temperatures around \(170\,\mathrm{nK}\)[21]. Notably, using organic materials, polariton condensation can be realized even at room temperature [22, 23, 24]. In polariton condensates, due to their excitonic component, the predominant repulsive interaction between polaritons results in a blue-shifting effect on the potential and gives rise to rich nonlinear effects [25, 26]. In essence, it is the photonic (light-like) component of the polaritons that confers their notably light effective mass suitable for condensates, while the excitonic (matter-like) component is responsible for the observed nonlinear effects.
The emergence and adoption of all-optical devices necessitates the development of precise and adaptable simulation tools. Just as Electronic Design Automation (EDA) played a pivotal role in the evolution of chip design, there is a pressing need for emulators that can capture the rich nonlinear characteristics inherent in optical devices. A case in point is the groundbreaking development of an optically-activated transistor switch. Rooted in advancements in polariton condensates, this transistor was initially conceptualized for operations at cryogenic temperatures [16]. However, strides in research have broadened its applicability, making it feasible even under ambient conditions [17; 18]. These rapid advancements emphasize the need for comprehensive simulations in the design of large-scale logic gates. Hence, crafting a robust theoretical framework becomes imperative, one that can effectively allow manipulation of the potential profile via specific input configurations.
To this aim, a robust solution to the Gross-Pitaevskii equation (GPE) which is used to theoretically describe the condensates is required [27]. While GPU-based GPE solvers tailored for both uniform [28, 29, 30, 31] and non-uniform meshes [32] currently exist, we aim to adopt an even more efficient machine-learning (ML) based solver that is intended to notably accelerate the computational process, especially in the context of designing extensive transistor networks.
Thus far, a wide variety of ML architectures have been proposed to approximate solutions to general classes of PDEs. These range from convolution-based methods, such as variants of the U-Net architecture [33], to operator-learning methods such as Deep Operator Networks (DeepONets [34]), Graph Neural Operators [35], Multipole Graph Neural Operators [36], Fourier Neural Operators [37] and Physics-informed Neural Operators [38]. Though convolution-based methods have shown promise regarding accuracy of future state predictions, they fail to scale in compute efficiency to larger systems, even with recent advancements [39]. Operator-learning methods overcome this bottleneck by learning mappings between infinite-dimensional spaces, allowing them to predict solutions at different discretisations at a similar speed.
In this work, we study the application of the Fourier Neural Operator (FNO) architecture to approximate solutions to the GPE. We are interested in this specific variant of Neural Operator as its mathematical formulation relates very closely to the Split-step Fourier Method (SSFM), which is the numerical method used to solve the GPE in this instance. Moreover, FNOs have shown widespread success in application to many other areas of physics and engineering [40, 41, 42, 43].
The structure of this paper is outlined as follows: In Section 2, we provide an introduction to exciton-polariton condensates and explain the physical quantities employed within the GPE. Section 3 introduces the Neural Operator architecture, detailing its compatibility with the GPE, especially when integrated with the rate equation. This section further elaborates on data generation and preparation methods. Our findings are presented in Section 4, future work in Section 5 and the general conclusions are given in Section 6.
Figure 1: Sketch of the distributed Bragg reflectors (DBRs) microcavity integrated with quantum wells (QWs). The DBR is comprised of \(6\) bilayers of alternating low (depicted in light blue) and high (shown in dark green) refractive index materials. Also, 6 QWs (represented in black) with their respective barriers (in gray) are depicted. The substrate, located at the base of the structure, is presented in dark yellow.
## 2 Background
### Exciton-polariton condensates
Polariton condensation, being a non-equilibrium process, starts with nonresonant excitation (see the right column of Figure 2). The hot electron-hole plasma initially undergoes rapid cooling, primarily driven by emission of longitudinal-optical phonons, forming excitons at the high-momentum part of the lower polariton (LP) branch. Subsequent interactions between excitons and acoustic phonons, as well as exciton-exciton scatterings, lead polaritons to a transitional 'bottleneck region' within the LP branch, as illustrated in Figure 2. It is worth noting that the terminology 'polariton' instead of 'exciton' is used here because at very high momenta on the LP branch the photonic components of the polaritons are barely present, and the quasiparticles then relax towards the bottleneck region where more photons are coupled. By the end of the process, parametric scattering, together with momentum conservation, takes place, which causes a fraction of the polaritons to transition to a higher-momentum state, while another fraction descends into the lower-momentum part of the LP branch, forming condensates. Ballistic polariton flow (see the left column of Figure 2) is activated once the system is above the condensation threshold [44]. These condensates represent large-scale coherent states, distinct from the initial nonresonant inputs [1].
Technological advances, such as the use of spatial light modulators and precise semiconductor fabrication, allow the manipulation of the pump profile and of the rich nonlinear condensate outputs, leading to the engineering of quantum fluids of light [45]. Extensive studies have been conducted on the ability of nonresonant optical techniques to manipulate the motion of condensate polaritons, such as customized momentum distributions [46], condensate amplifiers [44; 47], waveguides [48; 49; 50; 51], and directional superfluids near equilibrium [52]. The ability to manipulate the condensate flow through reservoir engineering shows its potential for quantum computing [53].
Figure 2: Comparison of pump profiles and wavefunction density with scattering process illustration. Left column: The upper panel shows the nonresonant pump profile featuring three Gaussian spots, while the lower panel shows the wavefunction density of the condensates at the final time. Three white dashed lines indicate the central positions of the pump regions and align with their corresponding locations on the condensate density map. Right column: Depiction of the scattering process, tracing the transition from the hot electron-hole plasma phase, through the reservoir cooling phase, to the scattering in the condensates. Only the lower polariton branch of the polariton energy mode is shown here.
### Gross-Pitaevskii equation
The dynamics of polariton condensates can be described by the GPE coupled with the rate equation of the exciton reservoir denoted as \(\mathcal{N}\)[27]:
\[i\hbar\frac{\partial}{\partial t}\Psi=\bigg{\{}-\frac{\hbar^{2}}{2m}\nabla^{2}+ \alpha|\Psi|^{2}+G\Big{[}\mathcal{N}+\frac{\eta}{\Gamma}P(\mathbf{r})\Big{]}+i \frac{\hbar}{2}\big{[}R\mathcal{N}-\gamma\big{]}\bigg{\}}\Psi, \tag{1}\]
\[\frac{\partial}{\partial t}\mathcal{N}=-\Big{[}\Gamma+R|\Psi|^{2}\Big{]} \mathcal{N}+P(\mathbf{r}), \tag{2}\]
where \(m\) is the effective mass of the polariton, \(\alpha\) and \(G\) stand for, respectively, the polariton-polariton and polariton-reservoir interaction, \(R\) denotes the scattering rate from the reservoir to the condensates, \(\eta\) refers to the ratio of the dark excitons, and \(\gamma\) (\(\Gamma\)) is the decay rate of the polariton (reservoir). The detuning between the exciton and the photon mode can greatly alter the interaction terms through the relationships \(\alpha=g|\chi|^{4}\) and \(G=2g|\chi|^{2}\), with \(g=g_{0}/N_{\text{QW}}\), where \(g_{0}\) is the exciton-exciton interaction, \(N_{\text{QW}}\) the number of QWs, and \(|\chi|^{2}\), representing the excitonic fraction of the polariton, is the Hopfield coefficient [54] of the excitonic branch. In this work, the continuous-wave (CW) pump, denoted by \(P(\mathbf{r})\), is used to replenish the reservoir due to the losses from the system. The nonlinear term \(|\psi|^{2}\), appearing in both the reservoir rate equation [see Equation (2)] and the superfluid term of the condensate [see Equation (1)], produces the rich nonlinear characteristics induced from the pump onto the condensate.
In the case of CW excitation in the weak pumping regime, the value of \(|\psi|^{2}\) tends towards zero. In this situation, the reservoir density maintains a steady state, or in mathematical terms, \(\partial\mathcal{N}/\partial t=0\). The threshold power, denoted \(p^{\text{th}}\), can be determined through an analysis of the right-hand side (r.h.s.) of Equation (1), where \(R\mathcal{N}=\gamma\) represents the balance between gain and loss. Therefore, the threshold power \(p^{\text{th}}=\gamma\Gamma/R\) is obtained. This means that when the pump power exceeds the condensation threshold \(p^{\text{th}}\), a detectable condensate density manifests itself. The real potential of Equation (1), denoted \(V\), in the stationary state of the system is therefore
\[V(\mathbf{r})=\alpha|\psi|^{2}+G\Big{(}\frac{1}{\Gamma+R|\psi|^{2}}+\frac{\eta}{ \Gamma}\Big{)}P(\mathbf{r}). \tag{3}\]
The real potential is composed of two main components: one originating from the pumping region [second term on the r.h.s. of Equation (3)] and the other stemming from the interactions among the polaritons outside this region [first term on the r.h.s. of Equation (3)]. When the pumping power is below the threshold, the potential follows the pumping profile directly. This relationship is represented as \(V(\mathbf{r})=(1+\eta)(G/\Gamma)P(\mathbf{r})\). The spatial profile chosen for the demonstration consists of \(N_{G}\) Gaussian spots. That is
\[P(\mathbf{r})=\sum_{i}^{N_{G}}p_{i}G_{i}(\mathbf{r}), \tag{4}\]
where \(p_{i}\) stands for the strength of each spot and the normalized Gaussian function \(G_{i}(\mathbf{r})\), with width parameter \(\sigma\), is defined as
\[G_{i}(\mathbf{r})=\frac{1}{2\pi\sigma^{2}}\exp{\left(\frac{-|\mathbf{r}-\mathbf{r}_{i}|^{2}}{2\sigma^{2}}\right)}. \tag{5}\]
Note that \(\mathbf{r}_{i}\) denotes the location of each spot.
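Since Equations (1)-(2) will be integrated below with a split-step Fourier method, a minimal sketch of one possible integrator may be useful. The Python code below is illustrative only and is not the solver used in this work: it alternates a kinetic half-step in Fourier space with a local interaction/gain step in real space and an explicit Euler update of the reservoir, using the parameter values of Section 3.2 and a single Gaussian pump spot; the pump normalisation, the time step and the evolution time are arbitrary choices.

```python
import numpy as np

# Minimal split-step (SSFM) integrator for the coupled Eqs. (1)-(2): a kinetic half-step in
# Fourier space, a local interaction/gain step in real space, and an explicit Euler update of
# the reservoir.  Illustrative sketch only (not the CUDA solver used in this work).
# Units: meV, ps, micrometres; hbar = 0.6582 meV ps.  Parameters follow Section 3.2.

hbar, m = 0.6582, 0.28
g = 0.01 / 6                          # g = g_0 / N_QW
chi2 = 0.4                            # Hopfield coefficient |chi|^2
alpha, G = g * chi2**2, 2 * g * chi2  # Eq. (1) interaction constants
gamma = Gamma = 1.0 / 5.5             # polariton and reservoir decay rates
R = 10 * g / hbar                     # from  hbar R = 10 g
eta = 2.0
p_th = gamma * Gamma / R              # condensation threshold

N, L = 256, 128.0                     # grid points and box size (micrometres)
x = (np.arange(N) - N // 2) * (L / N)
X, Y = np.meshgrid(x, x, indexing="ij")
k = 2 * np.pi * np.fft.fftfreq(N, d=L / N)
KX, KY = np.meshgrid(k, k, indexing="ij")
K2 = KX**2 + KY**2

# Single Gaussian pump spot at the origin; peak pump rate 1.6 p_th (an illustrative choice).
sigma = 0.85
P = 1.6 * p_th * np.exp(-(X**2 + Y**2) / (2 * sigma**2))

dt, steps = 0.05, 4000                # 200 ps of evolution
rng = np.random.default_rng(0)
psi = 1e-3 * (rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N)))  # weak seed
n_res = np.zeros((N, N))              # exciton reservoir density
kin = np.exp(-1j * hbar * K2 * dt / (4 * m))       # half kinetic step, exp(-i T dt / 2 hbar)

for _ in range(steps):
    psi = np.fft.ifft2(kin * np.fft.fft2(psi))
    V = alpha * np.abs(psi)**2 + G * (n_res + eta * P / Gamma)     # real potential, cf. Eq. (3)
    gain = 0.5 * (R * n_res - gamma)                               # net gain/loss of Eq. (1)
    psi = np.exp((-1j * V / hbar + gain) * dt) * psi
    n_res = n_res + dt * (-(Gamma + R * np.abs(psi)**2) * n_res + P)   # Eq. (2), Euler
    psi = np.fft.ifft2(kin * np.fft.fft2(psi))

print("peak condensate density (micrometre^-2):", np.abs(psi).max()**2)
```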
## 3 Methodology
### Fourier Neural Operators
The numerical solution to Equations (1) and (2) is derived using the SSFM, detailed in Appendix A. A natural ML analog to this classical method is the FNO architecture [37]. More generally, Neural Operators [55] are a class of models which learn mappings between two infinite-dimensional spaces from a finite set of input-output pairs. Many variants of the Neural Operator architecture have been applied to approximate solutions to Partial Differential Equations, such as in [40; 41; 42; 43]. The Neural
Operator architecture consists of a lifting operation \(\mathcal{P}\), followed by iterative updates using a Kernel Integral Operator \(\mathcal{K}\), and a final projection operator \(\mathcal{Q}\), as defined in Equation 6.
\[\mathcal{G}_{\theta}:=\mathcal{Q}\circ\sigma_{T}(W_{T-1}+\mathcal{K}_{T-1}+b_{T- 1})\circ...\circ\sigma_{1}(W_{0}+\mathcal{K}_{0}+b_{0})\circ\mathcal{P} \tag{6}\]
Here \(\sigma\) corresponds to a non-linearity and \(W\) and \(b\) correspond to the weights and biases of the Kernel Integral Layer, respectively. \(\mathcal{P}\) and \(\mathcal{Q}\) are point-wise fully local lifting and projection operators. The choice of the Kernel Integral Operator \(\mathcal{K}\) delineates the class of the Neural Operator. Specifically, the FNO (see Figure 3) uses the Kernel Integral Operator defined by Equation 7 below.
\[(\mathcal{K}_{t}(v_{t}))(x)=\mathcal{F}^{-1}(R_{\phi}\cdot\mathcal{F}(v_{t-1}) )(x)\qquad\forall x\in\mathbb{R}^{n} \tag{7}\]
Here \(\mathcal{F}\) and \(\mathcal{F}^{-1}\) correspond to the Fourier and Inverse Fourier Transforms and \(R_{\phi}\) corresponds to the Fourier Transform of a periodic function arising from the definition of a Kernel Integral Operator given in [37]. This object is parameterised by a linear transformation of the top \(k\) modes pertaining to the given layer, which acts as a hyperparameter in the model.
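A forward pass through one such Fourier layer can be written in a few lines of NumPy. The sketch below is illustrative only: the channel width, the number of retained modes and the random weights are placeholder choices, and the layer is shown for a single sample rather than a batch.

```python
import numpy as np

# Forward pass of a single Fourier layer, Eqs. (6)-(7), in plain NumPy: transform to
# Fourier space, keep only the lowest `modes` frequencies, multiply them by learned
# complex weights R, transform back, and add the local linear term W v + b before the
# nonlinearity.  Weights are random placeholders; shapes and names are illustrative only.

rng = np.random.default_rng(0)
width, modes, nx, ny = 8, 32, 256, 256              # channels, retained modes, grid size

v = rng.standard_normal((width, nx, ny))            # input function v_t on the grid
W = rng.standard_normal((width, width)) / width     # local linear transform
b = np.zeros((width, 1, 1))
R_pos = (rng.standard_normal((width, width, modes, modes))
         + 1j * rng.standard_normal((width, width, modes, modes))) / width
R_neg = (rng.standard_normal((width, width, modes, modes))
         + 1j * rng.standard_normal((width, width, modes, modes))) / width

def fourier_layer(v):
    v_hat = np.fft.rfft2(v, axes=(-2, -1))                      # shape (width, nx, ny//2 + 1)
    out_hat = np.zeros_like(v_hat)
    # keep the low-frequency blocks (positive and negative frequencies along axis -2)
    out_hat[:, :modes, :modes] = np.einsum("ioxy,ixy->oxy", R_pos, v_hat[:, :modes, :modes])
    out_hat[:, -modes:, :modes] = np.einsum("ioxy,ixy->oxy", R_neg, v_hat[:, -modes:, :modes])
    spectral = np.fft.irfft2(out_hat, s=(nx, ny), axes=(-2, -1))
    local = np.einsum("oi,ixy->oxy", W, v) + b                  # W v + b
    return np.maximum(spectral + local, 0.0)                    # ReLU nonlinearity

print(fourier_layer(v).shape)                                   # (8, 256, 256)
```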
The natural choice of the FNO architecture for approximating the solution to Equations 1 and 2 is due to the inductive bias that arises from the SSFM - FNO correspondence stated below.
**Theorem 1**.: _(**SSFM-FNO Correspondence**) Suppose that \(\sigma\in(TW)\) is a Tauber-Wiener function, \(X\) is a Banach Space, \(K\subset X\) is a compact set, \(V\) is a compact set in \(C(K)\), \(\Psi_{t}\) is a nonlinear continuous operator representing the solution of the first-order Split-step Fourier Method at time \(t\), then for any \(\epsilon>0\), there are a positive integer \(n\), \(m\) points \(x_{1},...,x_{m}\in K\), and real constants \(c_{i}\), \(\theta_{i}\), \(\xi_{ij}\), \(i=1,...,n\), \(j=1,...,m\), such that for_
\[R_{\phi}:=\sum_{i=1}^{n}c_{i}\sigma\left(\sum_{j=1}^{m}\xi_{ij}u(x_{j})+\theta _{i}\right), \tag{8}\]
\[\left|\Psi_{t+\Delta t}(\Psi_{t})(x)-\mathcal{F}^{-1}(R_{\phi}\cdot\mathcal{ F}(v_{t}))(x))\right|<\epsilon \tag{9}\]
Figure 3: Architecture of the Fourier Neural Operator: The process begins with the input \(a(x)\) which undergoes a lifting operation, denoted as \(\mathcal{P}\). This is followed by \(4\) consecutive Fourier layers. Subsequently, a projector \(\mathcal{Q}\) transforms the data to the desired target dimension, resulting in the output \(u(x)\). The inset provides a detailed view of the structure of a Fourier layer. Data initially flows to the layer as \(\nu(x)\) and is bifurcated into two branches: one undergoes a linear transformation \(W\), while the other first undergoes a Fourier transformation, after which the 32 lowest Fourier modes are kept and multiplied by a transformation \(R\), the higher modes are filtered out, and an inverse Fourier transformation is applied to the retained modes. The two data streams then converge, followed by the application of an activation function \(\sigma\).
_holds for all \(u\in V\)._
_Proof._ See Appendix B.
### Data Generation
The training and testing datasets are constructed based on varying pump profiles, \(P(\mathbf{r})\), as elucidated in Equation (4). This profile is characterized by four Gaussian spots, represented by \(N_{G}=4\). Out of these, three spots have their power adjusted to exceed the threshold, specifically \(p_{i}=1.6\,p^{\text{th}}\), while the power of the remaining spot is set below the threshold at \(p_{i}=0.5\,p^{\text{th}}\). Each spot's spatial profile is represented as \(G_{i}(\mathbf{r})\). Moreover, the boundary condition that the condensate density vanishes near the edge of the computational frame is also applied.
These profiles are stochastically determined within a square region measuring \(64\,\mu\text{m}\times 64\,\mu\text{m}\) inside the full \(128\,\mu\text{m}\times 128\,\mu\text{m}\) computational window. The region where the Gaussian spots lie is smaller than the full grid, to make sure that they remain far from the region where the boundary condition is applied. Care has also been taken to ensure that the Gaussian spots are non-overlapping. Additionally, every pump profile is unique, and among the spots with power exceeding the threshold, each one is distinct from the others, thereby eliminating any potential redundancy. Given a \(0.5\,\mu\text{m}\) resolution per pixel per dimension, the total dataset of pump configurations has size \(256\times 256\times 1246\), where \(256\) is the map size per dimension and \(1246\) is the number of different pump configurations (of which 250 configurations are used for testing and 996 for training, respectively). The dataset of density maps has size \(256\times 256\times 2\times 1246\), where \(2\) refers to the density at the initial and final time.
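The profile construction just described can be sketched as follows; the Python snippet below is illustrative only (the minimum spot separation and the random seed are arbitrary choices, and spot powers are given in units of \(p^{\text{th}}\)).

```python
import numpy as np

# Sketch of the pump-profile construction described above: four Gaussian spots, Eqs. (4)-(5),
# with random, non-overlapping positions inside the central 64 x 64 micrometre window of the
# 128 x 128 micrometre grid (0.5 micrometre per pixel).  Spot powers follow Section 3.2 and
# are given in units of p^th; the minimum separation and the random seed are arbitrary choices.

rng = np.random.default_rng(1)
N, L, sigma = 256, 128.0, 0.85
x = (np.arange(N) - N // 2) * (L / N)
X, Y = np.meshgrid(x, x, indexing="ij")

def random_centres(n_spots=4, half_window=32.0, min_dist=4.0):
    centres = []
    while len(centres) < n_spots:
        c = rng.uniform(-half_window, half_window, size=2)
        if all(np.hypot(c[0] - cx, c[1] - cy) > min_dist for cx, cy in centres):
            centres.append((c[0], c[1]))
    return centres

def pump_profile(centres, powers):
    P = np.zeros((N, N))
    for (cx, cy), p_i in zip(centres, powers):
        G = np.exp(-((X - cx)**2 + (Y - cy)**2) / (2 * sigma**2)) / (2 * np.pi * sigma**2)
        P += p_i * G                                   # Eq. (4), with G_i from Eq. (5)
    return P

powers = [1.6, 1.6, 1.6, 0.5]                          # three spots above, one below threshold
P = pump_profile(random_centres(), powers)
print(P.shape, float(P.max()))
```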
It is worth mentioning that all datasets are chosen such that the system reaches a stationary state with a single energy mode, which means that results with multiple energy modes are excluded. In multimode cases, the wavefunction density changes at different times, as can be found in experiments [56; 57].
To better emulate the experiment, \(\sigma\approx 0.85\,\mu\text{m}\) is chosen, corresponding to a FWHM of each Gaussian spot of \(2\,\mu\text{m}\). The simulation is based on InGaAs QWs [58] with slightly negatively detuned cavities. The parameters are the following: \(m=0.28\,\text{meV}\,\text{ps}^{2}\,\mu\text{m}^{-2}\), \(|\chi|^{2}=0.4\), \(N_{\text{QW}}=6\), \(g_{0}=0.01\,\text{meV}\,\mu\text{m}^{2}\), \(\hbar R=10g\), \(\eta=2\), and \(\gamma^{-1}=\Gamma^{-1}=5.5\,\text{ps}\).
## 4 Results
In Figure 4, we present model predictions for 3 representative test cases. Note that we have carefully chosen three distinct pump profiles to visualize model performance over varying inputs. As we see from the second row of the figure, the model is highly accurate in predicting the steady-state solution for the GPE. It is not only able to predict accurate values at the pump locations and in their vicinity but it is also able to predict the direction of flow for the wave fronts as well as the scattering from the barrier produced by the spot below the threshold. Where the model exceeds expectations most is in the detail to which it also captures the interference pattern for different pump configurations. We see an almost identical pattern, including the parity of fringes among spots, between the predictions and simulation ground truths. The parity of these fringes is responsive to the distance between spots (see experiment in [57]), which also indicates that our model is capable of capturing these details.
Looking at error plots between predictions and ground truth values, we see that the highest errors occur very close to the pump locations. This is due to the high degree of nonlinearity which is exhibited near this region and the information loss that occurs from the Fast Fourier Transform cutoff modes in the FNO architecture. We see other smaller errors around the edges of wave fronts which could potentially be tackled using physics-informed gradient losses, which we intend to test in future work.
We present a full overview of cell averaged absolute difference errors in Figure 5 for all 250 test cases. The above 3 test predictions correspond to cases 210, 74 and 87 respectively. The cell averaged errors on the test set range between \(1.667\times 10^{-3}\), for case 184, and \(9.282\times 10^{-3}\), for case 142 (see Figure 6). We observe a bimodal distribution in the errors with two distinct bands. Empirically, the lower band corresponds to pump configurations where distance between the pumps is smaller, leading to better interference predictions. The higher band corresponds to pump configurations where
Figure 4: Comparison of the prediction from the Fourier Neural Operator approach. The columns, left to right, show different pump configurations. The rows, from top to bottom, are normalized pump profiles, predictions, normalized ground truths, and errors.
at least one pump is far away from the other two, leading to worse interference pattern predictions. This is evident in the results from Figure 7 where pumps are very far apart as opposed to Figure 8 where pumps are closer together.
We found that, when we trained the FNO with a cutoff of 16 Fourier modes for both spatial directions, which is conventional, the model was not able to capture nonlinearities in the system to a great extent and it would take longer to converge. We also observed that predictions were accurate in a large square sub-region of the grid but performance dropped outside this region. This was likely due to the fact that with a \(256\times 256\) grid, in order to capture the finer details, we would need to also include higher frequency modes of the Fourier expansion. To this avail, we increased the cutoff to 32 Fourier modes in both directions and observed a significant improvement in performance. For larger grid sizes, there is a higher probability that the distance between the randomly generated pump locations will be greater. This would require us to increase further the number of Fourier modes to retain similar levels of accuracy or employ dimensionality reduction techniques.
With both the FNO model and the numerical solver run on a GPU, predicting solutions for the 250 test cases took \(3463.06\,\mathrm{s}\) with the CUDA-based numerical method, whereas the FNO model took \(3.83\,\mathrm{s}\).
## 5 Future Work
In this work, we use numerical datasets for training and prediction. However, in the future we hope to directly use experimentally collected data to train the model on ground truth physics. Pump profiles and wavefunction density maps shown in Figure 4 can be obtained directly from spatial light modulator and photoluminescence spectroscopy, respectively. Due to the improvement in semiconductor fabrication, clear interference patterns, very close to the simulated ones shown here, can be found in inorganic semiconductor materials [57; 59]. Furthermore, with the help of a streak camera, photoluminescence can be captured at the picosecond level, which makes it possible to make predictions of a time-resolved condensate formation on the basis of purely experimental data.
Various general ML methods have been proposed to incorporate underlying physics-based losses and information into the model to aid the learning task, such as in [60; 61; 62; 63]. In this work, we have taken a purely data-driven approach to training, however, we believe that incorporating additional physics-informed loss terms will strictly increase the rate of convergence and accuracy. This is especially appealing given that we have a strong theoretical understanding of the underlying system.
In the future, we aim to also propose a novel Neural Operator architecture entirely, tailored to align more closely with the computational procedure of the SSFM. As shown in Appendices A and B, the FNO can be seen as a first-order approximation to the SSFM. We instead aim to take structure from the BCH formula at second order and embed this in the model. This will better guide convergence dynamics in the weight landscape, which in theory will produce more accurate solutions at a faster rate.
## 6 Conclusions
In the present study, we explored the potential of the FNO in the context of polariton condensates. Our findings, as detailed in Section 4, demonstrate a notable alignment with the simulation data with an approximate \(1000\times\) speed up in solution generation compared to CUDA-based GPU solvers. This research paves the way for the conceptualization and development of advanced large-scale all-optical devices. Furthermore, this method draws parallels with the principles of EDA traditionally used in chip design. It introduces an innovative avenue to meet the growing demand for fast and reliable solutions in the realm of all-optical chip design.
## 7 Acknowledgements
We are greatly indebted to Prof. Pavlos Lagoudakis for giving feedback on this manuscript. |
2306.00055 | Geometric Phases Characterise Operator Algebras and Missing Information | We show how geometric phases may be used to fully describe quantum systems,
with or without gravity, by providing knowledge about the geometry and topology
of its Hilbert space. We find a direct relation between geometric phases and
von Neumann algebras. In particular, we show that a vanishing geometric phase
implies the existence of a well-defined trace functional on the algebra. We
discuss how this is realised within the AdS/CFT correspondence for the eternal
black hole. On the other hand, a non-vanishing geometric phase indicates
missing information for a local observer, associated to reference frames
covering only parts of the quantum system considered. We illustrate this with
several examples, ranging from a single spin in a magnetic field to Virasoro
Berry phases and the geometric phase associated to the eternal black hole in
AdS spacetime. For the latter, a non-vanishing geometric phase is tied to the
presence of a centre in the associated von Neumann algebra. | Souvik Banerjee, Moritz Dorband, Johanna Erdmenger, Anna-Lena Weigel | 2023-05-31T18:00:01Z | http://arxiv.org/abs/2306.00055v2 | # Geometric Phases Characterise Operator Algebras and Missing Information
###### Abstract
We show how geometric phases may be used to fully describe quantum systems, with or without gravity, by providing knowledge about the geometry and topology of its Hilbert space. We find a direct relation between geometric phases and von Neumann algebras. In particular, we show that a vanishing geometric phase implies the existence of a well-defined trace functional on the algebra. We discuss how this is realised within the AdS/CFT correspondence for the eternal black hole. On the other hand, a non-vanishing geometric phase indicates missing information for a local observer, associated to reference frames covering only parts of the quantum system considered. We illustrate this with several examples, ranging from a single spin in a magnetic field to Virasoro Berry phases and the geometric phase associated to the eternal black hole in AdS spacetime. For the latter, a non-vanishing geometric phase is tied to the presence of a centre in the associated von Neumann algebra.
Keywords:AdS/CFT, Entanglement, von Neumann algebras
## 1 Introduction
The quantisation of gravity is one of the important unanswered questions in theoretical physics. One promising way to approach this problem is provided by the AdS/CFT correspondence [1; 2; 3] as an explicit realisation of the holographic principle [4; 5]. This correspondence consists of a duality between a theory of gravity in an asymptotically AdS spacetime in \(D\) dimensions and a conformal field theory (CFT) without gravity living on its \(D-1\) dimensional asymptotic boundary. An example for this duality that will be of particular interest for this paper is the eternal black hole in AdS spacetime, dual to the two CFTs entangled in the thermofield double (TFD) state [6]. These two CFTs are visualised as living on the left and the right asymptotic boundaries of the eternal black hole, as
depicted in its Kruskal-Szekeres diagram in fig. 1. The entangled TFD state is built from energy eigenstates of the two CFTs, such that each of the CFTs is thermal with identical temperature fixed by the mass of the black hole. The interior of the eternal black hole is interpreted as a non-traversable wormhole connecting the two boundaries [7; 8].
This two-sided geometry is an ideal configuration for analysing the quantisation of a gravity theory, yielding a precise structure of its Hilbert space, the bulk Hilbert space of AdS/CFT. It contains the states corresponding to excitations in both the exterior and the interior of the black hole. A general goal of AdS/CFT, to which also this paper aims at contributing, is to characterise the similarities and the differences between the bulk Hilbert space and the Hilbert space of the dual CFT. This will provide new insights into information processing in a theory of quantum gravity, and a means of contrasting this to the case of a local quantum theory. In this paper, we use the concept of _geometric phases_ both to reveal the structure of von Neumann (vN) algebras in the given context, and to characterise missing information about the system considered.
To motivate our modus operandi, let us consider the two-sided eternal black hole geometry as in fig. 1. In order to ensure a well-defined variational principle, suitable boundary conditions have to be imposed on the metric. The left and right asymptotic symmetry groups \(G_{L/R}\) are defined as the subset of all bulk diffeomorphisms that leave these boundary conditions invariant. The full asymptotic symmetry is then given by \(G_{L}\times G_{R}\). However, the bulk geometry has an isometry group that corresponds only to the diagonal subgroup \(G_{D}\) of \(G_{L}\times G_{R}\). The moduli space of classical bulk solutions \(\mathcal{G}_{M}\) is therefore obtained by quotienting \(G_{L}\times G_{R}\) with \(G_{D}\). The parameters \(g\in\mathcal{G}_{M}\) are interpreted as the bulk degrees of freedom. For every choice of parameters \(g\), the quantised small fluctuations of
Figure 1: Visualisation of the duality between the eternal black hole in AdS spacetime and the two boundary CFTs entangled in the TFD state [6; 7; 8]. On the left-hand side, the eternal black hole in an AdS spacetime is shown in global coordinates. The dashed lines represent the black hole horizons. The wavy lines are the future and past singularities. The two wedges attached to the singularities represent the black hole interior. At the left and right boundaries of the AdS spacetime, marked in blue and green respectively, the left and right boundary CFTs are defined. The left and right asymptotic symmetry groups \(G_{L/R}\) constitute global symmetries of the respective CFT. The eternal black hole geometry is invariant under the diagonal subgroup \(G_{D}\) of the full asymptotic symmetry group \(G_{L}\times G_{R}\). The dual description of the eternal black hole is depicted on the right-hand side. The two CFTs, defined on the blue and green planes that represent the left and right asymptotic boundaries, are entangled in the TFD state.
\(g\) provide Hilbert spaces \(\mathcal{H}_{g}\). These Hilbert spaces are fibres \(F\) in a fibre bundle, with the base manifold \(B\) given by \(\mathcal{G}_{M}\). As visualised on the left-hand side of fig. 2, mathematically, a fibre bundle is defined in terms of four elements \((E,B,\pi,F)\), where \(E\) is the total space, \(B\) is the base manifold, \(\pi\) is a projection from the total space to the base manifold and \(F\) are the fibres.
Following the approach of geometric quantisation [9; 10], the fully quantised Hilbert space is defined as the space of all sections of the bundle. A section corresponds to a choice of local coordinates in the base manifold. Since the quantisation procedure outlined above naturally leads to a fibre bundle structure, an immediate question is whether this bundle is non-trivial. A bundle is trivial when there exists a global section, in which case the full base manifold can be covered by a single coordinate patch. A non-trivial bundle, on the other hand, only allows local sections. Such a non-trivial bundle is quantitatively described by a non-vanishing holonomy that measures whether the endpoints of a closed path initially defined in the base manifold differ by an amount when uplifting the path in the fibre direction. This is visualised by the right panel of fig. 2. In physics, the holonomy is more commonly referred to as geometric phase or Berry phase [11; 12]. For the eternal black hole described above, a non-vanishing holonomy is induced by the event horizon. The associated discontinuity of the time-like Killing vector corresponds to a topological defect. From the perspective of the gravitational path integral, this leads to a non-exact symplectic form which is a manifestation of wormhole-like behaviour [13]. Explicit examples for the physical consequences of non-exact symplectic forms were discussed in [14; 15]. The topological probes provide a mathematically precise and quantitative description of the thermal behaviour of the black hole, since in this case the holonomy is related to the mass, and correspondingly the temperature of the black hole [16]. At the same time, the black
Figure 2: Left panel: a visualisation of the concept of a fibre bundle. At each point \(x_{i}\) of the base manifold, fibres \(F_{x_{i}}\) are attached. For principal fibre bundles, on which we focus in this paper, the fibres are isomorphic to some group \(G\). Right panel: a path, closed in the base manifold, may no longer be closed when uplifted in the fibre direction. The mismatch of the endpoints of the uplifted path, marked in green, is the holonomy. In physics, the holonomy is more commonly referred to as geometric phase or Berry phase [11; 12]. In this work, we use this concept to calculate geometric phases both for small and large spin systems. We connect these results to holonomies of the fibre bundle obtained by quantisation of the moduli space of classical solutions in AdS/CFT.
hole temperature is a consequence of the information hidden behind the horizon.
This is an example from gravity of the remarkable, more general connection between geometric phases and hidden information.1 This connection also holds in generic quantum systems without gravity, for example in a system as simple as two coupled spins. This was discussed in [14] for the case that one of the two coupled spins interacts with an external magnetic field. It was shown that the factorisation properties of the projective (i.e. pure-state) Hilbert space are captured by the Berry phase. In a further development [15], it was shown that a similar connection holds beyond simple quantum mechanical systems and applies to two-dimensional CFTs as well. In all of these cases, the geometric phases provide a description of wormhole-like structures in the path integral, thus leading to an interesting parallel between information processing in theories with and without gravity.
Footnote 1: A possible relation between geometric phases and hidden information was pointed out to us by Sir Michael Berry during a zoom seminar given by J.E., hosted by GGI Florence in April 2022.
In a complementary development, in recent years there has been substantial progress in quantifying the information processing in a black hole spacetime through the understanding of its operator algebra [17; 18; 19; 20]. The categorisation of this von Neumann (vN) algebra determines the process of information scrambling inside a black hole. The main aim of the present work is to find a precise connection _between the topological probes discussed above and the defining features of the operator algebra._ In particular, we show how the non-existence of a global section in the fibre structure influences the operator algebra acting on the Hilbert space of a generic quantum system.
We study this influence by carefully analysing the definition of the trace on a vN algebra. The trace is defined as a linear functional \(f\), acting on operators \(a,b\) of an algebra \(\mathcal{A}\), that is cyclic in its argument,
\[f(ab)=f(ba). \tag{1}\]
In quantum theory, such a linear functional is provided by the expectation value for a particular state \(|\psi\rangle\). In this paper, we show that an arbitrary linear functional \(f^{\prime}\) is proportional to the geometric phase \(\Phi_{G}\) of the state \(|\psi^{\prime}\rangle\) when evaluated on the commutator of two operators \(a,b\in\mathcal{A}\), \(\Phi_{G}\propto f^{\prime}([a,b])\). Therefore, a state \(|\psi\rangle\) with vanishing geometric phase provides a linear functional \(f\) that is a well-defined trace, satisfying the required cyclicity property (1).
We first obtain this relation using the simple example of two entangled spin \(1/2\) particles in a magnetic field. The corresponding operator algebra is of type I.2 Such an algebra always has an irreducible representation. On the other hand, algebras of type II and type III never have an irreducible representation. Moreover, a well-defined trace functional only exists for algebras of type I and type II. As mentioned before, the operator algebra for the two-spin system is of type I, so the trace is well-defined. In the approach proposed in this paper, this is reflected by the fact that \(f([a,b])\) is proportional to the geometric phase. This phase is given by the symplectic volume of entanglement orbits. These orbits are submanifolds
of the projective Hilbert space. Each orbit is associated to a fixed value of entanglement entropy [24].
The proportionality relation between \(f([a,b])\) and the geometric phase becomes even more important when applied to algebras of types II and III. In order to exemplify this, we generalise the aforementioned two-spin model to two copies of infinitely many pairwise entangled spins. We show that if the geometric phases associated with all of these pairs vanish, the associated algebra is of type II, and it is of type III otherwise. While this is a useful connection in any generic quantum theory, we show that there is a further interesting interpretation of these geometric phases for holographic theories. In particular, for time-shifted TFD states in CFT, which are dual to eternal black holes in AdS spacetime, we show that the geometric phase of [14] is directly related to the centre of the vN algebra. This algebra of type III with non-trivial centre was found to describe the eternal black hole in the large \(N\) limit [17; 18]. The type II algebra with trivial centre emerges as a consequence of including \(1/N\) corrections to the type III algebra [19]. The connection between the TFD state geometric phase of [14] and the centre of the algebra of the corresponding holographic CFT exemplifies the proposed _differentiation between type II and III algebras in terms of the geometric phase_. The proposed discrimination between type II and type III algebras based on the geometric phase is a novel, universal and fundamental aspect, valid for any quantum theory, with or without gravity.
While the examples mentioned above relate geometric phases to properties of vN algebras quantifying the information processing in an entangled system, geometric phases can be more broadly viewed as quantifying hidden information in any quantum system. This is the second line of thought we develop in this paper giving a fresh perspective on applying geometric phases to both entangled and non-entangled systems. In both cases, the geometric phases can be thought of as signatures of missing information. This is due to the physical observer accessing only one coordinate patch of the full base manifold. The geometric phase indicates whether the total space \(E\) of the fibre bundle is more than simply the product of the base manifold \(B\) with the fibre \(F\). Provided that the bundle is non-trivial as described above, these phases thus capture if information about the total space is missing when treating the total space as a product in a local description, \(E\stackrel{{\rm loc}}{{=}}B\times F\). We demonstrate this connection using several examples, starting with a single spin in a magnetic field. Geometrically, this example realises the Hopf fibration [25], where the base space is given by \(S^{2}\) while the fibres are \(S^{1}\). However, the total space \(S^{3}\) is not a simple product of base space and fibre, \(S^{3}\neq S^{2}\times S^{1}\), which leads to a non-vanishing geometric phase. As a second example, we also study the Virasoro Berry phase [26] based on Virasoro coadjoint orbits [27] and pinpoint that the missing information is due to non-aligned frames of reference, i.e. to time-like Killing vectors pointing in different directions. This connection between geometric phase and missing information extends also to entangled systems. We discuss this again using Virasoro Berry phases, this time in the presence of an eternal black hole. Moreover, we study gauge Berry phases [14; 15] and modular Berry phases [28; 29; 30; 31; 32]. Although these three kinds of Berry phases are related to different forms of bulk diffeomorphisms, we find that in all three cases, the origin of the geometric phase is tied to the freedom of different local observers to choose their individual time (or modular time) coordinates.
Our results for the eternal black hole in holography imply an interesting connection between missing information and von Neumann algebras. The non-vanishing geometric phase arises from different coordinate patches for which a local observer in one of them does not have information about the other. At the same time, this geometric phase is tied to the existence of a non-trivial centre for the von Neumann algebra of the eternal black hole. This centre is related to the black hole mass and thus to its temperature.
The paper is organised as follows. We start by briefly reviewing the geometric interpretation of entanglement developed in [24] for general bipartite quantum systems. Next in sec. 2.1.1, for illustrative purposes of this construction we explicitly calculate the geometric phase of the coupled two-spin system of [14]. Subsequently in sec. 2.1.2, we derive the entanglement temperature \(T_{\rm ent}\) for this two-spin system and relate it to the geometric phase. This temperature is defined such that the entanglement entropy mimics the thermal entropy of a system at temperature \(T_{\rm ent}\). With these results at hand, we turn to studying operator algebraic properties of spin systems in sec. 2.2. We first analyse the operator algebra associated with the two-spin system of [14] for illustrative purposes. In particular, we discuss the definition of the trace and its relation to the geometric phase and the entanglement temperature. Motivated by these results, we generalise our findings to two collections of infinitely many spins with an infinite amount of shared entanglement in sec. 2.2.2. We explicitly show how the values of the geometric phases determine whether a trace can be defined. This also enables us to relate the type of the operator algebra to the values of the geometric phases. Finally in sec. 2.3, after briefly reviewing the results of [17; 18; 19] in sec. 2.3.1, we discuss how our result is realised in holography for the eternal black hole in AdS in sec. 2.3.2. We then move on to study how geometric phases can generically be understood as signalling missing information. We discuss this first for the simple example of a single spin in sec. 3.1.1 and second for Virasoro Berry phases in CFT in sec. 3.1.2. We continue to discuss this for entangled systems in sec. 3.2, starting with Virasoro Berry phases in the presence of a black hole in sec. 3.2.1. We then move on to gauge Berry phases in sec. 3.2.2 and modular Berry phases in sec. 3.2.3. For the convenience of the reader, we have included a brief review of some aspects of vN algebras in app. A. In app. B, we provide details on the construction of [24].
## 2 Geometric Phase and von Neumann Algebras
In the past years, the concept of geometric phases has played an important role in holography. Different realisations include modular Berry phases [28; 29; 30; 31; 32], Virasoro Berry phases [26] and geometric phases related to wormhole physics [14; 15]. In a complementary development, vN algebras have proven useful to obtain new results about holography [17; 18; 19; 33] and also for black holes in de Sitter and flat spacetimes [34; 35].
Motivated by these results, in this paper our goal is to establish a direct connection between geometric phases and vN algebras. As simple examples we study spin systems, in the simplest version consisting of only two spins. In particular, we consider a system of two coupled spins in which one of them also couples to an external magnetic field. For this configuration, the geometric phase of the ground state was computed in [14]. We show,
first for this two-spin system, that only a state with vanishing geometric phase consistently defines a trace functional on the associated operator algebra. We then generalise this result to operator algebras of type II and type III using two infinite collections of spins. We find that the non-existence of the trace for a type III algebra is due to the fact that every state of the underlying Hilbert space has non-vanishing geometric phase. On the other hand, the Hilbert space underlying a type II algebra does have a state with vanishing geometric phase. We finally discuss how this result is realised in holography for the eternal black hole, combining insights from [19] and [14].
### Entanglement and Geometric Phase in Bipartite Quantum Systems
We begin our analysis by discussing a relation between geometric phases and entanglement entropy in simple spin systems. A geometric interpretation of entanglement for pure states in \(d\times d\)-dimensional bipartite quantum systems was established in [24]. Since this construction is crucial for the analysis in our paper, we will refer to it as the _SZK construction_ throughout this paper. We restrict the present discussion to pure states since defining a trace on an algebra requires using state vectors, i.e. pure states in the language of quantum mechanics.3 Our goal in this section is to formulate geometric phases in such systems using the SZK construction [24]. In the following, we only briefly review the construction and state the main results which are immediately important for the discussion of this work. We include technical details of this construction in app. B.
Footnote 3: As an aside, we point out that a similar analysis for mixed states may be found in [36].
The starting point is the Schmidt decomposition of pure states. This decomposition states that for every pure state \(|\psi\rangle\) of a bipartite system, a base transformation can be found such that in the new basis, the coefficients \(0\leq\kappa_{i}\leq 1\) of the base vectors are the square roots of the eigenvalues of the reduced density matrix,
\[|\psi\rangle=\sum_{i,j=1}^{d}a_{ij}|i,j\rangle=\sum_{i=1}^{d}\kappa_{i}|i, \tilde{i}\rangle. \tag{1}\]
Alternatively, regarding the \(a_{ij}\) as entries of the coefficient matrix \(a\), the Schmidt decomposition may be phrased as the singular value decomposition of \(a\). The corresponding singular values \(\kappa_{i}\) are known as the Schmidt coefficients of the state.4 Since they are the square roots of the eigenvalues of the reduced density matrix of each of the \(d\)-dimensional subsystems, knowledge of all Schmidt coefficients uniquely fixes the value of the entanglement entropy between the two subsystems.
Footnote 4: Note that, depending on the taste of the authors, sometimes the entries \(\tilde{\kappa}_{i}\) of the reduced density matrix themselves are called Schmidt coefficients, while the state \(|\psi\rangle\) is written using \(\sqrt{\tilde{\kappa}_{i}}\), as in [24]. This is of course only a matter of convention.
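For concreteness, the following short numerical sketch illustrates (1); it is an aside added here, with the dimension \(d=4\), the random state and the use of numpy being arbitrary choices rather than part of the construction of [24]. It Schmidt-decomposes a pure state via the singular value decomposition of its coefficient matrix and verifies that the \(\kappa_{i}^{2}\) coincide with the eigenvalues of the reduced density matrix, so that they indeed fix the entanglement entropy.

```python
# Illustration (not from the paper): Schmidt decomposition of a random bipartite pure state
# via the SVD of its coefficient matrix a_ij, cf. eq. (1).
import numpy as np

d = 4
rng = np.random.default_rng(0)
a = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
a /= np.linalg.norm(a)                         # normalised pure state |psi> = sum_ij a_ij |i,j>

kappa = np.linalg.svd(a, compute_uv=False)     # Schmidt coefficients kappa_i
rho_red = a @ a.conj().T                       # reduced density matrix of the first subsystem
p = np.linalg.eigvalsh(rho_red)

S_from_schmidt = -np.sum(kappa**2 * np.log(kappa**2))
S_from_rho = -np.sum(p * np.log(p))
print(np.allclose(np.sort(kappa**2), np.sort(p)))   # True: kappa_i^2 are the eigenvalues of rho_red
print(np.isclose(S_from_schmidt, S_from_rho))        # True: the kappa_i fix the entanglement entropy
```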
Given a pure state \(|\phi\rangle\), there is no measurement that distinguishes between \(|\phi\rangle\) and \(e^{i\alpha}|\phi\rangle\). Therefore, these states are physically equivalent. This leads to the definition of the projective Hilbert space \(\mathcal{H}_{P}\). A state in \(\mathcal{H}_{P}\) is represented by a ray. A ray consists of all physically equivalent states \(\lambda|\phi\rangle\) with \(\lambda\in\mathds{C}\) and \(|\lambda|=1\). While the canonical Hilbert space of an \(N\)-dimensional quantum system is \(\mathds{C}^{N}\), the projective Hilbert space of only physically distinct states is given by the complex projective space in one dimension less, \(\mathds{C}\)P\({}^{N-1}\).
The SZK construction [24] is based on analysing the Schmidt coefficients and their multiplicities to find the symmetries of a pure state for a fixed value of the entanglement entropy of a \(d\times d\)-dimensional bipartite quantum system. Based on these symmetries, particular submanifolds of the projective Hilbert space \(\mathds{C}\mathrm{P}^{d^{2}-1}\) are associated to particular values of entanglement. These submanifolds are constructed by quotienting local unitary transformations of the bipartite quantum system, described by \(\mathrm{U}(d)\times\mathrm{U}(d)\), with the symmetries of the pure state for a given value of the entanglement entropy. Constructed as homogeneous spaces, the submanifolds can be understood as orbits of the local unitary transformations \(\mathrm{U}(d)\times\mathrm{U}(d)\) and are referred to as entanglement orbits.5
Footnote 5: As a word of caution, we point out that the phrase ‘orbit’ will also appear later on in sec. 3 in the context of coadjoint orbits of the Virasoro group. These two concepts of orbits are not to be confused with each other.
Let us give some examples of such entanglement orbits for a system of two qudits, i.e. two \(d\)-level systems. In total, there are \(d\) Schmidt coefficients that may take values between \(0\) and \(1\). As discussed above, the projective Hilbert space of each isolated qudit is \(\mathds{C}\mathrm{P}^{d-1}\). A natural object to construct for a system of two qudits is then the Cartesian product of these two spaces.6 Clearly, this construction will not account for all states of the two-qudit system since the dimension of the space obtained by the Cartesian product is always smaller than the dimension of the projective Hilbert space of the full two-qudit system,
Footnote 6: This construction is known as the Segre variety. See e.g. [37; 38] for a detailed discussion of this construction in relation to (multipartite) entanglement.
\[\dim(\mathds{C}\mathrm{P}^{d^{2}-1})-\dim(\mathds{C}\mathrm{P}^{d-1}\times \mathds{C}\mathrm{P}^{d-1})=2(d-1)^{2}>0\,. \tag{2}\]
Note that \(\dim(\cdot)\) refers to the real dimension. It can be shown that the product space is the orbit which geometrically describes all separable states \(|\psi\rangle_{\mathrm{sep}}\) of vanishing entanglement [24],
\[|\psi\rangle_{\mathrm{sep}}\in\mathds{C}\mathds{P}^{d-1}\times\mathds{C} \mathds{P}^{d-1}. \tag{3}\]
Taking the Cartesian product implies that for separable states, all Schmidt coefficients but one are zero, with the remaining Schmidt coefficient being equal to \(1\). The \(d\) different Schmidt coefficients correspond to the \(d\) different possible product states in the two-qudit system, i.e. \(|i,\tilde{i}\rangle\) with \(1\leq i\leq d\) in the representation of (1). Geometrically, this reflects that there are \(d\) ways to embed \(\mathds{C}\mathrm{P}^{d-1}\times\mathds{C}\mathrm{P}^{d-1}\) into \(\mathds{C}\mathrm{P}^{d^{2}-1}\).
Entangled states are obtained by forming linear superpositions of product states. Such states will not be described by the Cartesian product space. In particular, maximally entangled states \(|\psi\rangle_{\mathrm{max}}\) belong to an orbit of the form
\[|\psi\rangle_{\mathrm{max}}\in\mathds{1}\times\frac{\mathrm{SU}(d)}{\mathds{Z }_{d}}. \tag{4}\]
Here, all Schmidt coefficients are equal and non-zero. This orbit has an interesting geometric interpretation as well, as first pointed out in [39]. The dimension of the orbit in this case is exactly half the dimension of the full projective Hilbert space,
\[\dim\left(\mathds{1}\times\frac{\mathrm{SU}(d)}{\mathds{Z}_{d}}\right)=d^{2}-1 =\frac{1}{2}\dim(\mathds{C}\mathrm{P}^{d^{2}-1})\,. \tag{5}\]
Moreover, the symplectic form of \(\mathds{C}\mathrm{P}^{d^{2}-1}\) vanishes identically when restricted to the orbit (4). This, together with (5), characterises a Lagrangian submanifold, whose precise definition is as follows. A Lagrangian submanifold \(L\) of some manifold \(M\) is defined as a submanifold of \(M\) with \(i)\) the symplectic form \(\omega_{M}\) of \(M\) vanishing when restricted to \(L\), \(\omega_{M}|_{L}=0\), and \(ii)\) half the dimension of \(M\), \(\dim(L)=\frac{1}{2}\dim(M)\).7
Footnote 7: As an example, a line in \(\mathds{R}^{2}\) is a Lagrangian submanifold. As required, \(\dim(\mathds{R})=\frac{1}{2}\dim(\mathds{R}^{2})\). The symplectic form of \(\mathds{R}^{2}\), \(\omega=\mathrm{d}x\wedge\mathrm{d}y\), vanishes when either \(x\) or \(y\) are held fixed. As a slightly less simple example, the symplectic manifold of Hamiltonian mechanics spanned by \(p_{i}\) and \(q_{i}\) has Lagrangian submanifolds by fixing either all positions \(q_{i}\) or all momenta \(p_{i}\).
The intermediate orbits between maximal and vanishing entanglement generically have a more complicated structure. Moreover, with growing value of \(d\), there is a large variety of different intermediate geometries, all of which are obtained by considering the corresponding symmetries of states \(|\psi\rangle_{\mathrm{intermediate}}\) with intermediate value of entanglement. For instance, if all Schmidt coefficients for a two-qudit state are different but non-zero, the orbit is given by
\[|\psi\rangle_{\mathrm{intermediate}}\in\frac{\mathrm{U}(d)}{\mathrm{U}(1)^{d}} \times\frac{\mathrm{SU}(d)}{\mathds{Z}_{d}}. \tag{6}\]
The interested reader may consult [24] for a detailed list of the intermediate submanifolds up to \(d=4\).
The structure of the submanifolds can be interpreted as a fibre bundle [24], with the base space given by the first factors in (3), (4) and (6). The base space consists of all density matrices with the same spectrum. As examples, the base space of the intermediate submanifold (6) consists of density matrices where each eigenvalue appears only once. Such matrices are described by the quotient space \(\frac{\mathrm{U}(d)}{\mathrm{U}(1)^{d}}\). For maximal entanglement, the spectrum of the density matrix is maximally degenerate. The base space of the submanifold (4) contains only one object, which is the density matrix proportional to the identity. The fibres of the submanifold arise from pure states. Given a pure state, the corresponding reduced density matrix is obtained by tracing over one of the qudits. Each density matrix has an entanglement spectrum given by the values of the Schmidt coefficients. For a given class of density matrices with the same spectrum, the fibre consists of all pure states leading to a density matrix of that class. For each such submanifold, the connection of the bundle gives rise to a symplectic form. The corresponding symplectic volume has the interpretation of a geometric phase. In the following, we show this for an explicit example, using the approach of [40] for defining the connection.
#### 2.1.1 Example for Two Spins
We consider the coupled two-spin model of [14]. The dynamics of the spins \(\vec{S}_{i}=\frac{1}{2}\vec{\sigma}_{i}\), where the first spin is under the influence of an external magnetic field, are described by the Hamiltonian
\[H=J\vec{S}_{1}\cdot\vec{S}_{2}-2\mu_{B}BS_{1,z} \tag{7}\]
with coupling strength \(J\) and magnetic interaction strength \(\mu_{B}B\). This system can be interpreted as a hydrogen atom with hyperfine coupling \(J\) between the proton and electron spins, \(\vec{S}_{2}\) and \(\vec{S}_{1}\) respectively, in a magnetic field. In first approximation, the Zeeman coupling to the proton spin can be neglected and only the coupling of the electron spin \(\vec{S}_{1}\) to the magnetic field remains. For \(J>0\), the ground state of this system is given by
\[|\psi_{0}\rangle=-\frac{\sin\frac{\alpha}{2}-\cos\frac{\alpha}{2}}{\sqrt{2}}| \!\uparrow\!\downarrow\rangle-\frac{\sin\frac{\alpha}{2}+\cos\frac{\alpha}{2}} {\sqrt{2}}|\!\downarrow\uparrow\rangle, \tag{8}\]
where \(\tan\alpha=2\mu_{B}\frac{B}{J}\) and \(|i\rangle\) with \(i=\uparrow,\downarrow\) refer to the first and second spin. From (8), we will obtain the simplest possible realisation of the general geometric SZK construction [24] reviewed above, for \(d=2\). In this example, the projective Hilbert space is given by \(\mathds{C}\mathds{P}^{3}\). To analyse the entanglement properties of (8), we first perform the Schmidt decomposition. As mentioned around (1), this essentially amounts to a base transformation for the second spin such that (8) can be written as \(|\psi_{0}\rangle=\sum_{i=\uparrow,\downarrow}\kappa_{i}|i,\tilde{i}\rangle\), where \(\kappa_{i}\) are the Schmidt coefficients and \(|\tilde{i}\rangle\) is the transformed basis. The new basis is straightforwardly found as \(|\tilde{\uparrow}\rangle=|\downarrow\rangle\) and \(|\tilde{\downarrow}\rangle=|\!\uparrow\rangle\). The Schmidt coefficients are given by
\[\kappa_{\uparrow}=\sqrt{\frac{1-\sin\alpha}{2}}\quad\text{and}\quad\kappa_{ \downarrow}=\sqrt{\frac{1+\sin\alpha}{2}}\,. \tag{9}\]
With this result, the entanglement entropy between the two spins in the ground state (8) is easily computed as
\[S_{\text{EE}}=-\sum_{i=\uparrow,\downarrow}\kappa_{i}^{2}\ln\kappa_{i}^{2}= \sin\alpha\ln\frac{1-\sin\alpha}{\cos\alpha}-\ln\frac{\cos\alpha}{2}. \tag{10}\]
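As a consistency check of (8)-(10), the following numerical sketch (an aside added here; the values of \(J\) and \(\mu_{B}B\) are arbitrary examples) diagonalises the Hamiltonian (7) directly and compares the Schmidt coefficients and the entanglement entropy of the numerical ground state with the closed-form expressions above.

```python
# Numerical consistency check (not from the paper; J and muB_B are example values) of
# eqs. (8)-(10): diagonalise the two-spin Hamiltonian (7), Schmidt-decompose the ground
# state and compare with the closed-form Schmidt coefficients and entanglement entropy.
import numpy as np

J, muB_B = 1.0, 0.4
alpha = np.arctan(2 * muB_B / J)

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.diag([1.0, -1.0]).astype(complex)
id2 = np.eye(2, dtype=complex)

# H = J S1.S2 - 2 muB B S1z with S_i = sigma_i / 2, eq. (7)
H = (J / 4) * (np.kron(sx, sx) + np.kron(sy, sy) + np.kron(sz, sz)) - muB_B * np.kron(sz, id2)

evals, evecs = np.linalg.eigh(H)
a = evecs[:, 0].reshape(2, 2)                  # coefficient matrix of the ground state
kappa = np.linalg.svd(a, compute_uv=False)     # Schmidt coefficients, in descending order

kappa_exact = np.array([np.sqrt((1 + np.sin(alpha)) / 2), np.sqrt((1 - np.sin(alpha)) / 2)])
S_EE = -np.sum(kappa**2 * np.log(kappa**2))
S_exact = np.sin(alpha) * np.log((1 - np.sin(alpha)) / np.cos(alpha)) - np.log(np.cos(alpha) / 2)

print(np.allclose(kappa, kappa_exact))         # True: matches eq. (9)
print(np.isclose(S_EE, S_exact))               # True: matches eq. (10)
```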
Following the general construction discussed in sec. 2.1, depending on the value of \(\alpha\), the state (8) is part of either \(\mathds{C}\mathds{P}^{1}\times\mathds{C}\mathds{P}^{1}\), \(\mathds{C}\mathds{P}^{1}\times\mathds{R}\mathds{P}^{3}\) or \(\mathds{1}\times\mathds{R}\mathds{P}^{3}\),8 all of which are submanifolds of \(\mathds{C}\mathds{P}^{3}\). The explicit construction of these spaces (for a generic two-spin state) is provided in app. B. These spaces are all entanglement orbits that appear in this case. For \(d>2\), there are more intermediate orbits, as pointed out in sec. 2.1. In the limit of large \(J\), where \(\alpha\to 0\), the ground state (8) is equal to one of the Bell states, i.e. the maximally entangled states. The entanglement entropy is given by the maximal value \(S_{\text{EE, max}}=\ln 2\), corresponding to the orbit \(\mathds{1}\times\mathds{R}\mathds{P}^{3}\), cf. (4). In the opposite limit of small \(J\), where \(\alpha\to\frac{\pi}{2}\), the ground state (8) becomes a product state \(|\psi_{0}\rangle\sim|\!\downarrow\uparrow\rangle\). The entanglement entropy (10) vanishes and the corresponding orbit is the Cartesian product \(\mathds{C}\mathds{P}^{1}\times\mathds{C}\mathds{P}^{1}\). For all other values of \(\alpha\), the orbit is given by \(\mathds{C}\mathds{P}^{1}\times\mathds{R}\mathds{P}^{3}\), with the radius of \(\mathds{C}\mathds{P}^{1}\) depending on \(\alpha\). Note that while all orbits of intermediate entanglement have this geometry, they are still different orbits, distinguished by the value of \(\alpha\). In particular, the geometric phase of each such orbit will be different, as we show by explicit computation in the following.
Footnote 8: Note that \(\frac{\mathrm{SU}(d)}{\mathbb{Z}_{d}}\) is particularly simple for \(d=2\), \(\frac{\mathrm{SU}(2)}{\mathbb{Z}_{2}}=\frac{\mathrm{SO}(3)}{\mathbb{Z}_{2}}= \mathds{R}\mathds{P}^{3}\). For general \(d\), the orbit of maximal entanglement is not simply a real projective space since \(\mathds{R}\mathds{P}^{d^{\prime}}=\frac{\mathrm{SO}(d^{\prime})}{\mathbb{Z}_ {2}}\neq\frac{\mathrm{SU}(d)}{\mathbb{Z}_{d}}\). While relations between \(\mathrm{SO}(d^{\prime})\) and \(\mathrm{SU}(d)\) exist, e.g. for \(d^{\prime}=6\) and \(d=4\), the real projective space requires a quotient by \(\mathbb{Z}_{2}\), not by \(\mathbb{Z}_{d>2}\).
We now compute the geometric phase given by the symplectic volume of the entanglement orbit. The first step is to define a connection on the orbit, following [40]. To do so, we pick some point \(P\) in the orbit, described by the square root of the reduced density matrix. A particularly convenient starting point is to choose the diagonal form of the reduced density matrix with the Schmidt coefficients (9) as diagonal entries. For the two-spin system of the present discussion, this point is given by
\[P=\begin{bmatrix}\sqrt{\frac{1-\sin\alpha}{2}}&0\\ 0&\sqrt{\frac{1+\sin\alpha}{2}}\end{bmatrix}. \tag{11}\]
Equivalently, \(P\) may be understood as interpreting the Schmidt decomposed version of the ground state (8) as an operator acting only on the first spin. This is achieved by the linear transformation \(|i,\tilde{i}\rangle\mapsto|i\rangle\langle\tilde{i}|\). Given such a \(P\), we then use an arbitrary SU(2) transformation \(u\) to transport \(P\) to an arbitrary point \(Q\) in the orbit via
\[Q=uP,\quad u=e^{-\mathrm{i}\frac{\phi}{2}\sigma_{z}}e^{-\mathrm{ i}\frac{\theta}{2}\sigma_{y}}e^{\mathrm{i}\frac{\phi}{2}\sigma_{z}}. \tag{12}\]
Since \(\mathrm{tr}\left(Q^{\dagger}Q\right)=\mathrm{tr}\left(P^{\dagger}P\right)= \mathrm{tr}(\rho_{\mathrm{red}})=1\), differentiating once w.r.t. the parameters \(\phi\) and \(\theta\) of the SU(2) transformation yields
\[\mathfrak{Re}\big{[}\,\mathrm{tr}\left(Q^{\dagger}\mathrm{d}Q \right)\big{]}=0,\quad\mathfrak{Im}\big{[}\,\mathrm{tr}\left(Q^{\dagger} \mathrm{d}Q\right)\big{]}\neq 0. \tag{13}\]
Using the standard definition of the connection, the non-vanishing imaginary part can be utilised to define
\[A=\mathrm{i}\,\mathrm{tr}\left(Q^{\dagger}\mathrm{d}Q\right)= \frac{\sin\alpha}{2}(1-\cos\theta)\mathrm{d}\phi. \tag{14}\]
The corresponding field strength of the connection, which is also the symplectic form on the orbit, is then straightforwardly obtained as
\[\Omega=\mathrm{d}A=\frac{\sin\alpha}{2}\sin\theta\mathrm{d} \theta\wedge\mathrm{d}\phi. \tag{15}\]
Performing the integral of \(\Omega\) over the full two-sphere, i.e. computing the symplectic volume, we find
\[V_{\mathrm{symp}}=\int\Omega=\frac{\sin\alpha}{2}\,V(S^{2})=2\pi\sin\alpha=\Phi_{G}. \tag{16}\]
We note that this matches the result for the geometric phase obtained in [14], where the expectation value of the gauge connection in the ground state (8) was used. Interestingly, the value of \(\alpha\), determining the entanglement between the two spins by virtue of (10), also fixes the symplectic volume of the entanglement orbit. We will comment further on this observation shortly.
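The connection (14) and the volume (16) may also be reproduced numerically. The sketch below, an added illustration with an arbitrary example value of \(\alpha\), constructs \(Q=uP\) from (11) and (12), evaluates \(A=\mathrm{i}\,\mathrm{tr}\left(Q^{\dagger}\mathrm{d}Q\right)\) by a finite difference in \(\phi\) and integrates the resulting curvature over the sphere.

```python
# Numerical illustration (not from the paper; alpha is an arbitrary example value) of
# eqs. (11)-(16): build Q = u P, evaluate A = i tr(Q^dagger dQ) by a finite difference
# in phi, and integrate the curvature over the sphere to recover Phi_G = 2 pi sin(alpha).
import numpy as np

alpha = 0.7
s = np.sin(alpha)
P = np.diag([np.sqrt((1 - s) / 2), np.sqrt((1 + s) / 2)]).astype(complex)   # eq. (11)

def u(theta, phi):
    # u = exp(-i phi sigma_z/2) exp(-i theta sigma_y/2) exp(+i phi sigma_z/2), eq. (12)
    rz = np.diag([np.exp(-1j * phi / 2), np.exp(1j * phi / 2)])
    ry = np.array([[np.cos(theta / 2), -np.sin(theta / 2)],
                   [np.sin(theta / 2),  np.cos(theta / 2)]], dtype=complex)
    return rz @ ry @ rz.conj().T

def A_phi(theta, phi, eps=1e-6):
    # phi-component of A = i tr(Q^dagger dQ), eq. (14), via a forward difference
    Q = u(theta, phi) @ P
    dQ = (u(theta, phi + eps) @ P - Q) / eps
    return np.real(1j * np.trace(Q.conj().T @ dQ))

theta0 = 1.1
print(np.isclose(A_phi(theta0, 0.3), s / 2 * (1 - np.cos(theta0)), atol=1e-5))   # eq. (14)

# A_phi is independent of phi, so the integral of Omega = dA over the sphere reduces to
# 2 pi [A_phi(pi) - A_phi(0)], which should equal Phi_G = 2 pi sin(alpha), eq. (16).
V_symp = 2 * np.pi * (A_phi(np.pi, 0.0) - A_phi(0.0, 0.0))
print(np.isclose(V_symp, 2 * np.pi * s, atol=1e-4))
```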
The volume (16) characterises the number of states within a particular entanglement orbit. For instance, in the case of maximal entanglement, there is only one state resulting in
a reduced density matrix proportional to the identity, which is the Bell state. Correspondingly, the geometric phase is minimal (i.e. it vanishes). For intermediate entanglement, the reduced density matrices of two states such as
\[|\psi_{+}\rangle=\frac{a|\!\uparrow\downarrow\rangle+b|\!\downarrow\uparrow \rangle}{\sqrt{a^{2}+b^{2}}}\quad\text{and}\quad|\psi_{-}\rangle=\frac{b|\! \uparrow\downarrow\rangle+a|\!\downarrow\uparrow\rangle}{\sqrt{a^{2}+b^{2}}}, \quad\text{where}\quad a\neq b, \tag{17}\]
have the same eigenvalues and hence the same spectrum. However measurements of the same operator yield different results,
\[\langle\psi_{\pm}|\sigma_{z}\otimes\mathds{1}_{2}|\psi_{\pm}\rangle=\pm\frac{ a^{2}-b^{2}}{a^{2}+b^{2}}. \tag{18}\]
These measurement results become parametrically more and more distinct when moving towards product states, say \(a=1\) and \(b=0\). Correspondingly, the symplectic volume is larger for small entanglement since there are more states contained in the fibre. The reduced density matrices are general elements of SU(2) and, for vanishing entanglement, become projectors, \(\rho_{\text{prod}}^{2}=\rho_{\text{prod}}\). Note in particular that, for maximal entanglement, the base manifold only contains the unity element as indicated in (4). Geometrically, the factor \(\sin\alpha\) in (16) is proportional to the radius of the base manifold which, for non-maximal entanglement, is \(\mathds{C}\mathds{P}^{1}\). This is visualised in fig. 3.
The observation that larger entanglement corresponds to smaller volume, as shown in fig. 3, may seem somewhat counter-intuitive. A larger volume, i.e. more states in the sense of the previous paragraph, might naively be interpreted as enabling more entanglement between those states. This interpretation does however not coincide with the entanglement we are discussing here. In particular, the volume of the entanglement orbit measures the number of states with the same amount of entanglement in each state.
Figure 3: The three differently coloured spheres shown in this plot represent the geometry of the base manifold for three different values of the entanglement. The blue sphere has maximal volume corresponding to \(\alpha=\frac{\pi}{2}\) and vanishing entanglement. The green sphere is the base manifold for an intermediate value of entanglement determined by the value of \(\alpha\) (for the plot we used \(\alpha=\frac{\pi}{8}\)). The small black sphere represents the states of (almost) maximal entanglement (in the plot, \(\alpha\sim 10^{-3}\)).
The volume does not measure any potential entanglement between states in the same orbit. Mathematically, this can be understood as follows. Acting with \(u\otimes\mathds{1}\), \(u\) defined in (12), on \(|\psi_{\pm}\rangle\) and computing the reduced density matrix for the first spin results in an expression depending on \(\theta\). While the eigenvalues are of course unaltered under this unitary transformation, the \(\theta\) dependence reflects that there are many states with the same entanglement. In the case \(a=b\) however, where \(|\psi_{\pm}\rangle\) reduce to one of the Bell states, the \(\theta\) dependent terms cancel exactly.
In agreement with this interpretation, we pointed out below (4) that the orbit of maximal entanglement is a Lagrangian submanifold of the projective Hilbert space. Since the symplectic form of the projective Hilbert space vanishes when restricted to the Lagrangian submanifold, also the symplectic volume vanishes. We will explain in more detail in sec. 2.2 how this observation leads to a relation between a well-defined trace on the algebra and the value of the geometric phase. Within the coupled two-spin system discussed previously, the vanishing symplectic volume arises in the limit of large \(J\), for which \(\alpha=\arctan\bigl{(}2\mu_{B}\frac{B}{J}\bigr{)}\to 0\) in (16). For generic values of \(J\) and \(\mu_{B}B\) of (7), the symplectic volume does not vanish. This can also be understood in terms of entanglement thermodynamics, as we discuss in the next section.
#### 2.1.2 Entanglement Temperature
Here we establish a relation between the geometric phase \(\Phi_{G}\) given in (16) and the effective entanglement temperature \(\beta_{\text{ent}}\) induced by the non-vanishing entanglement (10). The entanglement temperature arises from formally interpreting the entanglement entropy as a thermal entropy. For the ground state (8), the reduced density matrix of the first spin9 is given by
Footnote 9: An analogous argument may be given for the second spin. The only differences are a few signs appearing in intermediate steps of the computation.
\[\rho_{\text{red}}=P^{2}=\begin{bmatrix}\frac{1-\sin\alpha}{2}&0\\ 0&\frac{1+\sin\alpha}{2}\end{bmatrix}. \tag{19}\]
To study the entanglement thermodynamics, we define the modular Hamiltonian by
\[\rho_{\text{red}}=\frac{e^{-H_{\text{mod}}}}{\text{tr}\,e^{-H_{ \text{mod}}}}. \tag{20}\]
Since the reduced density matrix (19) is diagonal, we make an ansatz for the modular Hamiltonian using the third Pauli matrix,
\[H_{\text{mod}}=h\sigma_{z}, \tag{21}\]
where \(h\) is an open coefficient to be fixed in the following. Inserting this ansatz into (20) and comparing with (19), we obtain an expression for the coefficient \(h\) in terms of the Schmidt coefficients,
\[h=\frac{1}{2}\ln\frac{\kappa_{\downarrow}^{2}}{\kappa_{\uparrow }^{2}}=\frac{1}{2}\ln\frac{1+\sin\alpha}{1-\sin\alpha}. \tag{22}\]
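As a quick cross-check of (20)-(22), the short sketch below (an added numerical aside with an example value of \(\alpha\)) verifies that \(e^{-H_{\text{mod}}}\), suitably normalised, reproduces the reduced density matrix (19).

```python
# Quick check (illustration, not from the paper; alpha is an example value) of eqs. (20)-(22):
# with h as in (22), exp(-h sigma_z), properly normalised, reproduces the reduced density
# matrix (19).
import numpy as np

alpha = 0.7
s = np.sin(alpha)
rho_red = np.diag([(1 - s) / 2, (1 + s) / 2])     # eq. (19)
h = 0.5 * np.log((1 + s) / (1 - s))               # eq. (22)
rho_mod = np.diag(np.exp([-h, h]))                # exp(-H_mod) with H_mod = h sigma_z, eq. (21)
rho_mod /= np.trace(rho_mod)
print(np.allclose(rho_mod, rho_red))              # True: eq. (20) is satisfied
```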
Inverting the result for the geometric phase (16), we express \(h\) in terms of \(\Phi_{G}\),
\[h=\frac{1}{2}\ln\frac{2\pi+\Phi_{G}}{2\pi-\Phi_{G}}. \tag{23}\]
The coefficient \(h\) may be interpreted as a magnetic field expressed in units of the temperature. Due to the form of the modular Hamiltonian \(H_{\text{mod}}=h\sigma_{z}\), it can be understood as the Hamiltonian describing a single spin interacting with a magnetic field, with \(h\) being the magnetic energy of the spin in the magnetic field, \(\mu_{B}B\), measured in terms of the 'thermal' energy \(k_{B}T_{\text{ent}}\).10 Therefore we find that
Footnote 10: Unless stated otherwise, in the following we will work in units where \(k_{B}=1\).
\[h=\frac{\mu_{B}B}{k_{B}T_{\text{ent}}}=\frac{1}{2}\ln\frac{2\pi+\Phi_{G}}{2\pi-\Phi_{G}}\quad\to\quad\beta_{\text{ent}}=\frac{k_{B}}{2\mu_{B}B}\ln\frac{2\pi+\Phi_{G}}{2\pi-\Phi_{G}}. \tag{24}\]
The value of the entanglement temperature is therefore determined by the geometric phase. In particular, in the limit of vanishing entanglement where \(\Phi_{G}\to 2\pi\), the logarithm diverges, so the entanglement temperature goes to zero. Furthermore, for maximal entanglement, \(\Phi_{G}=0\). In this case, the logarithm vanishes and accordingly, the entanglement temperature diverges. So we find that by tuning the value of the geometric phase, we cover the full temperature range. Note in particular that the limits \(\beta_{\text{ent}}\to 0\) and \(\beta_{\text{ent}}\to\infty\) exactly reproduce the above behaviour of the entanglement properties when considering the TFD state
\[|\text{TFD}\rangle=\frac{1}{\sqrt{Z}}\sum_{n}e^{-\beta\frac{E_{n}}{2}}|n_{L}\rangle|n_{R}^{*}\rangle\quad\text{with}\quad Z=\sum_{n}e^{-\beta E_{n}}, \tag{25}\]
which is used to describe thermal systems. For \(\beta\to\infty\) (i.e. \(T\to 0\)), the superposition of energy eigenstates within (2.25) is dominated by the lowest energy, i.e. the ground state. Therefore, the low temperature limit reduces the TFD state to a product state with vanishing entanglement,
\[\lim_{\beta\to\infty}|\text{TFD}\rangle=|0_{L}\rangle|0_{R}\rangle\,. \tag{26}\]
In the opposite limit \(\beta\to 0\) (i.e. \(T\to\infty\)), all of the exponentials in the TFD state \(\sim e^{-\beta\frac{E_{n}}{2}}\) tend to 1. Therefore, in the high temperature limit the TFD state goes to a maximally entangled state
\[\lim_{\beta\to 0}|\text{TFD}\rangle=\frac{1}{\sqrt{\sum_{n}1}}\sum_{n}|n_{L}\rangle|n_{R}^{*}\rangle. \tag{27}\]
This is not a mere coincidence and has an explanation in terms of the geometric interpretation of entanglement analysed above. In calculating the Schmidt decomposition of (8), we have written the state in a way very close to the TFD state,
\[|\psi_{0}\rangle=\sqrt{\frac{1-\sin\alpha}{2}}|\!\uparrow\!\tilde{\uparrow}\rangle+\sqrt{\frac{1+\sin\alpha}{2}}|\!\downarrow\!\tilde{\downarrow}\rangle. \tag{28}\]
Expressing this Schmidt decomposed version of (8) by the entanglement temperature, we find
\[|\psi_{0}\rangle=\frac{1}{\sqrt{1+e^{-\beta_{\rm ent}2\mu_{B}B}}} \Big{[}|\!\downarrow\!\tilde{\downarrow}\rangle+e^{-\beta_{\rm ent}\mu_{B}B}|\! \uparrow\!\tilde{\uparrow}\rangle\Big{]}, \tag{29}\]
which is the TFD state for the two-spin system. This makes manifest the above discussed behaviour of the TFD state in the limits of vanishing and infinite temperature for the two-spin system of [14].
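The chain of relations (16), (24) and (29) can be illustrated numerically as follows; this is an added sketch in which the values of \(\alpha\) and \(\mu_{B}B\) are arbitrary examples and \(k_{B}=1\). Starting from \(\alpha\), one computes \(\Phi_{G}\) and \(\beta_{\text{ent}}\) and checks that the TFD-like amplitudes in (29) reproduce the Schmidt coefficients (9).

```python
# Illustration (not from the paper; alpha and muB_B are example values, k_B = 1) of
# eqs. (16), (24) and (29): compute Phi_G and beta_ent from alpha and check that the
# TFD-like amplitudes in (29) reproduce the Schmidt coefficients (9).
import numpy as np

alpha, muB_B = 0.7, 0.4
Phi_G = 2 * np.pi * np.sin(alpha)                                             # eq. (16)
beta_ent = np.log((2 * np.pi + Phi_G) / (2 * np.pi - Phi_G)) / (2 * muB_B)    # eq. (24)

h = beta_ent * muB_B                                                          # eq. (22), k_B = 1
norm = np.sqrt(1 + np.exp(-2 * h))
amp_down, amp_up = 1 / norm, np.exp(-h) / norm                                # amplitudes in eq. (29)

print(np.isclose(amp_down, np.sqrt((1 + np.sin(alpha)) / 2)))   # True: kappa_down of eq. (9)
print(np.isclose(amp_up, np.sqrt((1 - np.sin(alpha)) / 2)))     # True: kappa_up of eq. (9)
```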
### The Algebraic Perspective
The above results illustrate how geometric phases arise in bipartite quantum systems. We found that the geometric phases characterise their entanglement properties. In terms of entanglement thermodynamics, this is explained by the entanglement temperature being determined by the geometric phase.
We note that the SZK construction [24], as already mentioned in sec. 2.1, is valid for any dimension \(N\) of the bipartite quantum system in question, and thus also in the large \(N\) limit. Therefore the results of the preceding section also hold for large \(N\). This motivates us to consider also states of type II and type III vN algebras in relation to geometric phases within the SZK construction.
In particular, we combine the insight that maximally entangled states always have a vanishing geometric phase by the reasoning of [24] with the fact that, in the language of vN algebras, maximally entangled states can be used to consistently define a trace on the algebra (see e.g. [21]). In the following, we make this connection explicit by showing that only states with vanishing geometric phase can be used to consistently define a trace.
In our analysis, we will not only encounter the more familiar vN algebras of type I, but in particular also the more complicated versions of type II and type III. For ease of the reader unfamiliar with this topic, we have included a brief review of aspects of these algebras important for the following discussion in app. A. More details can be found in reviews such as [21; 22; 23]. For illustrative purposes, in sec. 2.2.1 we discuss the two-spin model from above w.r.t. its algebraic properties. Already at this stage, we will see the relation between the trace on the algebra and the geometric phase. In sec. 2.2.2, we then take the large \(N\) limit and discuss collections of infinitely many spins. In this discussion, we will encounter the algebras of type II and type III.
#### 2.2.1 Illustrative Example: Algebraic Perspective for the Two-spin System
We will now discuss how the value of the geometric phase affects the possibility of consistently defining a trace on the algebra. While our ultimate goal will be to discuss this for algebras of type II and type III, for illustrative purposes it is useful to first make use of the two-spin system again, discussed in sec. 2.1.1. Of course, in this case we are not in a type II or type III scenario: each single-spin Hilbert space is 2-dimensional, so the algebra acting on it is of type I\({}_{2}\). The combined algebra, acting on the two-spin Hilbert space, is of type I\({}_{4}\). The two type I\({}_{2}\) algebras commute with each other. Since we are in a type I setting, we know that by definition there must exist a trace on both single
spin algebras \({\rm I}_{2}\). However, the precise definition will turn out to resemble the cases of type II and type III algebras in close analogy, as we show in the following.
As briefly discussed in the introduction in (1), a trace on an algebra \({\cal A}\) is defined as a linear functional that is cyclic in its argument. Clearly, an expectation value in some state \(|\phi\rangle\) defines a linear functional \(f(\cdot)=\langle\phi|\cdot|\phi\rangle\) in that it satisfies
\[f(ca) =cf(a)\quad\text{for}\quad c\in\mathds{C},a\in{\cal A}\quad\text {and} \tag{30}\] \[f(a+b) =f(a)+f(b)\quad\text{for}\quad a,b\in{\cal A}. \tag{31}\]
However, the cyclic property does not hold for generic states \(|\phi\rangle\). We may test this explicitly for the two-spin system (7) by defining a linear functional \(f_{0}\) as the expectation value in the ground state (8),
\[f_{0}(a_{L})=\langle\psi_{0}|a_{L}|\psi_{0}\rangle. \tag{32}\]
Here, \(a_{L}\) is any operator of the algebra of the first spin.11 We borrowed the subscript \(L\) in analogy to holographic systems to which we make contact in later sections.
Footnote 11: In what follows, the same reasoning also applies for operators of the algebra of the second spin. We only choose operators of the first spin for concreteness and ease of notation.
Since the algebra of operators of the first spin is simply \({\rm I}_{2}\), it can be represented by the matrix algebra of Hermitian \(2\times 2\) matrices. The Pauli matrices in combination with the identity matrix provide a convenient basis to parametrise any such matrix. Correspondingly, we may parametrise arbitrary operators as
\[a_{L}=a_{L,n}\sigma_{n},\quad b_{L}=b_{L,n}\sigma_{n},\quad n\in\{0,x,y,z\}, \quad a_{L,n},b_{L,n}\in\mathds{R}, \tag{33}\]
where we denote \(\sigma_{0}=\mathds{1}_{2}\). With this parameterisation, we can test whether \(f_{0}\) defined in (32) is cyclic in its argument by evaluating it on the commutator of \(a_{L}\) and \(b_{L}\). Explicitly, we find
\[f_{0}([a_{L},b_{L}])=2{\rm i}\sin\alpha\big{(}a_{L,y}b_{L,x}-a_{L,x}b_{L,y} \big{)}. \tag{34}\]
If this linear functional is to define a trace, the right hand side has to vanish for any two arbitrary operators \(a_{L}\) and \(b_{L}\). Demanding that the bracket \(a_{L,y}b_{L,x}-a_{L,x}b_{L,y}\) vanishes is therefore not sufficient, since it does not do so for arbitrary operators. We do however have a different way of making the right hand side vanish, namely by setting \(\alpha=0\). Thinking back to the discussion in sec. 2.1.1, this means that the state \(|\psi_{0}\rangle\) is maximally entangled. Explicitly, in the limit \(\alpha\to 0\), the ground state of the two-spin system (8) reduces to one of the Bell states,
\[\lim_{\alpha\to 0}|\psi_{0}\rangle=\frac{|\uparrow\downarrow\rangle-| \downarrow\uparrow\rangle}{\sqrt{2}}\,. \tag{35}\]
So we find that a trace on the algebra \({\rm I}_{2}\) can only be consistently defined by a state which has maximal entanglement. This argument can be generalised to \({\rm I}_{d}\) and even other types of algebras, as we will discuss shortly. First however, note that due to the normalisation of (8), the trace we have defined by \(f_{0}\) has the somewhat unusual property that the
trace of the identity is equal to 1, \(f_{0}(\mathds{1}_{2})=1\), while usually one normalises the trace such that the trace of the identity is equal to the dimension. In fact, for finite dimensional systems, defining a linear functional cyclic in its argument actually only defines an object proportional to the (usual) trace. In the present case of finite dimensional matrix algebras, the proportionality factor is easily fixed by demanding that \(\text{tr}(\mathds{1}_{d})=d\). We may achieve this by a simple rescaling of \(f_{0}\),
\[\text{tr}_{\mathds{1}_{2}}(\cdot)=2\lim_{\alpha\to 0}\langle\psi_{0}|\cdot|\psi_{0} \rangle\,. \tag{36}\]
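The statements around (34)-(36) are easily verified numerically. The following sketch, an added illustration in which the operator coefficients are random example values, evaluates \(f_{0}\) on a commutator of two operators of the first spin and confirms that the result matches (34) and vanishes in the maximally entangled limit (35).

```python
# Numerical check (illustration, not from the paper; the operator coefficients are random
# example values) of eq. (34) and of the limit (35): f_0 evaluated on a commutator of 'left'
# operators matches the closed form and vanishes for the maximally entangled state.
import numpy as np

def ground_state(alpha):
    # eq. (8) in the basis {|uu>, |ud>, |du>, |dd>}
    c_ud = -(np.sin(alpha / 2) - np.cos(alpha / 2)) / np.sqrt(2)
    c_du = -(np.sin(alpha / 2) + np.cos(alpha / 2)) / np.sqrt(2)
    return np.array([0, c_ud, c_du, 0], dtype=complex)

pauli = {'0': np.eye(2, dtype=complex),
         'x': np.array([[0, 1], [1, 0]], dtype=complex),
         'y': np.array([[0, -1j], [1j, 0]]),
         'z': np.diag([1.0, -1.0]).astype(complex)}

rng = np.random.default_rng(1)
a_c = dict(zip('0xyz', rng.normal(size=4)))                     # coefficients a_{L,n}, eq. (33)
b_c = dict(zip('0xyz', rng.normal(size=4)))
a_L = sum(a_c[n] * np.kron(pauli[n], pauli['0']) for n in '0xyz')
b_L = sum(b_c[n] * np.kron(pauli[n], pauli['0']) for n in '0xyz')
comm = a_L @ b_L - b_L @ a_L

alpha = 0.7
psi = ground_state(alpha)
closed_form = 2j * np.sin(alpha) * (a_c['y'] * b_c['x'] - a_c['x'] * b_c['y'])
print(np.isclose(psi.conj() @ comm @ psi, closed_form))         # True: eq. (34)

psi_bell = ground_state(0.0)                                    # Bell state, eq. (35)
print(np.isclose(psi_bell.conj() @ comm @ psi_bell, 0))         # True: f_0 is cyclic for alpha = 0
```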
As mentioned before, the above reasoning can be generalised straightforwardly to higher dimensional systems, such as two qudits. Each of the qudits has a Hilbert space on which an algebra of type \(\mathds{I}_{d}\) acts, with the two copies of \(\mathds{I}_{d}\) commuting. Any state in the combined Hilbert space can be represented in Schmidt decomposition
\[|\psi_{d}\rangle=\sum_{i=1}^{d}\lambda_{i}|i_{L},i_{R}\rangle\,, \tag{37}\]
and any operator of the 'left' qudit algebra as
\[F_{d}=F_{d,L}\otimes\mathds{1}_{R}=\sum_{m,n,k=1}^{d}f_{mn}|m_{L},k_{R} \rangle\langle n_{L},k_{R}|\,, \tag{38}\]
with \(f_{mn}=f_{nm}^{*}\) to ensure that \(F_{d}\) is Hermitian. Equivalently, in the spirit of the above, \(F_{d,L}\) may be represented as \(F_{d,L}=\tilde{f}_{0}\mathds{1}_{d}+\sum_{i=1}^{d^{2}-1}\tilde{f}_{i}\gamma_{i}\), where the \(\gamma_{i}\) span the Lie algebra of \(\text{SU}(d)\). Using (38), a straightforward calculation then shows that
\[f_{d}(F_{d}) =\langle\psi_{d}|F_{d}|\psi_{d}\rangle=\sum_{i=1}^{d}\lambda_{i}^ {2}f_{ii}\quad\text{and} \tag{39}\] \[f_{d}([F_{d},G_{d}]) =\langle\psi_{d}|[F_{d},G_{d}]|\psi_{d}\rangle=\sum_{i,j=1}^{d} \lambda_{i}^{2}(f_{ij}g_{ji}-f_{ji}g_{ij}), \tag{40}\]
where \(G_{d}\) is defined analogously to \(F_{d}\). As long as \(\lambda_{i}\neq\lambda_{j}\) for at least one pair of Schmidt coefficients, \(f_{d}\) does not satisfy the conditions required to consistently define a trace. In particular, \(f_{d}\) is not cyclic in its argument. If however \(\lambda_{i}=\lambda_{j}=\frac{1}{\sqrt{d}}\) for every \(i,j\), i.e. if \(|\psi_{d}\rangle\) is maximally entangled,
\[\langle\psi_{d}|F_{d}|\psi_{d}\rangle =\frac{1}{d}\sum_{i=1}^{d}f_{ii}\quad\text{and} \tag{41}\] \[\langle\psi_{d}|[F_{d},G_{d}]|\psi_{d}\rangle =0. \tag{42}\]
As before in (36), by rescaling with \(d\), the expectation value in the state \(|\psi_{d}\rangle\) with \(\lambda_{i}=\frac{1}{\sqrt{d}}\) for all \(i=1,...,d\) matches the usual notion of the trace in that it satisfies \(f_{d}(\mathds{1}_{d})=d\).
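The same check can be carried out for qudits. The sketch below, an added illustration for \(d=4\) with randomly chosen Hermitian operators and arbitrary example Schmidt coefficients, confirms that (40) is generically non-vanishing, while it vanishes for equal Schmidt coefficients as in (42).

```python
# Illustration (not from the paper; d = 4, the operators and Schmidt coefficients are example
# values) of eqs. (37)-(42): the functional f_d, evaluated on a commutator of 'left' operators,
# vanishes only when all Schmidt coefficients are equal, i.e. for maximal entanglement.
import numpy as np

d = 4
rng = np.random.default_rng(2)

def random_hermitian():
    m = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    return m + m.conj().T

def schmidt_state(lams):
    # |psi_d> = sum_i lambda_i |i_L, i_R>, eq. (37), with normalised coefficients
    lams = np.asarray(lams, dtype=float) / np.linalg.norm(lams)
    psi = np.zeros(d * d, dtype=complex)
    for i, lam in enumerate(lams):
        psi[i * d + i] = lam
    return psi

F = np.kron(random_hermitian(), np.eye(d))     # F_d = F_{d,L} tensored with 1_R, eq. (38)
G = np.kron(random_hermitian(), np.eye(d))
comm = F @ G - G @ F

psi_generic = schmidt_state([0.7, 0.5, 0.3, 0.1])
psi_maximal = schmidt_state(np.ones(d))

print(np.isclose(psi_generic.conj() @ comm @ psi_generic, 0))   # False: f_d is not cyclic, eq. (40)
print(np.isclose(psi_maximal.conj() @ comm @ psi_maximal, 0))   # True: trace condition, eq. (42)
```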
We have discussed above in (34) how the entanglement properties of a state in a two-spin system are directly related to the possibility of defining a trace on the algebra,
in that the state must be maximally entangled. By our above analysis in sec. 2.1, these states have vanishing geometric phase. To make this explicit, we rewrite the result (34) for \(f_{0}([a_{L},b_{L}])\) in terms of the geometric phase. Inverting (16), we find
\[f_{0}([a_{L},b_{L}])\propto\Phi_{G}\,. \tag{43}\]
Since only the maximally entangled states define a trace and those states belong to an orbit with vanishing symplectic volume following our discussion at the end of sec. 2.1.1, the trace can only be defined using states with vanishing geometric phase. This is consistent with our generalisation to qudits with type I\({}_{d}\) algebras since, as discussed in sec. 2.1, the orbit of states with maximal entanglement is always a Lagrangian submanifold of the projective Hilbert space and thereby always has vanishing symplectic volume.
Whether \(f_{0}\) consistently defines a trace or not can also be understood in terms of the entanglement thermodynamics and in particular the value of the entanglement temperature derived in sec. 2.1.2. In fact, (34) can be expressed as
\[f_{0}(a_{L}b_{L}-b_{L}a_{L})\propto\big{(}1-e^{-\beta_{\rm ent}\mu_{B}B}\big{)}, \tag{44}\]
using (24). In the case of maximal entanglement where \(\beta_{\rm ent}=0\), the r.h.s. vanishes, consistent with our above result. The r.h.s. of (44) is reminiscent of what happens when discussing the trace for type II and type III algebras by use of the thermofield double state for infinite dimensional spin systems, see e.g. [22]. As mentioned before (44) and discussed in detail in sec. 2.1.2, the entanglement temperature and the corresponding TFD state description (29) of the two-spin ground state (8) are in one-to-one correspondence with the geometric phase (16) showing up on the r.h.s. of (43). Motivated by this similarity in structure, we expect a result similar to (43) when discussing algebras of type II and type III explicitly in sec. 2.2.2.
Before we generalise the above result to algebras of type II and type III, we give an at least intuitive reason why the value of the geometric phase appears when discussing the trace, following the results on the complex quantum geometric tensor of [41; 42]. In quantum mechanical state space, the complex quantum geometric tensor is used to define the metric and the symplectic form on state space [41]. The metric and the symplectic form are given as the real and imaginary part of the complex quantum geometric tensor, respectively. From the symplectic form, the geometric phase is found by integration. Therefore, the symplectic form is identified with the Berry curvature. In [42], the notion of this tensor was generalised to field theory by a path integral approach. By considering small variations of parameters \(\lambda_{a}\) that the Lagrangian depends on, the authors define deformation operators \(O_{a}\). The Berry curvature is then (schematically) found to be given by an expectation value of the commutator of the deformation operators
\[F_{ab}\propto\langle\Omega|[O_{a},O_{b}]|\Omega\rangle, \tag{45}\]
where \(|\Omega\rangle\) is the ground state.12 By our above results, if \(|\Omega\rangle\) is such that it defines a trace,
the Berry curvature (and thereby also the geometric phase) has to vanish by the cyclicity property of the trace.
Of course, the approach of [42] is based on field theory which typically has an operator algebra of type III\({}_{1}\), as mentioned in the brief review in app. A. However, as we will show in the following, the relation between the existence of a trace and the value of the geometric phase has a straightforward generalisation to algebras of type II and type III.
#### 2.2.2 Generalisation to Infinitely Many Spins
We are now ready to generalise our above results to two infinite collections of qubits, as considered in the original construction of the (hyperfinite)13 type II and type III vN algebra factors [43; 44; 45].
Footnote 13: Hyperfinite refers to the fact that these are constructed as limits of finite dimensional systems. vN algebras appearing in physics can typically be treated as such a limit.
For infinite-dimensional systems, the partition function \(Z\) generically diverges in the thermodynamic limit since it is a sum of infinitely many positive numbers. Correspondingly, the thermal density matrix \(\rho_{\text{th}}=\frac{e^{-\beta H}}{Z}\) and the underlying Hilbert space are ill-defined. Nevertheless, a proper treatment of physical systems in the thermodynamic limit may still be performed by involving the thermofield double state [46]. This state is essentially a purification of the thermal system, achieved by doubling the degrees of freedom and introducing a second copy14 of the original system. With this state, it is possible to obtain a well-defined Hilbert space by following the GNS construction [47; 48]. In this procedure, the TFD state takes the role of the cyclic and separating vector. This in particular also works for the TFD state as the dual description of the eternal black hole in AdS spacetime [6] used to define the operator algebras acting on the Hilbert space of the eternal black hole [17; 18; 19].
Footnote 14: To be precise, the two copies are related by time reversal. However in most discussions of the TFD state, the system at hand is time reversal symmetric. In what follows, our systems will possess the same symmetry.
We now analyse the construction of the hyperfinite factors of type II and type III w.r.t. their geometric phases. For more details on the construction itself, a review may be found in [21]. The cyclic and separating vectors used in defining these algebras can be interpreted as the TFD state for two copies of an infinite collection of spins with the temperature given as the entanglement temperature, in the same sense as the TFD state in sec. 2.1.2 for the two-spin system. As before, we will refer to these copies as 'left' and 'right' in the following. Within state vectors, we always write the left copy as the first entry in \(\ket{\cdot}\).
We start by considering two spins, one each from the left and right copy, to be in a state with non-vanishing entanglement. A convenient way to denote this state is given by
\[\ket{\lambda}=\frac{1}{\sqrt{1+\lambda}}\big{(}\ket{\downarrow\downarrow}+ \sqrt{\lambda}\ket{\uparrow\uparrow}\big{)}\quad\text{with}\quad 0< \lambda\leq 1, \tag{46}\]
where the value of \(\lambda\) is in one-to-one correspondence with the amount of entanglement between the spins: \(\lambda=0\) corresponds to vanishing entanglement with \(\ket{\lambda=0}\) a product state and
\(\lambda=1\) corresponds to maximal entanglement with \(|\lambda=1\rangle\) a Bell state. Since we have two collections of infinitely many spins, in generalising the above state we consider the left and right spins to be pairwise entangled in the same way as (46). Each such qubit pair is in a state \(|\lambda_{n}\rangle\), with the index \(n\) indicating the state of the \(n\)th spin pair. To keep the discussion general, we consider each of the spin pairs to share a different non-vanishing amount of entanglement. This is achieved by imposing that \(0\neq\lambda_{n}\). Combining all of these pairwise entangled states, the full state is given by the tensor product of all of them,
\[|\Psi\rangle=\lim_{N\to\infty}\bigotimes_{n=1}^{N}|\lambda_{n}\rangle=\lim_{N \to\infty}\bigotimes_{n=1}^{N}\frac{1}{\sqrt{1+\lambda_{n}}}\big{(}|\!\downarrow \downarrow\rangle_{n}+\sqrt{\lambda_{n}}|\!\uparrow\uparrow\rangle_{n}\big{)}. \tag{47}\]
This state provides a cyclic and separating vector that may be used to construct, for generic values of \(\lambda_{n}<1\), algebras of type III and corresponding Hilbert spaces on which the algebras act. As first shown in [45] and mentioned in app. A, which subclass of type III is constructed depends on the convergence properties of the sequence \(\lambda_{n}\). If the sequence converges to some value \(0\leq\lambda^{*}<1\), the algebra is of type III\({}_{\lambda^{*}}\). If it converges to \(0\) sufficiently fast, the algebra is actually not of type III, but of type I\({}_{\infty}\). The reason is that for sufficiently fast convergence, the partition function is actually finite and the Hilbert space for a single copy of the spin collection can be defined. As explained in more detail in [22], such a sufficiently fast convergence is present for instance if \(\lambda_{n}\sim n^{-\gamma}\), \(\gamma>1\), for large \(n\). If the sequence does not converge but rather alternates between (at least) two values \(0<\lambda^{*},\tilde{\lambda}^{*}<1\), the algebra is of type III\({}_{1}\).
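As a minimal numerical illustration of this convergence criterion (our own sketch, not part of the construction itself, assuming that each spin pair contributes Boltzmann weights \(\{1,\lambda_{n}\}\) so that an effective single-copy partition function is \(Z=\prod_{n}(1+\lambda_{n})\)), one may check how the truncated \(\ln Z\) behaves for \(\lambda_{n}=n^{-\gamma}\):

```python
import numpy as np

# Minimal sketch (our own illustration): assume each spin pair contributes
# Boltzmann weights {1, lambda_n}, so an effective single-copy partition
# function is Z = prod_n (1 + lambda_n). For lambda_n = n^{-gamma} its
# logarithm stays finite for gamma > 1 and grows without bound for gamma <= 1.
for gamma in [0.5, 1.0, 2.0]:
    n = np.arange(1, 200001)
    log_Z = np.sum(np.log1p(n ** (-gamma)))
    print(f"gamma={gamma}: ln Z truncated at 2*10^5 pairs = {log_Z:.2f}")
```

Only for \(\gamma>1\) does the truncated sum saturate as more pairs are included, consistent with the type I\({}_{\infty}\) regime described above.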
In view of discussing the existence of a trace, we point out that for the state (46), a geometric phase and an entanglement temperature can be defined in the same way as discussed in sec. 2.1.1 and sec. 2.1.2, respectively. The state (46) is already written in the Schmidt decomposition. The Schmidt coefficients can be read off directly and are given by
\[\kappa_{\uparrow,n}=\frac{\sqrt{\lambda_{n}}}{\sqrt{1+\lambda_{n}}}\quad \text{and}\quad\kappa_{\downarrow,n}=\frac{1}{\sqrt{1+\lambda_{n}}} \tag{48}\]
for each of the spin pairs. As shown in sec. 2.1.2, the entanglement temperature follows from the Schmidt coefficients as
\[\frac{\mu_{B}B_{n}}{k_{B}T_{\text{ent}}}=h_{n}=\frac{1}{2}\ln\frac{1}{\lambda_ {n}}=\frac{1}{2}\ln\frac{2\pi+\Phi_{G}^{(n)}}{2\pi-\Phi_{G}^{(n)}}\quad\to \quad\beta_{\text{ent}}=\frac{k_{B}}{2\mu_{B}B_{n}}\ln\frac{2\pi+\Phi_{G}^{( n)}}{2\pi-\Phi_{G}^{(n)}}. \tag{49}\]
We choose to let the magnetic energy \(\mu_{B}B_{n}\) vary for every spin pair, while \(T_{\text{ent}}\) does not depend on \(n\). While this choice does not affect the discussion of the type of the corresponding algebra, it does yield a more intuitive physical picture. All of the spin pairs feel the same 'temperature' while being under the influence of different values of the magnetic field, in close analogy to the TFD state for infinitely extended spin chains.
To give a concrete physical example, we may think of infinitely many copies of the system defined by the Hamiltonian (7) analysed in detail in sec. 2.1.1. The above discussion goes through in exactly the same way by replacing \(|\lambda_{n}\rangle\) with the ground state (8). Allowing each of the spins of (7) to be under the influence of a parametrically different magnetic field \(B_{n}\) as in (49), the parameters map as
\[\lambda_{n}=\frac{1-\sin\alpha_{n}}{1+\sin\alpha_{n}}\quad\text{with }\quad\tan\alpha_{n}=\frac{2\mu_{B}B_{n}}{J}. \tag{50}\]
Following the same steps for computing the symplectic volume as in sec. 2.1.1, the corresponding geometric phases for the spin pairs are given by
\[\Phi_{G}^{(n)}=2\pi\frac{1-\lambda_{n}}{1+\lambda_{n}}. \tag{51}\]
Hence we may write (46) for every spin pair using the corresponding geometric phase only,
\[|\lambda_{n}\rangle=\frac{1}{\sqrt{2}}\Bigg{[}\sqrt{1-\frac{\Phi_{G}^{(n)}}{2\pi}}|\!\uparrow\uparrow\rangle_{n}+\sqrt{1+\frac{\Phi_{G}^{(n)}}{2\pi}}|\!\downarrow\downarrow\rangle_{n}\Bigg{]}. \tag{52}\]
This will be more convenient below.
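For concreteness, the following minimal numerical sketch (our own illustration; the values of \(\mu_{B}B_{n}\) and \(J\) are chosen arbitrarily) verifies that the parameter map (50), (51), the rewriting (52) and the entanglement-temperature relation (49) are mutually consistent.

```python
import numpy as np

# Minimal consistency check (our own illustration) of (49)-(52) for a few
# spin pairs with arbitrarily chosen magnetic energies mu_B * B_n and coupling J.
J = 1.0
muB_B = np.array([0.2, 0.5, 1.3])                  # assumed sample values of mu_B * B_n

alpha = np.arctan(2 * muB_B / J)                   # tan(alpha_n) = 2 mu_B B_n / J
lam = (1 - np.sin(alpha)) / (1 + np.sin(alpha))    # eq. (50)
phi_G = 2 * np.pi * (1 - lam) / (1 + lam)          # eq. (51)

# coefficients of |dd>_n and |uu>_n in (46) ...
c_down = 1 / np.sqrt(1 + lam)
c_up = np.sqrt(lam) / np.sqrt(1 + lam)

# ... agree with the rewriting (52) in terms of the geometric phase
assert np.allclose(c_down, np.sqrt(1 + phi_G / (2 * np.pi)) / np.sqrt(2))
assert np.allclose(c_up, np.sqrt(1 - phi_G / (2 * np.pi)) / np.sqrt(2))

# entanglement-temperature relation (49): h_n = (1/2) ln(1/lambda_n)
h = 0.5 * np.log(1 / lam)
assert np.allclose(h, 0.5 * np.log((2 * np.pi + phi_G) / (2 * np.pi - phi_G)))
print(lam, phi_G / np.pi)
```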
Next we analyse the existence of a trace on the algebra. As for the two-spin system in sec. 2.2.1, we first need to define a linear functional. In analogy to that case, we do so utilising the state (47),
\[\mathcal{F}(a_{L})=\langle\Psi|a_{L}|\Psi\rangle. \tag{53}\]
Here \(a_{L}\) is, as before, any operator acting on the left system of spins. For the infinite collection of spins in the present discussion, such operators may be thought of as finite polynomials in Pauli matrices. For now, we assume that \(\lambda_{n}<1\). For generic values of \(\lambda_{n}\), however, the functional defined above is not cyclic for arbitrary operators \(a_{L},b_{L}\). To make this explicit, we proceed analogously to sec. 2.2.1. We parametrise operators acting on a single spin pair by Pauli matrices as in (33). Then we take a tensor product of finitely many (for concreteness, \(k\)) of such operators to obtain operators acting on \(k\) spin pairs. Evaluating the commutator of two such operators in the linear functional (53) results in
\[\mathcal{F}\big{(}[a_{L},b_{L}]\big{)}=\sum_{n=1}^{k}\Phi_{G}^{(n)}g_{1}^{(n)}(a_{L},b_{L})+\sum_{n,m=1}^{k}\Phi_{G}^{(n)}\Phi_{G}^{(m)}g_{2}^{(n,m)}(a_{L},b_{L})+\mathcal{O}\Big{[}\big{(}\Phi_{G}^{(n)}\big{)}^{3}\Big{]}, \tag{54}\]
where the \(g_{i}^{(n)}\) are functions only of the operators \(a_{L},b_{L}\) that do not vanish in general, analogously to the bracket on the r.h.s. of (34). Fixing \(k\) to some value, these functions may be calculated straightforwardly. More important than their explicit form, however, is the observation that the linear functional evaluated on the commutator receives contributions from each of the individual geometric phases \(\Phi_{G}^{(n)}\). As indicated, various powers of the geometric phases contribute. If \(a_{L}\) and \(b_{L}\) consist of \(k\) single spin pair operators, the highest power will be \(\big{(}\Phi_{G}^{(n)}\big{)}^{k}\). We highlight in particular that, while different powers of \(\Phi_{G}^{(n)}\) contribute, there is no term independent of all of the \(\Phi_{G}^{(n)}\). Therefore, \(\mathcal{F}\) evaluated on
the commutator vanishes if and only if all of the geometric phases vanish. As we discussed above, for generic \(0<\lambda_{n}<1\), which is necessary to discuss algebras of type III, the geometric phases are non-zero, cf. (51). We have therefore found a geometric reason for the non-existence of the trace on type III algebras.
So far we did not consider the case where some of the \(\lambda_{n}\) are equal to \(1\). If \(\lambda_{n}=1\) holds for all but finitely many \(n\), the situation changes significantly. Since in this instance (47) contains only finitely many states \(|\lambda_{n}\rangle\) with \(\lambda_{n}\neq 1\), we may use the action of the algebra to transform \(|\Psi\rangle\) into a state where \(\lambda_{n}=1\) holds for all \(n\). The resulting state describes two collections of infinitely many spins for which each pair of spins is maximally entangled. Moreover, considering (51), each of the spin pairs has vanishing geometric phase. By our result (54), this shows that the trace can be consistently defined on the algebra, since each of the individual contributions vanishes. We are thus no longer in a setting which leads to defining an algebra of type III. On the other hand, we are still in a setting of two collections of infinitely many spins with an infinite amount of shared entanglement between the two spin collections. Therefore, the algebra acting on the Hilbert space defined from (47) with \(\lambda_{n}=1\) for all \(n\) is of type II (cf. the brief review in app. A). The subclass of the type II algebra is found by observing that the trace is defined for every possible operator of the algebra. In particular, due to the normalisation of the state (47), the trace of the identity operator is defined and equals 1. Therefore the algebra is of type II\({}_{1}\).
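The mechanism underlying this result can be made explicit for a single spin pair. In the minimal sketch below (our own illustration), the reduced density matrix of the left spin is \(\rho_{L}=\tfrac{1}{2}\mathds{1}-\tfrac{\Phi_{G}}{4\pi}\sigma_{z}\), so the functional evaluated on a commutator reduces to \(-\tfrac{\Phi_{G}}{4\pi}\,\mathrm{Tr}\!\left(\sigma_{z}[a_{L},b_{L}]\right)\) and vanishes for arbitrary \(a_{L},b_{L}\) precisely when \(\Phi_{G}=0\), i.e. at maximal entanglement.

```python
import numpy as np

# Minimal sketch (our own illustration, single spin pair): the functional
# F(x) = <lambda| x (x) 1 |lambda> evaluated on a commutator [a_L, b_L] is
# proportional to the geometric phase Phi_G, hence cyclic iff Phi_G = 0.
rng = np.random.default_rng(0)
sz = np.diag([1.0, -1.0])

for lam in [0.3, 0.7, 1.0]:
    rho_L = np.diag([lam, 1.0]) / (1.0 + lam)   # reduced density matrix, basis {up, down}
    phi_G = 2 * np.pi * (1 - lam) / (1 + lam)
    a = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
    b = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
    comm = a @ b - b @ a
    F_comm = np.trace(rho_L @ comm)
    assert np.allclose(F_comm, -phi_G / (4 * np.pi) * np.trace(sz @ comm))
    print(f"lambda={lam}: Phi_G={phi_G:.3f}, |F([a,b])|={abs(F_comm):.3e}")
# For lambda = 1 (maximal entanglement, Phi_G = 0) the commutator expectation
# vanishes for arbitrary a_L, b_L, so F is cyclic and defines a trace.
```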
### Realisation in Holography: the Eternal Black Hole
In the past sections we have shown that states with non-vanishing geometric phases cannot be used to define a trace on the operator algebra. This in particular enabled us to relate the non-existence of the trace for type III algebras to the fact that no state of the Hilbert space of a type III algebra has trivial geometric phase since all these states are non-maximally entangled. In the following we discuss how this is realised in the holographic setting of the eternal black hole. We start by briefly reviewing the results for the corresponding operator algebras of [17; 18] and [19]. We then explain how the results of [14; 15] fit into this algebraic description, in particular w.r.t. the relation between the geometric phase and the type of the operator algebra.
#### 2.3.1 Algebraic Setup for the Eternal Black Hole
Within the AdS/CFT correspondence, the eternal black hole is dual to two CFTs entangled in the TFD state (25), each of the CFTs located on the left and right asymptotic boundaries of the spacetime [6]. Considering the single-trace operators of one of the CFTs (say, the left one), it was shown in [17; 18] that in the large \(N\) limit, the single-trace operators describe a generalised free field theory. The corresponding operator algebra \(\mathcal{A}_{L,0}\) is of type III\({}_{1}\). The commutant of this algebra \(\mathcal{A}_{R,0}\) is again of type III\({}_{1}\) and can be understood as the operator algebra of single-trace operators of the right CFT. These two algebras are factors: their centre is trivial, i.e. only multiples of the identity commute with all of their elements.
In this construction, the CFT Hamiltonian cannot be included in \(\mathcal{A}_{L,0}\). Both the expectation value as well as the two-point function of the CFT Hamiltonian scale with
\(N^{2}\) and diverge in the large \(N\) limit. While the divergence of the expectation value may be removed by defining \(H_{L}^{\prime}=H_{L}-\langle H_{L}\rangle\), the divergence of the two-point function is still present for \(H_{L}^{\prime}\). Therefore this operator cannot be part of the algebra. To define an operator which exists in the large \(N\) limit, dividing \(H_{L}^{\prime}\) by \(N\) is sufficient, \(U=H_{L}^{\prime}/N\), since this precisely cancels the \(N^{2}\) scaling of the two-point function. However, the operator \(U\) is central to \(\mathcal{A}_{L,0}\) for \(N\to\infty\),
\[[U,\mathcal{O}]=\frac{1}{N}[H_{L}^{\prime},\mathcal{O}]=-\frac{\mathrm{i}}{N} \partial_{t}\mathcal{O}\stackrel{{ N\to\infty}}{{\to}}0 \tag{55}\]
for any operator \(\mathcal{O}\in\mathcal{A}_{L,0}\). \(U\) may be included into the algebra by a tensor product \(\mathcal{A}_{L}=\mathcal{A}_{L,0}\otimes\mathcal{A}_{U}\), where \(\mathcal{A}_{U}\) is the algebra of bounded functions of \(U\). The analogous argument can be run for \(H_{R}^{\prime}\), with operators chosen from \(\mathcal{A}_{R,0}\). Interestingly, to construct \(\mathcal{A}_{R}\) the same operator \(U\) has to be used. Therefore, the centre of both the left and right algebras \(\mathcal{A}_{L}\) and \(\mathcal{A}_{R}\) is shared. Physically, this can be understood by the fact that the mass of the black hole has to be the same when measured from both sides. In this sense, from the bulk perspective, the mass corresponds to a shared degree of freedom. The operator \(U\) is defined by a rescaling of the Hamiltonian, the latter one measuring the energy (i.e. the mass).
The shared degree of freedom can also be understood by the fact that the difference of the Hamiltonians \(h=H_{L}-H_{R}\) is a well-defined operator in the large \(N\) limit. In particular, \(h\) annihilates the TFD state (25). In the bulk dual, this corresponds to an isometry of the bulk solution under evolution by the bulk dual of \(h\). This isometry also becomes important when defining a geometric phase for time translations for the black hole as in [14], whose existence is in fact intimately related to the non-trivial centre of the algebra. Before discussing this in detail in the next section, we will first complete reviewing the algebras acting on the Hilbert space of the eternal black hole by discussing \(1/N\) corrections [19].
The above discussion is valid in the large \(N\) limit. However, taking into account \(1/N\) corrections, the right hand side of (55) no longer vanishes. Therefore, \(U\) does not define a central operator. To properly include it into the algebra, it was observed that the previous definition of \(U\) also receives \(1/N\) corrections. These corrections reflect the fact that \([U,\mathcal{O}]\neq 0\) once terms proportional to \(1/N\) are included. The resulting algebra is then defined by taking the crossed product of \(\mathcal{A}_{L,0}\) with the algebra of bounded functions of \(\hat{h}+X\), where \(X=\beta NU\) and \(\hat{h}\) corresponds to the bulk dual of \(h\). The crossed product in \(\mathcal{A}_{L}=\mathcal{A}_{L,0}\rtimes\mathcal{A}_{\hat{h}+X}\) indicates that the two pieces composing \(\mathcal{A}_{L}\) do not commute.
As shown in [19], \(\mathcal{A}_{\hat{h}+X}\) can be understood as the modular automorphism group of the type III\({}_{1}\) algebra \(\mathcal{A}_{L,0}\). Taking the crossed product of a type III\({}_{1}\) algebra with its modular automorphism group yields a factor of type II\({}_{\infty}\)[49; 50; 51]. Since the resulting algebra after taking the crossed product is of type II, it allows for defining a trace. Moreover, since it is a factor, the centre is now trivial.
#### 2.3.2 Topology of Phase Space and Centre of the Algebra
In the preceding section we have briefly reviewed which operator algebras are found for the eternal black hole. In particular, there is a transition between a type III algebra, present in
the large \(N\) limit, and a type II algebra, obtained by including \(1/N\) corrections to the large \(N\) limit. In the following we discuss how this change of algebra is related to the geometric phase defined for the AdS eternal black hole in [14].
In the eternal black hole spacetime, there is no globally defined time-like Killing vector. This is due to the fact that a time-like Killing vector defined locally switches sign when passing the horizon [52]. Therefore, time is not defined globally for the eternal black hole. Rather, there are two time coordinates \(t_{L}\) and \(t_{R}\) associated to each of the boundaries. A natural choice for the relation between these times at the boundary is to simply identify them, \(t_{L}=t_{R}\). However, since time is not defined globally, this identification is not guaranteed to hold deep in the bulk. In particular, at the horizon a different identification may be imposed, \(t_{L}=2\delta-t_{R}\), where \(\delta\) is the relative offset between the left and right times. The offset \(\delta\) is locally invisible. Its value can only be determined by a non-local measurement. However, inserting \(t_{L}=t_{R}\) into \(t_{L}=2\delta-t_{R}\) shows that \(\delta=t_{L}=t_{R}\) can be used to describe time evolution on both boundaries.
In fact, \(\delta\) corresponds to a degree of freedom in the bulk. This can be seen as follows. As mentioned in the introduction, the bulk Hilbert space may be obtained by defining a fibre bundle over the space of classical solutions \(\mathcal{G}_{M}\), using insights from AdS/CFT. The Hilbert space is found by analysing the asymptotic symmetries of the bulk spacetime, described by groups \(G_{L}\) and \(G_{R}\). These groups typically include time translations as well as rotations. The full asymptotic symmetry group then follows as \(G_{L}\times G_{R}\). To obtain \(\mathcal{G}_{M}\), this product has to be quotiented by the isometries of the bulk spacetime. The isometries can be understood as trivial bulk diffeomorphisms, i.e. diffeomorphisms that do not change the state in the CFT. In the CFT picture, isometries are reflected by the fact that the CFT state is annihilated by the difference of the corresponding asymptotic charges \(Q_{L}\) and \(Q_{R}\). We already mentioned above that the TFD state is annihilated by the difference of the boundary Hamiltonians \(h\). This corresponds to the bulk isometry for trivial time translations. The full isometry group, which is the diagonal subgroup of \(G_{L}\times G_{R}\), is denoted by \(G_{D}\) and the moduli space of classical bulk solutions is given by
\[\mathcal{G}_{M}=\frac{G_{L}\times G_{R}}{G_{D}}. \tag{56}\]
This space contains parameters \(g\) that fix the particular bulk solution. As such, it contains parameters determining the relative alignment between the left and right coordinates, in particular also the time coordinates. So \(\delta\) is a bulk degree of freedom.
In the TFD state (25), the slice \(\delta=0\) is chosen. Choosing different slices with \(\delta\neq 0\) leads to time-shifted TFD states, obtained by evolving (25) using \(H_{L}+H_{R}\) for a time period of \(\delta\). This is visualised in fig. 4. The entanglement properties of the original TFD state are unaffected by this evolution. The time-shifted TFD states are interpreted as microstates of the black hole [53] (see also [54] for a discussion of the time-shifted TFD states as black hole microstates in the sense of [55; 56]). Correspondingly, the action of \(H_{L}+H_{R}\) is non-trivial in the phase space and thereby alters the CFT state, as opposed to the action of \(H_{L}-H_{R}\). This is reflected by the fact that in the phase space, the variable \(\delta\)
appears with a dual variable \(H=H_{L}+H_{R}\). In contrast, \(h\) is not part of the phase space; see e.g. [57] for an explicit analysis in two-dimensional Jackiw-Teitelboim (JT) gravity.
Time translations are always part of the asymptotic symmetry group. Since these are generated by the Hamiltonian, which is the operator of importance in discussing the transition between the algebra types III\({}_{1}\) and II\({}_{\infty}\), we focus on time translations from now on. Time translations contribute to \(G_{L}\) and \(G_{R}\) with a factor of \(\mathrm{U}(1)\) each. The isometry of the bulk is described by time evolution with \(h\) and is therefore also described by \(\mathrm{U}(1)\). Therefore, the corresponding part of the moduli space of classical solutions is given by
\[\frac{\mathrm{U}(1)\times\mathrm{U}(1)}{\mathrm{U}(1)}\sim\mathrm{U}(1)\sim S ^{1}. \tag{57}\]
The circle is parametrised by \(\delta\). Since \(S^{1}\) is not simply connected,15 the fibre bundle defined over \(S^{1}\) is non-trivial. The corresponding geometric phases are related to winding numbers. To see this explicitly, following [14], consider time evolution of the TFD state by \(H=H_{L}+H_{R}\) by an amount \(\delta\). The connection is then defined as
Footnote 15: A simple way to see this is to consider a path winding twice around \(S^{1}\). This path cannot be deformed smoothly, i.e. without leaving the manifold, into a path winding around \(S^{1}\) only once.
\[A_{\delta}=\mathrm{i}\langle\mathrm{TFD}|U^{\dagger}\partial_{\delta}U| \mathrm{TFD}\rangle, \tag{58}\]
where \(U=e^{\mathrm{i}(H_{L}+H_{R})\delta}\). From this time evolution it also follows that \(\delta\) is periodic, \(\delta\sim\delta+\frac{\pi}{E_{n}}\), for each individual value of \(E_{n}\). Collecting all of these circles together yields the punctured plane \(\mathds{R}^{2}\backslash\{0\}\). Closed paths encircling the puncture are not contractible but pick up winding numbers.
Let us illustrate this by the two-spin example discussed in sec. 2.1.1. We have derived the TFD state of this system in sec. 2.1.2 as
\[|\psi_{0}\rangle=\frac{1}{\sqrt{1+e^{-\beta_{\mathrm{ent}}2\mu_{B}B}}}\Big{[} |\!\downarrow\!\tilde{\downarrow}\rangle+e^{-\beta_{\mathrm{ent}}\mu_{B}B}|\! \uparrow\!\tilde{\uparrow}\rangle\Big{]}. \tag{59}\]
Figure 4: Visualisation of different time evolutions of the TFD state. In the central panel, the holographic dual of the usual TFD state (25) at the \(\delta=0\) slice is depicted. The blue line represents the slice at which (25) is defined. The left panel shows the holographic dual to the TFD state time-evolved by \(H_{L}+H_{R}\), which changes the state in the CFT. The panel on the right shows the time evolution by \(H_{L}-H_{R}\), which leaves the CFT state invariant.

Since this state provides a thermal description of the entanglement between the two spins, the natural Hamiltonians to use for the evolution are the modular Hamiltonians of the first and second spin,
\[H_{\text{mod},1/2}=\frac{1}{2}\ln\frac{2\pi+\Phi_{G}}{2\pi-\Phi_{G}}\, \sigma_{z}=\beta_{\text{ent}}\mu_{B}B\,\sigma_{z}. \tag{60}\]
Note that in (58), the evolution is performed with the physical Hamiltonians \(H_{L/R}\). Since the usual TFD state describes a thermal system, the modular Hamiltonian in this case is given by \(H_{\text{mod},L/R}=\beta H_{L/R}\). While the connection \(A_{\delta}\) defined in (58) and the periodicity of \(\delta\) will change accordingly under this rescaling, the resulting geometric phase after integrating the connection is invariant.
Evolving \(|\psi_{0}\rangle\) by \(H_{\text{mod},1}-H_{\text{mod},2}\) leaves the state invariant, mirroring the invariance of the TFD state (25) under evolution by \(H_{L}-H_{R}\). On the other hand, evolution by the sum of the modular Hamiltonians for some duration of time \(\delta\) (modular time in this example) gives rise to the state
\[U|\psi_{0}\rangle = e^{\text{i}(H_{\text{mod},1}+H_{\text{mod},2})\delta}|\psi_{0}\rangle \tag{61}\] \[= \frac{1}{\sqrt{1+e^{-\beta_{\text{ent}}2\mu_{B}B}}}\Big{[}e^{- \text{i}\beta_{\text{ent}}2\mu_{B}B\delta}|\!\downarrow\!\tilde{\downarrow} \rangle+e^{\text{i}\beta_{\text{ent}}2\mu_{B}B\delta}e^{-\beta_{\text{ent}} \mu_{B}B}|\!\uparrow\!\tilde{\uparrow}\rangle\Big{]}.\]
As we mentioned below (58), time evolution by the sum of the Hamiltonians leads to a periodicity \(\delta_{P}\). Evolving for this amount of time results in exactly the same state as the initial state. With the above result (61) we find that this periodicity is given by
\[\delta_{P}=\frac{\pi r}{\beta_{\text{ent}}\mu_{B}B}, \tag{62}\]
where \(r\in\mathds{Z}\) is an integer.
Evaluating (58) for the time evolved TFD state of the two-spin system (61), the connection is given by
\[A_{\delta}=2\beta_{\text{ent}}\mu_{B}B\frac{1-e^{-\beta_{\text{ent}}2\mu_{B}B }}{1+e^{-\beta_{\text{ent}}2\mu_{B}B}}. \tag{63}\]
Integrating over \(\delta\) results in the geometric phase
\[\Phi_{G}^{\text{(TFD)}}=\int_{0}^{\delta_{P}}\text{d}\delta\,A_{ \delta}=2\pi r\frac{1-e^{-\beta_{\text{ent}}2\mu_{B}B}}{1+e^{-\beta_{\text{ent }}2\mu_{B}B}}=\Phi_{G}r, \tag{64}\]
where \(\Phi_{G}\) is the geometric phase calculated as the symplectic volume in (16). As we pointed out earlier below (60), the same result for \(\Phi_{G}^{\text{(TFD)}}\) is obtained when the evolution of \(|\psi_{0}\rangle\) is performed with the sum of the rescaled modular Hamiltonians \(\tilde{H}_{\text{mod},1/2}=H_{\text{mod},1/2}/\beta_{\text{ent}}\).
The integer \(r\) in (64) counts how many times the time evolution wraps around \(S^{1}\). In terms of the evolution, this may also be phrased as \(r\) counting how many times the time evolved state returns to the initial state. Interestingly, while usually the winding number comes with a prefactor of \(2\pi\), the prefactor of \(r\) in (64) is given by the geometric phase of the two-spin system. This however matches perfectly with the fact that \(\Phi_{G}\) is defined as the symplectic volume. While in the usual definition, the factor \(2\pi\) is the volume of
\(S^{1}\), this changes for non-vanishing entanglement between the spins. The circle, while still possessing the topological features of \(S^{1}\), is parametrised by an angular variable with a periodicity different from \(2\pi\) and accordingly, its volume changes. In particular, we find that the geometric phase (64) always vanishes for maximal entanglement. This can be understood by the fact that for maximal entanglement, time evolution by the sum of the modular Hamiltonians leaves the state completely invariant, as happens for evolution by the difference of the modular Hamiltonians already for arbitrary entanglement.
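As a cross-check of (63) and (64), the following minimal numerical sketch (our own illustration, with an arbitrarily chosen value of \(\beta_{\text{ent}}\mu_{B}B\)) evaluates the connection for the evolved state (61) and integrates it over one period (62).

```python
import numpy as np

# Minimal sketch (our own check): evaluate the connection (58) for the two-spin
# TFD state (59) evolved with the sum of the modular Hamiltonians (60) and
# integrate it over one period (62); the result reproduces r * Phi_G, cf. (64).
h = 0.7                                      # assumed value of beta_ent * mu_B * B
lam = np.exp(-2 * h)
phi_G = 2 * np.pi * (1 - lam) / (1 + lam)    # geometric phase of the two-spin system

psi0 = np.array([1.0, np.exp(-h)]) / np.sqrt(1 + np.exp(-2 * h))  # components on {|dd>, |uu>}
energies = np.array([-2 * h, 2 * h])         # eigenvalues of H_mod,1 + H_mod,2

def A(delta):
    # A_delta = i <psi_0| U^dag (dU/ddelta) |psi_0> with U = exp(i H delta)
    U_psi = np.exp(1j * energies * delta) * psi0
    return np.real(1j * np.vdot(U_psi, 1j * energies * U_psi))

r = 1
delta_P = np.pi * r / h                      # periodicity (62)
deltas = np.linspace(0.0, delta_P, 2001)
vals = np.array([A(d) for d in deltas])
phase = np.sum(0.5 * (vals[1:] + vals[:-1]) * np.diff(deltas))
print(phase, r * phi_G)                      # the two values agree
```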
Returning to the holographic setting of the eternal black hole, in the large \(N\) limit it admits a description in terms of an algebra of type III\({}_{1}\)[17; 18], as reviewed in sec. 2.3.1. This algebra does not have a trace. As discussed in sec. 2.2.2, this can be understood by the fact that none of the states on which an algebra of type III acts is maximally entangled, or equivalently, by the presence of the geometric phases \(\Phi^{(n)}_{G}\) in the states (52). As we have seen above, the geometric phases \(\Phi^{(\text{TFD})}_{G}\) computed in (64) can only be non-trivial if the geometric phases \(\Phi^{(n)}_{G}\) do not vanish. Therefore, the geometric phase of [14] can be defined for every state of a type III algebra. In the case where all of the geometric phases \(\Phi^{(n)}_{G}\) vanish, the state is maximally entangled. It allows to define a trace and therefore corresponds to an algebra of type II. For this particular state, which is the cyclic and separating vector of the type II algebra, all of the geometric phases \(\Phi^{(\text{TFD})}_{G}\) computed in (64) vanish. To summarise, the geometric phase of [14] is defined for every state of a type III algebra, but for a type II algebra, there exists one particular state where all of the geometric phases vanish. It is this particular state which consistently defines the trace for the type II algebra, as shown by our earlier result (54). We have therefore found an explicit realisation of the general result derived in sec. 2.2.2, for the AdS eternal black hole.
Finally, we point out that the geometric phase of [14] is related to the non-trivial common centre of the type III\({}_{1}\) algebras of [17; 18]. The geometric phase computed by integrating the connection (58) is associated to time translations, generated by the boundary Hamiltonians. The latter ones are both related to the central operator \(U\) [17; 18], as reviewed in sec. 2.3.1, and act as collective coordinates. For the Hamiltonians the collective coordinate is given by the mass of the black hole. This is made explicit by [15; 58] where the mass of the black hole shows up as a coupling term in the chiral boson action, defined on an annulus geometry. Due to this shared mode, the left and right type III algebras share a common centre. If this were not a shared mode and \(H_{L}\) and \(H_{R}\) corresponded to different bulk isometries, each represented by a U(1), the moduli space (57) would be topologically trivial. In this case, there would not be any ambiguity in relating the left and right boundary times. A non-vanishing value of the geometric phase of [14] may therefore be interpreted as an indicator of a non-factorisation of the operator algebras, in that they have a non-trivial common centre. Once \(1/N\) corrections are included, the centre becomes trivial and the algebra is deformed to a type II factor [19]. In this case, as discussed above, there exists a state where the geometric phase of [14] vanishes, which in particular is the cyclic and separating vector of the type II algebra.
## 3 Geometric Phase and Missing Information
The temperature of a black hole is a consequence of the information hidden behind the horizon. In sec. 2.1.2 we found that the entanglement temperature is determined by the geometric phase. We therefore expect that a non-vanishing geometric phase can also be interpreted as indicating missing information. In the following, we elaborate on this relation by showing that geometric phases are a signature of missing information about the microscopic structure of the phase space. The missing information has its origin in the inability of a local observer to access the full Hilbert space and is present both in systems with and without entanglement. In systems without entanglement, the missing information is related to the existence of a global symmetry. The global symmetry allows to distinguish between the projective Hilbert space and the full Hilbert space of the system. A local observer only has access to the former. In a system with entanglement between two subregions, the local observer located in one of the subregions cannot access information arising from global symmetries of the full system. The global symmetry generates additional relative phases in the state describing the full system. These relative phases do not impact measurements of an observer in a subregion as they leave invariant the density matrix of the subregion.
We begin in sec. 3.1 by discussing the relation between geometric phases and missing information in systems without entanglement in two examples, namely the well-known spin in a magnetic field [11] and Virasoro Berry phases [26] in a CFT. In sec. 3.2, we then discuss missing information for Berry phases in two entangled CFTs dual to the eternal AdS black hole and modular Berry phases for subregions in CFTs.
### Examples without Entanglement
We now discuss the relation between missing information and Berry phases for the examples of a single spin in a magnetic field in sec. 3.1.1 and for a two-dimensional CFT in sec. 3.1.2.
#### 3.1.1 Single Spin System
The relation between missing information and the geometric phase is already present in systems without entanglement. To illustrate this, we use the example of a single spin coupled to a magnetic field. As we will see in more detail in the following, the missing information is a consequence of the necessity to use (at least) two coordinate patches to fully cover the projective Hilbert space. The Hamiltonian of the system is given by
\[H=J\vec{B}\cdot\vec{S}=\frac{JB}{2}\vec{n}\cdot\vec{\sigma}, \tag{3.1}\]
where \(\vec{B}=B\vec{n}\) with \(\vec{n}\) a radial unit vector and \(\vec{S}=\frac{1}{2}\vec{\sigma}\). The unit vector \(\vec{n}\) depends on the two angular coordinates \(\phi\in[0,2\pi)\) and \(\theta\in[0,\pi]\) parameterising the Bloch sphere. The projective Hilbert space is given by \(\mathds{CP}^{1}\sim S^{2}\). Global phases of the eigenvectors of \(H\) then constitute a U(1) bundle over the projective Hilbert space \(\mathds{CP}^{1}\). This is essentially the Hopf fibration [25] mentioned in the introduction as an example of a non-trivial bundle.
To see explicitly that this bundle is non-trivial, consider the ground state of the above Hamiltonian, given by
\[|\tau\rangle=-e^{-{\rm i}\phi}\sin\frac{\theta}{2}|\!\uparrow\rangle+\cos\frac{ \theta}{2}|\!\downarrow\rangle, \tag{3.2}\]
where we assumed w.l.o.g. that \(JB>0\). This state becomes singular at the north pole of \(S^{2}\) where \(\theta=0\) since \(\phi\) is not defined there. Since an overall phase does not change the physical properties of the state, we may use the state \(|\tilde{\tau}\rangle\) obtained by multiplying \(|\tau\rangle\) with \(e^{{\rm i}\phi}\),
\[|\tilde{\tau}\rangle=-\sin\frac{\theta}{2}|\!\uparrow\rangle+e^{{ \rm i}\phi}\cos\frac{\theta}{2}|\!\downarrow\rangle. \tag{3.3}\]
This state is non-singular at \(\theta=0\). However, a singular point is still present and now located at the south pole of \(S^{2}\) where \(\theta=\pi\).16 Since there are no other singular points, the combination of the two coordinate patches where the states \(|\tau\rangle\) and \(|\tilde{\tau}\rangle\) are defined fully covers the projective Hilbert space \(\mathds{C}\mathds{P}^{1}\). For each of the patches, typically referred to as north pole and south pole patches, a connection can be defined using the corresponding state, \(|\tilde{\tau}\rangle\) and \(|\tau\rangle\) respectively,
Footnote 16: Had we assumed that \(JB<0\), the corresponding ground state \(|\tau^{\prime}\rangle\) would be singular at \(\theta=\pi\) and the corresponding transformed ground state \(|\tilde{\tau}^{\prime}\rangle\) at \(\theta=0\). In this version, the ground state \(|\tau^{\prime}\rangle\) defines the north pole patch while \(|\tilde{\tau}^{\prime}\rangle\) defines the south pole patch. This does not change the result for the geometric phase.
\[A^{(N)} ={\rm i}\langle\tilde{\tau}|{\rm d}|\tilde{\tau}\rangle=\frac{-1 -\cos\theta}{2}{\rm d}\phi, \tag{3.4}\] \[A^{(S)} ={\rm i}\langle\tau|{\rm d}|\tau\rangle=\frac{1-\cos\theta}{2}{ \rm d}\phi. \tag{3.5}\]
Note that at the points where \(|\tau\rangle\) and \(|\tilde{\tau}\rangle\) are singular, the corresponding connections, \(A^{(S)}\) and \(A^{(N)}\) respectively, vanish. The connections are related by the aforementioned U(1) transformation \(U=e^{{\rm i}\phi}\) as
\[A^{(S)}=A^{(N)}-{\rm i}U^{\dagger}{\rm d}U. \tag{3.6}\]
The curvature of the connections is given by
\[F={\rm d}A^{(N)}={\rm d}A^{(S)}=\frac{1}{2}\sin\theta\,{\rm d} \theta\wedge{\rm d}\phi. \tag{3.7}\]
The geometric phase is found by integrating \(F\) over the full projective Hilbert space \(\mathds{C}\mathds{P}^{1}\sim S^{2}\). The result does not vanish, showing that the fibre bundle is non-trivial. As we saw in the above derivation, this is tied to the necessity to use two patches in order to fully cover the projective Hilbert space.
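This can be verified directly: the following minimal sketch (our own illustration) integrates the curvature (3.7) over the sphere and reproduces the non-vanishing result \(2\pi\), i.e. a unit first Chern number for this bundle.

```python
import numpy as np

# Minimal check (our own illustration): integrate F = (1/2) sin(theta) dtheta ^ dphi
# from (3.7) over the full Bloch sphere. The non-zero result signals that the
# U(1) bundle is non-trivial; 2*pi corresponds to a unit first Chern number.
theta = np.linspace(0.0, np.pi, 100001)
F_theta = 0.5 * np.sin(theta)
int_theta = np.sum(0.5 * (F_theta[1:] + F_theta[:-1]) * np.diff(theta))
total = 2 * np.pi * int_theta                # the phi-integral contributes a factor 2*pi
print(total, total / (2 * np.pi))            # ~ 6.2832 and ~ 1.0
```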
The geometric phase contains information about the geometry and topology of the projective Hilbert space. In particular, the curvature allows to compute its Euler characteristic \(\chi\) by use of the Chern theorem,
\[\chi({\cal M})=\int_{\cal M}e(F)=\frac{1}{(2\pi)^{n}}\int_{\cal M} \sqrt{\det(F)}, \tag{3.8}\]
given here for a general (suitable) manifold \(\mathcal{M}\) of \(2n\) real dimensions. Also, the Euler characteristic is related to the number of holes \(g\) of a manifold as \(\chi=2-2g\). Inserting (3.7) and using that in our case \(n=1\) yields the well-known result
\[\chi(S^{2})=2\quad\text{and correspondingly,}\quad g=0. \tag{3.9}\]
We see that knowledge of the curvature of the bundle allows us to calculate that the base space (in this case) is topologically trivial in the sense that it has no holes. Every closed orientable two-dimensional manifold without holes is topologically equivalent to \(S^{2}\), which as mentioned earlier is indeed the geometry of the base space. Such topological quantities as the Euler characteristic are not only present in the field of high energy physics, but also arise in condensed matter physics. The most famous example is the quantum Hall effect, where the Hall conductance is quantised in terms of Chern numbers [59; 60].
This discussion demonstrates how the information provided by the geometric phase characterises a simple single-spin system. In the next section we will turn to the more advanced example of Virasoro Berry phases in CFTs.
#### 3.1.2 Virasoro Berry Phase in a Single Boundary Geometry
Here we discuss missing information in systems without entanglement for the example of a geometric phase in a two-dimensional CFT. We show that the missing information that gives rise to the geometric phase is related to the existence of a global symmetry in the CFT.
The Virasoro Berry phase in a CFT dual to a spacetime with a single boundary was first derived in [26]. It arises from applying local conformal symmetry transformations to a highest-weight state of the CFT in the presence of global symmetries. In two dimensions, the conformal transformations are the diffeomorphisms of the unit circle \(f\in\text{Diff}(S^{1})\). The Virasoro group is the central extension of \(\text{Diff}(S^{1})\), which we denote by \(\widehat{\text{Diff}}(S^{1})\). Due to the central extension, group elements are pairs \((f,\alpha)\in\widehat{\text{Diff}}(S^{1})\), where \(\alpha\in\mathds{R}\) is central. Irreducible representations are formed by Verma modules with highest-weight states \(\ket{h}\). Depending on the value of the conformal weight \(h\), the state has different global symmetries. For \(h=0\), which corresponds to a CFT in the vacuum, the symmetry is \(\text{SL}(2,\mathds{R})\), whereas for \(h>0\), it is U(1). In the following we focus on the case \(h>0\).
The U(1) symmetry transformations are generated by the CFT Hamiltonian and imply time translation invariance of the CFT. Taking into account the left- and right-moving sector of the CFT, the Hamiltonian reads \(H=L_{0}+\bar{L}_{0}\). Acting with the U(1) subgroup of the conformal group on the state \(\ket{h}\) only yields an overall phase to the state, \(\ket{h}\to e^{i\gamma}\ket{h}\) which cannot be measured. The state \(\ket{h}\) belongs to the projective Hilbert space and represents all physically equivalent states \(e^{i\gamma}\ket{h}\) in the full Hilbert space of the system. A local observer only has access to the projective Hilbert space, but not to information related to the global symmetry. Therefore, the global symmetry group U(1) represents the fibre of the fibre bundle in fig. 2. On the other hand, the base space of the fibre bundle is then formed by the set of transformations that physically change the state \(\ket{h}\) and are given by \(\frac{\widehat{\text{Diff}}(S^{1})}{\text{U}(1)}\). Upon quantisation, this gives rise to the projective Hilbert space.
Moving along a closed path in the base space \(\frac{\overline{\mathrm{Diff}}(S^{1})}{\mathrm{U}(1)}\), the state considered transforms non-trivially under the conformal transformation \(f\). At each point along the path, it has a phase ambiguity due to the global symmetry \(\mathrm{U}(1)\). This phase ambiguity may be interpreted as a freedom to choose an origin for the time coordinate. Since all expectation values are invariant, the phase cannot be measured by an observer and represents missing information. An observer can therefore only distinguish states in the projective Hilbert space and loses information regarding the state in the full Hilbert space. In particular, the observer only has access to the overall phase accumulated when they return to the original state (up to the phase). This signals the missing information. The Berry phase is obtained by integrating the non-vanishing curvature form on the manifold \(\frac{\overline{\mathrm{Diff}}(S^{1})}{\mathrm{U}(1)}\) over the surface enclosed by the closed path through the manifold. The curvature form is given by the Kirillov-Kostant symplectic form on the coadjoint orbit of the Virasoro group and yields the Berry phase [26; 61]
\[\Phi_{B}=S_{\mathrm{geo}}^{\pm}\left[f,b_{0}\right]=\int\mathrm{d}t\mathrm{d} \sigma\left(b_{0}f^{\prime}\partial_{\pm}f+\frac{c}{12}\frac{f^{\prime\prime} \partial_{\pm}f^{\prime}}{\left(f^{\prime}\right)^{2}}\right) \tag{3.10}\]
up to a boundary term. Here, \(b_{0}\) denotes the expectation value of the energy-momentum tensor of the CFT in the state \(\left|h\right>\), \(b_{0}=\frac{1}{2\pi}\left<h\right|T\left|h\right>\).
To summarise, the Virasoro Berry phase arises due to the existence of a global symmetry in the CFT combined with a geometrically non-trivial base space. The existence of the global symmetry represents missing information about the choice of time coordinate for the global state at each point along the closed path in the base space.
### Examples with Entanglement
We now move on to discuss how the Berry phase signals missing information in systems with entanglement between subregions. Here, the missing information is related to the existence of relative phases in the global state describing the full system. These phases cannot be accessed by an observer located in a subregion. The relative phases exist due to independent 'global' symmetries in each subregion. We discuss the missing information for the examples of the Virasoro Berry phase and the gauge Berry phase, both in the presence of a spacetime wormhole, in sec. 3.2.1 and sec. 3.2.2. We also consider modular Berry phases in a two-dimensional CFT in sec. 3.2.3.
#### 3.2.1 Virasoro Berry Phase in the Presence of a Wormhole
Here we consider the Virasoro Berry phases of sec. 3.1.2 for CFTs dual to an eternal AdS black hole. We show that the missing information originates from an independent choice of time coordinate in each CFT that gives rise to a relative phase in the TFD state of the full system.
The eternal AdS black hole implies the presence of a wormhole in the bulk. Moreover, there are now two asymptotic regions, i.e. also two boundaries, on each of which a copy of the CFT is defined, see fig. 1. Since both CFTs are causally separated by the horizon, we may independently apply conformal transformations on each boundary, \(\{f_{R},f_{L}\}\in\mathrm{Diff}(S^{1})\). Additionally, there are two copies of the Hamiltonian, \(H_{L}\) and \(H_{R}\), giving rise to the global
asymptotic symmetry group \(\mathrm{U}(1)\times\mathrm{U}(1)\). The Hamiltonians generate time translations on the left and right boundaries. This induces an independent choice of time coordinate for each boundary. From this perspective, we may naively expect that we obtain two copies of the single CFT Berry phase (3.10) corresponding to fibre bundles with base space \(\frac{\widehat{\mathrm{Diff}}(S^{1})}{\mathrm{U}(1)}\times\frac{\widehat{ \mathrm{Diff}}(S^{1})}{\mathrm{U}(1)}\). This is not true, however. As shown in [15; 58], there are additional constraints due to the presence of the wormhole in the bulk. In particular, the black hole mass or equivalently its energy must be the same when measured from either boundary. This is reflected in the invariance of the TFD state under the combined action of \(H_{L}-H_{R}\) as explained in sec. 2.3.2. For the Berry phase, the existence of the additional constraint implies that the value of \(b_{0}\) in (3.10) must be the same both for the left and the right CFT since \(b_{0}\) is related to the black hole mass by \(b_{0}=\frac{M}{32\pi G_{N}}\)[58], in the case of a non-rotating black hole. Therefore, the mass acts as a coupling between Berry phases in the left and right boundary. Up to a boundary term, the Berry phase in the presence of a wormhole then reads
\[\Phi_{B}=S_{\mathrm{geo}}^{-}\left[f_{L},b_{0}\right]-S_{\mathrm{ geo}}^{+}\left[f_{R},b_{0}\right]. \tag{3.11}\]
The Virasoro Berry phase in the presence of a wormhole originates from independent misaligned choices of the time coordinate for the left and right CFTs. Observers on the left and right boundaries may choose independent origins for their time coordinates, as they cannot communicate with each other. The missing information for an observer is therefore the overall misalignment of the time frame, which cannot be measured by either observer. The misalignment is generated by a \(\mathrm{U}(1)\) transformation and gives rise to the fibre bundle structure with base space \(\frac{\widehat{\mathrm{Diff}}(S^{1})\times\widehat{\mathrm{Diff}}(S^{1})}{\mathrm{U}(1)}\) and fibre \(\mathrm{U}(1)\).
Let us now discuss this result in the context of vN algebras. As reviewed in sec. 2.3.1, in the large \(N\) limit, holographic CFTs are described by type III vN algebras, with a dual effective bulk field theory in the limit \(G_{N}\to 0\). The operator algebras of the CFTs on the left and right boundaries are given by \(\mathcal{A}_{L/R}=\mathcal{A}_{L/R,0}\otimes\mathcal{A}_{U}\). The central operator \(U\) is given by \(U=\frac{H^{\prime}_{L/R}}{N}\) [19]. The centre is related to the mass of the black hole, or equivalently the common \(b_{0}\propto M\) in (3.11). In particular, since the black hole mass is the same when measured from either boundary, the centre is common to both CFTs. In terms of the operator algebras, this is reproduced by including the same \(\mathcal{A}_{U}\) both in \(\mathcal{A}_{L}\) and \(\mathcal{A}_{R}\), so the centre is shared by the two algebras. In the language of Virasoro Berry phases, this is reflected in the structure of the base space \(\frac{\widehat{\mathrm{Diff}}(S^{1})\times\widehat{\mathrm{Diff}}(S^{1})}{\mathrm{U}(1)}\) by the fact that the quotient is taken only by a single \(\mathrm{U}(1)\) group.
#### 3.2.2 Gauge Berry Phase
Here we consider the modular time evolution in the presence of an eternal AdS black hole. Since both exterior regions are separated by a horizon, observers on the left and right boundary may independently choose their modular time coordinates \(s_{L}\) and \(s_{R}\). If their modular times are aligned, the TFD state is given by
\[|\mathrm{TFD}\rangle=\frac{1}{\sqrt{Z}}\sum_{E}e^{-\beta\frac{E}{2}}|E_{L} \rangle|E_{R}^{*}\rangle. \tag{3.12}\]
Note that here, \(|E_{L/R}\rangle\) is the eigenbasis of the modular Hamiltonian, while in (25), \(|n_{L/R}\rangle\) is the eigenbasis of the physical Hamiltonian. If the modular times of the observers are not aligned, the global state differs by a relative phase from the state (3.12) and reads
\[|\text{TFD}\rangle_{\delta^{\prime}}=\frac{1}{\sqrt{Z}}\sum_{E}e^{\text{i}2E\delta^{\prime}}e^{-\beta\frac{E}{2}}|E_{L}\rangle|E_{R}^{*}\rangle, \tag{3.13}\]
where \(2\delta^{\prime}=s_{L}+s_{R}\) parametrises the relative misalignment of the left and right modular times. This may be understood as follows. The horizon separates the left and right exterior regions, and each subregion is described by a reduced density matrix which is not sensitive to the relative difference \(\delta^{\prime}\). Therefore, from the perspective of an observer on the left boundary, the information about the relative phase in the global state of the CFT is missing. The observer cannot access the choice of modular time on the right boundary. Following the approach of sec. 2.3.2, we may then define a Berry phase for which the modular time shift \(\delta^{\prime}\) is \(\frac{\pi}{E}\)-periodic with Berry connection
\[A_{\delta^{\prime}}=\text{i}\,_{\delta^{\prime}}\!\langle\text{TFD}|\partial _{\delta^{\prime}}|\text{TFD}\rangle_{\delta^{\prime}}. \tag{3.14}\]
The Berry phase signals missing information about the structure of the Hilbert space from the point of view of an observer located in the left exterior. An observer in the left exterior measures expectation values of observables with respect to the density matrix \(\rho_{L}\), as they do not have access to the global state. Observers in the left and right exterior regions may then independently choose the origins of their modular time. This leads to a misalignment of the modular time coordinates in the subregions and yields the relative phase in the global state (3.13). The Berry phase associated to the modular Berry connection \(A_{\delta^{\prime}}\) is topological since the periodicity of \(2\delta^{\prime}=s_{L}+s_{R}\) leads to a non-trivial topology \(\mathds{R}^{2}\backslash\{0\}\) of the base space. As explained in sec. 2.3.2, such an entanglement Berry phase is related to the non-existence of a trace on a type III algebra and vanishes for a particular state upon transitioning to a type II vN algebra.
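To illustrate this in a simple setting, the following minimal sketch (our own illustration, assuming a truncated, equally spaced spectrum \(E_{n}=n\,\omega\)) evaluates the connection (3.14) for the shifted state (3.13) numerically: in these conventions it is independent of \(\delta^{\prime}\) and equal to \(-2\langle E\rangle_{\beta}\), so integrating it over a full period yields a non-trivial phase.

```python
import numpy as np

# Minimal sketch (our own illustration, assumed truncated spectrum E_n = n*omega):
# the connection (3.14) for the shifted TFD state (3.13) is delta'-independent
# and equals -2 <E>_beta in these conventions.
beta, omega, nmax = 1.0, 1.0, 200
E = omega * np.arange(nmax)
p = np.exp(-beta * E)
p /= p.sum()                                 # thermal weights e^{-beta E} / Z

def tfd(delta_p):
    # coefficients of |TFD>_{delta'} in the energy basis
    return np.sqrt(p) * np.exp(2j * E * delta_p)

eps = 1e-6
for delta_p in [0.0, 0.3, 1.1]:
    dpsi = (tfd(delta_p + eps) - tfd(delta_p - eps)) / (2 * eps)
    A = np.real(1j * np.vdot(tfd(delta_p), dpsi))
    print(delta_p, A, -2 * np.sum(p * E))    # the last two columns agree
```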
Moreover, as we elaborated in sec. 2.2 around (64), this geometric phase has an interpretation in terms of winding numbers due to the non-trivial topology, as discussed above. A concrete example of this was found in the context of JT gravity [14]. Let us now explain how these winding numbers are related to missing information and the operator algebra. Within geometric quantisation, winding numbers appear in the spectrum of the 'prequantum' momentum operator (see e.g. [62]). In JT gravity, this operator corresponds to the Hamiltonian. In general, the spectrum of this operator takes the form \(\{r+\lambda,r\in\mathds{Z}\}\), where \(r\) is the winding number and \(\lambda\in[0,1)\) is an ambiguity parameter that leaves the symplectic form invariant. However, since \(\lambda\) appears in the spectrum, different values of \(\lambda\) correspond to inequivalent prequantum operators. Thus, the algebra of observables is sensitive to this ambiguity parameter \(\lambda\). Given a fixed \(\lambda\), we may define an equivalence of prequantum momentum operators corresponding to different values of \(r\). All these operators correspond to the same symplectic form and spectrum and therefore are physically indistinguishable for a local observer. The information about \(r\) can therefore be termed missing information, since we cannot distinguish different elements within the same equivalence class. In the
context of AdS/CFT, this information corresponds to the specific gluing of the bulk and boundary spacetimes (cf. the discussion around (56)).
#### 3.2.3 Modular Berry Phase in the Presence of a Wormhole
Similarly to the Virasoro Berry phases in the presence of a wormhole, modular Berry phases arise from symmetries in the subregions of a system. In contrast to the Virasoro Berry phase, these symmetries are not generated by the physical Hamiltonian but by the modular Hamiltonian. As we will see, in this case the missing information is a relative phase in the global state describing the full system which cannot be measured by an observer restricted to a subregion. The relative phase originates from a misalignment of the modular time parameters in each subregion. This may be understood as follows.
The modular Hamiltonian is formally defined as \(H_{\text{mod},A}=-\ln(\rho_{A})\) and generates an automorphism that maps operators \(\mathcal{O}\) of the algebra \(\mathcal{A}_{A}\) back into the algebra,

\[U(s)\mathcal{O}U(-s)\in\mathcal{A}_{A},\quad\text{where}\quad U(s)=e^{\mathrm{i}s(H_{\text{mod},A}+H_{\text{mod},\bar{A}})}\text{ and }\mathcal{O}\in\mathcal{A}_{A}. \tag{3.15}\]
Note that only the two-sided operator \(H_{\text{mod}}=H_{\text{mod},A}+H_{\text{mod},\bar{A}}\) has a well-defined action on a state. In most cases the modular Hamiltonian is a complicated non-local operator. We consider examples where the modular Hamiltonian may be derived from the Rindler Hamiltonian and is thus known. The modular Hamiltonian then generates a generalised time evolution with the unitary \(U(s)=e^{\mathrm{i}s(H_{\text{mod},A}+H_{\text{mod},\bar{A}})}\). Similarly to the ordinary time evolution generated by the physical Hamiltonian, the modular time evolution is a symmetry of the subregion. Observers in \(A\) and \(\bar{A}\) then have the freedom to choose their modular time parameter.
We now discuss missing information for the parallel transport of intervals in CFTs on either boundary in the eternal AdS black hole geometry. The parallel transport operator \(V_{\delta\lambda}\) for an interval \(\lambda\) may be found by solving the modular-parallel transport equations [30],
\[\partial_{\lambda}H_{\text{mod}}-P_{0}^{\lambda}\left[\partial_{\lambda}H_{\text{mod}}\right] =\left[V_{\delta\lambda}(\lambda),H_{\text{mod}}\right] \tag{3.16}\] \[P_{0}^{\lambda}\left[V_{\delta\lambda}(\lambda)\right] =0,\]
where \(P_{0}^{\lambda}\) is the projector into the fibre. The Berry curvature then follows from
\[R=[V_{\delta\lambda^{i}},V_{\delta\lambda^{j}}]d\lambda^{i}\wedge d\lambda^{j}. \tag{3.17}\]
The modular Berry curvature signals missing information in the presence of a black hole as follows. In particular, we consider a BTZ black hole with non-compact spatial direction and mass \(M=\frac{c}{12}\left(\frac{2\pi}{\beta}\right)^{2}\). There are two CFTs dual to the black hole. In each of these CFTs, we consider an interval with endpoints \(P_{1}^{L}:\left(-\frac{x}{2},t_{L}=-t\right),\quad P_{2}^{L}:\left(\frac{x}{2},t_{L}=-t\right),\quad P_{3}^{R}:\left(-\frac{x}{2},t_{R}=t\right),\quad P_{4}^{R}:\left(\frac{x}{2},t_{R}=t\right)\). For this black hole, the modular Hamiltonians for two disjoint intervals are given
by [63]
\[\begin{split} H_{12,\text{mod}}&=\frac{\beta}{2\pi\sinh \frac{\pi x}{\beta}}\int_{-\frac{\pi}{2}}^{\frac{\pi}{2}}dy\left(\cosh\frac{\pi x }{\beta}-\cosh\frac{2\pi y}{\beta}\right)T_{zz}\left(-t+i\frac{\beta}{2},y \right)\quad\text{if}\,x>\frac{t}{2}\\ H_{13,\text{mod}}&=\frac{\beta}{2\pi\cosh\frac{2 \pi t}{\beta}}\int_{-t+\frac{i\beta}{2}}^{t}dy\left(\sinh\frac{2\pi t}{\beta}- \sinh\frac{2\pi y}{\beta}\right)T_{zz}\left(y,-\frac{x}{2}\right)\quad\text{if }\,x<\frac{t}{2}\end{split} \tag{3.18}\]
and similarly for \(H_{34,\text{mod}}\), \(H_{24,\text{mod}}\). The Berry curvature may then be obtained by solving (3.16) and (3.17). The calculation was performed in [15]. In particular, it was found that the Berry phase obtained from \(H_{13,\text{mod}}\) vanishes if the time coordinates in the left and right boundary, \(t_{L}\) and \(t_{R}\), are aligned. On the other hand, a misalignment by \(\delta=t_{L}-t_{R}\) yields the Berry curvature [15]
\[\hat{R}_{u_{L},u_{R}}=-\frac{4\pi^{2}}{\beta^{2}}\operatorname{sech}^{2}\left( \frac{\pi}{\beta}(2t+\delta)\right)K_{+,12}d\delta\wedge dt, \tag{3.19}\]
where \(K_{+,12}\) is the generator of boosts for the modular Hamiltonian \(H_{\text{mod}}=\int d\Sigma^{\mu}K^{\nu}T_{\mu\nu}\).
The Berry phase again signals a misalignment of the time coordinate between observers in an interval on the left and right boundary. The Berry phase vanishes only if their time coordinates are aligned. This misalignment is possible because an observer in the left boundary cannot communicate with an observer in the right boundary to align their time coordinates. Therefore, the choice of time coordinate in the boundary hidden behind the horizon constitutes missing information. The missing information is available in the global state, which exhibits a relative phase due to the misaligned time coordinates similar to (61). However, the relative phase is not present in the reduced density matrix an observer in the boundary employs to obtain expectation values of observables.
## 4 Conclusion and Outlook
In the first part of this work we have established a relation between the presence of geometric phases in states of the Hilbert space and the possibility of consistently defining a trace on the corresponding operator algebra. For illustrative purposes, we have demonstrated this first for a simple two-spin model. Next we generalised the setup to two copies of infinitely many spins, prepared in a state that is the cyclic and separating vector for algebras of type III. We found that the trace is consistently defined if all of the geometric phases vanish. In this case, the aforementioned state is the cyclic and separating vector for algebras of type II. In this way, we provided a geometric argument, related to the geometry of entanglement, for the non-existence of the trace on type III algebras. Finally, we have discussed an explicit realisation of our result within holography, for the eternal black hole.
In the second part of this paper, we discussed how Berry phases signal missing information in systems with and without entanglement. For systems without entanglement, considering the examples of a single spin in a magnetic field and of conformal transformations in a two-dimensional CFT, we showed that the associated Berry phase signals the existence of a global charge. This charge leads to additional phases in the global state
inaccessible to a local observer. We then discussed missing information for Berry phases in entangled systems for the example of two intervals on a constant time slice in a two-dimensional CFT, and also for the two CFTs dual to the eternal black hole. In both cases, the Berry phase signals missing information related to independent symmetries in each subregion. These symmetries yield a relative phase in the global state describing the full system that cannot be measured by an observer restricted to a subregion.
These results exemplify the use of geometric phases for characterising the Hilbert spaces of both simple quantum systems and of quantum gravity in the AdS/CFT context. They quantify missing information that is also inherent in entangled systems and thus of central importance for the analysis of quantum Hilbert spaces based on von Neumann algebras.
Our results lead to several interesting follow-up research questions in relation to the interplay of geometric phases and entanglement. In the following, we elaborate on two of them: on implications for the Hawking-Page transition, as well as on the symmetry resolution of entanglement entropy.
### Entanglement Geometry for the Hawking-Page Phase Transition
In quantum mechanics, it is straightforward to define operators implementing a flow between different entanglement orbits. While local unitary operations of the form \(U_{L}\otimes U_{R}\) do not alter the entanglement properties of a given state, linear combinations of local unitaries \(\sum_{i}c_{i}U_{L}^{(i)}\otimes U_{R}^{(i)}\) generically change its entanglement properties by an amount depending on the coefficients \(c_{i}\). Essentially, such operators may be understood as evolving the state with an interaction Hamiltonian. As an example, starting from a two-spin product state \(|\!\downarrow\downarrow\rangle\) with vanishing entanglement, evolving this state using \(\exp({\rm i}H_{\rm int}t)\), where \(H_{\rm int}=\gamma\,a_{L}^{\dagger}\otimes a_{R}^{\dagger}\) with the raising operators \(a_{L/R}^{\dagger}=\frac{1}{2}(\sigma_{x,L/R}+\sigma_{y,L/R})\), results in an entangled state. The amount of entanglement depends on the duration \(t\) of the evolution and the interaction strength \(\gamma\) appearing in \(H_{\rm int}\). In particular, evolving for a duration of \(t^{*}=\frac{1}{\gamma}\) results in a maximally entangled state. Since such operators change between entanglement orbits, they also change the geometric phase of the states that they act on.
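To make this mechanism concrete, the following minimal Python sketch (our own illustration, not part of the original analysis) evolves the product state \(|\!\downarrow\downarrow\rangle\) under a Hermitian stand-in interaction \(H_{\rm int}=\gamma\,(\sigma_{+}\otimes\sigma_{+}+\sigma_{-}\otimes\sigma_{-})\) and tracks the entanglement entropy of the reduced density matrix. With this stand-in the state interpolates between the product and the maximally entangled orbit, although the precise entangling time need not coincide with the \(t^{*}\) quoted above.

```python
import numpy as np
from scipy.linalg import expm

# Pauli matrices; two-spin basis ordered |uu>, |ud>, |du>, |dd>
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
s_plus = 0.5 * (sx + 1j * sy)      # raising operator
s_minus = s_plus.conj().T          # lowering operator

gamma = 1.0
# Hermitian stand-in for the interaction discussed in the text (assumption)
H_int = gamma * (np.kron(s_plus, s_plus) + np.kron(s_minus, s_minus))

def entanglement_entropy(psi):
    """S_EE of the left spin, computed from the reduced density matrix."""
    psi = psi.reshape(2, 2)
    rho_red = np.einsum('ij,kj->ik', psi, psi.conj())   # trace over the right spin
    lam = np.linalg.eigvalsh(rho_red)
    lam = lam[lam > 1e-12]
    return float(-np.sum(lam * np.log(lam)))

psi0 = np.array([0, 0, 0, 1], dtype=complex)            # |down, down>
for t in np.linspace(0.0, np.pi / (2 * gamma), 5):
    psi_t = expm(-1j * H_int * t) @ psi0
    print(f"t = {t:5.3f}   S_EE = {entanglement_entropy(psi_t):.4f}")
```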
It will be interesting to study generalisations of this mechanism to infinite-dimensional systems in order to make contact to systems with operator algebras of type II and type III. In particular, we may think of a similar scenario for a black hole in holography. In view of describing the Hawking-Page transition [64] between empty AdS and the black hole geometry, we may simply start from the CFT vacuum \(|\mathrm{vac}\rangle=|0_{L}\rangle|0_{R}\rangle\) and act with an operator of the form \(\mathcal{O}\sim\exp\bigl{(}e^{-\beta\frac{E}{2}}a_{L}^{\dagger}a_{R}^{ \dagger}\bigr{)}\) (cf. [65]). This also generates the entanglement between the left and right CFTs described by the TFD state. Since this operator changes the entanglement properties of the state, it also changes its geometric phase. It will be interesting to study how this operator is interpreted in the holographic setting. The algebraic interpretation of the Hawking-Page transition is a change from two copies of type I\({}_{\infty}\) to two copies of type III\({}_{1}\)[17; 18]. Analysing the geometric phase and its alteration for the Hawking-Page transition as described above will be useful for providing a geometric explanation for the transition related to the microscopic states of the system. In particular, due to the relation between geometric phases and entanglement, this analysis may put a new perspective on how the interior region of the eternal black hole emerges by increasing
the entanglement.
From the algebraic perspective, the Hawking-Page transition is closely related to the Hagedorn transition. The Hagedorn transition was originally discovered in particle physics in the context of the confined and deconfined phases of quark matter [66]. In terms of operator algebras, the Hagedorn transition is understood as a transition from type I (confined phase) to type III (deconfined phase).17 Due to the similarity in terms of the operator algebraic description, understanding the Hawking-Page transition in terms of geometric phases may also be useful for analysing its microscopic origin, in analogy to the Hagedorn transition.
Footnote 17: This phase transition was studied further in the context of string theory [67] and also in \(\mathcal{N}=4\) supersymmetric Yang–Mills theory [68; 69] in the context of holography.
### Entanglement Geometry and Symmetry Resolution
We have studied in detail how a geometric description of entanglement in bipartite quantum systems may be obtained using the SZK construction [24]. For each value of entanglement, analysing the Schmidt coefficients of a given state enables the definition of orbits of equal entanglement by quotienting the local unitary transformations with the appropriate stabiliser group. For each orbit, a connection may be defined, which by integration defines a geometric phase. In the light of our findings, it will be interesting to apply the SZK construction [24] to systems studied in the context of symmetry-resolved entanglement [70; 71]. Due to the symmetry resolution, the same orbit construction should be applicable to each charge sector. The geometric phase defined by the orbit construction may then be useful to interpret symmetry-resolved entanglement in a geometric way. In particular, this method may be useful to gain a new interpretation for the equipartition of entanglement between sectors of different charge [71]. The equipartition of entanglement refers to the fact that, for systems in the thermodynamic limit, the symmetry-resolved entanglement entropy is, to leading order in the UV cutoff, independent of the charge. A recent study showed that, for a two-dimensional CFT with a global U(1) symmetry, when resolving w.r.t. the U(1) symmetry the equipartition of entanglement is related to the form of the U(1) characters [72]. In a different study it was shown that, when symmetry resolving a two-dimensional CFT w.r.t. irreducible representations of the Virasoro group, the equipartition of entanglement is related to the form of the conformal characters, or alternatively, the quantum dimension of the irreducible representation [73]. Finally in [74], after establishing the symmetry resolution of elements of Tomita-Takesaki theory, it was found that also the modular correlation function of the charge density for a massless Dirac theory in two dimensions shows an equipartition when symmetry resolving w.r.t. a global U(1) symmetry. In general, however, at the time of writing, the origin of equipartition is not fully understood and requires further study. We expect that an analysis of equipartition in terms of geometric phases will put a new perspective on this question and in this way contribute to further demystifying the equipartition of entanglement.
## Acknowledgments
We thank Vijay Balasubramanian, Pablo Basteiro, Arpan Bhattacharyya, Saurya Das, Giuseppe Di Giulio, Ro Jefferson, Rene Meyer, Djordje Minic, Flavio Nogueira, Joris Raeymaekers, Shubho Roy, Eric Sharpe and Gideon Vos for useful discussions.
We acknowledge support by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) under Germany's Excellence Strategy through the Wurzburg-Dresden Cluster of Excellence on Complexity and Topology in Quantum Matter - ct.qmat (EXC 2147, project-id 390858490), via the SFB 1170 ToCoTronics (project-id 258499086) and via the German-Israeli Project Cooperation (DIP) grant 'Holography and the Swampland'. This research was also supported in part by Perimeter Institute for Theoretical Physics. Research at Perimeter Institute is supported by the Government of Canada through the Department of Innovation, Science and Economic Development and by the Province of Ontario through the Ministry of Research, Innovation and Science.
## Appendix A Some Essential Ingredients of von Neumann Algebras
Here we briefly review the basic notions of vN algebras relevant for the discussion in the present paper. For more detailed reviews, we refer the reader to [21; 22; 23].
The set of states describing a quantum system is called the Hilbert space. With these states, measurements of observables are performed by calculating expectation values of Hermitian operators representing the observables. Each such operator acts on the Hilbert space. The algebra satisfied by these operators is a vN algebra, i.e. a unital weakly closed \({}^{*}\)-algebra of bounded operators. Equivalently, due to the bicommutant theorem [75], a vN algebra is a subalgebra of all bounded operators that is closed under the \({}^{*}\)-operation and is its own double commutant. There exist three types of operator algebras, which we will briefly describe in the following.
Type I: Algebras of type I are those typically encountered in quantum mechanics. They always allow for an irreducible representation. Equivalently, they allow for defining pure states on the algebra.18 A trace on the algebra can be defined consistently in a natural way, as discussed in the explicit example in sec. 2.2.1. Using the trace, density matrices and entanglement entropy are also naturally defined. Type I algebras fall into two subclasses. For finite-dimensional, i.e. \(d\)-level quantum systems such as qudits, the algebra is of type I\({}_{d}\), \(d<\infty\). These can always be understood as matrix algebras. For infinite-dimensional quantum systems such as an infinite collection of qubits, the algebra is of type I\({}_{\infty}\). In this case, the trace on the algebra is not defined for every operator since e.g. the trace of the identity diverges.
Footnote 18: See e.g. [22] for an explanation why this is equivalent. Note also that a state on the algebra is not the same as a state(vector) in the Hilbert space, the latter notion of a state being more common in quantum mechanics. Since in this paper, we will not explicitly need the concept of a state on the algebra, we will refer to the elements of the Hilbert space as states.
Type II: Algebras of type II only appear for infinite-dimensional systems. They do not have an irreducible representation. This can be understood by the fact that an algebra of type II always has a commutant of the same type. The absence of an irreducible representation is equivalent to such algebras not allowing for pure states on the algebra (c.f. fn. 18). However, a trace on the algebra, and therefore also density matrices, can be consistently defined. Again, there are two subclasses: type II\({}_{1}\) allows a trace to be defined for every operator. The other case is denoted as type II\({}_{\infty}\) and can be thought of as a tensor product of II\({}_{1}\) and I\({}_{\infty}\). For the same reason as with type I\({}_{\infty}\), the trace is not defined for every element of II\({}_{\infty}\).
Within the context of gravity, algebras of type II\({}_{1}\) and II\({}_{\infty}\) were found to appear for empty de Sitter spacetime [34] and black holes in spacetimes of different (constant) curvature [19; 34; 35], respectively.
States corresponding to a type I algebra may contain a finite amount of entanglement, even for infinite dimensional quantum systems with a type I\({}_{\infty}\) description. This is not true for the case of type II algebras: in this case, entanglement is universally divergent since each state contains an infinite amount of entanglement. The same is true for algebras of type III, whose properties are stated in the following.
Type III: As for type II, algebras of type III only appear in infinite-dimensional systems and do not allow for an irreducible representation. Moreover, they do not allow for a consistent definition of a trace on the algebra. Correspondingly, density matrices and in particular the entanglement entropy are not defined. For this type, there exist three subclasses which can be characterised by the spectrum of the modular operator \(\Delta\). If the only accumulation points of the spectrum of \(\Delta\) are \(0\) and \(1\), the algebra is said to be of type III\({}_{0}\). If instead, the accumulation points are \(0\) and the integer powers of some \(\lambda^{*}<1\), one has type III\({}_{\lambda^{*}}\). If there are (at least) two such values \(\lambda^{*}_{1},\lambda^{*}_{2}\neq 0\), the accumulation points are given by \((\lambda^{*}_{1})^{n}(\lambda^{*}_{2})^{m},\ n,m\in\mathds{Z}\). Therefore, the accumulation points can approximate any real number19 and the algebra is said to be of type III\({}_{1}\). It is the third case which is the typical scenario for quantum field theory and is the primary focus for the discussion in sec. 2.2.2. Type III\({}_{1}\) also appears for the eternal black hole in AdS spacetime in the large \(N\) limit [17; 18; 33].
Footnote 19: This is true except for the case where \((\lambda^{*}_{1})^{n}=\lambda^{\prime},(\lambda^{*}_{2})^{m}=\lambda^{\prime},n,m\in\mathds{Z}\). If such a \(\lambda^{\prime}\) exists, the algebra is of type III\({}_{\lambda^{\prime}}\). See e.g. [21] for a more detailed discussion.
Factor: An algebra \(\mathcal{A}\) is referred to as a factor if its centre consists only of complex scalars, i.e. if the operators \(\mathcal{O}_{C}=z\mathds{1}\), \(z\in\mathds{C}\) are the only operators satisfying \([a,\mathcal{O}_{C}]=0\) for every \(a\in\mathcal{A}\). In this case, the centre is said to be trivial.
## Appendix B Entanglement Orbit Structure of Pure States
Here we provide details on the SZK construction [24].
Consider a generic quantum system in a Hilbert space \(\mathcal{H}\) of size \(d^{2}\). The pure states form the projective Hilbert space \(\mathcal{H}_{P}=P(\mathds{C}^{d^{2}})=\mathds{C}\mathrm{P}^{d^{2}-1}\subset \mathcal{H}\). Every state \(|\psi\rangle\) can be
written in the Schmidt decomposition
\[|\psi\rangle=\sum_{i=1}^{d}\kappa_{i}|i,\tilde{i}\rangle, \tag{114}\]
where the two bases \(|i\rangle\), \(|\tilde{i}\rangle\) imply some partition of the Hilbert space into sub-Hilbert spaces \(\mathcal{H}\otimes\tilde{\mathcal{H}}\). The numbers \(0\leq\kappa_{i}\leq 1\) are known as Schmidt coefficients and, as the square roots of the diagonal entries of the reduced density matrix \(\rho_{\text{red}}\) of either subsystem, uniquely fix the entanglement between the subsystems by
\[S_{\text{EE}}(|\psi\rangle)=-\operatorname{tr}(\rho_{\text{red}}\ln\rho_{\text {red}})=-\sum_{i=1}^{d}\kappa_{i}^{2}\ln\kappa_{i}^{2}. \tag{115}\]
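In practice, the Schmidt coefficients and the entropy above can be extracted from a state vector by a singular value decomposition of its coefficient matrix. The following short Python sketch is our own illustration; the example state is hypothetical.

```python
import numpy as np

def schmidt_entropy(psi, d):
    """Schmidt coefficients kappa_i and entanglement entropy of a pure
    bipartite state psi living in C^d (x) C^d."""
    C = psi.reshape(d, d)                              # coefficient matrix in the product basis
    kappa = np.linalg.svd(C, compute_uv=False)         # singular values = Schmidt coefficients
    p = kappa**2
    p = p[p > 1e-12]
    return kappa, float(-np.sum(p * np.log(p)))

# hypothetical example: d = 2, |psi> = sin(s)|01> + cos(s)|10>
s = np.pi / 6
psi = np.zeros(4)
psi[1], psi[2] = np.sin(s), np.cos(s)
kappa, S_EE = schmidt_entropy(psi, d=2)
print(kappa, S_EE)
```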
The three most interesting cases are
* \(\kappa_{i}=0\)\(\forall i\) except for \(\kappa_{i^{*}}=1\): vanishing entanglement,
* \(\kappa_{i}=\frac{1}{\sqrt{d}}\)\(\forall i\): maximal entanglement,
* \(0\leq\kappa_{1}\leq\kappa_{2}\leq...\leq\kappa_{d}<1\): intermediate entanglement.
Note that the ordering of \(\kappa_{i}\) in the third case is arbitrary. By a convenient labeling of the base vectors, the reduced density matrix can be written in the form
\[\rho_{\text{red}}=\text{diag}(\underbrace{0,...,0}_{m_{0}},\underbrace{\kappa _{1},...,\kappa_{1}}_{m_{1}},...,\underbrace{\kappa_{d},...,\kappa_{d}}_{m_{d} }), \tag{116}\]
such that the coefficients increase from the upper left to the lower right. Generically, several Schmidt coefficients may have the same value; above, this is denoted by the degeneracies \(m_{l}\) for each Schmidt coefficient, i.e. the lowest non-vanishing Schmidt coefficient \(\kappa_{1}\) appears \(m_{1}\) times, the second lowest \(\kappa_{2}\) appears \(m_{2}\) times and so on. Note that this also includes \(m_{0}\), which always indicates how many Schmidt coefficients are vanishing. By definition, \(\sum_{l=0}^{d}m_{l}=d\). After finishing the general discussion of this construction, below we will give a minimal example using a two spin system for illustration.
The reduced density matrix \(\rho_{\text{red}}\) is also understood as the square of the coefficient matrix of the Schmidt decomposition of \(|\psi\rangle\),
\[|\psi\rangle=\sum_{i,j=1}^{d}\sqrt{\rho_{\text{red}}}_{ij}|i,j\rangle. \tag{117}\]
We can now ask for unitary matrices \(U,\;V\in\text{U}(d)\) that, up to an overall phase, leave the Schmidt decomposition invariant,
\[U\otimes V^{T}|\psi\rangle=\sum_{i,j=1}^{d}(U\sqrt{\rho_{\text{red}}}V)_{ij}| i,j\rangle\sim|\psi\rangle. \tag{118}\]
It has been shown [24] that such matrices have the form
\[U=\begin{bmatrix}U_{0}&0&0&\ldots&0\\ 0&U_{1}&0&&\\ 0&0&U_{2}&&\vdots\\ \vdots&&&\ddots&\\ 0&&\ldots&&U_{d}\end{bmatrix},\quad V=\begin{bmatrix}V_{0}&0&0&\ldots&0\\ 0&U_{1}^{\dagger}&0&&\\ 0&0&U_{2}^{\dagger}&&\vdots\\ \vdots&&&\ddots&\\ 0&\ldots&&U_{d}^{\dagger}\end{bmatrix}. \tag{100}\]
Here, \(\dim(U_{i})=\dim(V_{i})=m_{i}\). This determines the orbit structure
\[\mathcal{O}_{\psi}=\frac{\mathrm{U}(d)}{\mathrm{U}(m_{0})\times\mathrm{U}(m_{ 1})\times...\times\mathrm{U}(m_{d})}\times\frac{\mathrm{U}(d)}{\mathrm{U}(m_{ 0})\times\mathrm{U}(1)}, \tag{101}\]
which we justify in the following. The two factors of \(\mathrm{U}(d)\) arise as the local unitary transformations acting on \(|\psi\rangle\). Since the matrices \(U\) and \(V\) leave the Schmidt decomposition invariant, the first factor has the stabilisers \(\mathrm{U}(m_{i})\). Since \(V_{0}\) is not determined by any \(U_{i}\), the second factor also has the stabiliser \(\mathrm{U}(m_{0})\). Moreover, since \(\rho_{\mathrm{red}}\) has to be invariant only up to an overall phase, there is an additional factor of \(\mathrm{U}(1)\). This construction can be understood as a fibre bundle. The base space is formed by the reduced density matrices of the same spectrum. The fibre is formed by all pure states related by the partial trace to density matrices with a particular spectrum.
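As a consistency check of the orbit structure above, the real dimensions of the two quotient factors can be computed directly from the degeneracies \(m_{l}\), using \(\dim\mathrm{U}(n)=n^{2}\) and \(\dim(G/H)=\dim G-\dim H\). The Python sketch below is our own illustration and reproduces the dimensions of the orbits appearing in the two-spin example that follows.

```python
import numpy as np

def orbit_dimensions(kappa, tol=1e-10):
    """Real dimensions of the two quotient factors of the orbit O_psi,
    computed from the degeneracies m_l of the Schmidt coefficients."""
    d = len(kappa)
    m0 = int(np.sum(np.abs(kappa) < tol))            # number of vanishing coefficients
    nonzero = np.sort(kappa[np.abs(kappa) >= tol])
    degs = []                                        # group equal non-vanishing coefficients
    for k in nonzero:
        if degs and abs(k - degs[-1][0]) < tol:
            degs[-1][1] += 1
        else:
            degs.append([k, 1])
    ms = [m0] + [m for _, m in degs]
    dim_first = d**2 - sum(m**2 for m in ms)         # U(d) / (U(m_0) x ... x U(m_d))
    dim_second = d**2 - m0**2 - 1                    # U(d) / (U(m_0) x U(1))
    return dim_first, dim_second

# two-spin example from the text: kappa = (sin(sigma), cos(sigma))
for sigma in (0.0, np.pi / 4, np.pi / 6):
    kappa = np.array([np.sin(sigma), np.cos(sigma)])
    print(sigma, orbit_dimensions(kappa))            # (2, 2), (0, 3), (2, 3)
```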
As a minimal example, consider two spins in the state
\[|\chi\rangle=\sin\sigma|\!\uparrow\downarrow\rangle+\cos\sigma|\!\downarrow \uparrow\rangle. \tag{102}\]
The projective Hilbert space for \(|\chi\rangle\) is \(\mathds{C}\mathds{P}^{3}\), so \(d=2\) and there are only two Schmidt coefficients. Comparing with (100) shows that \(\kappa_{1}=\sin\sigma\) and \(\kappa_{2}=\cos\sigma\). For particular values of \(\sigma\), this leads to the following geometric classification:
\[\sigma=0: \kappa_{1}=0,\ \kappa_{2}=1 \to S_{\mathrm{EE}}=0\] \[m_{0}=1,\ m_{1}=1 \mathcal{O}_{\chi}=\frac{\mathrm{U}(2)}{\mathrm{U}(1)\times \mathrm{U}(1)}\times\frac{\mathrm{U}(2)}{\mathrm{U}(1)\times\mathrm{U}(1)}= \mathds{C}\mathds{P}^{1}\times\mathds{C}\mathds{P}^{1}\] \[\sigma=\frac{\pi}{4}: \kappa_{1}=\kappa_{2}=\frac{1}{\sqrt{2}} \to S_{\mathrm{EE}}=\ln 2\] \[m_{0}=0,\ m_{1}=2 \mathcal{O}_{\chi}=\frac{\mathrm{U}(2)}{\mathrm{U}(2)}\times \frac{\mathrm{U}(2)}{\mathrm{U}(1)}=\mathds{1}\times\mathds{R}\mathds{P}^{3}\] \[0<\sigma<\frac{\pi}{4}: 0<\kappa_{1}\neq\kappa_{2}<1 \to 0<S_{\mathrm{EE}}<\ln 2\] \[m_{0}=0,\ m_{1}=m_{2}=1 \mathcal{O}_{\chi}=\frac{\mathrm{U}(2)}{\mathrm{U}(1)\times \mathrm{U}(1)}\times\frac{\mathrm{U}(2)}{\mathrm{U}(1)}=\mathds{C}\mathds{P}^ {1}\times\mathds{R}\mathds{P}^{3}\]
Note that for \(\frac{\pi}{4}<\sigma<\frac{\pi}{2}\), the same situation as for \(0<\sigma<\frac{\pi}{4}\) arises, just with the roles of \(\kappa_{1}\) and \(\kappa_{2}\) interchanged. Furthermore, \(\sigma=\frac{\pi}{2}\) is the same as \(\sigma=0\), except that \(\kappa_{1}=1\) and \(\kappa_{2}=0\). This reflects that there are two ways to embed \(\mathds{C}\mathds{P}^{1}\times\mathds{C}\mathds{P}^{1}\) into \(\mathds{C}\mathds{P}^{3}\), as mentioned below (3) for bipartite quantum systems with general \(d\). |
2309.09192 | A Swin-Transformer-based Model for Efficient Compression of Turbulent
Flow Data | This study proposes a novel deep-learning-based method for generating reduced
representations of turbulent flows that ensures efficient storage and transfer
while maintaining high accuracy during decompression. A Swin-Transformer
network combined with a physical constraints-based loss function is utilized to
compress the turbulent flows with high compression ratios and then restore the
data with the underlying physical properties. The forced isotropic turbulent
flow is used to demonstrate the ability of the Swin-Transformer-based (ST)
model, where the instantaneous and statistical results show the excellent
ability of the model to recover the flow data with remarkable accuracy.
Furthermore, the capability of the ST model is compared with a typical
Convolutional Neural Network-based auto-encoder (CNN-AE) by using the turbulent
channel flow at two friction Reynolds numbers $Re_\tau$ = 180 and 550. The
results generated by the ST model are significantly more consistent with the
DNS data than those recovered by the CNN-AE, indicating the superior ability of
the ST model to compress and restore the turbulent flow. This study also
compares the compression performance of the ST model at different compression
ratios (CR) and finds that the model has low enough error even at very high CR.
Additionally, the effect of transfer learning (TL) is investigated, showing
that TL reduces the training time by 64\% while maintaining high accuracy. The
results illustrate for the first time that the Swin-Transformer-based model
incorporating a physically constrained loss function can compress and restore
turbulent flows with the correct physics. | Meng Zhang, Mustafa Z Yousif, Linqi Yu, HeeChang Lim | 2023-09-17T07:36:52Z | http://arxiv.org/abs/2309.09192v1 | # A Swin-Transformer-based Model for Efficient Compression of Turbulent Flow Data
###### Abstract
This study proposes a novel deep-learning-based method for generating reduced representations of turbulent flows that ensures efficient storage and transfer while maintaining high accuracy during decompression. A Swin-Transformer network combined with a physical constraints-based loss function is utilized to compress the turbulent flows with high compression ratios and then restore the data with the underlying physical properties. The forced isotropic turbulent flow is used to demonstrate the ability of the Swin-Transformer-based (ST) model, where the instantaneous and statistical results show the excellent ability of the model to recover the flow data with remarkable accuracy. Furthermore, the capability of the ST model is compared with a typical Convolutional Neural Network-based auto-encoder (CNN-AE) by using the turbulent channel flow at two friction Reynolds numbers \(Re_{\tau}=180\) and \(550\). The results generated by the ST model are significantly more consistent with the DNS data than those recovered by the CNN-AE, indicating the superior ability of the ST model to compress and restore the turbulent flow. This study also compares the compression performance of the ST model at different compression ratios (_CR_) and finds that the model has low enough error even at very high _CR_. Additionally, the effect of transfer learning (TL) is investigated, showing that TL reduces the training time by 64% while maintaining high accuracy. The results illustrate for the first time that the Swin-Transformer-based model incorporating a physically constrained loss function can compress and restore turbulent flows with the correct physics.
Introduction
Turbulence, represented by the chaotic interactions among multiple spatial and temporal flow scales, has a significant impact on various fields such as aerospace[1], environment[2], wind energy[3; 4], and combustion[5]. With the development of measurement technologies and computing power, high-quality turbulence data can be obtained through experiments or simulations. In terms of experiments, hot-wire anemometry[6; 7], Particle Image Velocimetry (PIV)[8], and Particle-Tracking Velocimetry (PTV)[9] can measure the instantaneous velocity fields of turbulent flows with high accuracy and high spatial and temporal resolution. In terms of simulations, several computational fluid simulations are making it possible to process large amounts of data quickly and accurately, such as Reynolds-Averaged Navier-Stokes (RANS) models[10], Large Eddy Simulation (LES)[11], and Direct Numerical Simulation (DNS)[12]. The advancement of experimental and simulation techniques and the increasing demand for high-quality turbulence data have led to large amounts of high-dimensional data, posing great challenges in storage and transmission. Therefore, efficient and accurate data compression techniques are necessary to reduce storage requirements, facilitate data transfer, and extract the main features of the flow field. Efficient storage and transmission methods are critical to turbulence research and help to understand the complex behavior of turbulence.
Typically, data compression techniques extract the most critical features in the data while eliminating redundant or irrelevant information. Some techniques have been developed for the efficient storage and transfer of data. Singular value decomposition (SVD), a classic matrix decomposition technique, has been applied for data dimensionality reduction, feature extraction, and dynamic mode analysis[13; 14]. Principal component analysis (PCA) (usually termed as proper orthogonal decomposition (POD) in the fluid dynamics community)[15; 16; 17; 18], an unsupervised linear mapping compression method based on SVD technique, transforms the high dimensional data into the lower representation. Dynamic mode decomposition (DMD) is also based on SVD to compute the low-rank representation of the spatio-temporal flow data[19]. The above methods for data compression are all linear techniques, which makes them sensitive to outliers in the data. Another limitation of the above methods is they can not handle translation, rotation, and scaling of the data[19]. Furthermore, many nonlinear methods have been developed to capture complicated nonlinear structures in data. Kernel
Principal Component Analysis (KPCA) was proposed by Scholkopf _et al._[20], which can efficiently compute principal components in high dimensional spaces by using integral operator kernel functions. Lee _et al._[21] compared two nonlinear projection algorithms, Isomap and Curvilinear Distance Analysis (CDA), and showed that Isomap is faster and theoretically more robust than CDA, while CDA is slower but more robust in practical applications. Hinton and Roweis [22] introduced a probabilistic approach, called Stochastic neighbor embedding, for mapping high-dimensional representations or pairwise differences to a lower-dimensional space while preserving the neighborhood relations. A wavelet-based method incorporating a block-structured Cartesian mesh method was proposed by Sakai _et al._[23] for the flow simulation data compression. Sifuzzaman _et al._[24] compared the wavelet transform with the Fourier transform, revealing that the former approach took less response time. These methods provide more flexibility than linear compression methods but can result in high computation time and cost, especially for large datasets.
Thanks to big data, computing power, and algorithm development, machine learning has received extensive attention in recent decades and has been applied in various fields, such as computer vision[25; 26], speech recognition[27], natural language translation[28], weather forecasting[29], autonomous driving[30] and so on. In Fluid Dynamics, machine learning has been applied to solve several problems, such as flow denoising and reconstruction[31; 32; 33; 34; 35; 36; 37; 38], flow prediction[39; 40], active flow control[41; 42], and turbulent inflow generation[43; 44]. The findings from the previous papers demonstrate the potential of deep learning to efficiently handle complex spatiotemporal data. Furthermore, deep learning-based techniques have shown great promise over the past decades in compressing fluid flow data efficiently while preserving its main features. Liu _et al._[45] presented a data compression model using a generative adversarial network (GAN), where the discriminative network compresses data, and the generative network reconstructs data. They verified the performance of the GAN-based model on 3D flow past the cylinder, separation flow on the leeward of the double-delta wing, and shockwave vortex interaction. The results showed that the GAN-based model could save compression time and provide acceptable reconstruction quality. Glaws _et al._[46] proposed a fully convolutional autoencoder deep-learning method to compress decaying homogeneous isotropic turbulence, Taylor-Green vortex, and turbulent channel flow. The study demonstrated the autoencoder model outperformed a variant of SVD with a similar compression ratio and had a good generalization. Furthermore, Olmo _et al._[47] improved Glaws's work by
leveraging the physical properties inherent in the CFD data, which led to shorter training times and less training data for the same reconstruction quality. Yousif _et al._[43] applied a multiscale convolutional auto-encoder with a subpixel convolution layer (MSCSP-AE) to obtain a compact representation of the turbulent channel flow and used a Long Short-Term Memory (LSTM) network as a sequence learning model to predict the flow field over time. Their results showed that the MSCSP-AE could capture the crucial features of the flow field and then feed the compressed data to the LSTM, ensuring that the model predicts the key patterns of the flow. In the papers mentioned above, the compression models utilize stacked convolutional layers as their basis, where finite-size filters capture the spatial correlation between neighboring points, creating a more compact representation.
The convolutional layer plays a vital role in deep learning due to its ability to capture adjacent spatial information and its non-linear approximation algorithm. However, convolutional layers rely on the kernel, or receptive field, which is limited to acquiring only local spatial correlations within the kernel field, making it challenging to recognize complex patterns[48; 49]. The padding operation is one of the important parts of the convolutional layer, which is used to keep the feature map size the same as the original input. Still, it may cause artifacts at the edges of the input data, potentially affecting the model's performance in various applications, including turbulent boundary layer reconstruction (Yousif _et al._). Additionally, the convolutional layer was originally designed for pixel prediction and reconstruction in images, where pixels are distributed uniformly in a rectangular or square region. However, when processing non-uniform flow data in fluid mechanics, the convolutional layer requires pre-processing the data onto a uniform Cartesian mesh, which is unrealistic[50]. Moreover, the convolutional layer could miss flow details and consequently give wrong results for complex geometries[51].
Recently, Transformer[52] has achieved some success in sequence prediction and natural language processing (NLP)[53; 54; 55; 56; 44], as its attention mechanism can discover the long-term dependencies in data, which has also drawn attention to its potential in computer vision applications. For example, Carion _et al._[57] introduced the Detection Transformer (DETR) for object detection. Dosovitskiy _et al._[58] proposed the Vision Transformer (ViT) for image classification tasks and demonstrated that ViT outperforms CNNs. Han _et al._[59] proposed the Transformer in Transformer (TNT) for visual recognition tasks, demonstrating better preservation of local information than ViT. Liu _et al._[60] introduced the Swin Transformer
with the shifted window scheme to address the window artifact problems encountered in the ViT model and found that the Swin Transformer achieves advanced performance on object detection and semantic segmentation. Thanks to the impressive performance of the Swin Transformer, there are a large number of papers that utilized the Swin Transformer to tackle various vision problems. Liang _et al._[48] restored high-quality images from low-quality images using Swin Transformers as deep feature extraction blocks and convolutional layers as shallow feature extraction blocks. Liu _et al._[61] extended the Swin Transformer model from image recognition to video recognition and performed well on Kinetics-400, Kinetics-600, and Something-Something v2 benchmarks. Lu _et al._[49] developed an Image Compression using the variational autoencoder (VAE) architecture and Swin Transformer. Their study indicated that the Swin Transformer model requires significantly fewer model parameters than other advanced methods such as CNN-based learnt image encoding. Inspired by the success of Swin Transformer-based models in the computer vision field, this study proposes an efficient Swin-Transformer (ST)-based model incorporating the physical properties of the flow field for turbulent data storage and transmission. The ST model does not use convolutional layers to avoid the limitations of convolutional layers, such as artifacts caused by padding operation, local spatial limitations caused by the finite-size kernel, and the inapplicability of non-uniform grid data.
The remainder of this paper is organized as follows. Section 2 introduces the methodology of compressing and decompressing flow data using the proposed ST model. The Direct numerical simulation (DNS) datasets used for training and testing the ST model are described in section 3. In section 4, the results from testing the ST model are discussed, and section 5 provides a summary of the conclusions drawn from this study.
## II Methodology
Transformer[52] was originally proposed for NLP problems, but the ViT[58] adapted it for computer vision by splitting input images into patches, similar to NLP tokens. Therefore, the correlation between patches can be captured through the self-attention operation in Transformer, addressing the limitation of CNN kernels in capturing only local information. Swin Transformer[60] improves upon the ViT model and incorporates shifted windows to avoid window artifact issues. The proposed ST model is based on Swin Transformer, which
divides the input flow field data into multiple patches, groups them into several windows, and employs shifted windows to overcome the lack of window boundary information. The architecture of the ST model is shown in Figure 1 (a). The model consists of an encoder and a decoder. The encoder plays a critical role in reducing the input data size for efficient storage and transmission while maintaining the important features. The decoder is responsible for restoring the original data from the reduced representations with high accuracy. Figure 1 (a) shows that the encoder starts and ends with a dense layer, with a series of Swin Transformer blocks (SwinT-blocks) and patch-merging sandwiched in between. The decoder structure is symmetric to that of the encoder, with patch-splitting replacing patch-merging. Here, the dense layer at the beginning projects the data to an arbitrary dimension \(C\), while the dense layer at the end projects the data back to the original dimension. The SwinT-block captures the main features of the data, which will be described in detail later. The patch-merging operation performs a similar function to the downsampling layer in a CNN, reducing the number of patches as the network is stacked, while the patch-splitting operation can be considered an upsampling layer, increasing the number of patches. It is worth noting that the entire architecture has no convolutional layers.
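The following Python sketch illustrates this encoder skeleton in TensorFlow. It is a simplified reading of Figure 1 (a), in which the SwinT-blocks are left as a placeholder and patch-merging is assumed to concatenate \(2\times 2\) neighbouring patches followed by a linear projection, as in the original Swin Transformer; the layer sizes are illustrative rather than those used in this study.

```python
import tensorflow as tf

def patch_merging(x, H, W, C):
    """Concatenate 2x2 neighbouring patches and project 4C -> 2C channels,
    halving the number of patches in each direction."""
    x = tf.keras.layers.Reshape((H // 2, 2, W // 2, 2, C))(x)
    x = tf.keras.layers.Permute((1, 3, 2, 4, 5))(x)
    x = tf.keras.layers.Reshape(((H // 2) * (W // 2), 4 * C))(x)
    return tf.keras.layers.Dense(2 * C)(x)

def build_encoder(H=128, W=128, C=32, num_stages=2):
    inp = tf.keras.Input(shape=(H * W, 3))     # flattened patch sequence, 3 flow variables
    x = tf.keras.layers.Dense(C)(inp)          # initial projection to dimension C
    for s in range(num_stages):
        # SwinT-blocks would act on x here (placeholder: identity)
        x = patch_merging(x, H // 2 ** s, W // 2 ** s, C * 2 ** s)
    out = tf.keras.layers.Dense(C)(x)          # final projection to the latent size
    return tf.keras.Model(inp, out, name="st_encoder")

print(build_encoder().output_shape)            # (None, 1024, 32) for the defaults
```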
Figure 1: The architecture of (a) the ST model and (b) the SwinT-block.

As shown in Figure 1 (b), the SwinT-block mainly consists of a Window-based multi-head self-attention (W-MSA) and a Shifted Window-based multi-head self-attention (SW-MSA), both of them followed by a Multilayer Perceptron (MLP). Each W-MSA, SW-MSA, and MLP in the block is preceded by a LayerNorm layer and followed by a residual connection that adds its output to its input. The ViT uses global self-attention to calculate relationships between all tokens, which increases the computational cost when the number of tokens is very large. However, unlike global self-attention in ViT, as Figure 2 (a) shows, the ST model uses local self-attention to compute self-attention within each non-overlapping local window, where each window contains _M\(\times\)M_ patches (with \(M\) set to 8 in this study). The computational complexity \(\Omega\) of the global multi-head self-attention (MSA) and window-based MSA for input data of _h\(\times\)w_ size can be expressed as follows:
\[\Omega(MSA)=4hwC^{2}+2(hw)^{2}C, \tag{1}\]
\[\Omega(W-MSA)=4hwC^{2}+2M^{2}hwC, \tag{2}\]
here, the only difference is the last term, where the global MSA is quadratic in the input size (_hw_), whereas the W-MSA is linear in _hw_ when the value of \(M\) is fixed. Therefore, W-MSA is more cost-effective, especially for larger input sizes.
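A quick numerical comparison of Eqs. (1) and (2) illustrates this scaling; the channel dimension \(C=96\) in the Python sketch below is a hypothetical value chosen only for illustration, with \(M=8\) as in this study.

```python
# Ratio of global MSA to window-based MSA complexity, Eqs. (1)-(2)
def omega_msa(h, w, C):
    return 4 * h * w * C**2 + 2 * (h * w) ** 2 * C

def omega_wmsa(h, w, C, M=8):
    return 4 * h * w * C**2 + 2 * M**2 * h * w * C

C = 96                                 # hypothetical channel dimension
for h in (32, 64, 128):
    ratio = omega_msa(h, h, C) / omega_wmsa(h, h, C)
    print(f"h = w = {h:4d}: global MSA costs {ratio:6.1f}x the window MSA")
```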
Furthermore, the lack of cross-window information, that is, the missing connections across the boundaries of each window, can be resolved by using a shifted window multi-head self-attention (SW-MSA). The shifted window partitioning method cyclically shifts the divided windows towards the upper-left direction to form a new window division with the same number of windows, as shown in Figure 2 (b). A masking mechanism then restricts self-attention from being computed between non-adjacent window regions.
Figure 2: The window partitioning method for (a) W-MSA and (b) SW-MSA. Here, each red block means one window to calculate the local self-attention.

Self-attention in W-MSA and SW-MSA is a function that maps a query and a set of key-value pairs to an output, and its formula is as follows:

\[\mathbf{Q}=\mathbf{X}\mathbf{W}_{\mathbf{Q}}, \tag{3}\]

\[\mathbf{K}=\mathbf{X}\mathbf{W}_{\mathbf{K}}, \tag{4}\]

\[\mathbf{V}=\mathbf{X}\mathbf{W}_{\mathbf{V}}, \tag{5}\]

\[Attention(\mathbf{Q},\mathbf{K},\mathbf{V})=SoftMax\left(\frac{\mathbf{Q}\mathbf{K}^{\top}}{\sqrt{d}}+\mathbf{B}\right)\mathbf{V}, \tag{6}\]
where \(\mathbf{W}_{\mathbf{Q}},\ \mathbf{W}_{\mathbf{K}},\ \mathbf{W}_{\mathbf{V}}\) are the weight matrices shared among all windows; \(X\in\mathbb{R}^{M^{2}\times C}\) is one of the local window features; \(\mathbf{Q}\), \(\mathbf{K}\), \(\mathbf{V}\in\mathbb{R}^{M^{2}\times d}\) are the query, key and value matrices, respectively; \(d\) is the dimension of the query; \(\mathbf{B}\in\mathbb{R}^{M^{2}\times M^{2}}\) is the learnable relative positional encoding. The attention function above is typically calculated multiple times, with the number of calculations equal to the number of attention heads (referred to as \(h\)). The outputs of the individual attention heads are then concatenated to form the final multi-head attention output.
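For clarity, a single attention head of Eqs. (3)-(6) acting on one window can be written compactly as in the NumPy sketch below; the projection matrices are randomly initialized, the sizes are hypothetical, and the multi-head version simply repeats this \(h\) times with independent projections and concatenates the outputs.

```python
import numpy as np

def window_attention(X, Wq, Wk, Wv, B):
    """Single-head self-attention inside one M*M window, Eqs. (3)-(6).
    X: (M*M, C) window features; Wq, Wk, Wv: (C, d) projections;
    B: (M*M, M*M) relative positional encoding."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    d = Q.shape[-1]
    logits = Q @ K.T / np.sqrt(d) + B
    A = np.exp(logits - logits.max(axis=-1, keepdims=True))
    A /= A.sum(axis=-1, keepdims=True)       # row-wise softmax
    return A @ V

# hypothetical sizes: M = 8 patches per window side, C = 32, d = 32
M, C, d = 8, 32, 32
rng = np.random.default_rng(0)
X = rng.normal(size=(M * M, C))
out = window_attention(X, rng.normal(size=(C, d)), rng.normal(size=(C, d)),
                       rng.normal(size=(C, d)), np.zeros((M * M, M * M)))
print(out.shape)                              # (64, 32)
```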
The proposed ST model in this study incorporates physical principles to guide its learning process, facilitating the capture of the underlying physical behavior of turbulent flow and achieving better fitting to the training data. The first physical loss employed in the proposed ST model is the gradient error loss \(L_{gradient}\), which is computed from the gradient of the flow. This loss term can assist the model in accurately reconstructing the turbulent flow with non-uniform grid distribution, particularly in the wall-normal direction of turbulent channel flow in this study. The Reynolds stress error \(L_{Reynolds\ stress}\) and the spectrum error \(L_{spectrum}\) quantify the deviation in the Reynolds stress tensor of the velocity fields and the difference in the spectral content of the flow parameters, respectively. By incorporating these loss terms, the model's ability to reconstruct the Reynolds stress components and the energy spectra of the flow is enhanced. In addition, the reconstructed velocity field error \(L_{velocity}\) is also considered as the basic loss in this model. The loss functions for the proposed ST model are defined as follows:
\[L_{gradient}=\frac{1}{S}\sum_{s=1}^{S}\lVert\nabla\hat{\mathbf{x}}_{s}-\nabla{ \mathbf{x}}_{s}\rVert_{2}^{2}, \tag{7}\]
\[L_{Reynolds\ stress}=\frac{1}{S}\sum_{s=1}^{S}\lVert\hat{\mathbf{T}}_{s}-\textbf{{T }}_{s}\rVert_{2}^{2}, \tag{8}\]
\[L_{spectrum}=\frac{1}{S}\sum_{s=1}^{S}\lVert\hat{E}(k)_{s}-E(k)_{s}\rVert_{1}, \tag{9}\]
\[L_{velocity}=\frac{1}{S}\sum_{s=1}^{S}\lVert\hat{\mathbf{x}}_{s}-{\mathbf{x}}_{s} \rVert_{2}^{2}, \tag{10}\]
\[L_{total}=\lambda_{1}L_{gradient}+\lambda_{2}L_{Reynolds\ stress}+\lambda_{3}L_{ spectrum}+\lambda_{4}L_{velocity}, \tag{11}\]
where the quantities with a hat (\(\hat{\cdot}\)) are the outputs of the ST model; \(\|\cdot\|_{1}\) and \(\|\cdot\|_{2}\) are the \(L_{1}\) and \(L_{2}\) norms; \(\mathbf{T}\) denotes the Reynolds stress tensor; \(E(k)\) is the energy spectrum; \(k\) is the wavenumber; \(S\) is the batch size. The balance coefficients of the loss terms, denoted as \(\lambda_{1}\), \(\lambda_{2}\), \(\lambda_{3}\) and \(\lambda_{4}\), have been empirically determined as 0.01, 80, \(10^{-5}\), and 300 for isotropic turbulent flow, respectively. For turbulent channel flow, they are set as 5, 100, \(10^{-5}\), and 200, respectively.
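A minimal NumPy sketch of the combined loss (11) is given below for a batch of two-dimensional velocity snapshots; the Reynolds-stress and spectrum terms are simplified stand-ins for the quantities used in this study, and the default balance coefficients are those quoted above for the turbulent channel flow.

```python
import numpy as np

def total_loss(u_hat, u, lambdas=(5.0, 100.0, 1e-5, 200.0)):
    """u_hat, u: arrays of shape (S, H, W, 3) holding decompressed and true velocities."""
    # gradient error, Eq. (7)
    grad = lambda f: np.stack(np.gradient(f, axis=(1, 2)), axis=-1)
    l_grad = np.mean((grad(u_hat) - grad(u)) ** 2)

    # Reynolds stress error, Eq. (8): 3x3 tensor of velocity fluctuations (stand-in)
    fluct = lambda f: f - f.mean(axis=0, keepdims=True)
    reynolds = lambda f: np.einsum('shwi,shwj->ij', fluct(f), fluct(f)) / f[..., 0].size
    l_rs = np.mean((reynolds(u_hat) - reynolds(u)) ** 2)

    # spectrum error, Eq. (9): 1-D energy spectrum along one direction (stand-in)
    spec = lambda f: np.mean(np.abs(np.fft.rfft(f, axis=2)) ** 2, axis=(0, 1, 3))
    l_spec = np.mean(np.abs(spec(u_hat) - spec(u)))

    # velocity error, Eq. (10)
    l_vel = np.mean((u_hat - u) ** 2)

    l1, l2, l3, l4 = lambdas
    return l1 * l_grad + l2 * l_rs + l3 * l_spec + l4 * l_vel
```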
## III Data description and pre-processing
In this study, we investigate two different types of flows: the forced isotropic turbulence flow obtained from the Johns Hopkins turbulence databases (JHTDB), which serves as a demonstration case, and the turbulent channel flow at \(Re_{\tau}=180\) and 550 generated by performing DNS, which is used as a systematic test case of the model capability. In both cases, the ST model is trained using an adaptive moment estimation (Adam) optimization algorithm [62] with a batch size \(S=8\) and an initial learning rate \(\eta=0.0001\). To implement the model, the open-source library TensorFlow 2.2.3 is utilized. Additionally, an early-stopping regularization technique is employed to terminate the training.
### Forced isotropic turbulence flow data
For the demonstration case, the forced isotropic turbulence dataset obtained from the JHTDB at a Taylor-scale Reynolds number \(Re_{\lambda}=\lambda u_{rms}/\nu=418\) is considered to train and test the proposed ST model, where \(\lambda=(15\nu u_{rms}^{2}/\varepsilon)^{1/2}\) is Taylor microscale, \(u_{rms}=(\langle u_{i}u_{i}\rangle/3)^{1/2}\) represents root-mean-squared velocity, \(\nu\) is the kinematic viscosity and \(\varepsilon\) means dissipation rate. This dataset was generated from DNS using a pseudo-spectral parallel code. The governing equations used for simulation were the incompressible Navier-Stokes equations. The velocity vector \(\mathbf{u}=(u,\,v,\,w)\), where \(u,\,v,\,w\) are streamwise, wall-normal, and spanwise components, respectively, with the corresponding directions \(x,\,y,\,z\). The grid points are uniformly distributed in all directions. The detailed parameters for the forced isotropic turbulence are shown in Table 1. Further information regarding the simulation and the database utilized in this study can be found in Perlman _et al._[63].
The velocity dataset is applied as input to the ST model, which contains 200 snapshots of the \(x-y\) plane (where \(z=0\)). The dataset spans approximately two large-eddy turnover times. The training dataset consists of 100 snapshots, and the test dataset is another 100 snapshots that are completely separate from the training dataset. The time interval between each snapshot in the training and testing dataset is 0.02. In order to reduce computational costs, the entire domain is divided into 64 parts, resulting in a change in data size from the original \(N_{x}\times N_{y}=1024\times 1024\) in the \(x-y\) plane to 128\(\times\)128. Consequently, the training dataset comprises 6400 sub-snapshots, which are randomly shuffled before being fed into the model.
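The partitioning of each snapshot into sub-snapshots can be expressed as a simple reshape; the Python sketch below (illustrative only) splits a \(1024\times 1024\) snapshot with three flow variables into 64 tiles of size \(128\times 128\).

```python
import numpy as np

def split_into_tiles(snapshot, tile=128):
    """Split a (H, W, n_var) snapshot into non-overlapping tiles of size tile x tile."""
    H, W, n_var = snapshot.shape
    tiles = snapshot.reshape(H // tile, tile, W // tile, tile, n_var)
    return tiles.transpose(0, 2, 1, 3, 4).reshape(-1, tile, tile, n_var)

u = np.random.rand(1024, 1024, 3)        # placeholder snapshot of (u, v, w)
print(split_into_tiles(u).shape)          # (64, 128, 128, 3)
```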
### Turbulent channel flow
The turbulent channel flow data at \(Re_{\tau}=180\) and 550 are utilized as datasets for the proposed model. The flow data are produced through DNS using the incompressible momentum and continuity equations, which are expressed as:
\[\frac{\partial\mathbf{u}}{\partial t}+\nabla\cdot(\mathbf{u}\mathbf{u})=-\frac{1}{\rho} \nabla p+\nabla\cdot(\nu\nabla\mathbf{u}), \tag{12}\]
\[\nabla\cdot\mathbf{u}=0. \tag{13}\]
In the equations above, \(\mathbf{u}=(u\), \(v\), \(w)\) denotes the velocity vector, where \(u\), \(v\) and \(w\) represent the streamwise, wall-normal and spanwise components in \(x\), \(y\), \(z\) directions. \(t\), \(\rho\), \(p\), and \(\nu\) are time, density, pressure, and kinematic viscosity, respectively. The open-source computational fluid dynamics (CFD) finite-volume code OpenFOAM-5.0x is used to perform the simulations.
\begin{table}
\begin{tabular}{c c c c c} \hline \hline \(Re_{\lambda}\) & \(L_{x}\times L_{y}\times L_{z}\) & \(N_{x}\times N_{y}\times N_{z}\) & \(\nu\) & \(\Delta t\) \\ \hline
418 & \(2\pi\times 2\pi\times 2\pi\) & \(1024\times 1024\times 1024\) & 0.000185 & 0.0002 \\ \hline \hline \end{tabular}
\end{table}
Table 1: The detailed parameters for the forced isotropic turbulence. Here, \(L\) is the domain dimension and \(N\) is the number of grid points. \(\nu\) and \(\Delta t\) represent kinematic viscosity and simulation time-step, respectively.

The simulation parameters of each friction Reynolds number are shown in Table 2. The streamwise and spanwise directions are subject to periodic boundary conditions. Meanwhile, the channel top and bottom are subject to no-slip conditions. The grid points are uniformly distributed in the \(x\) and \(z\) directions, while a non-uniform distribution is used in the \(y\) direction. DNS data obtained from Moser _et al._[64] have been used to validate the turbulence generated by the simulation, and it was verified that the simulated data had similar statistical characteristics. The simulation uses the pressure implicit split operator algorithm to solve the coupled pressure-momentum system. A second-order accurate linear upwind scheme is utilized to discretize the convective fluxes. Similarly, all other discretization schemes used in the simulation also have second-order accuracy.

\begin{table}
\begin{tabular}{c c c c c c c c} \hline \hline \(Re_{\tau}\) & \(L_{x}\times L_{y}\times L_{z}\) & \(N_{x}\times N_{y}\times N_{z}\) & \(\Delta x^{+}\) & \(\Delta z^{+}\) & \(\Delta y^{+}_{w}\) & \(\Delta y^{+}_{c}\) & \(\Delta t^{+}\) \\ \hline
180 & \(4\pi\delta\times 2\delta\times 2\pi\delta\) & \(256\times 128\times 256\) & 8.831 & 4.415 & 0.63 & 4.68 & 0.113 \\
550 & \(4\pi\delta\times 2\delta\times 2\pi\delta\) & \(512\times 336\times 512\) & 13.492 & 6.746 & 0.401 & 5.995 & 0.030 \\ \hline \hline \end{tabular}
\end{table}
Table 2: Simulation parameters of turbulent channel flow at \(Re_{\tau}\) = 180 and 550. Here, \(L\) is the domain dimension and \(N\) is the number of grid points. The superscript "+" denotes that the quantity is made dimensionless by using \(u_{\tau}\) and \(\nu\). \(\Delta y^{+}_{w}\) refers to the distance near the wall and \(\Delta y^{+}_{c}\) refers to the spacing in the center of the channel.
The training dataset contains 16,000 snapshots of a single (\(y-z\)) plane extracted from turbulent channel flow simulation, split evenly between turbulence data at \(Re_{\tau}\) = 180 and \(Re_{\tau}\) = 550, with 8,000 snapshots in each subset. Additionally, the test dataset for each case consists of another 1,000 snapshots. To apply transfer learning to the data at \(Re_{\tau}\) = 550 by initializing the model weights with the weights of the flow at \(Re_{\tau}\) = 180, we interpolate the data \(Re_{\tau}\) = 550 to match the grid size of the data at \(Re_{\tau}\) = 180. The interval between the collected snapshots of the flow fields equals ten simulation time steps for the flow at each \(Re_{\tau}\).
## IV Results and Discussion
### Forced isotropic turbulence flow
In this section, the forced isotropic turbulence data are used to examine the ability of the ST model to compress and reconstruct data. The compression ratio (\(CR\)) is used to quantify the degree of compression achieved by the given model, where \(CR=\) (original data size / compressed data size) (with \(CR\) = 16 in this section). Additionally, test data that are not contained in the training dataset are used to obtain the subsequent results. The decompressed instantaneous spanwise vorticity field (\(\omega_{z}\)) and velocity field (\(w\)) for three different time steps are shown in Figure 3. As can be observed, the ST model achieves a satisfactory qualitative reproduction of the true fields.
Figure 3: Instantaneous spanwise (a) vorticity field and (b) velocity field for the case of forced isotropic turbulence.

In addition to qualitative assessments, a detailed analysis of flow statistics is conducted to evaluate the performance of the ST model. Figure 4 displays the probability density function (p.d.f.) plot of the decompressed velocity gradient field (\(\partial u/\partial x\)), which demonstrates the ability of the ST model to accurately reconstruct the flow field. It is worth noting that slight deviations are shown at the tails of the p.d.f. because the decompressed flow fields are less intermittent than the DNS data. Furthermore, the kinetic energy spectrum (\(E(k)\)) is used to check the performance of the ST model in terms of the inertial scales, where \(k\) is the wavenumber. As shown in Figure 5, the spectrum of the decompressed data agrees well with the DNS result, indicating that the ST model can reproduce the flow with accurate spectral content across the whole inertial range.
Figure 4: Probability density function plot of the velocity gradient field for the case of isotropic turbulent flow.

Figure 5: Kinetic energy spectrum for the case of isotropic turbulent flow.

There is clear evidence from the above demonstration results that the ST model is capable of compressing and decompressing the uniformly distributed turbulent flow effectively and maintaining the same instantaneous and statistical results as the ground truth data. In the next section, the ability of the ST model to reconstruct the non-uniformly distributed turbulent flow is verified.
### Turbulent channel flow
In this section, the compression and decompression capabilities of the ST model are verified using turbulent channel flow at \(Re_{\tau}\) = 180 and \(Re_{\tau}\) = 550. To establish a baseline for comparison, the channel flow snapshots were compressed and reconstructed using a CNN-based autoencoder (CNN-AE) with an architecture similar to the ST model. Here, convolutional layers, downsampling, and upsampling are used instead of SwinT-blocks, patch-merging, and patch-splitting. Both the ST model and the CNN-AE have the same \(CR\) of 64 and the same hyperparameters. In addition, this section evaluates the performance of the ST model at different \(CR\), verifying the robustness of the model.
Figures 6 and 7 display the instantaneous streamwise velocity field (\(u^{+}\)) and vorticity field (\(\omega_{x}^{+}\)) of the DNS and ST-decompressed results for three different time steps at \(Re_{\tau}\) = 180 and \(Re_{\tau}\) = 550, respectively. It can be observed that the ST model successfully compresses and decompresses the flow data at \(Re_{\tau}\) = 180, yielding results that are consistent with the DNS data. Nonetheless, there are some visual disparities in the decompressed turbulent channel flow at \(Re_{\tau}\) = 550, particularly in the representation of small-scale structures, while the dominant flow features and flow patterns have been well-preserved.
Figure 6: Instantaneous streamwise (a) velocity field and (b) vorticity field for the case of turbulent channel flow at \(Re_{\tau}\) = 180.

Figure 7: Instantaneous streamwise (a) velocity field and (b) vorticity field for the case of turbulent channel flow at \(Re_{\tau}\) = 550.

The turbulent statistics of the reconstructed velocity fields are compared with those of the DNS turbulent channel flow at \(Re_{\tau}\) = 180 and 550 in Figure 8 (a) and (b), respectively. The mean streamwise velocity (\(U^{+}\)) profiles of the decompressed flow using the ST model and the CNN-AE at \(Re_{\tau}\) = 180 and 550 show accurate alignment with the profiles from the DNS data, covering the entire \(y^{+}\) range. The comparison of the root-mean-square (r.m.s.) profiles of the velocity components (\(u^{+}_{rms}\), \(v^{+}_{rms}\) and \(w^{+}_{rms}\)) reveals a different observation. The r.m.s. profiles of the reconstructed flow obtained using the ST model fit well with the DNS data at both \(Re_{\tau}\) = 180 and 550. In contrast, the CNN-AE produces relatively less accurate results, particularly for the flow at \(Re_{\tau}\) = 550. Similarly, the Reynolds shear stress profiles show the same behavior as the r.m.s. profiles. This can be attributed to the fact that at higher \(Re_{\tau}\), the flow becomes more complex and chaotic, making it more challenging for the CNN-AE to reconstruct the boundary region accurately.

Figure 8: Turbulent statistics for the turbulent channel flow at (a) \(Re_{\tau}=180\) and (b) \(Re_{\tau}=550\). Mean streamwise velocity profile (left), r.m.s. profiles for the three velocity components (middle), and Reynolds shear stress profile (right).
The p.d.f. plots of the three velocity fields (\(u^{+}\), \(v^{+}\) and \(w^{+}\)) for \(Re_{\tau}=180\) and 550 decompressed from the ST model and the CNN-AE are shown in Figure 9. It can be observed that the p.d.f. of the reconstructed velocity components are consistent with the DNS results, while those from the CNN-AE exhibit a relatively high deviation, especially for the flow at \(Re_{\tau}=550\). These results indicate that the ST model offers greater advantages in compressing and decompressing the flow data than the CNN-AE.
To further confirm the capability of the ST model in reconstructing genuine spatial spectra of the restored velocity fields, the premultiplied spanwise wavenumber energy spectra of the three velocity components, denoted as \(k_{z}\phi_{\xi\xi}\), are examined. Here, \(\phi_{\xi\xi}\) denotes the spanwise wavenumber spectrum, \(\xi\) denotes a velocity component and \(k_{z}\) is the spanwise wavenumber. Figure 10 shows the plots of the \(k_{z}^{+}\phi_{\xi\xi}^{+}\) as a function of the wall-normal distance \(y^{+}\) and the spanwise wavelength \(\lambda_{z}^{+}\). The spectra of the velocity components obtained from the ST model conform to the spectra from the DNS data with a small discrepancy observed at the high wavenumbers, while the \(k_{z}^{+}\phi_{\xi\xi}^{+}\) plots obtained from the CNN-AE are less accurate than those obtained from the ST model. These results further validate the ST model's outstanding ability to accurately capture the spatial distribution of the velocity fields.
The compression and decompression accuracy of the ST model and the CNN-AE at \(Re_{\tau}\) = 180 and 550 are investigated by using the \(L_{2}\)-norm relative error of the velocity fields:
\[\epsilon(\xi)=\frac{1}{I}\sum_{i=1}^{I}\frac{\|\hat{\xi}_{i}-\xi_{i}\|_{2}}{\| \xi_{i}\|_{2}}. \tag{14}\]
where \(\hat{\xi}_{i}\) and \(\xi_{i}\) denote the velocity fields decompressed by each model and the DNS data, respectively. \(I\) represents the total number of test snapshots, which is set to 1,000. Figure 11 presents the \(L_{2}\)-norm relative error for the reconstructed flow at (a) \(Re_{\tau}\) = 180 and (b) \(Re_{\tau}\) = 550. As shown, the ST model achieves lower errors than the CNN-AE for the two Reynolds numbers with the same \(CR\), indicating the superior performance of the ST model. These results further confirm that the ST model outperforms the CNN-AE. This can be attributed to the ability of the ST model to capture long-distance spatial correlation, making it more suitable for non-uniformly distributed data. These results give confidence that the ST model can be applied to complex geometric flow data such as pipe flow by adjusting the window segmentation strategy and masking mechanism, while for the CNN-AE, the use of the padding operation can result in significant errors at the boundaries.

Figure 9: Probability density function plots of the three velocity components (streamwise velocity on the left, wall-normal velocity in the middle, and spanwise velocity on the right) as a function of the wall-normal distance for the turbulent channel flow at (a) \(Re_{\tau}\) = 180 and (b) \(Re_{\tau}\) = 550. Shaded contours indicate the p.d.f. from the DNS data, while black contours and grey contours represent reconstruction results of the ST model and the CNN-AE, respectively. The contour levels are 20%, 40%, 60% and 80% of the maximum p.d.f.
In addition to the \(CR=64\) mentioned earlier in this section, here, two more \(CR\) values are added to validate the ability of the ST model. The errors of the three velocity components increase relatively as the \(CR\) increases, which aligns with the trade-off between \(CR\) and
Figure 10: Premultiplied spanwise wavenumber energy spectra of the three velocity components (streamwise velocity on the left, wall-normal velocity in the middle, and spanwise velocity on the right) as a function of the wall-normal distance and the spanwise wavelength for the turbulent channel flow at (a) \(Re_{\tau}=180\) and (b) \(Re_{\tau}=550\). Shaded contours show DNS data, while black and grey contours represent reconstruction results of the ST model and the CNN-AE, respectively. The contour levels are set at 10% increments, ranging from 10% to 90% of the maximum premultiplied spanwise wavenumber energy spectra.
However, even with \(CR=256\), the error of the ST model is still smaller than that of the CNN-AE with \(CR=64\), which indicates that the proposed ST model is robust for different compression ratios. Moreover, the decompressed flow exhibits larger errors at the higher \(Re_{\tau}\), which is attributable to the increased turbulence intensity and complexity of the flow field at higher Reynolds numbers. Nevertheless, Figure 11 (b) shows that the error of the proposed model does not increase significantly with increasing \(CR\), demonstrating that our model can still achieve high accuracy even for challenging recovery cases.
Notably, the transfer learning (TL) technique [35; 43] is employed in this study to decrease the training time by leveraging the weights of a trained model to initialize another model. The ST model is first trained on the turbulent channel flow at \(Re_{\tau}=180\). Subsequently, the weights of the trained ST model are transferred to initialize the model for the turbulent channel flow at \(Re_{\tau}=550\), thus enabling faster convergence. The results indicate that TL can reduce the training time by 64% without compromising accuracy; the reduction in training time is larger than that reported in Yousif _et al._[44] because this study did not reduce the amount of training data.
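The weight-transfer step itself is straightforward in common deep-learning frameworks. The sketch below is a hedged PyTorch illustration, not the actual ST implementation: the toy autoencoder merely stands in for the real Swin-Transformer architecture, and only the state-dictionary copy reflects the TL procedure described above.

```python
import torch
import torch.nn as nn

class TinyAutoencoder(nn.Module):
    """Stand-in for the ST model; the real Swin-Transformer architecture is not reproduced."""
    def __init__(self, channels: int = 3, latent: int = 16):
        super().__init__()
        self.encoder = nn.Sequential(nn.Conv2d(channels, latent, 3, stride=2, padding=1), nn.GELU())
        self.decoder = nn.ConvTranspose2d(latent, channels, 4, stride=2, padding=1)

    def forward(self, x):
        return self.decoder(self.encoder(x))

model_180 = TinyAutoencoder()
# ... train model_180 on the Re_tau = 180 snapshots ...

# Transfer learning: copy the trained weights into the Re_tau = 550 model,
# then fine-tune on the new dataset instead of training from scratch.
model_550 = TinyAutoencoder()
model_550.load_state_dict(model_180.state_dict())
```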
Finally, it is important to consider the computational cost of the ST model. When \(CR\) = 64, the ST model has a total of approximately \(6.60\times 10^{6}\) trainable parameters (\(3.30\times 10^{6}\) for the encoder part and \(3.30\times 10^{6}\) for the decoder part). Training the ST model for turbulent channel flow at \(Re_{\tau}=180\) and 550 takes around 40 and 14 hours, respectively, using a single NVIDIA TITAN RTX GPU with the aid of TL. Despite the relatively long training time, the computational cost is a one-time expense.
Figure 11: Relative \(L_{2}\)-norm error of the decompressed velocity fields at (a) \(Re_{\tau}=180\) and (b) \(Re_{\tau}\) = 550. Cases 1, 2, and 3 correspond to the decompressed results from the ST with \(CR=16\), 64, and 256, while Case 4 represents the decompressed results from the CNN-AE with \(CR=64\).
After the model training is completed, the computational cost of compressing and decompressing flow data is negligible, which meets the requirements for fast and efficient data processing.
## V Conclusions
This study proposed an efficient deep-learning compression method for turbulent data storage and transmission using a Swin-Transformer-based model, called the ST model. A loss function based on physical constraints was constructed from the velocity gradient error, the Reynolds stress error, the energy spectrum error, and the velocity error, guiding the model's learning process to capture the underlying physical behavior of the turbulent flow.
First, the forced isotropic turbulent flow at \(Re_{\lambda}=418\) obtained from the JHTDB was considered as a demonstration case of the ST model's ability to compress and decompress turbulent data. The instantaneous and statistical results of the isotropic flow exhibit the capability of the ST model to compress large data sets for storage and transmission and to restore them faithfully. Furthermore, the ability of the ST model was tested and validated on the turbulent channel flow at \(Re_{\tau}=180\) and \(550\) generated by DNS. The restored instantaneous velocity fields matched well with the DNS data. In addition to visual analysis, the statistical analysis of the velocity fields also yielded accurate results, with the exception of a minor deviation in the flow at \(Re_{\tau}=550\), which can be attributed to the increased chaotic nature of the turbulence with increasing Reynolds number. The probability density function and the premultiplied spanwise wavenumber energy spectra agreed with the ground truth data, indicating an accurate spatial distribution of the reconstructed velocity fields. While the above results were obtained using \(CR=64\), a higher \(CR=256\) was used to prove the robustness of the ST model to changes in the \(CR\). The relative error plots showed that the errors remained low even under the high compression ratio, confirming the reliable compression power of the model.
In addition, the proposed ST model was compared in terms of performance with a CNN-AE. The statistical profiles of the turbulent channel flow revealed that the results from the ST model were significantly more consistent with the DNS data than those obtained by the CNN-AE, indicating the superior ability of the ST model to compress and decompress the
turbulent flow. The comparisons of the p.d.f. and the energy spectra further supported the ST model's superior ability, especially for the turbulent channel flow at \(Re_{\tau}\) = 550. Moreover, the relative error of the CNN-AE was much higher than that of the ST model under the same \(CR\). All of the compared results suggested that the ST model can achieve better restoration than the CNN-AE for non-uniform flow data. Finally, the effect of transfer learning, which leverages the weights of a trained model to initialize another model, was checked by transferring the weights of the trained ST model for the flow at \(Re_{\tau}\) = 180 to initialize the model for the flow at \(Re_{\tau}\) = 550. The results showed that TL reduced the training time by 64% without compromising accuracy.
In this study, the ST model combined with a physical constraints-based loss function provides a powerful data compression and decompression solution in fluid mechanics, achieving high compression ratios together with accurate reconstruction. This can reduce data storage and transmission requirements and consequently increase the efficiency of data-driven turbulence research.
###### Acknowledgements.
This work was supported by 'Human Resources Program in Energy Technology' of the Korea Institute of Energy Technology Evaluation and Planning (KETEP), granted financial resource from the Ministry of Trade, Industry & Energy, Republic of Korea (no. 20214000000140). In addition, this work was supported by the National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIP) (no. 2019R1I1A3A01058576).
## Data Availability
The data that support the findings of this study are available within this article.
|
2309.05970 | Coloured corner processes from asymptotics of LLT polynomials | We consider probability measures arising from the Cauchy summation identity
for the LLT (Lascoux--Leclerc--Thibon) symmetric polynomials of rank $n \geq
1$. We study the asymptotic behaviour of these measures as one of the two sets
of polynomials in the Cauchy identity stays fixed, while the other one grows to
infinity. At $n=1$, this corresponds to an analogous limit of the Schur
process, which is known to be given by the Gaussian Unitary Ensemble (GUE)
corners process.
Our main result states that, for $n>1$, our measures asymptotically split
into two parts: a continuous one and a discrete one. The continuous part is a
product of $n$ GUE corners processes; the discrete part is an explicit finite
distribution on interlacing $n$-colourings of $n$ interlacing triangles, which
has weights that are rational functions in the LLT parameter $q$. The latter
distribution has a number of interesting (partly conjectural) combinatorial
properties, such as $q$-nonnegativity and enumerative phenomena underlying its
support.
Our main tools are two different representations of the LLT polynomials, one
as partition functions of a fermionic lattice model of rank $n$, and the other
as finite-dimensional contour integrals, which were recently obtained in
arXiv:2012.02376, arXiv:2101.01605. | Amol Aggarwal, Alexei Borodin, Michael Wheeler | 2023-09-12T05:39:40Z | http://arxiv.org/abs/2309.05970v1 | # Coloured corner processes from asymptotics of LLT polynomials
###### Abstract.
We consider probability measures arising from the Cauchy summation identity for the LLT (Lascoux-Leclerc-Thibon) symmetric polynomials of rank \(n\geqslant 1\). We study the asymptotic behaviour of these measures as one of the two sets of polynomials in the Cauchy identity stays fixed, while the other one grows to infinity. At \(n=1\), this corresponds to an analogous limit of the Schur process, which is known to be given by the Gaussian Unitary Ensemble (GUE) corners process.
Our main result states that, for \(n>1\), our measures asymptotically split into two parts: a continuous one and a discrete one. The continuous part is a product of \(n\) GUE corners processes; the discrete part is an explicit finite distribution on interlacing \(n\)-colourings of \(n\) interlacing triangles, which has weights that are rational functions in the LLT parameter \(q\). The latter distribution has a number of interesting (partly conjectural) combinatorial properties, such as \(q\)-nonnegativity and enumerative phenomena underlying its support.
Our main tools are two different representations of the LLT polynomials, one as partition functions of a fermionic lattice model of rank \(n\), and the other as finite-dimensional contour integrals, which were recently obtained in arXiv:2012.02376, arXiv:2101.01605.
###### Contents
* 1 Introduction
* 2 Fermionic vertex models
* 3 Partition functions
* 4 Fusion
* 5 LLT measures and Plancherel specialization
* 6 Asymptotics
* 7 Distribution on colour sequences
* A Interlacing triangles and graph colourings
## 1. Introduction
### Preface
The Gaussian Unitary Ensemble (or GUE, for short) is one of the cornerstones of Random Matrix Theory that goes back to Wigner [14]. It consists of Hermitian matrices \(H\) distributed according to the Gaussian measure \(P(dH)\sim\exp(-Tr(H^{2}))dH\), which is the essentially unique1 distribution on this set that satisfies two natural conditions: (a) It is invariant under any unitary conjugation; and (b) Linearly independent real and imaginary parts of matrix elements are statistically independent, see [13, Section 2.5].
Footnote 1: Up to shifting and scaling.
One important feature of the GUE is that it can be viewed as a universal limiting object for discrete probabilistic systems related to representation theory. The first limiting relation of this kind goes back to Kerov [10] who studied the distribution of symmetry types of tensors in high tensor powers of a finite-dimensional vector space. In this case the limit is described by the distribution of spectra of traceless GUE matrices, and the condition of vanishing trace can be naturally removed by randomizing the number of tensor factors (also known as Poissonization). The result was later rediscovered by Tracy-Widom [12] in the first wave of works related to the asymptotics of longest increasing sequences. A somewhat more conceptual way to view this result is that of the quasi-classical limit in representation theory, see _e.g._ Heckman [11].
GUEs of different sizes can be coupled by viewing them as upper-left corners of the same infinite Hermitian matrices. Such measures on infinite Hermitian matrices naturally appear in Asymptotic Representation Theory, see Olshanski-Vershik [16]. In the framework of tiling models, GUEs coupled in this way were first obtained by Johansson-Nordenstam [17] and Okounkov-Reshetikhin [1], and their universality in such contexts was recently shown by Aggarwal-Gorin [1]. The terms "GUE minors process" and "GUE corners process" have both been introduced for the resulting ensemble; we will use the latter one.
If one translates the problem of analyzing tensor symmetry types to the language of symmetric functions (which in this case represent the characters of both general linear and symmetric groups), then one is looking at probability measures on partitions obtained from summands in the Cauchy summation identity for the Schur symmetric polynomials. There are two sets of Schur polynomials in the game; one of them remains fixed, in correspondence with the fixed dimension of the vector space that is being tensored, while the specialization of the other one is growing in the way corresponding to the growing tensor power2. The measures on partitions arising from specializations of this Cauchy identity have been known as _Schur measures_ since the work of Okounkov [1].
Footnote 2: More exactly, the Poissonization parameter tends to infinity.
The goal of the present work is to perform asymptotic analysis in a similar setup, but with the role of the Schur polynomials played instead by _LLT symmetric polynomials_. The LLT polynomials were introduced by Lascoux-Leclerc-Thibon in [14]; an insightful and easy-to-read account of their first 25 years by Thibon can be found at [15]. Cauchy-type summation identities for the LLT polynomials were later obtained by Lam [16], and the probability measures that we study have weights proportional to the summands of such an identity.
While the origins of the LLT polynomials were representation theoretic, _cf._[1], their most transparent definition is combinatorial -- they are generating functions of ribbon Young tableaux, where monomials in the variables of the polynomials are used to track the weight of the tableaux, and powers of a new parameter \(q\) track the so-called _spin_ statistics introduced in [14]. When \(q=1\), the LLT polynomials reduce to products of Schur polynomials, the number of which (also equal to the size of the ribbons) will be called the _rank_; we will denote it by \(n\) throughout the paper. Thus, one can think of the LLT polynomials as a higher rank \(q\)-analogue of (products of) Schur polynomials.
Neither the combinatorial nor the representation theoretic definitions of the LLT polynomials seem suitable for the asymptotic problem in question. On the other hand, we recently found an integral representation for these polynomials in [1, Chapter 11]. It is the steepest descent analysis of those integral representations that allowed us to reach our main result.
The limit that we obtained carried a couple of surprises, the main one being that it splits into a continuous and a discrete part. The continuous part is a direct product of \(n\) GUE corners processes. The discrete part is a probability distribution on the (finitely many) ways to colour \(n\) interlacing triangular arrays3 by \(n\) colors so that each color interlaces (an exact definition is below). The latter distribution has a few interesting properties.
Footnote 3: These arrays originate from \(n\) Gelfand–Tsetlin patterns drawn next to each other.
First, its weights can be represented as certain partition functions of a _fermionic lattice model of rank \(n\)_. The connection is in no way immediate, and it is related to the vertex model representations for the LLT polynomials obtained in [1], see also Corteel-Gitlin-Keating-Meza [1]. This vertex model interpretation of the limiting distribution ends up being crucial for our proof.
Second, these weights, which are _a priori_ rational functions of the deformation parameter \(q\), appear to be given by polynomials in \(q\) with positive integer coefficients divided by a power of the \(q\)-factorial of \(n\). We conjecture that this is always the case, even though we were only able to observe this phenomenon on the few examples we tested on a computer. The combinatorial meaning of the coefficients of the resulting polynomials also remains unclear. See Figure 2 in Appendix A below for a quick example in rank 2.
Third, the size of the support of the distributions, _i.e._, the number of interlacing \(n\)-colourings of \(n\) triangular arrays appears to be combinatorially interesting. It is easy to compute for \(n=1\) and \(2\), when it is equal to \(1\) and to a simple power of \(2\), respectively. However, for \(n=3\) it turns out to be equal to the number of \(4\)-colourings of a triangle in the triangular lattice. We originally conjectured this coincidence on the basis of numerics, and it was later proved via an elegant bijective construction by Gaetz-Gao [1]. For \(n=4\) the numerics suggest a similar relationship with \(5\)-colourings of squares in the "king graph", see
Conjecture A.5, although no proof is currently available. Finally, for \(n\geqslant 5\) we were not able to find similar matchings.
Recalling the appearance of the GUE corners process in random tilings, it is natural to ask if the limiting object we observed has a meaning in the world of tiling models. We believe it is indeed so, and in particular, the limiting behaviour of the random \(n\)-tilings of Aztec diamonds introduced by Corteel-Gitlin-Keating [12] should have the same limit, as the size of the Aztec diamond tends to infinity, near the tangency points of the "arctic curve" that bounds the frozen regions. The reason is that these \(n\)-tilings can be described via a closely related _dual_ Cauchy identity for the LLT polynomials. The focus on a tangency point of the arctic curve results in one set of the LLT polynomials within the identity staying fixed, while the specialization of the other one is growing with the size of the domain, much like in the limit that we investigated. We will, however, leave this connection to future studies.
Let us now describe our results in more detail.
### Fermionic vertex models, coloured compositions and partition functions
The vertex models that we consider in this work assign weights to collections of paths drawn on a square grid. Each vertex that is traversed by at least one path produces a weight that depends on the configuration of all the paths that go through it. The total weight for a collection of paths is the product of weights of the vertices that the paths traverse (we assume the normalization in which the weight of an empty vertex is equal to unity).
Each path carries a colour that is a number between \(1\) and \(n\), where \(n\geqslant 1\) is the rank of the model. Let us first assume that each horizontal edge of the underlying square grid can carry no more than one path, while vertical edges can be occupied by multiple paths of _distinct colours_. Thus, the states of the horizontal edges can be encoded by an integer between \(0\) and \(n\), with \(0\) denoting an edge that is not occupied by a path, while the states of the vertical edges can be encoded by \(n\)-dimensional binary strings which specify whether each colour \(\{1,\dots,n\}\) appears (or not) at that edge.
Our paths will always travel upward in the vertical direction, and in the horizontal direction a path can travel rightward or leftward, depending on the specific type of vertices that are used; this choice will always be explicitly stated.
Let us now specify our vertex weights more precisely. In regions of rightward horizontal travel, our vertex weights take the following form:
(1.1)
where \(\tilde{L}^{(s)}_{x,q}\) is a rational function of three parameters \(x,q,s\). Here \(x\) is the _spectral parameter_ associated to a row of the lattice (a different parameter may be used for each row), \(q\) is the _quantum deformation parameter_ (a global parameter that is common to all vertices), and \(s\) is the _spin parameter_, which arises due to the fact that the vertical line of the vertex is a higher-spin module for the underlying quantized affine Lie algebra \(U_{q}(\widehat{\mathfrak{sl}}(1|n))\). For the explicit form of these weights, see equation (2.3).
In regions of leftward horizontal travel, our vertex weights are given by
(1.2)
where \(\tilde{M}^{(s)}_{x,q}\) is again a rational function of the three parameters \(x,q,s\) defined above. The weights (1.1) and (1.2) are related via the simple identity
\[\tilde{M}^{(s)}_{x,q}(\boldsymbol{A},b;\boldsymbol{C},d)=\tilde{L}^{(1/s)}_{ 1/x,1/q}(\boldsymbol{A},b;\boldsymbol{C},d), \tag{1.3}\]
which holds for all \(\boldsymbol{A},\boldsymbol{C}\in\{0,1\}^{n}\) and \(b,d\in\{0,1,\dots,n\}\); see equation (2.6). For full details about the weights (1.1) and (1.2), including their Yang-Baxter equations, see Sections 2.2-2.4. We note that the
fermionic weights (1.1) and (1.2) appeared previously in [1], and bosonic counterparts of them date even further back to [1].
The partition functions (and ultimately, probability measures) that we consider are all indexed by a set of objects called _coloured compositions_:
**Definition 1.1** (Definition 3.1 below).: Let \(\lambda=(\lambda_{1},\ldots,\lambda_{n})\) be a composition of length \(n\). We introduce the set \(\mathcal{S}_{\lambda}\) of \(\lambda\)-coloured compositions as follows:
\[\mathcal{S}_{\lambda}=\Big{\{}\mu=\Big{(}0\leqslant\mu_{1}^{(1)}<\cdots<\mu_{ \lambda_{1}}^{(1)}\Big{|}0\leqslant\mu_{1}^{(2)}<\cdots<\mu_{\lambda_{2}}^{(2) }\Big{|}\cdots\Big{|}0\leqslant\mu_{1}^{(n)}<\cdots<\mu_{\lambda_{n}}^{(n)} \Big{)}\Big{\}}. \tag{1.4}\]
One may think of the elements of \(\mathcal{S}_{\lambda}\) as \(n\)-tuples \(\big{(}\mu^{(1)},\ldots,\mu^{(n)}\big{)}\) of strict compositions. For each \(1\leqslant i\leqslant n\), the superscript of \(\mu^{(i)}\) is its _colour_, and its length is \(\lambda_{i}\).
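For readers who prefer to experiment with these objects, the following short Python sketch (an illustration added here, not part of the original text) checks membership in \(\mathcal{S}_{\lambda}\) exactly as prescribed by (1.4).

```python
def is_coloured_composition(mu, lam):
    """Return True iff mu = (mu^(1), ..., mu^(n)) lies in S_lambda, i.e. for each
    colour i the block mu^(i) is a strictly increasing tuple of lambda_i integers >= 0."""
    if len(mu) != len(lam):
        return False
    for block, length in zip(mu, lam):
        if len(block) != length or any(p < 0 for p in block):
            return False
        if any(a >= b for a, b in zip(block, block[1:])):
            return False
    return True

# Example with n = 3 and lambda = (3, 2, 2).
print(is_coloured_composition(((0, 2, 5), (1, 3), (0, 4)), (3, 2, 2)))   # True
print(is_coloured_composition(((2, 2, 5), (1, 3), (0, 4)), (3, 2, 2)))   # False: not strictly increasing
```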
Our first partition function of interest is denoted \(f_{\mu}(\lambda;x_{1},\ldots,x_{m};s)\). This is a (nonsymmetric) rational function in an alphabet \((x_{1},\ldots,x_{m})\), indexed by a composition \(\lambda=(\lambda_{1},\ldots,\lambda_{n})\) satisfying \(\sum_{i=1}^{n}\lambda_{i}=m\), as well as a coloured composition \(\mu\in\mathcal{S}_{\lambda}\). Up to an overall multiplicative factor, \(f_{\mu}(\lambda;x_{1},\ldots,x_{m};s)\) is defined as a partition function using the vertex weights (1.1):
\[(-s)^{|\mu|}\cdot f_{\mu}(\lambda;x_{1},\ldots,x_{m};s)= \tag{1.5}\]
where \(\boldsymbol{e}_{0}\) denotes the \(n\)-dimensional zero vector and \(\boldsymbol{A}(k)=\sum_{i=1}^{n}\boldsymbol{1}_{k\in\mu^{(i)}}\boldsymbol{e} _{i}\) is a binary string that encodes whether \(k\) is present (or not) as a part in \(\mu^{(i)}\), for all \(1\leqslant i\leqslant n\) and \(k\geqslant 0\). A convenient visualization aid is that for each \(1\leqslant i\leqslant n\), a collection of \(\lambda_{i}\) paths of colour \(i\) enter the partition (1.5) via its left boundary and travel through the lattice, ultimately exiting via the top of the columns \(\mu_{1}^{(i)}<\cdots<\mu_{\lambda_{i}}^{(i)}\).
In a similar vein, one may define multivariate (nonsymmetric) rational functions as partition functions constructed from the weights (1.2). We denote these by \(g_{\mu}(\lambda;x_{1},\ldots,x_{m};s)\), where the specification of \((x_{1},\ldots,x_{m})\), \(\lambda\) and \(\mu\in\mathcal{S}_{\lambda}\) is exactly as above. Up to an overall multiplicative factor, \(g_{\mu}(\lambda;x_{1},\ldots,x_{m};s)\) is defined as follows:
\[(-s)^{-|\mu|}\cdot g_{\mu}(\lambda;x_{1},\ldots,x_{m};s)= \tag{1.6}\]
where (as above) \(\mathbf{A}(k)=\sum_{i=1}^{n}\mathbf{1}_{k\in\mu^{(i)}}\mathbf{e}_{i}\) for all \(1\leqslant i\leqslant n\) and \(k\geqslant 0\).
The functions \(f_{\mu}(\lambda;x_{1},\dots,x_{m};s)\) and \(g_{\mu}(\lambda;x_{1},\dots,x_{m};s)\) are also not new; they were introduced in [1]. They have a number of key properties, including exchange relations under the action of Hecke algebra (Section 3.5) and antisymmetrization identities (Section 3.6). They also have meaningful \(s=0\) degenerations, when they both reduce to (certain antisymmetrizations of) nonsymmetric Hall-Littlewood polynomials. Moreover, the \(s=0\) degenerations of \(f_{\mu}(\lambda;x_{1},\dots,x_{m};s)\) and \(g_{\mu}(\lambda;x_{1},\dots,x_{m};s)\) pair together to provide an integral formula for the LLT polynomials; it is the latter fact that shall be of most interest to us in the current text.
### Two formulas for LLT polynomials
In this section we recall two formulas for the LLT polynomials. The first is as partition functions in a fermionic \(U_{q}(\widehat{\mathfrak{sl}}(1|n))\) vertex model, following [13, 1]. The second is as a contour integral, following [1].
We begin with the partition function representation of the LLT polynomials. To state it, we extend our previous notion of vertex models to the situation where both horizontal and vertical edges may admit multiple paths of distinct colours; as such, every edge of the underlying square grid is now labelled by an \(n\)-dimensional binary string which specifies whether each colour \(\{1,\dots,n\}\) appears (or not) at that edge. For arbitrary binary strings \(\mathbf{A}=(A_{1},\dots,A_{n})\), \(\mathbf{B}=(B_{1},\dots,B_{n})\), \(\mathbf{C}=(C_{1},\dots,C_{n})\), \(\mathbf{D}=(D_{1},\dots,D_{n})\) we then introduce the vertex weights
(1.7)
where \(|\mathbf{D}|=\sum_{i=1}^{n}D_{i}\) and \(\varphi(\mathbf{X},\mathbf{Y})=\sum_{1\leqslant i<j\leqslant n}X_{i}Y_{j}\) for any two vectors \(\mathbf{X},\mathbf{Y}\in\mathbb{Z}^{n}\).
Fix a composition \(\lambda=(\lambda_{1},\dots,\lambda_{n})\) and two coloured compositions \(\mu,\nu\in\mathcal{S}_{\lambda}\). The skew LLT (symmetric) polynomial \(\mathbb{G}_{\mu/\nu}(\lambda;x_{1},\dots,x_{p})\) is given by the following partition function in the model (1.7):
(1.8)
where \(\mathbf{A}(k)=\sum_{i=1}^{n}\mathbf{1}_{k\in\mu^{(i)}}\mathbf{e}_{i}\), \(\mathbf{B}(k)=\sum_{i=1}^{n}\mathbf{1}_{k\in\nu^{(i)}}\mathbf{e}_{i}\) for all \(1\leqslant i\leqslant n\) and \(k\geqslant 0\). As with our previous partition functions, there is a simple lattice path interpretation of (1.8): for each \(1\leqslant i\leqslant n\), a collection of \(\lambda_{i}\) paths of colour \(i\) enter the partition function (1.8) via the base of columns \(\nu_{1}^{(i)}<\dots<\nu_{\lambda_{i}}^{(i)}\) and exit at the top of columns \(\mu_{1}^{(i)}<\dots<\mu_{\lambda_{i}}^{(i)}\). As such, (1.8) provides a realization of the LLT polynomials in terms of \(n\) overlapping ensembles of non-intersecting lattice paths.
**Theorem 1.2** (Theorem 5.3 below).: _Fix a composition \(\lambda=(\lambda_{1},\dots,\lambda_{n})\) such that \(\sum_{i=1}^{n}\lambda_{i}=m\), and choose two coloured compositions \(\mu,\nu\in\mathcal{S}_{\lambda}\). The LLT polynomials (1.8) are given by the following integral
expression:_
\[\mathbb{G}_{\mu/\nu}(\lambda;x_{1},\dots,x_{p}) =\frac{q^{m(m+1)/2}}{(q-1)^{m}}\cdot\left(\frac{1}{2\pi\mathbf{i}} \right)^{m}\oint_{C_{1}}\frac{dy_{1}}{y_{1}}\dots\oint_{C_{m}}\frac{dy_{m}}{y_{m}}\] \[\times\prod_{1\leqslant i<j\leqslant m}\left(\frac{y_{j}-y_{i}}{y _{j}-qy_{i}}\right)f_{\tilde{\mu}}(1^{m};y_{1}^{-1},\dots,y_{m}^{-1};0)g_{\nu}( \lambda;y_{1},\dots,y_{m};0)\prod_{i=1}^{p}\prod_{j=1}^{m}\frac{1}{1-x_{i}y_{j }}, \tag{1.9}\]
_where the contours \(\{C_{1},\dots,C_{m}\}\) are certain \(q\)-nested contours that all surround the origin; see the discussion at the start of Section 3.7. We have also used the notation \(1^{m}=(1,\dots,1)\) (where \(1\) appears with multiplicity \(m\)) and have defined \(\tilde{\mu}\) to be the unique element of \(\mathcal{S}_{1^{m}}\) obtained by ordering the parts of \(\mu\) in increasing order; see Definition 3.13._
Throughout most of the text, we consider LLT polynomials (1.8) in which \(\lambda=N^{n}\) for some \(N\geqslant 1\); that is, each colour is represented exactly \(N\) times within the partition function (1.8). Whenever we make this choice, we write
\[\mathbb{G}_{\mu/\nu}(N^{n};x_{1},\dots,x_{p})=\mathbb{G}_{\mu/\nu}(x_{1},\dots,x_{p}).\]
We also assign a special notation to the coloured composition in \(\mathcal{S}_{N^{n}}\) whose parts are as small as they can be, by writing
\[\Delta=(0,1,\dots,N-1|0,1,\dots,N-1|\cdots|0,1,\dots,N-1)\in\mathcal{S}_{N^{n}}. \tag{1.10}\]
### LLT Cauchy identity and Markov kernels
The Markov kernels that we study in this work are built from the (skew) Cauchy identity for the LLT polynomials [1, 1, 1]:
**Theorem 1.3** (Theorem 5.5 below).: _Fix two positive integers \(p\) and \(N\), and two alphabets \((x_{1},\dots,x_{p})\) and \((y_{1},\dots,y_{N})\). Let \(\nu\in\mathcal{S}_{N^{n}}\) be a coloured composition. The LLT polynomials (1.8) satisfy the Cauchy summation identity_
\[\sum_{\mu\in\mathcal{S}_{N^{n}}}q^{-2\psi(\mu)}\mathbb{G}_{\mu/\nu}(x_{1},\dots,x_{p})\mathbb{G}_{\mu}(y_{1},\dots,y_{N})=\prod_{i=1}^{p}\prod_{j=1}^{N}\frac {1}{(x_{i}y_{j};q)_{n}}\cdot q^{-2\psi(\nu)}\mathbb{G}_{\nu}(y_{1},\dots,y_{N}), \tag{1.11}\]
_where \((z;q)_{n}=\prod_{k=1}^{n}(1-q^{k-1}z)\) denotes the standard \(q\)-Pochhammer function, the exponents on the left and right hand side are defined as_
\[\psi(\mu)=\frac{1}{2}\sum_{1\leqslant i<j\leqslant n}\ \sum_{a\in\mu^{(i)}}\ \sum_{b\in\mu^{(j)}}\mathbf{1}_{a>b},\]
_and \(\mathbb{G}_{\mu}(y_{1},\dots,y_{N})\equiv\mathbb{G}_{\mu/\Delta}(y_{1},\dots,y_{N})\). This holds either as a formal power series, or as a numeric equality as long as \(|q|<1\) and \(|x_{i}y_{j}|<1\) for all \(i,j\)._
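Both elementary ingredients of the weights in (1.11), the exponent \(\psi\) and the \(q\)-Pochhammer factor, are easy to evaluate in practice; the short Python sketch below (added here for illustration only) implements them verbatim from the definitions above.

```python
def psi(mu):
    """psi(mu) = (1/2) * #{(i, j, a, b) : i < j, a in mu^(i), b in mu^(j), a > b}."""
    total = 0
    n = len(mu)
    for i in range(n):
        for j in range(i + 1, n):
            total += sum(1 for a in mu[i] for b in mu[j] if a > b)
    return total / 2

def q_pochhammer(z, q, n):
    """(z; q)_n = prod_{k=1}^{n} (1 - q^{k-1} z)."""
    out = 1.0
    for k in range(1, n + 1):
        out *= 1 - q ** (k - 1) * z
    return out

mu = ((0, 2), (1, 3))        # n = 2 colours
print(psi(mu))               # 0.5: the single pair (a, b) = (2, 1) with a > b, halved
print(q_pochhammer(0.3, 0.5, 2))
```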
If one divides equation (1.11) by the expression appearing on the right hand side, the resulting summands comprise a probability measure on coloured compositions \(\mu\in\mathcal{S}_{N^{n}}\), assuming that they are nonnegative. One simple choice of the alphabets \((x_{1},\dots,x_{p})\) and \((y_{1},\dots,y_{N})\) which respects this nonnegativity requirement is to set the former all equal to \(1\) and the latter to a Plancherel specialization \(\mathrm{Pl}_{t}\), where \(t\in\mathbb{R}_{>0}\) (see Section 5.4). This choice will be our sole focus in the current work; we denote the resulting Markov kernels as follows:
\[\mathbb{P}_{t,p}(\nu\to\mu)=q^{-2(\psi(\mu)-\psi(\nu))}\exp\left(-\frac{p(1-q ^{n})}{1-q}t\right)\mathbb{G}_{\mu/\nu}(1^{p})\frac{\mathbb{G}_{\mu}(\mathrm{ Pl}_{t})}{\mathbb{G}_{\nu}(\mathrm{Pl}_{t})}, \tag{1.12}\]
for any pair of coloured compositions \(\mu,\nu\in\mathcal{S}_{N^{n}}\). In particular, we will be interested in strings of random coloured compositions generated by the repeated action of \(\mathbb{P}_{t,1}\) on the initial state (1.10):
\[\Delta\xrightarrow{\mathbb{P}_{t,1}}\lambda^{[1]}\xrightarrow{\mathbb{P}_{t,1 }}\dots\xrightarrow{\mathbb{P}_{t,1}}\lambda^{[m]}\xrightarrow{\mathbb{P}_{t,1 }}\lambda^{[m+1]}\xrightarrow{\mathbb{P}_{t,1}}\dots\xrightarrow{\mathbb{P}_{t,1}}\lambda^{[N]}. \tag{1.13}\]
It is worth noting that any individual coloured composition within (1.13) is distributed according to a non-skew version of (1.12); see, in particular, Proposition 5.7 of the text.
Our main result is a complete description of the asymptotic behaviour of the coloured compositions \(\lambda^{[i]}\), \(1\leqslant i\leqslant N\), as \(t\to\infty\) (with the number of steps in the chain (1.13) remaining finite).
### Asymptotic analysis of Markov kernels
Before proceeding with the asymptotics, we introduce a convenient way to encode the coloured compositions appearing in the chain (1.13); we focus our attention on two neighbours in this sequence, namely \(\lambda^{[m]}\) and \(\lambda^{[m+1]}\). We shall begin with the assumption that these coloured compositions have pairwise distinct parts.4
Footnote 4: This is the first of several assumptions that we make prior to performing our asymptotic analysis. The justification for these assumptions is an _a posteriori_ one: any sequence of coloured compositions (1.13) which violates our assumptions will be shown to take up a vanishingly small part of the measure, in the limit \(t\to\infty\).
Considering firstly \(\lambda^{[m]}\in\mathcal{S}_{m^{n}}\), we see that it may be expressed uniquely in terms of its _coordinates_\(\ell^{[m]}=\left\{\ell_{1}^{[m]}<\cdots<\ell_{nm}^{[m]}\right\}\subset\mathbb{Z}_ {\geqslant 0}\), which are simply the parts of \(\lambda^{[m]}\) listed in increasing order, and its _colour sequence_\(c^{[m]}=\left(c_{1}^{[m]},\ldots,c_{nm}^{[m]}\right)\in\{1,\ldots,n\}^{nm}\), which is a vector that records the colour \(c_{i}^{[m]}\) attributed to the part \(\ell_{i}^{[m]}\) as it occurs within \(\lambda^{[m]}\); the reader is referred to Definition 6.1 for a precise formulation of these objects. After performing a similar identification for \(\lambda^{[m+1]}\in\mathcal{S}_{(m+1)^{n}}\), we have the correspondences
\[\lambda^{[m]}\leftrightarrow\left(\ell_{1}^{[m]},\ldots,\ell_{nm}^{[m]} \Big{|}c_{1}^{[m]},\ldots,c_{nm}^{[m]}\right),\qquad\lambda^{[m+1]} \leftrightarrow\left(\ell_{1}^{[m+1]},\ldots,\ell_{n(m+1)}^{[m+1]}\Big{|}c_{1 }^{[m+1]},\ldots,c_{n(m+1)}^{[m+1]}\right), \tag{1.14}\]
and work directly with the right hand sides of these expressions in our calculations.
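In concrete terms, the passage from a coloured composition to the pair (coordinates, colour sequence) in (1.14) amounts to sorting all parts and remembering which block each part came from; a minimal Python sketch (added here for illustration) is:

```python
def coordinates_and_colours(mu):
    """Encode a coloured composition with pairwise distinct parts by its coordinates
    (all parts in increasing order) and its colour sequence (the colour of each part),
    with colours numbered 1, ..., n as in the superscripts mu^(1), ..., mu^(n)."""
    labelled = [(part, colour + 1) for colour, block in enumerate(mu) for part in block]
    labelled.sort()
    return [p for p, _ in labelled], [c for _, c in labelled]

# n = 2 colours, m = 2 parts per colour: mu^(1) = {0, 3}, mu^(2) = {1, 2}.
print(coordinates_and_colours(((0, 3), (1, 2))))   # ([0, 1, 2, 3], [1, 2, 2, 1])
```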
Our next assumption concerning the coordinates \(\ell^{[m]}\) and \(\ell^{[m+1]}\) is that, in the limit \(t\to\infty\), they arrange into \(n\)_interlacing bundles_ as follows:
\[\ell_{j(m+1)+i}^{[m+1]}<\ell_{jm+i}^{[m]}<\ell_{j(m+1)+i+1}^{[m+1]},\qquad \forall\ i\in\{1,\ldots,m\},\quad j\in\{0,\ldots,n-1\}. \tag{1.15}\]
A schematic illustration of such an arrangement, for \(n=3\) and varying \(m\), is provided below (see also Figure 1 in the main body of the text):
(1.16)
More precisely, we will assume that the coordinates \(\ell^{[m]}\) and \(\ell^{[m+1]}\) scale as
\[\ell_{i}^{[k]}\mapsto q^{n-\lceil i/k\rceil}t+(q^{n-\lceil i/k\rceil}t)^{ \frac{1}{2}}x_{i}^{[k]},\qquad 1\leqslant i\leqslant nk,\qquad k\in\{m,m+1\}, \tag{1.17}\]
as \(t\to\infty\). Here \(x^{[m]}=\left\{x_{1}^{[m]}<\cdots<x_{nm}^{[m]}\right\}\) and \(x^{[m+1]}=\left\{x_{1}^{[m+1]}<\cdots<x_{n(m+1)}^{[m+1]}\right\}\) are sets of reals that obey the relations (1.15) (with \(\ell\) replaced by \(x\)), while \(\lceil i/k\rceil\) denotes the ceiling function.
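The interlacing condition (1.15) is easy to test mechanically; the sketch below (illustrative only, with the 1-based indices of (1.15) translated to Python's 0-based ones) checks it bundle by bundle.

```python
def bundles_interlace(lower, upper, n):
    """Condition (1.15): the nm coordinates `lower` (level m) interlace with the
    n(m+1) coordinates `upper` (level m+1), separately within each of the n bundles."""
    m = len(lower) // n
    assert len(lower) == n * m and len(upper) == n * (m + 1)
    for j in range(n):                        # bundle index, j = 0, ..., n-1
        for i in range(1, m + 1):             # position within the bundle
            lo = upper[j * (m + 1) + i - 1]   # ell^[m+1]_{j(m+1)+i}
            mid = lower[j * m + i - 1]        # ell^[m]_{jm+i}
            hi = upper[j * (m + 1) + i]       # ell^[m+1]_{j(m+1)+i+1}
            if not (lo < mid < hi):
                return False
    return True

# A small example with n = 2, m = 1: each bundle is one coordinate nested inside two.
print(bundles_interlace([3, 10], [1, 5, 8, 12], n=2))   # True
```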
Let \(\theta_{1}^{[j]}\leqslant\cdots\leqslant\theta_{j}^{[j]}\) denote the eigenvalues of the top-left \(j\times j\) corner of a random \(N\times N\) matrix in the Gaussian Unitary Ensemble. The joint law of the eigenvalues \(\theta_{i}^{[j]}\), \(1\leqslant i\leqslant j\), \(j\in[1,N]\) is known as the _GUE corners process_ of rank \(N\). We let
\[\rho_{\text{GUE}}\left(x^{[1]}\prec\cdots\prec x^{[N]}\right):=\rho\left( \theta_{i}^{[j]}=x_{i}^{[j]},1\leqslant i\leqslant j\leqslant N\right)\]
denote the associated joint probability density, and write
\[\rho_{\text{GUE}}\left(x^{[m]}\to x^{[m+1]}\right):=\rho\left(\theta_{i}^{[m+ 1]}=x_{i}^{[m+1]},1\leqslant i\leqslant m+1\Big{|}\theta_{i}^{[m]}=x_{i}^{[m] },1\leqslant i\leqslant m\right)\]
for the conditional probability density for the eigenvalues of the top-left \((m+1)\times(m+1)\) corner, given those of the \(m\times m\) one. See [10] and Section 6.3 of the current text for more information on these definitions.
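For intuition, the GUE corners process and its interlacing structure can be simulated directly; the NumPy sketch below (an illustration added here, up to an overall scaling convention for the Gaussian measure) samples one matrix and returns the eigenvalues of its nested top-left corners.

```python
import numpy as np

def gue_corners_sample(N, seed=None):
    """Sample a GUE-type Hermitian matrix of size N and return, for j = 1, ..., N,
    the ordered eigenvalues of its top-left j x j corner."""
    rng = np.random.default_rng(seed)
    A = rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N))
    H = (A + A.conj().T) / 2   # Hermitian with Gaussian entries (scaling convention not fixed here)
    return [np.linalg.eigvalsh(H[:j, :j]) for j in range(1, N + 1)]

corners = gue_corners_sample(4, seed=0)
# Cauchy interlacing: eigenvalues of consecutive corners interlace almost surely.
for small, big in zip(corners, corners[1:]):
    assert all(big[i] <= small[i] <= big[i + 1] for i in range(len(small)))
print([list(np.round(c, 3)) for c in corners])
```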
We are now able to state the main result of this paper.
**Theorem 1.4** (Theorem 6.4 below).: _In the asymptotic regime described by (1.17), the Markov kernel \(\mathbb{P}_{t,1}\) weakly converges to a product of \(n\) independent probability measures with densities in the GUE corners process, multiplied by a factor that depends only on the colour sequences (1.14):_
\[\mathbb{P}_{t,1}\left(0\cup\lambda^{[m]}\to\lambda^{[m+1]}\right)\\ \to\prod_{i=1}^{n}\rho_{\mathrm{GUE}}\left(x_{(i-1)m+1}^{[m]}, \ldots,x_{im}^{[m]}\to x_{(i-1)(m+1)+1}^{[m+1]},\ldots,x_{i(m+1)}^{[m+1]}\right) dx^{[m+1]}\cdot\mathbb{P}_{\mathrm{col}}\left(c^{[m]}\to c^{[m+1]}\right) \tag{1.18}\]
_as \(t\to\infty\), where \(dx^{[m+1]}\) denotes the \(n(m+1)\)-dimensional Lebesgue measure. The final multiplicative factor in (1.18) is given explicitly by equation (1.22) below, and defines a discrete transition probability in a process on colour sequences:_
\[\sum_{c^{[m+1]}}\mathbb{P}_{\mathrm{col}}\left(c^{[m]}\to c^{[m+1]}\right)=1, \tag{1.19}\]
_where the sum is taken over all \(c^{[m+1]}\in\{1,\ldots,n\}^{n(m+1)}\)._
Our proof of Theorem 1.4 is by explicit analysis of (1.12) at \(p=1\), employing the lattice model formula (1.8) for the factor \(\mathbb{G}_{\mu/\nu}(1)\) and (a Plancherel-specialized version of) the integral formula (1.9) for the functions \(\mathbb{G}_{\mu}(\mathrm{Pl}_{t})\) and \(\mathbb{G}_{\nu}(\mathrm{Pl}_{t})\). The study of the latter integrals proceeds by steepest descent analysis, combined with certain crucial algebraic properties of the functions (1.5) and (1.6) which appear in their integrands. As \(t\to\infty\) one observes a remarkable factorization of these integrals into purely coordinate dependent and colour sequence dependent parts; the former can then be matched directly with transition densities for the GUE corners process. At the end of this procedure we have a leftover factor valued on colour sequences (see the second line of equation (6.34), below) and _a priori_ it is by no means obvious that this quantity defines a valid discrete probability measure. Resolution of this particular issue is the topic of Section 7 (see also Section 1.6, below).
As a direct consequence of Theorem 1.4 we obtain the following corollary, completely describing the behaviour of the chain of coloured compositions (1.13) as \(t\to\infty\):
**Corollary 1.5** (Corollary 6.5 below).: _Let \(\mathbb{P}_{t,N}(\Delta\to\lambda^{[1]}\to\cdots\to\lambda^{[N]})\) denote the joint distribution of coloured compositions \(\lambda^{[1]},\ldots,\lambda^{[N]}\) generated by \(N\) applications of the kernel \(\mathbb{P}_{t,1}\) to the trivial state \(\Delta\). In the asymptotic regime described by (1.17), we have the following weak convergence of measures:_
\[\mathbb{P}_{t,N}\left(\Delta\to\lambda^{[1]}\to\cdots\to\lambda^{ [N]}\right)\\ \to\prod_{i=1}^{n}\rho_{\mathrm{GUE}}\left((x^{[1]})_{i}\prec(x^{ [2]})_{i}\prec\cdots\prec(x^{[N]})_{i}\right)dx^{[1,N]}\cdot\mathbb{P}_{ \mathrm{col}}\left(c^{[1]}\prec c^{[2]}\prec\cdots\prec c^{[N]}\right) \tag{1.20}\]
_as \(t\to\infty\), with \(dx^{[1,N]}=\prod_{i=1}^{N}dx^{[i]}\) denoting the \(nN(N+1)/2\)-dimensional Lebesgue measure. Here we have introduced the shorthand_
\[\left(x^{[k]}\right)_{i}=\left(x^{[k]}_{(i-1)k+1},\ldots,x^{[k]}_{ik}\right), \qquad\forall\ 1\leqslant i\leqslant n,\ \ 1\leqslant k\leqslant N,\]
_and \(\mathbb{P}_{\mathrm{col}}(c^{[1]}\prec c^{[2]}\prec\cdots\prec c^{[N]})\) is a joint distribution on colour sequences given explicitly by (1.23) below._
### Distribution on interlacing triangles
While Theorem 1.4 and Corollary 1.5 provide a complete description of the asymptotic behaviour of coordinates of the coloured compositions (1.13) as \(t\to\infty\), we are left with the task of understanding the factors \(\mathbb{P}_{\mathrm{col}}\left(c^{[m]}\to c^{[m+1]}\right)\) and \(\mathbb{P}_{\mathrm{col}}\left(c^{[1]}\prec c^{[2]}\prec\cdots\prec c^{[N]}\right)\) that occur therein. These factors provide information about how colours distribute themselves within interlacing diagrams of the form (1.5), as \(t\to\infty\).
Let \(i^{[m]}\in\{1,\ldots,n\}^{nm}\) and \(j^{[m+1]}\in\{1,\ldots,n\}^{n(m+1)}\) be two sequences such that each colour \(\{1,\ldots,n\}\) is represented exactly \(m\) times in \(i^{[m]}\) and \(m+1\) times in \(j^{[m+1]}\). We say that these colour sequences interlace,
and write \(i^{[m]}\prec j^{[m+1]}\), provided they can be stacked to form an _admissible diagram_:
(1.21)
In the above diagram the incoming/outgoing vertical arrows are grouped into a total of \(n\) bundles, each of width \(m\) or \(m+1\), respectively. The colours \(i^{[m]}=(i_{1},\ldots,i_{nm})\) enter sequentially via the arrows at the base, while colours \(j^{[m+1]}=(j_{1},\ldots,j_{n(m+1)})\) exit sequentially via the arrows at the top. A copy of all colours \([1,n]\equiv\{1,\ldots,n\}\) enters via the right, and no colours exit via the left. The diagram is admissible provided that, after one draws the trajectories of all coloured paths, each colour \(\{1,\ldots,n\}\) never occurs more than once at any point along the thick horizontal line.
Given two colour sequences \(i^{[m]}\prec j^{[m+1]}\) we define a statistic \(\xi\left(i^{[m]};j^{[m+1]}\right)\) which enumerates the number of events of the form
in which a path of colour \(c\) passes over a path of colour \(i\), with \(c>i\), within the diagram (1.21).
**Theorem 1.6**.: _The factor \(\mathbb{P}_{\mathrm{col}}\left(c^{[m]}\to c^{[m+1]}\right)\) appearing in (1.18) is given by_
\[\mathbb{P}_{\mathrm{col}}\left(c^{[m]}\to c^{[m+1]}\right)\\ =\mathbf{1}_{c^{[m]}\prec c^{[m+1]}}(-1)^{n}q^{\binom{nm+n+1}{2}- \binom{nm+1}{2}-\xi\left(c^{[m]};c^{[m+1]}\right)}\frac{(1-q)^{nm}}{(q;q)_{n} ^{2m+1}}\frac{g_{\Delta}^{c^{[m+1]}}\left((m+1)^{n};\vec{Q}^{[m+1]}\right)}{g_ {\Delta}^{c^{[m]}}\left(m^{n};\vec{Q}^{[m]}\right)}, \tag{1.22}\]
_where \(g_{\Delta}^{c^{[m]}}\left(m^{n};\vec{Q}^{[m]}\right)\) denotes a partition function of the form (1.6), with \(m\mapsto nm\), \(\lambda=m^{n}\), \(\mu=\Delta\), \(s=0\),_
\[(x_{1},\ldots,x_{nm})=\underbrace{(q^{n-1},\ldots,q^{n-1})}_{m\text{ times}}\cup\cdots\cup\underbrace{(q,\ldots,q)}_{m\text{ times}}\cup\underbrace{(1,\ldots,1)}_{m\text{ times}}\equiv\vec{Q}^{[m]},\]
_and in which colour \(c_{i}^{[m]}\) exits via the left edge of row \(i\) within (1.6) (rather than in totally ordered fashion). An analogous definition applies to \(g_{\Delta}^{c^{[m+1]}}\left((m+1)^{n};\vec{Q}^{[m+1]}\right)\). The expression (1.22) constitutes a valid discrete transition probability; namely, it satisfies the sum-to-unity property (1.19)._
The sum-to-unity property in Theorem 1.6 is not immediate and plays a substantial role in the proof of Theorem 1.4 and Corollary 1.5. In particular, as briefly mentioned in the beginning of Section 1.5, we only compute the asymptotics of the Markov kernel (1.12) under a certain ansatz for the behaviour of the coordinates \(\ell^{[m]},\ell^{[m+1]}\) and colour sequences \(c^{[m]},c^{[m+1]}\). To show that this ansatz asymptotically exhausts the full measure induced by the kernel (1.12) requires the above sum-to-unity property, whose proof hinges upon a rather unusual expansion property of the partition functions in question; see, in particular, Theorem 7.3. The main tools behind this proof are commutation relations between the row operators used to build our partition functions, which in turn are a consequence of the underlying Yang-Baxter integrability.
More generally, one may consider collections of colour sequences \(\emptyset\prec c^{[1]}\prec\cdots\prec c^{[N]}\) such that for all \(1\leqslant k\leqslant N\), \(c^{[k]}\in\{1,\ldots,n\}^{nk}\) and each colour \(\{1,\ldots,n\}\) is represented exactly \(k\) times in \(c^{[k]}\). We refer to such a collection of positive integers as an _interlacing triangular array_ of _rank \(n\)_ and _height \(N\)_, and let \(\mathcal{T}_{N}(n)\) denote the set of all such objects; see Definition 7.5 for a more precise formulation.
As a direct consequence of Theorem 1.6 we obtain the following result:
**Corollary 1.7** (Corollary 7.9 below).: _Let \(\emptyset\prec c^{[1]}\prec\cdots\prec c^{[N]}\) be an interlacing triangular array generated by \(N\) successive applications of the Markov kernel (1.22) on the empty sequence \(\emptyset\). This array has joint distribution_
\[\mathbb{P}_{\mathrm{col}}\left(c^{[1]}\prec\cdots\prec c^{[N]}\right)=\mathbf{1 }_{c^{[1]}\prec\cdots\prec c^{[N]}}(-1)^{nN}q^{\binom{nN+1}{2}}\frac{(1-q)^{n \binom{N}{2}}}{(q;q)_{n}^{N^{2}}}g_{\Delta}^{c^{[N]}}\left(N^{n};\vec{Q}^{[N]} \right)\prod_{i=1}^{N}q^{-\xi\left(c^{[i-1]};c^{[i]}\right)}. \tag{1.23}\]
### Positivity and enumeration conjectures
A number of interesting observations arise concerning the measure (1.23), as well as the set \(\mathcal{T}_{N}(n)\) of interlacing triangular arrays on which it is supported. The first is a positivity property that we noticed from explicit implementation of the Markov kernel (1.22) on a computer:
**Conjecture 1.8** (Conjecture 7.11 below).: _Fix integers \(m,n\geqslant 1\) and a colour sequence \(c^{[m]}\in\{1,\ldots,n\}^{nm}\). Let \(\mathbb{P}_{\mathrm{col}}(c^{[m]})\) denote the probability of arriving at the colour sequence \(c^{[m]}\) after \(m\) applications of the Markov kernel (1.22) to the trivial sequence \(c^{[0]}=\emptyset\). Then one has that_
\[\mathbb{P}_{\mathrm{col}}\left(c^{[m]}\right)=\mathcal{P}\left(c^{[m]}\right) \cdot\left(\prod_{i=1}^{n}\frac{1-q}{1-q^{i}}\right)^{m^{2}}\quad\text{where} \ \ \mathcal{P}\left(c^{[m]}\right)\in\mathbb{N}[q]. \tag{1.24}\]
In fact, one sees that (1.24) expresses \(\mathbb{P}_{\mathrm{col}}(c^{[m]})\) as a ratio of two positive polynomials in \(q\); the denominator is nothing but the Poincaré polynomial associated to \(\mathfrak{S}_{n}\) raised to the power \(m^{2}\). An explicit illustration of this conjecture, for \(n=2\), is given in Figure 2. At this stage we do not know of any combinatorial interpretation of \(\mathcal{P}\left(c^{[m]}\right)\), although it would be very interesting to find one.
There is also the purely combinatorial problem of enumerating the number of elements in the set \(\mathcal{T}_{N}(n)\). It is a trivial fact that \(|\mathcal{T}_{N}(1)|=1\),5 and one can easily show that \(|\mathcal{T}_{N}(2)|=2^{N}\); see Proposition A.1. While for \(n\geqslant 3\) we have no direct enumeration of \(|\mathcal{T}_{N}(n)|\), we do present two conjectures relating to \((n+1)\)-colourings of certain graphs:
Footnote 5: In the case \(n=1\), LLT measures degenerate to their Schur counterparts. In that situation, the asymptotic analysis carried through in this text leads to a single GUE corners process, which has a trivial interlacing \(1\)-colouring.
**Conjecture 1.9** (Conjecture A.3 below).: _Let \(G_{N}^{\triangle}\) denote the triangular graph_
_where the number of vertices along one side of the triangle is equal to \(N+1\). Let \(\mathfrak{g}_{N}^{\triangle}(4)\) denote the number of \(4\)-colourings of \(G_{N}^{\triangle}\) (adjacent vertices must have different colours). We conjecture that_
\[4\cdot|\mathcal{T}_{N}(3)|=\mathfrak{g}_{N}^{\triangle}(4),\qquad\forall\ N \geqslant 1.\]
**Conjecture 1.10** (Conjecture A.5 below).: _Let \(G_{N}^{\times}\) denote the graph_
_where two vertices share an edge if they are connected via a king move on the chessboard (that is, they are connected via a unit horizontal, vertical, or diagonal step), and the number of vertices along one side of the square is equal to \(N+1\). Let \(\mathfrak{g}_{N}^{\times}(5)\) denote the number of \(5\)-colourings of \(G_{N}^{\times}\) (adjacent vertices must have different colours). We conjecture that_
\[5\cdot|\mathcal{T}_{N}(4)|=\mathfrak{g}_{N}^{\times}(5),\qquad\forall\ N \geqslant 1.\]
Conjecture 1.9 has recently been proved bijectively [GG], but Conjecture 1.10 remains open. Based on these conjectures, it is tempting to speculate about the possibility of assigning meaningful probability measures to graph colourings, but this lies outside the scope of the current work.
### Acknowledgments
Amol Aggarwal was partially supported by a Clay Research Fellowship, a Packard Fellowship, and the IAS School of Mathematics. Alexei Borodin was partially supported by the NSF grants DMS-1664619, DMS-1853981, and the Simons Investigator program. Michael Wheeler was supported by an Australian Research Council Future Fellowship, grant FT200100981.
## 2. Fermionic vertex models
In this section we review the basic vertex models that will be used throughout the text; these are _fermionic vertex models_, as introduced in [1]. We give the explicit form of our vertex weights in Sections 2.2-2.3, as well as the Yang-Baxter equations that they satisfy, in Section 2.4. We conclude by introducing _row operators_ and studying algebraic relations between them, in Sections 2.5-2.6; these results will be needed in the subsequent material on partition functions in Section 3.
### Notation
For all pairs of positive integers \(i,j\) such that \(i\leqslant j\) let \([i,j]\subset\mathbb{N}\) denote the interval \(\{i,i+1,\ldots,j\}\). Similarly, we define \((i,j]=[i+1,j]\) when \(i<j\), and \((i,j]=\emptyset\) when \(i=j\). For all \(1\leqslant i\leqslant n\), let \(\boldsymbol{e}_{i}\in\mathbb{R}^{n}\) denote the \(i\)-th Euclidean unit vector. Let \(\boldsymbol{e}_{0}\in\mathbb{R}^{n}\) denote the zero vector. Define \(\boldsymbol{e}_{[i,j]}=\sum_{i\leqslant k\leqslant j}\boldsymbol{e}_{k}\); more generally, for any non-empty set \(I\subset\mathbb{N}\) we write \(\boldsymbol{e}_{I}=\sum_{i\in I}\boldsymbol{e}_{i}\). For any vector \(\boldsymbol{A}=(A_{1},\ldots,A_{n})\in(\mathbb{Z}_{\geqslant 0})^{n}\) and indices \(i,j\in\{1,\ldots,n\}\) we define
\[\boldsymbol{A}_{i}^{+}=\boldsymbol{A}+\boldsymbol{e}_{i},\quad\boldsymbol{A} _{i}^{-}=\boldsymbol{A}-\boldsymbol{e}_{i},\quad\boldsymbol{A}_{ij}^{+-}= \boldsymbol{A}+\boldsymbol{e}_{i}-\boldsymbol{e}_{j},\quad A_{[i,j]}=\sum_{k =i}^{j}A_{k},\quad|\boldsymbol{A}|=A_{[1,n]}=\sum_{k=1}^{n}A_{k},\]
where in the second last case it is assumed that \(i\leqslant j\). By agreement, we choose \(A_{[i,j]}=0\) for \(i>j\).
Let \(\mathfrak{S}_{m}\) denote the symmetric group of degree \(m\). For any set \(I\subset\mathbb{N}\) we define \(\mathfrak{S}_{I}\) to be the set of all permutations of the elements in \(I\); in particular, we then have \(\mathfrak{S}_{[1,m]}\equiv\mathfrak{S}_{m}\).
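For quick experimentation with the vector notation above, a few small Python helpers (added here for illustration; indices are 1-based as in the text) suffice:

```python
import numpy as np

def e(i, n):
    """Unit vector e_i for 1 <= i <= n, with e_0 the zero vector."""
    v = np.zeros(n, dtype=int)
    if i > 0:
        v[i - 1] = 1
    return v

def interval_sum(A, i, j):
    """A_{[i,j]} = A_i + ... + A_j (equal to 0, by agreement, when i > j)."""
    return int(np.sum(A[i - 1:j])) if i <= j else 0

n = 4
A = np.array([1, 0, 1, 1])
print(A + e(2, n))            # A_2^+  = A + e_2
print(A + e(1, n) - e(3, n))  # A_{13}^{+-} = A + e_1 - e_3
print(interval_sum(A, 2, 4))  # A_{[2,4]} = 2
```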
### \(L\)-weights
Our partition functions will be expressed in terms of two families of vertex weights. The first of these were introduced in [1, Example 8.1.2 and Figure 8.2] and we call them _\(L\)-weights_; they are denoted by6
Footnote 6: We use a tilde when writing our weights for consistency with the work of [1]. In that earlier work, which dealt with models based on \(U_{q}(\widehat{\mathfrak{sl}}(n+1))\) rather than \(U_{q}(\widehat{\mathfrak{sl}}(1|n))\) of the current text, the notation \(L_{s,q}^{(s)}(\boldsymbol{A},b;\boldsymbol{C},d)\) was reserved for vertex weights in the _stochastic gauge_ (that is, with a sum-to-unity property). While the weights (2.3) no longer satisfy a sum-to-unity property, it is easily seen that they have a completely analogous structure to their tilde analogues in [1, Chapter 2].
(2.1) \[\tilde{L}_{z,q}^{(s)}(\boldsymbol{A},b;\boldsymbol{C},d)\equiv\tilde{L}_{z}(\boldsymbol{A},b;\boldsymbol{C},d).\]
The labels \(b,d\) assigned to the left and right horizontal edges of the vertex take values in \(\{0,1,\dots,n\}\), while the labels \(\boldsymbol{A},\boldsymbol{C}\in\{0,1\}^{n}\) assigned to the bottom and top vertical edges are \(n\)-dimensional binary strings. Particle conservation for \(L\)-type vertices happens in the SW \(\to\) NE direction, namely:
\[\tilde{L}_{z,q}^{(s)}(\boldsymbol{A},b;\boldsymbol{C},d)=0,\qquad\text{unless}\qquad\boldsymbol{A}+\boldsymbol{e}_{b}=\boldsymbol{C}+\boldsymbol{e}_{d}. \tag{2.2}\]
For the cases where this constraint is obeyed, we have the following table of weights:
(2.3)
where it is assumed that \(1\leqslant i<j\leqslant n\).
The weights (2.3) take a very similar form to the weights \(\tilde{L}_{z}(\mathbf{A},b;\mathbf{C},d)\) defined in [11, Sections 2.2 and 2.5]; in fact, the two sets of weights differ only with respect to two details. The first is that the weights (2.1) are defined only for \(\mathbf{A},\mathbf{C}\in\{0,1\}^{n}\) (that is, for _fermionic_ states), whereas in [11, Section 2.2] one has \(\mathbf{A},\mathbf{C}\in(\mathbb{Z}_{\geqslant 0})^{n}\) (_bosonic_ states). The second is that the specific weight \(\tilde{L}_{z}(\mathbf{A},i;\mathbf{A},i)\) is different across the two works7
Footnote 7: Indeed, in [11, Sections 2.2 and 2.5], one has
\[\tilde{L}_{z}(\mathbf{A},i;\mathbf{A},i)=\frac{(sq^{A_{i}}-z)q^{A_{(i,n)}s}}{1-sz}\]
In certain partition functions that we subsequently define, the boundary conditions inject into the lattice exactly one particle of each colour \(\{1,\dots,n\}\). In such partition functions, each colour \(\{1,\dots,n\}\) flows at most once through a vertex of the lattice; in this setting, both of the differences between the weights (2.1) and those of [11, Sections 2.2 and 2.5], pointed out above, are no longer apparent. This fact will allow us to deduce matchings between certain functions that we define in the present work and those of [11], in spite of the fact that the model used in the current text is _a priori_ different.
### \(M\)-weights
The second family of vertex weights we call \(M\)_-weights_; they are denoted by
(2.4)
As in the case of \(L\)-weights, labels assigned to the left and right horizontal edges take values in \(\{0,1,\dots,n\}\), while labels assigned to the bottom and top vertical edges are \(n\)-dimensional binary strings. In contrast to \(L\)-weights, particle conservation for \(M\)-type vertices happens in the SE \(\to\) NW direction, namely:
\[\tilde{M}_{z,q}^{(s)}(\mathbf{A},b;\mathbf{C},d)=0,\qquad\text{unless}\qquad\mathbf{A}+ \mathbf{e}_{b}=\mathbf{C}+\mathbf{e}_{d}. \tag{2.5}\]
For all \(\mathbf{A},\mathbf{C}\in\{0,1\}^{n}\) and \(b,d\in\{0,1,\ldots,n\}\), we define
\[\tilde{M}^{(s)}_{z,q}(\mathbf{A},b;\mathbf{C},d)=\tilde{L}^{(1/s)}_{1/z,1/q}(\mathbf{A},b; \mathbf{C},d), \tag{2.6}\]
expressing every \(M\)-weight in terms of a corresponding \(L\)-weight, under reflection about the thick vertical line of the vertex, and reciprocation of the parameters \(z\), \(q\), \(s\).
### Yang-Baxter equations
We introduce one further set of vertex weights which arise from the fundamental \(R\)-matrix for the quantum affine superalgebra \(U_{q}(\widehat{\mathfrak{sl}}(1|n))\)[10]; these we call _fundamental weights_. They are denoted by the crossing of two thin lines:
(2.7)
These vertices have the conservation property
\[R_{z,q}(a,b;c,d)=0,\qquad\text{unless}\qquad\mathbf{e}_{a}+\mathbf{e}_{b}=\mathbf{e}_{c}+ \mathbf{e}_{d}.\]
For the cases where the constraint \(\mathbf{e}_{a}+\mathbf{e}_{b}=\mathbf{e}_{c}+\mathbf{e}_{d}\) is obeyed, we have the following table of weights:
(2.8)
where we assume that \(0\leqslant a<b\leqslant n\).
The \(L\)-weights, \(M\)-weights and fundamental weights satisfy a collection of Yang-Baxter equations, that we record as a single theorem below. These Yang-Baxter equations underpin the algebraic relations between the row operators that we define in Section 2.5.
**Theorem 2.1**.: _For any fixed integers \(a_{1},a_{2},a_{3},b_{1},b_{2},b_{3}\in\{0,1,\ldots,n\}\) and vectors \(\mathbf{A},\mathbf{B}\in\{0,1\}^{n}\), the vertex weights (2.1), (2.4), (2.7) satisfy the relations_
\[\sum_{0\leqslant c_{1},c_{2}\leqslant n}\ \sum_{\mathbf{C}\in\{0,1\}^{n}}R _{y/x}(a_{2},a_{1};c_{2},c_{1})\tilde{L}_{x}(\mathbf{A},c_{1};\mathbf{C},b_{1})\tilde{ L}_{y}(\mathbf{C},c_{2};\mathbf{B},b_{2})\\ =\sum_{0\leqslant c_{1},c_{2}\leqslant n}\ \sum_{\mathbf{C}\in\{0,1\}^{n}} \tilde{L}_{y}(\mathbf{A},a_{2};\mathbf{C},c_{2})\tilde{L}_{x}(\mathbf{C},a_{1};\mathbf{B},c_{ 1})R_{y/x}(c_{2},c_{1};b_{2},b_{1}), \tag{2.9}\]
\[\sum_{0\leqslant c_{1},c_{3}\leqslant n}\ \sum_{\boldsymbol{C}\in\{0,1\}^{n}}\tilde{L}_{x}(\boldsymbol{A},a_{1};\boldsymbol{C},c_{1})R_{1/(qxz)}(a_{3},c_{1};c_{3},b_{1})\tilde{M}_{z}(\boldsymbol{C},c_{3};\boldsymbol{B},b_{3})\\ =\sum_{0\leqslant c_{1},c_{3}\leqslant n}\ \sum_{\boldsymbol{C}\in\{0,1\}^{n}}\tilde{M}_{z}(\boldsymbol{A},a_{3};\boldsymbol{C},c_{3})R_{1/(qxz)}(c_{3},a_{1};b_{3},c_{1})\tilde{L}_{x}(\boldsymbol{C},c_{1};\boldsymbol{B},b_{1}), \tag{2.10}\] \[\sum_{0\leqslant c_{2},c_{3}\leqslant n}\ \sum_{\boldsymbol{C}\in\{0,1\}^{n}}\tilde{M}_{y}(\boldsymbol{A},a_{2};\boldsymbol{C},c_{2})\tilde{M}_{z}(\boldsymbol{C},a_{3};\boldsymbol{B},c_{3})R_{y/z}(c_{3},c_{2};b_{3},b_{2})\\ =\sum_{0\leqslant c_{2},c_{3}\leqslant n}\ \sum_{\boldsymbol{C}\in\{0,1\}^{n}}R_{y/z}(a_{3},a_{2};c_{3},c_{2})\tilde{M}_{z}(\boldsymbol{A},c_{3};\boldsymbol{C},b_{3})\tilde{M}_{y}(\boldsymbol{C},c_{2};\boldsymbol{B},b_{2}). \tag{2.11}\]
Proof.: All three equations may be recovered from the master Yang-Baxter equation (4.6); we will comment briefly on this in Section 4.3. The equations (2.9)-(2.11) are the fermionic cousins of equations (2.3.1)-(2.3.3) in [1, Section 2.3]; the latter being valid for the bosonic counterparts of the models (2.1) and (2.4).
### Row operators
Let \(V\) be the vector space obtained by taking the formal linear span of all \(n\)-dimensional binary strings:
\[V=\bigoplus_{\boldsymbol{A}\in\{0,1\}^{n}}\mathbb{C}\left|\boldsymbol{A} \right.,\]
and for any \(N\geqslant 0\) consider the \((N+1)\)-fold tensor product of this space:
\[\mathbb{V}(N)=\underbrace{V\otimes\cdots\otimes V}_{N+1\ \text{times}}.\]
For each \(0\leqslant i,j\leqslant n\) we introduce a linear operator \(T_{i,j}^{\rightarrow}(x;N)\in\text{End}(\mathbb{V}(N))\) with the action
(2.12)
The quantity on the right hand side of (2.12) is a one-row partition function in the model (2.1), in which the spectral parameter \(x\) is attached to the horizontal line of the row and the two horizontal boundary edges carry the labels \(i\) and \(j\); it can be calculated by multiplying the weights of each vertex from left to right, noting that the integer values prescribed to all internal horizontal edges are fixed by the local conservation property (2.2).
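To make this left-to-right evaluation concrete, here is a minimal sketch (ours, not part of the original text) of how one may compute such a one-row matrix element; the callable `L_weight`, standing in for the table (2.3), is an assumed input, and the internal horizontal labels are resolved exactly as dictated by (2.2).

```python
def one_row_partition_function(L_weight, A_list, C_list, left, right):
    """Evaluate a one-row partition function in the model (2.1).

    A_list, C_list : binary strings on the bottom and top vertical edges of the
                     row (first and third arguments of each vertex weight).
    left, right    : labels on the leftmost and rightmost horizontal edges.
    L_weight       : callable L_weight(A, b, C, d) returning a vertex weight;
                     a hypothetical stand-in for the explicit table (2.3).
    """
    weight = 1
    b = left                                    # label entering the current vertex
    for A, C in zip(A_list, C_list):
        # components of A + e_b - C, colour by colour (e_0 is the zero vector)
        diff = [a + (1 if b == k + 1 else 0) - c
                for k, (a, c) in enumerate(zip(A, C))]
        if all(v == 0 for v in diff):
            d = 0                               # no colour exits to the right
        elif sum(diff) == 1 and all(v in (0, 1) for v in diff):
            d = diff.index(1) + 1               # the unique colour forced to exit
        else:
            return 0                            # conservation (2.2) cannot hold
        weight *= L_weight(tuple(A), b, tuple(C), d)
        b = d                                   # internal horizontal label is fixed
    return weight if b == right else 0
```

Summing such quantities over the free vertical boundary data then reproduces the operator action in (2.12).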
In a similar vein, for each \(0\leqslant i,j\leqslant n\) we introduce a linear operator \(T_{i,j}^{\leftarrow}(x;N)\in\text{End}(\mathbb{V}(N))\) with the action
(2.13) \[T_{i,j}^{\leftarrow}(x;N):\bigotimes_{k=0}^{N}\left|\boldsymbol{B}(k)\right\rangle\mapsto\sum_{\boldsymbol{A}(0),\ldots,\boldsymbol{A}(N)\in\{0,1\}^{n}}\left(\begin{array}{ccc}\boldsymbol{B}(0)&\cdots&\boldsymbol{B}(N)\\ i&\cdots&j\\ \boldsymbol{A}(0)&\cdots&\boldsymbol{A}(N)\end{array}\right)\bigotimes_{k=0}^{N}\left|\boldsymbol{A}(k)\right\rangle,\]
where the quantity inside the large parentheses is a one-row partition function in the model (2.4), with the spectral parameter \(x\) attached to the horizontal line and the labels \(i\), \(j\) assigned to its two horizontal boundary edges.
### Commutation relations
We introduce a lift of \(\mathbb{V}(N)\) to an infinite tensor product:
\[\mathbb{V}(\infty)=\operatorname{Span}_{\mathbb{C}}\left\{\bigotimes_{k=0}^{ \infty}|\boldsymbol{A}(k)\rangle\right\}\]
where the binary strings \(\boldsymbol{A}(k)\in\{0,1\}^{n}\), \(k\geqslant 0\) have the stability property
\[\exists\ M\in\mathbb{N}\ :\ \boldsymbol{A}(k)=\boldsymbol{e}_{0},\ \forall\ k \geqslant M.\]
Let \(T_{i,0}^{\to}(x;\infty)=\mathcal{C}_{i}(x)\) and \(T_{i,0}^{\leftarrow}(x;\infty)=\mathcal{B}_{i}(x)\) denote the corresponding lifts of the operators (2.12) and (2.13), in the case where the right index \(j\) is set to \(0\). We shall only ever consider the case where \(\mathcal{C}_{i}(x)\) and \(\mathcal{B}_{i}(x)\) act on stable states in the infinite tensor product, _i.e._, on the elements of \(\mathbb{V}(\infty)\).
**Theorem 2.2**.: _Fix two nonnegative integers \(i,j\) such that \(1\leqslant i<j\leqslant n\), and two arbitrary complex parameters \(x,y\). The following exchange relations hold:_
\[\frac{x-qy}{x-y}\mathcal{C}_{i}(y)\mathcal{C}_{j}(x) =\frac{(1-q)y}{x-y}\mathcal{C}_{i}(x)\mathcal{C}_{j}(y)+\mathcal{C}_{j}(x)\mathcal{C}_{i}(y), \tag{2.14}\] \[\frac{y-qx}{q(y-x)}\mathcal{B}_{j}(y)\mathcal{B}_{i}(x) =\frac{(1-q)x}{q(y-x)}\mathcal{B}_{j}(x)\mathcal{B}_{i}(y)+\mathcal{B}_{i}(x)\mathcal{B}_{j}(y). \tag{2.15}\]
Proof.: The proof of (2.14) makes use of the first Yang-Baxter equation (2.9), applied successively to the two-row partition function that arises by joining the operators \(\mathcal{C}_{i}(y)\) and \(\mathcal{C}_{j}(x)\); the proof of (2.15) employs the third Yang-Baxter equation (2.11), applied to the two-row partition function that arises by joining operators \(\mathcal{B}_{j}(y)\) and \(\mathcal{B}_{i}(x)\). For full details, we refer the reader to [1, Section 3.2, Theorems 3.2.1 and 3.2.5].
**Theorem 2.3**.: _Fix two nonnegative integers \(i,j\) such that \(0\leqslant i<j\leqslant n\), and complex parameters \(x,y\) such that_
\[\left|\frac{x-s}{1-sx}\cdot\frac{y-s}{1-sy}\right|<1. \tag{2.16}\]
_The row operators \(\mathcal{C}_{i}(x)\) and \(\mathcal{B}_{j}(y)\) obey the following commutation relation:_
\[\mathcal{C}_{i}(x)\mathcal{B}_{j}(y)=\frac{1-qxy}{1-xy}\mathcal{B}_{j}(y) \mathcal{C}_{i}(x). \tag{2.17}\]
Proof.: The proof makes use of the second Yang-Baxter equation (2.10), applied successively to the two-row partition function that arises by joining the operators \(\mathcal{C}_{i}(x)\) and \(\mathcal{B}_{j}(y)\). For the full details, we refer the reader to [1, Section 3.2, Theorem 3.2.3].
## 3. Partition functions
This section brings together a number of partition function definitions, as well as fundamental results related to them, for use throughout the remainder of the text. Most of the facts summarized here were first obtained in [1, Chapters 3-5 and Chapter 8], and where a theorem is directly transcribed from there, we refer the reader to that earlier text for a full proof. We begin by defining _coloured compositions_ in Section 3.1; these are used to index many of the quantities that we subsequently define. Sections 3.2-3.4 introduce the partition functions required; we then state a number of properties of these partition functions in Sections 3.5-3.9.
### Coloured compositions
**Definition 3.1**.: Let \(\lambda=(\lambda_{1},\dots,\lambda_{n})\) be a composition of length \(n\) such that \(\left|\lambda\right|=\sum_{i=1}^{n}\lambda_{i}=m\); \(m\) is called its _weight_. We introduce the set \(\mathcal{S}_{\lambda}\) of (strict, nonnegative) \(\lambda\)-coloured compositions as follows:
\[\mathcal{S}_{\lambda}=\Big{\{}\mu=\Big{(}0\leqslant\mu_{1}^{(1)}<\dots<\mu_{ \lambda_{1}}^{(1)}\Big{|}0\leqslant\mu_{1}^{(2)}<\dots<\mu_{\lambda_{2}}^{(2)} \Big{|}\dots\Big{|}0\leqslant\mu_{1}^{(n)}<\dots<\mu_{\lambda_{n}}^{(n)}\Big{)} \Big{\}}. \tag{3.1}\]
The elements of \(\mathcal{S}_{\lambda}\) are vectors of length \(n\) whose \(i\)-th component \(\mu^{(i)}\) is a strict8, nonnegative signature of length \(\lambda_{i}\), for all \(1\leqslant i\leqslant n\). These components, or blocks, demarcate the colouring of \(\mu\); the colour of each block is indicated by the superscript attached to it. We refer to \(\lambda\) as the _colour profile_ of \(\mu\).
Footnote 8: That is, with strict inequalities in (3.1); this corresponds to the fermionicity of our model.
**Definition 3.2**.: With the same assumptions as in Definition 3.1, we also define the set \(\mathcal{S}_{\lambda}^{+}\subset\mathcal{S}_{\lambda}\) as follows:
\[\mathcal{S}_{\lambda}^{+}=\{\mu\in\mathcal{S}_{\lambda}:\mu_{1}^{(j)}\geqslant 1,\ \forall\ 1\leqslant j\leqslant n\}. \tag{3.2}\]
This is the restriction to coloured compositions that have positive parts only. For any coloured composition \(\mu\in\mathcal{S}_{\lambda}^{+}\) we define its _padding_\(0\cup\mu\in\mathcal{S}_{\lambda+1^{n}}\) by prepending a part of size \(0\) in each of the \(n\) blocks of \(\mu\).
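As a small illustration (ours, on hypothetical data), the padding of Definition 3.2 simply prepends a zero part to every block:

```python
def pad(mu_blocks):
    """Padding of Definition 3.2: prepend a part equal to 0 to each block."""
    return [[0] + list(block) for block in mu_blocks]

# A coloured composition in S^+_{(2,1)} and its padding in S_{(3,2)}:
print(pad([[1, 3], [2]]))  # [[0, 1, 3], [0, 2]]
```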
Let \(\mu\in\mathcal{S}_{\lambda}\) be a \(\lambda\)-coloured composition. We associate to \(\mu\) a vector \(\left|\mu\right\rangle_{\lambda}\in\mathbb{V}(\infty)\), defined as follows:
\[\left|\mu\right\rangle_{\lambda}=\bigotimes_{k=0}^{\infty}\left|\mathbf{A}(k) \right\rangle,\qquad\mathbf{A}(k)=\sum_{j=1}^{n}A_{j}(k)\mathbf{e}_{j},\qquad A_{j}(k) =\left\{\begin{array}{ll}1,&\quad k\in\mu^{(j)},\\ &\\ 0,&\quad\text{otherwise}.\end{array}\right. \tag{3.3}\]
In other words, the component \(A_{j}(k)\) is equal to \(1\) if the integer \(k\) is present in the strict signature \(\mu^{(j)}\), and equal to \(0\) if not. We shall also make use of dual vectors \(\left\langle\mu\right|_{\lambda}\in\mathbb{V}(\infty)^{*}\), defined to act linearly on elements of the form (3.3) via the relation \(\left\langle\mu\right|_{\lambda}\cdot\left|\nu\right\rangle_{\lambda}=\delta_{ \mu,\nu}\) for all \(\mu,\nu\in\mathcal{S}_{\lambda}\).
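To illustrate the correspondence (3.3), the following sketch (ours; the data is hypothetical) produces the binary strings \(\boldsymbol{A}(k)\) from a coloured composition given as a list of strictly increasing blocks:

```python
def occupation_vectors(mu_blocks, max_site):
    """Return [A(0), ..., A(max_site)] as in (3.3): A_j(k) = 1 exactly when the
    integer k appears in the block mu^{(j)} of colour j."""
    return [tuple(1 if k in block else 0 for block in mu_blocks)
            for k in range(max_site + 1)]

# mu = (1 < 3 | 2) with colour profile lambda = (2, 1):
print(occupation_vectors([[1, 3], [2]], max_site=4))
# [(0, 0), (1, 0), (0, 1), (1, 0), (0, 0)]
```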
**Definition 3.3** (Rainbow compositions).: The elements of \(\mathcal{S}_{1^{n}}\) are called _rainbow compositions_; we have
\[\mathcal{S}_{1^{n}}=\Big{\{}\mu=(\mu_{1}|\mu_{2}|\dots|\mu_{n})\Big{\}}.\]
That is, a rainbow composition consists of \(n\) blocks, each of unit length; no constraint is imposed on the relative ordering of the parts.
### Functions \(G_{\mu/\nu}\)
Fix a \(\lambda\)-coloured composition \(\nu\in\mathcal{S}_{\lambda}\) with component signatures \(\nu^{(i)}\), \(1\leqslant i\leqslant n\), and define, similarly to (3.3), a vector \(\left|\nu\right\rangle_{\lambda}\in\mathbb{V}(\infty)\):
\[\left|\nu\right\rangle_{\lambda}=\bigotimes_{k=0}^{\infty}\left|\mathbf{B}(k) \right\rangle,\qquad\mathbf{B}(k)=\sum_{j=1}^{n}B_{j}(k)\mathbf{e}_{j},\qquad B_{j}(k) =\left\{\begin{array}{ll}1,&\quad k\in\nu^{(j)},\\ 0,&\quad\text{otherwise}.\end{array}\right. \tag{3.4}\]
**Definition 3.4**.: Let \(\lambda=(\lambda_{1},\dots,\lambda_{n})\) be a composition, and fix two \(\lambda\)-coloured compositions \(\mu\in\mathcal{S}_{\lambda}\) and \(\nu\in\mathcal{S}_{\lambda}\). Let the corresponding vectors in \(\mathbb{V}(\infty)\), \(\left|\mu\right\rangle_{\lambda}\) and \(\left|\nu\right\rangle_{\lambda}\), be given by (3.3) and (3.4) respectively. For any integer \(p\geqslant 1\) we define the following family of symmetric rational functions:
\[(-s)^{|\mu|-|\nu|}\cdot G_{\mu/\nu}(\lambda;x_{1},\dots,x_{p})=\left\langle\nu \right|_{\lambda}\prod_{i=1}^{p}\mathcal{C}_{0}(x_{i})\left|\mu\right\rangle_{ \lambda}. \tag{3.5}\]
In the case \(\lambda=(1,\dots,1)=1^{n}\), we drop the notational dependence on \(\lambda\), and write
\[G_{\mu/\nu}(1^{n};x_{1},\dots,x_{p})\equiv G_{\mu/\nu}(x_{1},\dots,x_{p}).\]
The symmetry in \((x_{1},\dots,x_{p})\) follows from the commutativity of the \(\mathcal{C}_{0}(x_{i})\) operators; for a proof of the latter fact, see [1, Theorem 3.2.1].
Translating the row operators in (3.5) into their graphical form, we obtain the following partition function representation of \(G_{\mu/\nu}\):
(3.6) \[(-s)^{|\mu|-|\nu|}\cdot G_{\mu/\nu}(\lambda;x_{1},\dots,x_{p})=\begin{array}{rcl}x_{p}&\to&0\\ &\vdots&\\ x_{2}&\to&0\\ x_{1}&\to&0\end{array}\]
where the right hand side denotes the \(p\)-row lattice partition function in the model (2.1): the rapidities \(x_{1},\dots,x_{p}\) are attached to the rows from bottom to top, every row carries the label \(0\) on both of its horizontal boundary edges, and the vertical boundary conditions are prescribed by the vectors \(\left|\mu\right\rangle_{\lambda}\) and \(\left\langle\nu\right|_{\lambda}\). The prefactor \((-s)^{|\mu|-|\nu|}\) keeps track of the total number of horizontal unit steps taken by the paths of the lattice.
The factor of \((-s)^{|\mu|}\) introduced into the definition (3.7) has analogous origins to the factor \((-s)^{|\mu|-|\nu|}\) in (3.6); see the explanation in the paragraph immediately following (3.6).
_Remark 3.6_.: In the case \(\lambda=(1,\dots,1)=1^{n}\), we drop the notational dependence on \(\lambda\), and write
\[f_{\mu}(1^{n};x_{1},\dots,x_{n})\equiv f_{\mu}(x_{1},\dots,x_{n}).\]
The function \(f_{\mu}(x_{1},\dots,x_{n})\) then matches identically with the family of _non-symmetric spin Hall-Littlewood functions_ defined in [1, Section 3.4]; see Definition 3.4.3 therein. The reason for the match is the fact that when \(\lambda=1^{n}\), each colour \(\{1,\dots,n\}\) enters the partition function (3.9) exactly once, which is precisely the regime when the weights (2.3) and those of [1, Sections 2.2 and 2.5] agree (see the discussion below equation (2.3)).
_Remark 3.7_.: In the case \(\lambda=(1,\dots,1)=1^{n}\), and assuming a _weakly increasing_ rainbow composition \(\mu=(\mu_{1}\leqslant\dots\leqslant\mu_{n})\), one has the factorization
\[f_{\mu}(1^{n};x_{1},\dots,x_{n})=\frac{\prod_{j\geqslant 0}(s^{2};q)_{\#_{j}( \mu)}}{\prod_{i=1}^{n}(1-sx_{i})}\prod_{i=1}^{n}\left(\frac{x_{i}-s}{1-sx_{i}} \right)^{\mu_{i}}, \tag{3.10}\]
where \(\#_{j}(\mu)\) denotes the number of parts in \(\mu\) which are equal to \(j\). This is proved by a simple freezing argument applied to the partition function (3.9); we refer the reader to [1, Section 5.1].
**Definition 3.8**.: Let \(\lambda=(\lambda_{1},\dots,\lambda_{n})\) be a composition of weight \(m\), and fix a \(\lambda\)-coloured composition \(\mu\in\mathcal{S}_{\lambda}\). Write \(\ell_{k}=\sum_{i=1}^{k}\lambda_{i}\) for the \(k\)-th partial sum of \(\lambda\). Define a further family of non-symmetric rational functions:
\[(-s)^{-|\mu|}\cdot g_{\mu}(\lambda;x_{1},\dots,x_{m})=\left\langle\mu\right|_ {\lambda}\prod_{j\in[1,\ell_{1}]}\mathcal{B}_{1}(x_{j})\prod_{j\in(\ell_{1}, \ell_{2}]}\mathcal{B}_{2}(x_{j})\cdots\prod_{j\in(\ell_{n-1},\ell_{n}]} \mathcal{B}_{n}(x_{j})\left|\emptyset\right\rangle, \tag{3.11}\]
where \(\left\langle\mu\right|_{\lambda}\in\mathbb{V}(\infty)^{*}\) is the dual of the vector (3.3), and \(\left|\emptyset\right\rangle\in\mathbb{V}(\infty)\) denotes the vacuum state
\[\left|\emptyset\right\rangle=\bigotimes_{k=0}^{\infty}\left|\boldsymbol{e}_{0 }\right\rangle. \tag{3.12}\]
Translating the row operators in (3.11) into their graphical form, we obtain the following partition function representation of \(g_{\mu}\):
(3.13) \[(-s)^{-|\mu|}\cdot g_{\mu}(\lambda;x_{1},\dots,x_{m})=\ \text{[$m$-row partition function in the model (2.4): one row for each operator $\mathcal{B}_{k}(x_{j})$ in (3.11), with the states $\left|\emptyset\right\rangle$ and $\left\langle\mu\right|_{\lambda}$ imposed on the incoming and outgoing vertical boundary edges]}\]
### Permuted boundary conditions
**Definition 3.10**.: Let \(\lambda=(\lambda_{1},\ldots,\lambda_{n})\) be a composition of weight \(m\), and fix a \(\lambda\)-coloured composition \(\mu\in\mathcal{S}_{\lambda}\). Fix also a vector \(\sigma=(\sigma_{1},\ldots,\sigma_{m})\) such that \(|\{k:\sigma_{k}=i\}|=\lambda_{i}\) for all \(1\leqslant i\leqslant n\). We define the following families of non-symmetric rational functions:
\[(-s)^{|\mu|}\cdot f_{\mu}^{\sigma}(\lambda;x_{1},\ldots,x_{m}) =\left\langle\emptyset\right|\prod_{j=1}^{m}\mathcal{C}_{\sigma_{j}}(x_{j})\left|\mu\right\rangle_{\lambda}, \tag{3.14}\] \[(-s)^{-|\mu|}\cdot g_{\mu}^{\sigma}(\lambda;x_{1},\ldots,x_{m}) =\left\langle\mu\right|_{\lambda}\prod_{j=1}^{m}\mathcal{B}_{\sigma_{j}}(x_{j})\left|\emptyset\right\rangle. \tag{3.15}\]
The first family (3.14) matches with that of Definition 3.5, and the second family (3.15) matches with that of Definition 3.8, when \(\sigma=(1^{\lambda_{1}},2^{\lambda_{2}},\ldots,n^{\lambda_{n}})\).
### Hecke generators and recursion relations
Recall the definition of the Hecke algebra of type \(A_{n-1}\). It is the algebra generated by a family \(T_{1},\ldots,T_{n-1}\), modulo the relations
\[(T_{i}-q)(T_{i}+1)=0,\quad 1\leqslant i\leqslant n-1,\qquad T_{i}T_{i+1}T_{i}= T_{i+1}T_{i}T_{i+1},\quad 1\leqslant i\leqslant n-2, \tag{3.16}\]
as well as the commutativity property
\[[T_{i},T_{j}]=0,\quad\forall\;i,j\;\text{ such that }\;|i-j|>1. \tag{3.17}\]
Introduce the simple transpositions \(\mathfrak{s}_{i}\), acting on arbitrary functions \(h\) of the alphabet \((x_{1},\ldots,x_{n})\):
\[\mathfrak{s}_{i}\cdot h(x_{1},\ldots,x_{n})=h(x_{1},\ldots,x_{i+1},x_{i}, \ldots,x_{n}),\quad 1\leqslant i\leqslant n-1.\]
Making use of these, we define the _Demazure-Lusztig operators_
\[T_{i}=q-\frac{x_{i}-qx_{i+1}}{x_{i}-x_{i+1}}(1-\mathfrak{s}_{i}),\quad 1 \leqslant i\leqslant n-1, \tag{3.18}\]
which provide a faithful representation of the Hecke algebra on the field of rational functions \(\mathbb{Q}(x_{1},\ldots,x_{n})\).9 From the quadratic identity \((T_{i}-q)(T_{i}+1)=0\), multiplied by \(T_{i}^{-1}\), one gets an explicit formula for inverse Hecke generators:
Footnote 9: Normally, one takes the operators \(T_{i}\) to act on polynomials in the alphabet \((x_{1},\ldots,x_{n})\), since they preserve polynomiality. In this work our partition functions are _a priori_ rational, which poses no problem, since the action (3.18) is still faithful on \(\mathbb{Q}(x_{1},\ldots,x_{n})\).
\[T_{i}^{-1}=q^{-1}(T_{i}-q+1)=q^{-1}\left(1-\frac{x_{i}-qx_{i+1}}{x_{i}-x_{i+1}}( 1-\mathfrak{s}_{i})\right),\quad 1\leqslant i\leqslant n-1.\]
In what follows, we will need another version of the Demazure-Lusztig operators (3.18) in which the variables \((x_{i},x_{i+1})\) get reciprocated. We reserve a special notation for this:
\[\tilde{T}_{i}=q-\frac{x_{i+1}-qx_{i}}{x_{i+1}-x_{i}}(1-\mathfrak{s}_{i}),\quad\tilde{T}_{i}^{-1}=q^{-1}\left(1-\frac{x_{i+1}-qx_{i}}{x_{i+1}-x_{i}}(1-\mathfrak{s}_{i})\right),\quad 1\leqslant i\leqslant n-1. \tag{3.19}\]
Clearly, the generators \(\tilde{T}_{i}\) also satisfy the basic relations (3.16)-(3.17) of the Hecke algebra.
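The relations (3.16) can be checked symbolically for the representation (3.18); the following sketch (ours, using SymPy) verifies the quadratic and braid relations on a sample function of three variables.

```python
import sympy as sp

q = sp.Symbol('q')
x = sp.symbols('x1:4')  # (x1, x2, x3)

def s(i, h):
    """Simple transposition s_i, exchanging x_i and x_{i+1}."""
    return h.xreplace({x[i - 1]: x[i], x[i]: x[i - 1]})

def T(i, h):
    """Demazure-Lusztig operator (3.18) acting on a function h."""
    return sp.together(q * h - (x[i - 1] - q * x[i]) / (x[i - 1] - x[i]) * (h - s(i, h)))

h = x[0]**2 * x[2] + x[1]   # an arbitrary test function

# Quadratic relation (T_i - q)(T_i + 1) = 0, i.e. T_i^2 = (q - 1) T_i + q:
print(sp.simplify(T(1, T(1, h)) - (q - 1) * T(1, h) - q * h))   # 0

# Braid relation T_1 T_2 T_1 = T_2 T_1 T_2:
print(sp.simplify(T(1, T(2, T(1, h))) - T(2, T(1, T(2, h)))))   # 0
```

The reciprocated generators (3.19) pass the same checks after exchanging \(x_{i}\) and \(x_{i+1}\) in the coefficient.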
**Theorem 3.11**.: _Fix an integer \(1\leqslant i\leqslant n\) and a composition \(\mu=(\mu_{1}|\mu_{2}|\ldots|\mu_{n})\in\mathcal{S}_{1^{n}}\) such that \(\mu_{i}<\mu_{i+1}\). The functions \(f_{\mu}(1^{n};x_{1},\ldots,x_{n})\equiv f_{\mu}(x_{1},\ldots,x_{n})\) and \(g_{\mu}(1^{n};x_{1},\ldots,x_{n})\equiv g_{\mu}(x_{1},\ldots,x_{n})\) transform under the action of (3.18)-(3.19) in the following way:_
\[T_{i}\cdot f_{\mu}(x_{1},\ldots,x_{n}) =f_{\mathfrak{s}_{i}\cdot\mu}(x_{1},\ldots,x_{n}), \tag{3.20}\] \[\tilde{T}_{i}\cdot g_{\mu}(x_{1},\ldots,x_{n}) =q\cdot g_{\mathfrak{s}_{i}\cdot\mu}(x_{1},\ldots,x_{n}), \tag{3.21}\]
_where \(\mathfrak{s}_{i}\cdot\mu\) denotes the composition obtained by switching \(\mu_{i}\) and \(\mu_{i+1}\)._
Proof.: Both statements (3.20) and (3.21) are proved in [1]; see equations (5.3.1) and (8.2.24) therein, respectively.
**Theorem 3.12**.: _Fix a coloured composition \(\nu\in\mathcal{S}_{\lambda}\), where \(\lambda=(\lambda_{1},\dots,\lambda_{n})\) is a composition such that \(|\lambda|=m\), as well as a vector \(\sigma=(\sigma_{1},\dots,\sigma_{m})\) such that \(|\{k:\sigma_{k}=i\}|=\lambda_{i}\) for all \(1\leqslant i\leqslant n\). Assuming that \(\sigma_{j}<\sigma_{j+1}\) for some \(1\leqslant j\leqslant m-1\), there holds_
\[T_{j}\cdot f_{\nu}^{\mathfrak{s}_{j}\cdot\sigma}(\lambda;x_{1},\dots,x_{m}) =q\cdot f_{\nu}^{\sigma}(\lambda;x_{1},\dots,x_{m}), \tag{3.22}\] \[\tilde{T}_{j}\cdot g_{\nu}^{\sigma}(\lambda;x_{1},\dots,x_{m}) =g_{\nu}^{\mathfrak{s}_{j}\cdot\sigma}(\lambda;x_{1},\dots,x_{m}), \tag{3.23}\]
_where \(\mathfrak{s}_{j}\cdot\sigma\) denotes the vector obtained by switching \(\sigma_{j}\) and \(\sigma_{j+1}\)._
Proof.: The proof of (3.22) is by isolating the action of \(T_{j}^{-1}\) on the pair of operators \(\mathcal{C}_{\sigma_{j}}(x_{j})\mathcal{C}_{\sigma_{j+1}}(x_{j+1})\), which is the only place that \(f_{\nu}^{\sigma}(\lambda;x_{1},\dots,x_{m})\) depends on \((x_{j},x_{j+1})\). Using the explicit form of \(T_{j}^{-1}\), we have
\[q\cdot T_{j}^{-1}\cdot\mathcal{C}_{\sigma_{j}}(x_{j})\mathcal{C}_{\sigma_{j+ 1}}(x_{j+1})=\frac{(q-1)x_{j+1}}{x_{j}-x_{j+1}}\mathcal{C}_{\sigma_{j}}(x_{j}) \mathcal{C}_{\sigma_{j+1}}(x_{j+1})+\frac{x_{j}-qx_{j+1}}{x_{j}-x_{j+1}} \mathcal{C}_{\sigma_{j}}(x_{j+1})\mathcal{C}_{\sigma_{j+1}}(x_{j}). \tag{3.24}\]
In view of the fact that \(\sigma_{j}<\sigma_{j+1}\), we may use the commutation relation (2.14) to combine the right hand side of (3.24) into a single term:
\[q\cdot T_{j}^{-1}\cdot\mathcal{C}_{\sigma_{j}}(x_{j})\mathcal{C}_{\sigma_{j+ 1}}(x_{j+1})=\mathcal{C}_{\sigma_{j+1}}(x_{j})\mathcal{C}_{\sigma_{j}}(x_{j+1 }).\]
Substitution of this identity into (3.14) immediately proves (3.22).
In a similar vein, one proves (3.23) by isolating the action of \(\tilde{T}_{j}^{-1}\) on the pair \(\mathcal{B}_{\sigma_{j+1}}(x_{j})\mathcal{B}_{\sigma_{j}}(x_{j+1})\), which is the only place that \(g_{\nu}^{\mathfrak{s}_{j}\cdot\sigma}(\lambda;x_{1},\dots,x_{m})\) depends on \((x_{j},x_{j+1})\). Using the explicit form of \(\tilde{T}_{j}^{-1}\), we have
\[\tilde{T}_{j}^{-1}\cdot\mathcal{B}_{\sigma_{j+1}}(x_{j})\mathcal{B}_{\sigma_{ j}}(x_{j+1})=\frac{(q-1)x_{j}}{q(x_{j+1}-x_{j})}\mathcal{B}_{\sigma_{j+1}}(x_{j}) \mathcal{B}_{\sigma_{j}}(x_{j+1})+\frac{x_{j+1}-qx_{j}}{q(x_{j+1}-x_{j})} \mathcal{B}_{\sigma_{j+1}}(x_{j+1})\mathcal{B}_{\sigma_{j}}(x_{j}). \tag{3.25}\]
Since \(\sigma_{j}<\sigma_{j+1}\), we use the commutation relation (2.15) to combine the right hand side of (3.25) into a single term:
\[\tilde{T}_{j}^{-1}\cdot\mathcal{B}_{\sigma_{j+1}}(x_{j})\mathcal{B}_{\sigma_{ j}}(x_{j+1})=\mathcal{B}_{\sigma_{j}}(x_{j})\mathcal{B}_{\sigma_{j+1}}(x_{j+1}).\]
Substitution of this identity into (3.15) proves (3.23).
### Antisymmetrization
A key property of the vertex models (2.1) and (2.4) is that of _colour-merging_; this is the combinatorial statement that partition functions in the models (2.1) and (2.4), with \(n\) colours, become equal to partition functions with \(m<n\) colours under a certain antisymmetrization procedure applied to the boundary conditions. The most general colour-merging statement is given and proved as [1, Theorem 5.2.2]; here we will reproduce this statement only at the level that we need, namely, for two of the families of rational functions that we have defined.
To state our antisymmetrization results, we require some definitions.
**Definition 3.13** (Rainbow recolouring).: Let \(\lambda=(\lambda_{1},\dots,\lambda_{n})\) be a composition such that \(|\lambda|=m\), and fix a coloured composition \(\mu\in\mathcal{S}_{\lambda}\). Denoting
\[\mu=\Big{(}\mu_{1}^{(1)}<\dots<\mu_{\lambda_{1}}^{(1)}\Big{|}\mu_{1}^{(2)}<\dots <\mu_{\lambda_{2}}^{(2)}\Big{|}\dots\Big{|}\mu_{1}^{(n)}<\dots<\mu_{\lambda_{n }}^{(n)}\Big{)},\]
we associate to this a rainbow composition \(\tilde{\mu}=(\tilde{\mu}_{1}|\tilde{\mu}_{2}|\dots|\tilde{\mu}_{m})\in \mathcal{S}_{1^{m}}\) such that for each \(1\leqslant i\leqslant m\) we have
\[\tilde{\mu}_{i}=\mu_{j}^{(k)},\]
where \(1\leqslant k\leqslant n\), \(1\leqslant j\leqslant\lambda_{k}\) are the unique integers such that
\[i=j+\sum_{a=1}^{k-1}\lambda_{a}.\]
In simpler terms, \(\tilde{\mu}\) is the composition obtained from recolouring the parts of \(\mu\) sequentially from \(1\) to \(m\) into pairwise distinct colours, while keeping the magnitude of all parts fixed.
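In code, this recolouring is simply a flattening of the blocks (a sketch of ours, on hypothetical data):

```python
def rainbow_recolouring(mu_blocks):
    """Rainbow recolouring of Definition 3.13: part magnitudes are kept,
    colours become the pairwise distinct values 1, ..., m."""
    return [part for block in mu_blocks for part in block]

# mu = (1 < 4 | 2 | 0 < 3) with lambda = (2, 1, 2) becomes (1|4|2|0|3) in S_{1^5}:
print(rainbow_recolouring([[1, 4], [2], [0, 3]]))  # [1, 4, 2, 0, 3]
```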
**Definition 3.14**.: Fix a positive integer \(m\) and let \(\lambda=(\lambda_{1},\dots,\lambda_{n})\) be a composition such that \(|\lambda|=m\), with partial sums \(\ell_{k}=\sum_{i=1}^{k}\lambda_{i}\). We say that \(\sigma\in\mathfrak{S}_{\lambda}\subset\mathfrak{S}_{m}\) provided that \(\sigma\) fixes \((\ell_{k-1},\ell_{k}]\) for each integer \(1\leqslant k\leqslant n\), that is,
\[\sigma\in\mathfrak{S}_{[1,\ell_{1}]}\times\mathfrak{S}_{(\ell_{1},\ell_{2}]} \times\dots\times\mathfrak{S}_{(\ell_{n-1},\ell_{n}]}.\]
**Proposition 3.15**.: _Fix a coloured composition \(\nu\in\mathcal{S}_{\lambda}\) and let \(\tilde{\nu}\) denote its rainbow recolouring, as in Definition 3.13. We then have the following result, relating the functions (3.11) for rainbow colour profiles with those of non-rainbow type:_
\[\sum_{\sigma\in\mathfrak{S}_{\lambda}}(-1)^{\operatorname{inv}(\sigma)}g_{\sigma(\tilde{\nu})}(1^{m};x_{1},\dots,x_{m})=g_{\nu}(\lambda;x_{1},\dots,x_{m}),\]
_where the sum is taken over all elements in \(\mathfrak{S}_{\lambda}\). Here we have defined \(\operatorname{inv}(\sigma)=\operatorname{card}\{(i,j):i<j,\ \sigma_{i}>\sigma_{j}\}\) and \(\sigma(\tilde{\nu})=\left(\tilde{\nu}_{\sigma(1)}|\tilde{\nu}_{\sigma(2)}|\cdots|\tilde{\nu}_{\sigma(m)}\right)\)._
**Proposition 3.16**.: _Fix two coloured compositions \(\mu,\nu\in\mathcal{S}_{\lambda}\) and let \(\tilde{\mu},\tilde{\nu}\) denote their respective rainbow recolourings, as in Definition 3.13. The functions (3.5) have the following sum property:_
\[\sum_{\sigma\in\mathfrak{S}_{\lambda}}(-1)^{\operatorname{inv}(\sigma)}G_{\tilde{\mu}/\sigma(\tilde{\nu})}(1^{m};x_{1},\dots,x_{p})=G_{\mu/\nu}(\lambda;x_{1},\dots,x_{p})\]
_where the sum is taken over all elements in \(\mathfrak{S}_{\lambda}\)._
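The antisymmetrization in Propositions 3.15 and 3.16 involves only the Young subgroup \(\mathfrak{S}_{\lambda}\) of Definition 3.14 and the statistic \(\operatorname{inv}\). The sketch below is our own; the callable `g` is a hypothetical stand-in for \(\mu\mapsto g_{\mu}(1^{m};x_{1},\dots,x_{m})\), and the last function assembles the signed sum on the left hand side of Proposition 3.15.

```python
from itertools import permutations, product

def young_subgroup(lam):
    """Enumerate S_lambda (Definition 3.14) as permutations of {0, ..., m-1}
    that preserve each block (l_{k-1}, l_k]."""
    blocks, start = [], 0
    for l in lam:
        blocks.append(range(start, start + l))
        start += l
    for parts in product(*(permutations(b) for b in blocks)):
        sigma = [None] * start
        for block, perm in zip(blocks, parts):
            for pos, val in zip(block, perm):
                sigma[pos] = val
        yield tuple(sigma)

def inv(sigma):
    """inv(sigma) = card{(i, j) : i < j, sigma_i > sigma_j}."""
    return sum(1 for i in range(len(sigma)) for j in range(i + 1, len(sigma))
               if sigma[i] > sigma[j])

def antisymmetrize(g, nu_rainbow, lam):
    """Signed sum over S_lambda of g evaluated on the permuted rainbow composition."""
    return sum((-1) ** inv(sigma) * g([nu_rainbow[s] for s in sigma])
               for sigma in young_subgroup(lam))
```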
A similar antisymmetrization result can be stated for the functions (3.7), but we omit it from this section since we shall not require it in what follows.
_Remark 3.17_.: Propositions 3.15 and 3.16 are both statements about partition functions constructed from \(M\)-weights, as defined in Section 2.3. In order to recover them as corollaries of [1, Theorem 5.2.2] one should first apply the symmetry (2.6), which converts them to statements about partition functions built from \(L\)-weights, and the matching with [1] then goes through in a straightforward way.
### Orthogonality
In this section we directly transcribe an orthogonality result for non-symmetric spin Hall-Littlewood functions, from [1, Chapter 8]. Throughout, we denote the imaginary unit by \(\mathtt{i}=\sqrt{-1}\). Let \(\{C_{1},\dots,C_{n}\}\) be a collection of contours in the complex plane, and fix two complex parameters \(q,s\in\mathbb{C}\). We say that the set \(\{C_{1},\dots,C_{n}\}\) is admissible with respect to \((q,s)\) if the following conditions are met:
* The contours \(\{C_{1},\dots,C_{n}\}\) are closed, positively oriented and pairwise non-intersecting;
* The contours \(C_{i}\) and \(q\cdot C_{i}\) are both contained within contour \(C_{i+1}\) for all \(1\leqslant i\leqslant n-1\), where \(q\cdot C_{i}\) denotes the image of \(C_{i}\) under multiplication by \(q\);
* All contours surround the point \(s\).
**Theorem 3.18**.: _Fix two rainbow compositions \(\mu,\nu\in\mathcal{S}_{1^{n}}\), and let \(\{C_{1},\dots,C_{n}\}\) be contours admissible with respect to \((q,s)\). We then have_
\[\left(\frac{1}{2\pi\mathtt{i}}\right)^{n}\oint_{C_{1}}\frac{dy_{1}}{y_{1}} \dots\oint_{C_{n}}\frac{dy_{n}}{y_{n}}\prod_{1\leqslant i<j\leqslant n}\left( \frac{y_{j}-y_{i}}{y_{j}-qy_{i}}\right)f_{\mu}(y_{1}^{-1},\dots,y_{n}^{-1})g_{ \nu}(y_{1},\dots,y_{n})=\frac{\mathbf{1}_{\mu=\nu}\cdot(q-1)^{n}}{q^{n(n+1)/2}}. \tag{3.26}\]
Proof.: This is Theorem 8.2.1 of [1, Chapter 8].
Closely related to the orthogonality statement (3.26), and in fact instrumental in its proof, is the following property of the Hecke generators (3.18), (3.19) with respect to such integrals:
**Proposition 3.19**.: _Fix an integer \(1\leqslant k\leqslant n-1\), and three functions \(a(y_{1},\dots,y_{n})\), \(b(y_{1},\dots,y_{n})\) and \(c(y_{1},\dots,y_{n})\), the last of which is symmetric in its alphabet \((y_{1},\dots,y_{n})\). We have the following equality of integrals:_
\[\oint_{C_{1}}\frac{dy_{1}}{y_{1}}\dots\oint_{C_{n}}\frac{dy_{n}}{ y_{n}}\prod_{1\leqslant i<j\leqslant n}\left(\frac{y_{j}-y_{i}}{y_{j}-qy_{i}} \right)(T_{k}\cdot a)(y_{1}^{-1},\dots,y_{n}^{-1})b(y_{1},\dots,y_{n})c(y_{1}, \dots,y_{n})\\ =\oint_{C_{1}}\frac{dy_{1}}{y_{1}}\dots\oint_{C_{n}}\frac{dy_{n}}{ y_{n}}\prod_{1\leqslant i<j\leqslant n}\left(\frac{y_{j}-y_{i}}{y_{j}-qy_{i}} \right)a(y_{1}^{-1},\dots,y_{n}^{-1})(\tilde{T}_{k}\cdot b)(y_{1},\dots,y_{n})c (y_{1},\dots,y_{n}), \tag{3.27}\]
_with \(T_{k}\), \(\tilde{T}_{k}\) given by (3.18), (3.19), respectively._
Proof.: The proof of this result, for \(c(y_{1},\ldots,y_{n})\equiv 1\), is given in Proposition 8.1.3 in [1, Chapter 8]. The extension of the result to generic symmetric functions \(c(y_{1},\ldots,y_{n})\) follows immediately, in view of the fact that acting with Hecke generators \(T_{k}\), \(\tilde{T}_{k}\) commutes with multiplication by functions which are symmetric in \((y_{k},y_{k+1})\).
### Cauchy identity
It is possible to derive a number of summation identities of Cauchy-type for the non-symmetric spin Hall-Littlewood functions; see [1, Chapter 4]. In this section we state a Cauchy identity that did not previously appear in that text, although it is similar in flavour to [1, Proposition 4.5.1], and proved in precisely the same fashion.
**Theorem 3.20**.: _Let \(\lambda=(\lambda_{1},\ldots,\lambda_{n})\) be a composition such that \(|\lambda|=m\). Fix a coloured composition \(\nu\in\mathcal{S}_{\lambda}\) and two alphabets \((x_{1},\ldots,x_{p})\), \((y_{1},\ldots,y_{m})\) of complex parameters satisfying the constraint_
\[\left|\frac{x_{i}-s}{1-sx_{i}}\cdot\frac{y_{j}-s}{1-sy_{j}}\right|<1,\qquad \forall\ 1\leqslant i\leqslant p,\ 1\leqslant j\leqslant m. \tag{3.28}\]
_The following summation identity holds:_
\[\sum_{\kappa\in\mathcal{S}_{\lambda}}G_{\kappa/\nu}(\lambda;x_{1},\ldots,x_{p })g_{\kappa}(\lambda;y_{1},\ldots,y_{m})=\prod_{i=1}^{p}\prod_{j=1}^{m}\frac{ 1-qx_{i}y_{j}}{1-x_{i}y_{j}}\cdot g_{\nu}(\lambda;y_{1},\ldots,y_{m}). \tag{3.29}\]
Proof.: The left hand side of (3.29) may be represented algebraically as
\[\sum_{\kappa\in\mathcal{S}_{\lambda}}G_{\kappa/\nu}(\lambda;x_{1},\ldots,x_{p})g_{\kappa}(\lambda;y_{1},\ldots,y_{m})\\ =(-s)^{|\nu|}\left\langle\nu\right|_{\lambda}\prod_{i=1}^{p}\mathcal{C}_{0}(x_{i})\prod_{j\in[1,\ell_{1}]}\mathcal{B}_{1}(y_{j})\prod_{j\in(\ell_{1},\ell_{2}]}\mathcal{B}_{2}(y_{j})\cdots\prod_{j\in(\ell_{n-1},\ell_{n}]}\mathcal{B}_{n}(y_{j})\left|\emptyset\right\rangle.\]
We use the commutation relation (2.17) in the case \(i=0\), \(j\geqslant 1\) to transfer all \(\mathcal{B}\)-operators to the left of the product; this results in the equation
\[\sum_{\kappa\in\mathcal{S}_{\lambda}}G_{\kappa/\nu}(\lambda;x_{1},\ldots,x_{p})g_{\kappa}(\lambda;y_{1},\ldots,y_{m})\\ =\prod_{i=1}^{p}\prod_{j=1}^{m}\frac{1-qx_{i}y_{j}}{1-x_{i}y_{j}}\cdot(-s)^{|\nu|}\left\langle\nu\right|_{\lambda}\prod_{j\in[1,\ell_{1}]}\mathcal{B}_{1}(y_{j})\prod_{j\in(\ell_{1},\ell_{2}]}\mathcal{B}_{2}(y_{j})\cdots\prod_{j\in(\ell_{n-1},\ell_{n}]}\mathcal{B}_{n}(y_{j})\left|\emptyset\right\rangle, \tag{3.30}\]
where we have used the fact that
\[\prod_{i=1}^{p}\mathcal{C}_{0}(x_{i})\left|\emptyset\right\rangle=\left| \emptyset\right\rangle.\]
In view of the definition (3.11), the expression obtained, (3.30), matches with the right hand side of (3.29).
### Integral formula for \(G_{\mu/\nu}\)
Combining the results of Sections 3.7-3.8, we now obtain an integral formula10 for the rational symmetric functions (3.6):
Footnote 10: A more general version of this integral formula appears in [1, Proposition 11.3.1].
**Theorem 3.21**.: _We have the following integral formula for the function \(G_{\mu/\nu}(\lambda;x_{1},\ldots,x_{p})\):_
\[G_{\mu/\nu}(\lambda;x_{1},\ldots,x_{p}) =\frac{q^{m(m+1)/2}}{(q-1)^{m}}\cdot\left(\frac{1}{2\pi\mathtt{i}}\right)^{m}\oint_{C_{1}}\frac{dy_{1}}{y_{1}}\cdots\oint_{C_{m}}\frac{dy_{m}}{y_{m}}\] \[\times\prod_{1\leqslant i<j\leqslant m}\left(\frac{y_{j}-y_{i}}{y_{j}-qy_{i}}\right)f_{\tilde{\mu}}(1^{m};y_{1}^{-1},\ldots,y_{m}^{-1})g_{\nu}(\lambda;y_{1},\ldots,y_{m})\prod_{i=1}^{p}\prod_{j=1}^{m}\frac{1-qx_{i}y_{j}}{1-x_{i}y_{j}}, \tag{3.31}\]
_where \(\{C_{1},\ldots,C_{m}\}\) are contours admissible with respect to \((q,s)\)._
Proof.: This result is essentially given by [1, Corollary 11.3.2], though we reproduce its proof here for the reader's convenience.
We begin by proving (3.31) in the case where \(\mu,\nu\) are rainbow compositions. Start from the Cauchy identity (3.29) with \(\lambda=1^{m}\), multiply it by \(f_{\mu}(y_{1}^{-1},\ldots,y_{m}^{-1})\), prior to integrating as in the left hand side of (3.26).11 In view of the orthogonality property (3.26), this filters the \(\kappa=\mu\) term from the sum and we read off the identity
Footnote 11: The summation convergence in (3.29) is uniform on compact sets as long as the inequalities (3.28) are satisfied.
\[G_{\mu/\nu}(1^{m};x_{1},\ldots,x_{p}) =\frac{q^{m(m+1)/2}}{(q-1)^{m}}\cdot\left(\frac{1}{2\pi\mathtt{i} }\right)^{m}\oint_{C_{1}}\frac{dy_{1}}{y_{1}}\cdots\oint_{C_{m}}\frac{dy_{m}}{ y_{m}}\] \[\times\prod_{1\leqslant i<j\leqslant m}\left(\frac{y_{j}-y_{i}}{ y_{j}-qy_{i}}\right)f_{\mu}(1^{m};y_{1}^{-1},\ldots,y_{m}^{-1})g_{\nu}(1^{m};y_{1}, \ldots,y_{m})\prod_{i=1}^{p}\prod_{j=1}^{m}\frac{1-qx_{i}y_{j}}{1-x_{i}y_{j}}. \tag{3.32}\]
This proves (3.31) in the case \(\mu,\nu\in\mathcal{S}_{1^{m}}\).
The general case (3.31) then follows by antisymmetrization of (3.32); the left hand side antisymmetrization is obtained using Proposition 3.16, while that of the right hand side is carried out using Proposition 3.15.
## 4. Fusion
In this section we briefly recall some of the basics regarding the fusion procedure, when applied to the model (2.1). For full details, we refer the reader to [1, Chapter 3] and [1, Appendices B and C].
### Definition of fused vertices
To define fused vertices we require some additional notation; introduce _column vertices_ by taking towers of height \(N\) of the \(L\)-weights (2.1). In particular, for all \(\mathbf{A},\mathbf{C}\in\{0,1\}^{n}\) and \(b_{1},\ldots,b_{N},d_{1},\ldots,d_{N}\in[0,n]\) we define
\[\tilde{L}_{z}^{(s)}\Big(\mathbf{A},(b_{1},\ldots,b_{N});\mathbf{C},(d_{1},\ldots,d_{N})\Big)=\ \text{[column of $N$ vertices of type (2.1) stacked along a common vertical line, with bottom edge $\mathbf{A}$, top edge $\mathbf{C}$, left edges $b_{1},\ldots,b_{N}$ and right edges $d_{1},\ldots,d_{N}$, read from bottom to top]} \tag{4.1}\]
where the spectral parameters associated to horizontal lines, read from bottom to top, form the geometric progression \((z,qz,\ldots,q^{N-1}z)\).
**Definition 4.1**.: Fix four binary strings \(\mathbf{A}=(A_{1},\ldots,A_{n})\), \(\mathbf{B}=(B_{1},\ldots,B_{n})\), \(\mathbf{C}=(C_{1},\ldots,C_{n})\) and \(\mathbf{D}=(D_{1},\ldots,D_{n})\) in \(\{0,1\}^{n}\). Choose an integer \(N\geqslant 1\) and introduce the notation \(r=q^{-N/2}\). We define _fused vertex weights_ as follows:
\[\tilde{L}_{z}^{(r,s)}(\mathbf{A},\mathbf{B};\mathbf{C},\mathbf{D})=\frac{1}{Z_{q}(N;\mathbf{B})} \sum_{\begin{subarray}{c}(b_{1},\ldots,b_{N})\\ (d_{1},\ldots,d_{N})\end{subarray}}q^{\operatorname{inv}(b_{1},\ldots,b_{N}) }\tilde{L}_{z}^{(s)}\Big{(}\mathbf{A},(b_{1},\ldots,b_{N});\mathbf{C},(d_{1},\ldots,d _{N})\Big{)}, \tag{4.2}\]
where the sum is taken over vectors \((b_{1},\ldots,b_{N})\) and \((d_{1},\ldots,d_{N})\) such that \(\sum_{i=1}^{N}\boldsymbol{e}_{b_{i}}=\boldsymbol{B}\) and \(\sum_{i=1}^{N}\boldsymbol{e}_{d_{i}}=\boldsymbol{D}\), we recall that \(\operatorname{inv}(b_{1},\ldots,b_{N})=\operatorname{card}\{(i,j):i<j,\ b_{i}>b_{j}\}\), and where the normalization takes the form
\[Z_{q}(N;\boldsymbol{B})=\frac{(q;q)_{N}}{(q;q)_{B_{0}}(q;q)_{B_{1}}\ldots(q;q) _{B_{n}}},\quad B_{0}=N-\sum_{i=1}^{n}B_{i}.\]
We represent the fused vertices (4.2) graphically as follows:
\[\tilde{L}_{z}^{(r,s)}(\boldsymbol{A},\boldsymbol{B};\boldsymbol{C},\boldsymbol{D})=\ \text{[vertex with bottom edge $\boldsymbol{A}$, left edge $\boldsymbol{B}$, top edge $\boldsymbol{C}$, right edge $\boldsymbol{D}$, whose thick horizontal line carries the pair $(z;r)$]},\qquad\boldsymbol{A},\boldsymbol{B},\boldsymbol{C},\boldsymbol{D}\in\{0,1\}^{n}.\]
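The normalization \(Z_{q}(N;\boldsymbol{B})\) is, by a standard identity for multiset permutations (not spelled out in the text), exactly \(\sum q^{\operatorname{inv}(b_{1},\ldots,b_{N})}\) over all arrangements with colour content \(\boldsymbol{B}\), so that dividing by it in (4.2) amounts to taking a \(q^{\operatorname{inv}}\)-weighted average. The following SymPy sketch of ours confirms this on a small example.

```python
import sympy as sp
from itertools import permutations

q = sp.Symbol('q')

def q_poch(k):
    """(q; q)_k = (1 - q)(1 - q^2) ... (1 - q^k)."""
    return sp.Mul(*[1 - q**i for i in range(1, k + 1)])

def Z(N, B):
    """Normalization Z_q(N; B) of Definition 4.1, with B_0 = N - |B|."""
    B0 = N - sum(B)
    return sp.cancel(q_poch(N) / (q_poch(B0) * sp.Mul(*[q_poch(b) for b in B])))

def inversion_sum(N, B):
    """Sum of q^{inv(b_1, ..., b_N)} over arrangements with colour content B."""
    B0 = N - sum(B)
    word = [0] * B0 + [c + 1 for c, mult in enumerate(B) for _ in range(mult)]
    total = 0
    for w in set(permutations(word)):
        inversions = sum(1 for i in range(N) for j in range(i + 1, N) if w[i] > w[j])
        total += q**inversions
    return sp.expand(total)

# N = 4 and B = (2, 1), so that B_0 = 1; the two expressions agree:
print(sp.simplify(Z(4, (2, 1)) - inversion_sum(4, (2, 1))))  # 0
```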
**Proposition 4.2**.: _For all integers \(b,d\in[0,n]\) and binary strings \(\boldsymbol{A},\boldsymbol{C}\in\{0,1\}^{n}\), one has_
\[\tilde{L}_{z}^{(r,s)}(\boldsymbol{A},\boldsymbol{e}_{b};\boldsymbol{C}, \boldsymbol{e}_{d})\Big{|}_{r=q^{-1/2}}=\tilde{L}_{z}^{(s)}(\boldsymbol{A},b; \boldsymbol{C},d).\]
Proof.: Setting \(r=q^{-1/2}\) is equivalent to taking \(N=1\). For \(N=1\) the sum on the right hand side of (4.2) trivializes and \(Z_{q}(1;\boldsymbol{e}_{b})=1\) for all \(b\in[0,n]\); the claimed equality is then manifest.
### Fused vertex weights
The fused vertex weights (4.2) were explicitly evaluated as [1, Theorem 4.3.2]; we recall this explicit formula here.
For any pair of vectors \(\boldsymbol{A},\boldsymbol{B}\in\mathbb{Z}^{n}\), define the function
\[\varphi(\boldsymbol{A},\boldsymbol{B})=\sum_{1\leqslant i<j\leqslant n}A_{i}B _{j}. \tag{4.3}\]
Fix binary strings \(\boldsymbol{A}=(A_{1},\ldots,A_{n})\), \(\boldsymbol{B}=(B_{1},\ldots,B_{n})\), \(\boldsymbol{C}=(C_{1},\ldots,C_{n})\), \(\boldsymbol{D}=(D_{1},\ldots,D_{n})\). Construct another vector \(\boldsymbol{V}=(V_{1},\ldots,V_{n})\), where \(V_{i}=\min\{A_{i},B_{i},C_{i},D_{i}\}\) for \(i\in[1,n]\). The fused weights (4.2) are then given by12
Footnote 12: We state this formula for spectral parameter \(sz\), rather than \(z\), in order to match with [1, Theorem 4.3.2].
\[\tilde{L}_{sz}^{(r,s)}(\boldsymbol{A},\boldsymbol{B};\boldsymbol{C}, \boldsymbol{D})=\boldsymbol{1}_{\boldsymbol{A}+\boldsymbol{B}=\boldsymbol{C}+ \boldsymbol{D}}\\ \times(-1)^{|\boldsymbol{V}|}z^{|\boldsymbol{D}|-|\boldsymbol{B}|}r ^{-2|\boldsymbol{A}|}s^{2|\boldsymbol{D}|}q^{-\varphi(\boldsymbol{A}, \boldsymbol{V})-|\boldsymbol{V}|}\frac{(r^{-2}q^{-|\boldsymbol{V}|+1}z;q)_{| \boldsymbol{V}|}}{(s^{2}r^{-2}q^{-|\boldsymbol{V}|}z;q)_{|\boldsymbol{V}|}}W_ {z}^{(r,s)}(\boldsymbol{A},\boldsymbol{B}-\boldsymbol{V};\boldsymbol{C}, \boldsymbol{D}-\boldsymbol{V}) \tag{4.4}\]
where we have defined
\[W_{z}^{(r,s)}(\boldsymbol{A},\boldsymbol{B}-\boldsymbol{V}; \boldsymbol{C},\boldsymbol{D}-\boldsymbol{V})=\\ \sum_{\boldsymbol{P}}\Phi(\boldsymbol{C}-\boldsymbol{P},\boldsymbol {C}+\boldsymbol{D}-\boldsymbol{V}-\boldsymbol{P};s^{2}r^{-2}q^{-|\boldsymbol{ V}|}z,s^{2})\Phi(\boldsymbol{P},\boldsymbol{B}-\boldsymbol{V};r^{2}q^{| \boldsymbol{V}|}z^{-1},r^{2}q^{|\boldsymbol{V}|}), \tag{4.5}\]
with the sum over all \(\boldsymbol{P}=(P_{1},\ldots,P_{n})\) such that \(P_{i}\leqslant\min\{C_{i},B_{i}-V_{i}\}\) for all \(i\in[1,n]\). The functions appearing in the summand of (4.5) are defined for any two vectors \(\boldsymbol{S}=(S_{1},\ldots,S_{n})\), \(\boldsymbol{T}=(T_{1},\ldots,T_{n})\) such that \(S_{i}\leqslant T_{i}\) for all \(i\in[1,n]\):
\[\Phi(\boldsymbol{S},\boldsymbol{T};u,v)=\frac{(u;q)_{|\boldsymbol{S}|}(v/u;q)_ {|\boldsymbol{T}|-|\boldsymbol{S}|}}{(v;q)_{|\boldsymbol{T}|}}(v/u)^{| \boldsymbol{S}|}q^{\varphi(\boldsymbol{T}-\boldsymbol{S},\boldsymbol{S})} \prod_{i=1}^{n}\binom{T_{i}}{S_{i}}_{q},\]
where we have used the standard \(q\)-binomial coefficient
\[\binom{b}{a}_{q}=\frac{(q;q)_{b}}{(q;q)_{a}(q;q)_{b-a}},\quad a\leqslant b.\]
The weights (4.4) provide an explicit evaluation of the right hand side of (4.2), under the identification \(r=q^{-N/2}\); however, the formula (4.4) makes sense for arbitrary values of \(r\in\mathbb{C}\) (that is, as a rational function in \(r\)), and we tacitly assume this in what follows.
### Master Yang-Baxter equation
The fused vertex weights satisfy a master Yang-Baxter equation, that contains the previous three Yang-Baxter relations (2.9)-(2.11) as special cases.
**Theorem 4.3**.: _Fix a collection of binary strings \(\mathbf{A}(1),\mathbf{A}(2),\mathbf{A}(3),\mathbf{B}(1),\mathbf{B}(2),\mathbf{B}(3)\in\{0,1\}^{n}\) and arbitrary parameters \(x,y,r,s,t\in\mathbb{C}\). The weights (4.4) satisfy the equation_
\[\sum_{\mathbf{C}(1),\mathbf{C}(2),\mathbf{C}(3)}\tilde{L}^{(r,s)}_{sx/y} \Big{(}\mathbf{A}(2),\mathbf{A}(1);\mathbf{C}(2),\mathbf{C}(1)\Big{)}\tilde{L}^{(r,t)}_{x} \Big{(}\mathbf{A}(3),\mathbf{C}(1);\mathbf{C}(3),\mathbf{B}(1)\Big{)}\tilde{L}^{(s,t)}_{y} \Big{(}\mathbf{C}(3),\mathbf{C}(2);\mathbf{B}(3),\mathbf{B}(2)\Big{)}\] \[=\sum_{\mathbf{C}(1),\mathbf{C}(2),\mathbf{C}(3)}\tilde{L}^{(s,t)}_{y} \Big{(}\mathbf{A}(3),\mathbf{A}(2);\mathbf{C}(3),\mathbf{C}(2)\Big{)}\tilde{L}^{(r,t)}_{x} \Big{(}\mathbf{C}(3),\mathbf{A}(1);\mathbf{B}(3),\mathbf{C}(1)\Big{)}\tilde{L}^{(r,s)}_{sx/y} \Big{(}\mathbf{C}(2),\mathbf{C}(1);\mathbf{B}(2),\mathbf{B}(1)\Big{)}, \tag{4.6}\]
_where \(\mathbf{C}(1),\mathbf{C}(2),\mathbf{C}(3)\) are summed over all binary strings in \(\{0,1\}^{n}\)._
Proof.: See [1, Proposition 5.1.4] for full details.
The master Yang-Baxter equation (4.6) reduces to the three given earlier, namely (2.9)-(2.11), by choosing any two of \(r,s,t\) to be equal to \(q^{-1/2}\), keeping the remaining parameter arbitrary (and up to further relabelling of the spectral parameters \(x\) and \(y\)). Details of these reductions, for the bosonic counterpart of the models discussed in the current text, may be found in [1, Appendix C]. In what follows, we will make use of yet another reduction:
**Corollary 4.4**.: _Fix two integers \(a,b\in[0,n]\) and binary strings \(\mathbf{A}(2),\mathbf{A}(3),\mathbf{B}(2),\mathbf{B}(3)\in\{0,1\}^{n}\). The weights (2.3) and (4.4) satisfy the equation_
\[\sum_{c,\mathbf{C}(2),\mathbf{C}(3)}\tilde{L}^{(r)}_{rx/y}\Big{(}\mathbf{A}(2 ),a;\mathbf{C}(2),c\Big{)}\tilde{L}^{(s)}_{x}\Big{(}\mathbf{A}(3),c;\mathbf{C}(3),b\Big{)} \tilde{L}^{(r,s)}_{y}\Big{(}\mathbf{C}(3),\mathbf{C}(2);\mathbf{B}(3),\mathbf{B}(2)\Big{)}\] \[=\sum_{c,\mathbf{C}(2),\mathbf{C}(3)}\tilde{L}^{(r,s)}_{y}\Big{(}\mathbf{A}(3 ),\mathbf{A}(2);\mathbf{C}(3),\mathbf{C}(2)\Big{)}\tilde{L}^{(s)}_{x}\Big{(}\mathbf{C}(3),a; \mathbf{B}(3),c\Big{)}\tilde{L}^{(r)}_{rx/y}\Big{(}\mathbf{C}(2),c;\mathbf{B}(2),b\Big{)},\]
_where \(c\) is summed over all integers in \([0,n]\) and \(\mathbf{C}(2),\mathbf{C}(3)\) are summed over all binary strings in \(\{0,1\}^{n}\). This equation has the following graphical version:_
(4.7)
Proof.: This is the reduction \(r=q^{-1/2}\), \(\mathbf{A}(1)=\mathbf{e}_{a}\), \(\mathbf{B}(1)=\mathbf{e}_{b}\) of equation (4.6), followed by the relabelling \(s\mapsto r\), \(t\mapsto s\).
### Fused row operators
For any integer \(N\geqslant 0\) and non-empty set \(I\subset[0,n]\), define the following analogue of the row operators (2.12):
\[\mathcal{D}_{I}(x;r):\bigotimes_{k=0}^{N}|\boldsymbol{B}(k)\rangle\mapsto\sum_{\boldsymbol{A}(0),\dots,\boldsymbol{A}(N)\in\{0,1\}^{n}}\left(\begin{array}{ccc}\boldsymbol{B}(0)&\cdots&\boldsymbol{B}(N)\\ (x;r)\to\boldsymbol{e}_{0}&\cdots&\boldsymbol{e}_{I}\\ \boldsymbol{A}(0)&\cdots&\boldsymbol{A}(N)\end{array}\right)\bigotimes_{k=0}^{N}|\boldsymbol{A}(k)\rangle\,, \tag{4.8}\]
where the quantity inside the large parentheses is a one-row partition function using the vertex weights (4.4), in which the fused horizontal line carries the pair \((x;r)\), enters on the left with label \(\boldsymbol{e}_{0}\) and exits on the right with label \(\boldsymbol{e}_{I}\).13
Footnote 13: Note that unless \(I=\{0\}\), it is now essential for \(N\) to be finite, unlike in the definitions of \(\mathcal{C}_{i}(x)\) and \(\mathcal{B}_{i}(x)\) where \(N\) is taken to \(\infty\).
When \(r\) is specialized to \(r=q^{-p/2}\), with \(p\in[1,n]\), we refer to \(\mathcal{D}_{I}(x;r)\) as a row operator of _width_\(p\); this is in reference to the fact that the horizontal line of the row operator can now carry at most \(p\) paths. The case \(r=q^{-1/2}\) has a particular significance in what follows; in this case the capacity constraint of the horizontal line imposes that \(|I|=1\), and we have
\[\mathcal{D}_{\{i\}}(x;q^{-1/2})\equiv\mathcal{D}_{i}(x)=T_{0,i}^{\to}(x;N), \tag{4.9}\]
for all \(i\in[0,n]\), where the operator on the right hand side is given by (2.12).
### Commutation relations
This subsection documents several types of commutation relations between the fused row operators (4.8) of varying widths. The majority of these results will not be needed until Section 7 of the text, where they are used to compute a certain class of partition functions that play a role in our subsequent probability distributions. The reader may prefer to skip this subsection and return to it, as needed, in Section 7.
**Proposition 4.5**.: _Fix an integer \(i\in[1,n]\) and a set \(J\subset[1,n]\) such that \(i\in J\). We have the following exchange relation between fused row operators (4.8) and their unfused counterparts (4.9):_
\[\mathcal{D}_{i}(x)\mathcal{D}_{J}(y;r)=\tilde{L}_{rx/y}^{(r)}(\boldsymbol{e}_ {J},i;\boldsymbol{e}_{J},i)\cdot\mathcal{D}_{J}(y;r)\mathcal{D}_{i}(x), \tag{4.10}\]
_where the coefficient appearing on the right hand side is given by the top-middle entry of the table (2.3)._
Proof.: We give the proof in the case of row operators of unit length, namely, for \(N=0\); however, for generic \(N\) the proof follows in exactly the same way. Starting from the relation (4.7), we set \(a=0\), \(\boldsymbol{A}(2)=\boldsymbol{e}_{0}\), \(b=i\), \(\boldsymbol{B}(2)=\boldsymbol{e}_{J}\), keeping \(\boldsymbol{A}(3)\) and \(\boldsymbol{B}(3)\) arbitrary. The diagonally-oriented vertex on the left hand side freezes; it is given by \(\tilde{L}_{rx/y}^{(r)}(\boldsymbol{e}_{0},0;\boldsymbol{e}_{0},0)=1\). Due to the fact that \(i\in J\), the diagonally-oriented vertex on the right hand side also freezes; colour \(i\) is present in both of the outgoing edges of this vertex, meaning that it must be present in both of the incoming edges (since \(\boldsymbol{C}(2)\) is a binary string). The weight of this frozen vertex is \(\tilde{L}_{rx/y}^{(r)}(\boldsymbol{e}_{J},i;\boldsymbol{e}_{J},i)\), completing the proof.
**Proposition 4.6**.: _Fix an integer \(i\in[1,n]\) and a set \(J\subset[1,n]\) such that \(i\not\in J\). We have the following exchange relation between fused row operators (4.8) and their unfused counterparts (4.9):_
\[\mathcal{D}_{i}(x)\mathcal{D}_{J}(y;r)=\sum_{j\in\{0,i\}\cup J}\tilde{L}_{rx/y }^{(r)}(\boldsymbol{e}_{J}+\boldsymbol{e}_{i}-\boldsymbol{e}_{j},j; \boldsymbol{e}_{J},i)\cdot\mathcal{D}_{\{i\}\cup J\setminus\{j\}}(y;r) \mathcal{D}_{j}(x), \tag{4.11}\]
_where the coefficients appearing in the sum are given by the bottom-middle and bottom-right entries of the table (2.3)._
Proof.: Similarly to the proof of Proposition 4.5, one starts from the relation (4.7) and sets \(a=0\), \(\boldsymbol{A}(2)=\boldsymbol{e}_{0}\), \(b=i\), \(\boldsymbol{B}(2)=\boldsymbol{e}_{J}\), keeping \(\boldsymbol{A}(3)\) and \(\boldsymbol{B}(3)\) arbitrary. The diagonally-oriented vertex on the left hand side again freezes with weight \(1\). This time, however, the diagonally-oriented vertex on the right hand side is not frozen; this is due to the fact that \(i\not\in J\), meaning that colour \(i\) is only present in one of the outgoing edges of this vertex. The weight of the diagonally-oriented vertex is seen to be \(\tilde{L}_{rx/y}^{(r)}(\boldsymbol{e}_{J}+\boldsymbol{e}_{i}-\boldsymbol{e}_{j},j;\boldsymbol{e}_{J},i)\), and the result follows by summing over all possible values of \(c=j\).
Combining Propositions 4.5 and 4.6 we obtain the following important result:
**Proposition 4.7**.: _Fix two integers \(p,i\in[1,n]\) and a set \(J\subset[1,n]\) of cardinality \(|J|=p\). We then have the commutation relation_
\[\mathcal{D}_{i}(x)\mathcal{D}_{J}(x;q^{-p/2})=\frac{1-q}{1-q^{p}}\cdot\left\{ \begin{array}{rl}q^{\alpha_{i}(J)}\mathcal{D}_{J}(x;q^{-p/2})\mathcal{D}_{i }(x),&i\in J,\\ \sum_{j\in J}\!\!q^{\alpha_{i}\big{(}J_{ij}^{+-}\big{)}}\mathcal{D}_{J_{ ij}^{+-}}(x;q^{-p/2})\mathcal{D}_{j}(x),&i\not\in J,\end{array}\right. \tag{4.12}\]
_between row operators of width 1 and width \(p\) respectively. Here we have defined \(J_{ij}^{+-}=\{i\}\cup J\backslash\{j\}\) and \(\alpha_{i}(K)\) denotes the number of elements in the set \(K\subset\mathbb{N}\) which exceed \(i\); namely, \(\alpha_{i}(K)=|\{k\in K:k>i\}|\)._
Proof.: We analyse the cases \(i\in J\) and \(i\not\in J\) separately. For \(i\in J\) we use (4.10) with \(x=y\), \(r=q^{-p/2}\); under this choice of parameters the coefficient on the right hand side reads
\[\tilde{L}_{rx/y}^{(r)}(\boldsymbol{e}_{J},i;\boldsymbol{e}_{J},i)\Big{|}_{x= y}\Big{|}_{r=q^{-p/2}}=\frac{r^{2}(qx/y-1)q^{\alpha_{i}(J)}}{1-r^{2}x/y}\Big{|}_{ x=y}\Big{|}_{r=q^{-p/2}}=\frac{1-q}{1-q^{p}}\cdot q^{\alpha_{i}(J)},\]
and we recover the first line of (4.12).
For \(i\not\in J\) we use (4.11) with \(x=y\), \(r=q^{-p/2}\); this allows two of the terms on the right hand of (4.11) to be eliminated. First, we may eliminate the \(j=0\) term from the summation; this follows from the fact that \(\mathcal{D}_{\{i\}\cup J}(y;r)=0\) at \(r=q^{-p/2}\), since \(|\boldsymbol{e}_{J}+\boldsymbol{e}_{i}|=p+1\). Second, we may eliminate the \(j=i\) term from the summation, since for \(i\not\in J\) one has
\[\tilde{L}_{rx/y}^{(r)}(\boldsymbol{e}_{J},i;\boldsymbol{e}_{J},i)\Big{|}_{x= y}\Big{|}_{r=q^{-p/2}}=\frac{r^{2}(1-x/y)q^{\alpha_{i}(J)}}{1-r^{2}x/y}\Big{|}_{ x=y}\Big{|}_{r=q^{-p/2}}=0.\]
The remaining terms in the summation are those for which \(j\in J\); for those we obtain
\[\tilde{L}_{rx/y}^{(r)}(\boldsymbol{e}_{J}+\boldsymbol{e}_{i}-\boldsymbol{e} _{j},j;\boldsymbol{e}_{J},i)\Big{|}_{x=y}\Big{|}_{r=q^{-p/2}}=\frac{1-q}{1-q^{ p}}\cdot q^{\alpha_{i}\big{(}J_{ij}^{+-}\big{)}},\]
and the second line of (4.12) holds.
**Proposition 4.8**.: _Fix an integer \(p\in[1,n]\) and a set \(I=\{i_{1},\ldots,i_{p}\}\subset[1,n]\) of cardinality \(|I|=p\). We have the following "peeling" property between the fused row operators (4.8) and their unfused counterparts (4.9):_
\[\mathcal{D}_{I}(x;q^{-p/2})=\sum_{j\in I}\mathcal{D}_{I\backslash\{j\}}\left( qx;q^{-(p-1)/2}\right)\mathcal{D}_{j}(x), \tag{4.13}\]
_allowing us to extract a row operator of width 1 from a row operator of width \(p\)._
Proof.: Extending the definition (4.2) of fused vertex weights to row operators (_cf._[1, Appendix B]), one finds that
\[\mathcal{D}_{I}(x;q^{-p/2})=\sum_{\sigma\in\mathfrak{S}_{p}}\mathcal{D}_{i_{ \sigma(1)}}(x)\mathcal{D}_{i_{\sigma(2)}}(qx)\cdots\mathcal{D}_{i_{\sigma(p)}} (q^{p-1}x), \tag{4.14}\]
where the objects appearing on the right hand side are unfused row operators (4.9). In particular, for row operators of unit length (namely, for \(N=0\)), the relation (4.14) matches precisely with the definition (4.2) for \(r=q^{-p/2}\), \(\boldsymbol{B}=\boldsymbol{e}_{0}\) and \(\boldsymbol{D}=\boldsymbol{e}_{I}\). Converting the sum over \(\mathfrak{S}_{p}\) into summation over \(\mathfrak{S}_{p-1}\) subgroups, and re-fusing the final \(p-1\) operators in the resulting summand, we may rewrite (4.14) as
\[\mathcal{D}_{I}(x;q^{-p/2})=\sum_{i\in I}\mathcal{D}_{i}(x)\mathcal{D}_{I \backslash\{i\}}\left(qx;q^{-(p-1)/2}\right). \tag{4.15}\]
Now from (4.11) with \(J=I\backslash\{i\}\) and \(r=q^{-(p-1)/2}\), one has that
\[\mathcal{D}_{i}(x)\mathcal{D}_{I\backslash\{i\}}\left(qx;q^{-(p-1)/2}\right)= \sum_{j\in I}\left(\left.\tilde{L}_{rq^{-1}}^{(r)}(\boldsymbol{e}_{I}- \boldsymbol{e}_{j},j;\boldsymbol{e}_{I}-\boldsymbol{e}_{i},i)\right|_{r=q^{-(p -1)/2}}\right)\mathcal{D}_{I\backslash\{j\}}\left(qx;q^{-(p-1)/2}\right) \mathcal{D}_{j}(x); \tag{4.16}\]
the \(j=0\) term was dropped from the above sum because \(\mathcal{D}_{I}(qx;q^{-(p-1)/2})=0\), in view of the fact that this row operator has width \(p-1\) while \(I\) has cardinality \(p\). Summing both sides of (4.16) over \(i\in I\) yields
\[\sum_{i\in I}\mathcal{D}_{i}(x)\mathcal{D}_{I\setminus\{i\}}\left(qx;q^{-(p-1) /2}\right)=\sum_{j\in I}\mathcal{D}_{I\setminus\{j\}}\left(qx;q^{-(p-1)/2} \right)\mathcal{D}_{j}(x), \tag{4.17}\]
in view of the stochasticity property \(\sum_{i\in I}\tilde{L}_{rq^{-1}}^{(r)}(\boldsymbol{e}_{I}-\boldsymbol{e}_{j},j;\boldsymbol{e}_{I}-\boldsymbol{e}_{i},i)=1\) (see [1, Proposition 2.5.1]). Combining (4.15) and (4.17), we have completed the proof.
## 5. LLT measures and Plancherel specialization
In this section we introduce the probability measures that will be central to this text; they are based on the Lascoux-Leclerc-Thibon polynomials [11, 1] and their associated Cauchy identity [1, 1], and accordingly we refer to them as _LLT measures_. In analogy with the Schur and Macdonald processes [1, 2], one may introduce a class of Markov kernels that preserve the form of the LLT measure when they act upon it. Acting consecutively with these Markov kernels, we obtain \(n\)-tuples of random Gelfand-Tsetlin patterns; one Gelfand-Tsetlin pattern is produced for each of the \(n\) colours in our partition functions. The main result of this paper is a complete description of the behaviour of these patterns under a certain asymptotic regime of the underlying measure; this is carried out in Section 6.
The layout of this section is as follows. In Sections 5.1-5.3 we recall a partition function representation for the LLT polynomials, recently obtained in [1, 1], and use it to present an integral formula for the latter. In Section 5.4 we apply the Plancherel specialization of the ring of symmetric functions to the integral obtained in Section 5.3, yielding an integral formula for the Plancherel-specialized LLT polynomials. In Sections 5.5-5.6 we recall the (skew) Cauchy identity for LLT polynomials, and use it to define our LLT measures and associated Markov kernels.
### Functions \(\mathbb{G}_{\lambda/\mu}\) and reduction to LLT polynomials
In Section 3.9 we introduced the symmetric rational functions \(G_{\lambda/\mu}\) as matrix elements of products of the row operators (2.12); we now generalize these, by replacing the row operators in the algebraic construction with their fused analogues (4.8).
**Definition 5.1**.: Let \(\lambda=(\lambda_{1},\ldots,\lambda_{n})\) be a composition of weight \(m\), and fix two \(\lambda\)-coloured compositions \(\mu\in\mathcal{S}_{\lambda}\) and \(\nu\in\mathcal{S}_{\lambda}\). Let the corresponding vectors in \(\mathbb{V}(\infty)\), \(\left|\mu\right\rangle_{\lambda}\) and \(\left|\nu\right\rangle_{\lambda}\), be given by (3.3) and (3.4), respectively. For any integer \(p\geqslant 1\) we define the following family of symmetric rational functions:
\[(-s)^{|\mu|-|\nu|}\cdot\mathbb{G}_{\mu/\nu}(\lambda;x_{1},\ldots,x_{p};r_{1}, \ldots,r_{p})=\left\langle\nu\right|_{\lambda}\prod_{i=1}^{p}\mathcal{D}_{\{ 0\}}(x_{i};r_{i})\left|\mu\right\rangle_{\lambda}, \tag{5.1}\]
where the operators \(\mathcal{D}_{\{0\}}(x_{i};r_{i})\) are given by (4.8).
In graphical notation, the definition (5.1) reads
\[(-s)^{|\mu|-|\nu|}\cdot\mathbb{G}_{\mu/\nu}(\lambda;x_{1},\ldots,x_{p};r_{1},\ldots,r_{p})=\text{[diagram: $p$ lattice rows with rapidities $(x_{1};r_{1}),\ldots,(x_{p};r_{p})$, ordered bottom to top, each entered on the left by $\boldsymbol{e}_{0}$, with vertical boundary states $\boldsymbol{A}(k)$, $\boldsymbol{B}(k)$ in column $k$]} \tag{5.2}\]
where the vectors \(\boldsymbol{A}(k),\boldsymbol{B}(k)\), \(k\geqslant 0\) are given by (3.3)-(3.4).
Two reductions of (5.1) are of interest. The first is obtained by setting \(r_{i}=q^{-1/2}\) for all \(1\leqslant i\leqslant p\); in this case, each fused row operator reduces to its unfused analogue, as described in equation (4.9), and we find that
\[\mathbb{G}_{\mu/\nu}(\lambda;x_{1},\ldots,x_{p};q^{-1/2},\ldots,q^{-1/2})=G_{ \mu/\nu}(\lambda;x_{1},\ldots,x_{p}).\]
The second we record as a theorem, below.
**Theorem 5.2**.: _The function \(\mathbb{G}_{\mu/\nu}(\lambda;x_{1},\ldots,x_{p};r_{1},\ldots,r_{p})\) has well-defined \(s\to 0\) and \(r_{1},\ldots,r_{p}\to\infty\) limits. Under these limits, it becomes a polynomial in \((x_{1},\ldots,x_{p})\) with monomial coefficients living in \(\mathbb{N}[q]\)._
Proof.: To compute the limit \(s\to 0\), we divide both sides of (5.1) by \((-s)^{|\mu|-|\nu|}\); on the right hand side, we may distribute the resulting \((-s)^{|\nu|-|\mu|}\) factor within the partition function by assigning a factor of \((-s)^{-1}\) to each horizontal unit step by a path. By [1, Corollary 8.3.6],
\[\lim_{r\to\infty}\lim_{s\to 0}(-s)^{-|\mathbf{D}|}\tilde{L}_{x}^{(r,s)}(\mathbf{A}, \mathbf{B};\mathbf{C},\mathbf{D})=\mathbf{1}_{\mathbf{C}+\mathbf{D}\in\{0,1\}^{n}}\cdot x^{|\mathbf{D}|}q^ {\varphi(\mathbf{D},\mathbf{C})+\varphi(\mathbf{D},\mathbf{D})},\qquad\forall\ \mathbf{A},\mathbf{B},\mathbf{C},\mathbf{D}\in\{0,1\}^{n}. \tag{5.3}\]
Since this limit exists at the level of the individual vertices, it follows that the \(s\to 0\) and \(r_{1},\ldots,r_{p}\to\infty\) limits exist when applied to the whole partition function. The fact that the resulting function is a polynomial in \((x_{1},\ldots,x_{p})\), with nonnegative polynomial coefficients in \(q\), is manifest from the right hand side of (5.3).
Throughout the remainder of the text, we refer to
\[\mathbb{G}_{\mu/\nu}(\lambda;x_{1},\ldots,x_{p};\infty,\ldots,\infty)\Big{|}_{ s\to 0}\equiv\mathbb{G}_{\mu/\nu}(x_{1},\ldots,x_{p}) \tag{5.4}\]
as a Lascoux-Leclerc-Thibon (LLT) polynomial14, and tacitly assume that the \(s\to 0\) and \(r_{1},\ldots,r_{p}\to\infty\) limits have been taken, unless it is specifically stated otherwise.
Footnote 14: The LLT polynomials have two combinatorial definitions; either in terms of ribbon tilings of a Young diagram, or in terms of \(n\)-tuples of semi-standard Young tableaux. For both of these definitions, we refer to [1, Sections 9.1 and 9.2]; for the matching of \(\mathbb{G}_{\mu/\nu}(\lambda;x_{1},\ldots,x_{p};\infty,\ldots,\infty)\Big{|}_{ s\to 0}\) with the resulting polynomials we refer to [1, Theorem 9.3.2 (1)].
### Padding and shifting LLT polynomials
To this point, LLT polynomials were indexed by coloured compositions, as given by Definition 3.1. There is a natural way to extend their definition to allow indexing by _coloured signatures_ (the extension of the set (3.1) that allows parts to take any integer values, including negative ones), which will be convenient when we come to state the Cauchy identity for LLT polynomials (see Section 5.5).
One may consider the effect of appending an extra column to the left of the partition function (5.2), with the boundary conditions at the top and bottom of this column prescribed as \(\mathbf{A}(-1)=1^{n}\) and \(\mathbf{B}(-1)=1^{n}\), respectively. One sees that the appended column freezes with weight \(1\) (assuming the limit where the weights (5.3) are used), and therefore does not contribute to the overall evaluation of the partition function. This invariance property may be expressed as
\[\mathbb{G}_{-1\cup\mu/-1\cup\nu}(x_{1},\ldots,x_{p})=\mathbb{G}_{\mu/\nu}(x_{ 1},\ldots,x_{p}), \tag{5.5}\]
for any \(\mu,\nu\in\mathcal{S}_{\lambda}\), where \(-1\cup\mu\) and \(-1\cup\nu\) mean prepending a part of size \(-1\) in each of the \(n\) blocks of \(\mu\) and \(\nu\), respectively (similarly to Definition 3.2). The procedure (5.5) may clearly be iterated, allowing us to prepend arbitrarily many negative parts to the coloured compositions in question. One also notes that, on the resulting coloured signatures, there holds
\[\mathbb{G}_{(\mu-1)/(\nu-1)}(x_{1},\ldots,x_{p})=\mathbb{G}_{\mu/\nu}(x_{1}, \ldots,x_{p}), \tag{5.6}\]
where \((\mu-1)\) and \((\nu-1)\) mean subtracting \(1\) from every part of \(\mu\) and \(\nu\), respectively. It is then easy to see that (5.5) and (5.6) completely determine the value of \(\mathbb{G}_{\mu/\nu}(x_{1},\ldots,x_{p})\) for any coloured signatures \(\mu\) and \(\nu\) (possibly containing infinitely many negative parts15).
### Integral formula for LLT polynomials
Applying the fusion procedure to the integral formula obtained in Section 3.9, one may obtain an integral formula for the LLT polynomials; we reproduce that result here, in essentially the same form as it appeared in [1, Corollary 11.5.3].
**Theorem 5.3**.: _Fix a composition \(\lambda=(\lambda_{1},\dots,\lambda_{n})\) such that \(|\lambda|=m\), and choose two coloured compositions \(\mu,\nu\in\mathcal{S}_{\lambda}\). The LLT polynomials (5.4) are given by the following integral expression:_
\[\mathbb{G}_{\mu/\nu}(x_{1},\dots,x_{p})=\frac{q^{m(m+1)/2}}{(q-1)^ {m}}\cdot\left(\frac{1}{2\pi\mathtt{i}}\right)^{m}\oint_{C_{1}}\frac{dy_{1}}{ y_{1}}\dots\oint_{C_{m}}\frac{dy_{m}}{y_{m}}\\ \times\prod_{1\leqslant i<j\leqslant m}\left(\frac{y_{j}-y_{i}}{ y_{j}-qy_{i}}\right)f_{\tilde{\mu}}(1^{m};y_{1}^{-1},\dots,y_{m}^{-1})g_{\nu}( \lambda;y_{1},\dots,y_{m})\prod_{i=1}^{p}\prod_{j=1}^{m}\frac{1}{1-x_{i}y_{j}}, \tag{5.7}\]
_where the contours \(\{C_{1},\dots,C_{m}\}\) are admissible with respect to \((q,0)\). It is implicit that \(s=0\) in the functions \(f_{\tilde{\mu}}\) and \(g_{\nu}\)._
Proof.: We focus on the proof for \(p=1\), as this captures the essence of the proof for generic \(p\). Fix an integer \(P\geqslant 1\). Using (5.1), the one-variable function \(\mathbb{G}_{\mu/\nu}(\lambda;x;q^{-P/2})\) is given by
\[(-s)^{|\mu|-|\nu|}\cdot\mathbb{G}_{\mu/\nu}(\lambda;x;q^{-P/2}) =\left\langle\nu\right|_{\lambda}\mathcal{D}_{\{0\}}(x;q^{-P/2}) \left|\mu\right\rangle_{\lambda}\] \[=\left\langle\nu\right|_{\lambda}\mathcal{D}_{0}(x)\mathcal{D}_{ 0}(qx)\dots\mathcal{D}_{0}(q^{P-1}x)\left|\mu\right\rangle_{\lambda}=(-s)^{| \mu|-|\nu|}G_{\mu/\nu}(\lambda;x,qx,\dots,q^{P-1}x)\]
where we have replaced the fused row operator \(\mathcal{D}_{\{0\}}(x;q^{-P/2})\) by the bundle of unfused row operators (2.12) which comprise it. From the integral formula (3.31), we then have that
\[\mathbb{G}_{\mu/\nu}(\lambda;x;q^{-P/2})=G_{\mu/\nu}(\lambda;x, qx,\dots,q^{P-1}x)=\frac{q^{m(m+1)/2}}{(q-1)^{m}}\cdot\left(\frac{1}{2\pi \mathtt{i}}\right)^{m}\oint_{C_{1}}\frac{dy_{1}}{y_{1}}\dots\oint_{C_{m}}\frac {dy_{m}}{y_{m}}\\ \times\prod_{1\leqslant i<j\leqslant m}\left(\frac{y_{j}-y_{i}}{ y_{j}-qy_{i}}\right)f_{\tilde{\mu}}(1^{m};y_{1}^{-1},\dots,y_{m}^{-1})g_{\nu}( \lambda;y_{1},\dots,y_{m})\prod_{j=1}^{m}\frac{1-q^{P}xy_{j}}{1-xy_{j}}.\]
This yields an expression for \(\mathbb{G}_{\mu/\nu}(\lambda;x;r)\) by performing the analytic continuation \(q^{P}\mapsto r^{-2}\); the \(p=1\) case of (5.7) then follows by sending \(r\to\infty\). The generic \(p\) version of (5.7) may be proved along similar lines; namely, we split each of the \(p\) fused operators \(\mathcal{D}_{\{0\}}(x_{i};q^{-P/2})\) in (5.1) into a bundle of \(P\) unfused row operators, and carry out the analysis above on each of the bundles.
### Plancherel specialization
Let \(\Lambda\) denote the ring of symmetric functions in the (infinite) alphabet \(x:=(x_{1},x_{2},\dots)\). As described in [19, Chapter I], the power sum basis of \(\Lambda\) is the set of functions
\[p_{\lambda}(x):=\prod_{i\geqslant 1}p_{\lambda_{i}}(x),\qquad p_{k}(x):=\sum_{i\geqslant 1}x_{i}^{k},\ \ \forall\ k\geqslant 1,\]
where \(\lambda\) ranges over all partitions. Any function in \(\Lambda\) is expressed as a unique linear combination of the functions \(p_{\lambda}(x)\).
Fix an indeterminate \(t\in\mathbb{C}\). The Plancherel specialization of \(\Lambda\) is the map \(\mathrm{Pl}_{t}:\Lambda\to\mathbb{C}\) under which the power sums transform as follows:
\[p_{k}(x)\mapsto\left\{\begin{array}{ll}t,&\quad k=1,\\ 0,&\quad k\geqslant 2.\end{array}\right.\]
Following standard notational practice for specializations of the ring of symmetric functions, we denote the image of a function \(f\in\Lambda\) under \(\mathrm{Pl}_{t}\) by \(f(\mathrm{Pl}_{t})\).
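As an illustrative aside, the action of \(\mathrm{Pl}_{t}\) is easy to implement once a symmetric function is expanded in the power sum basis. The following is a minimal sketch, assuming the (purely illustrative) convention that such an expansion is stored as a dictionary from partitions to coefficients.

```python
def plancherel_specialize(power_sum_expansion, t):
    """Apply Pl_t to a symmetric function given in the power sum basis.

    `power_sum_expansion` maps a partition (a tuple of parts >= 1) to the
    coefficient of p_lambda.  Under Pl_t one has p_1 -> t and p_k -> 0 for
    k >= 2, so p_lambda -> t^len(lambda) if all parts equal 1, else 0.
    """
    return sum(coeff * t ** len(partition)
               for partition, coeff in power_sum_expansion.items()
               if all(part == 1 for part in partition))

# Example: h_2 = (p_1^2 + p_2)/2, hence h_2(Pl_t) = t^2/2.
h2 = {(1, 1): 0.5, (2,): 0.5}
assert abs(plancherel_specialize(h2, 3.0) - 3.0 ** 2 / 2) < 1e-12
```

In particular, the complete homogeneous symmetric functions specialize as \(h_{k}(\mathrm{Pl}_{t})=t^{k}/k!\), consistent with the factors \(e^{ty_{j}}\) appearing in (5.9) below.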
The LLT polynomials (5.4) admit a natural lift to \(\Lambda\), obtained by replacing the finite alphabet \((x_{1},\dots,x_{p})\) by the infinite one \(x=(x_{1},x_{2},\dots)\). Making this replacement in (5.7), we have
\[\mathbb{G}_{\mu/\nu}(x_{1},x_{2},\dots)=\frac{q^{m(m+1)/2}}{(q-1)^{m}}\cdot \left(\frac{1}{2\pi\mathtt{i}}\right)^{m}\oint_{C_{1}}\frac{dy_{1}}{y_{1}} \dots\oint_{C_{m}}\frac{dy_{m}}{y_{m}}\\ \times\prod_{1\leqslant i<j\leqslant m}\left(\frac{y_{j}-y_{i}}{y _{j}-qy_{i}}\right)f_{\tilde{\mu}}(1^{m};y_{1}^{-1},\dots,y_{m}^{-1})g_{\nu}( \lambda;y_{1},\dots,y_{m})\prod_{j=1}^{m}\exp\left(\sum_{k\geqslant 1}\frac{p_{k}(x )y_{j}^{k}}{k}\right) \tag{5.8}\]
where we have used the fact that (as a formal power series) there holds
\[\prod_{i\geqslant 1}\frac{1}{1-x_{i}y}=\exp\left(-\sum_{i\geqslant 1}\log(1-x_{ i}y)\right)=\exp\left(\sum_{i\geqslant 1}\sum_{k=1}^{\infty}\frac{x_{i}^{k}y^{k} }{k}\right)=\exp\left(\sum_{k\geqslant 1}\frac{p_{k}(x)y^{k}}{k}\right).\]
We then read off the Plancherel specialization of \(\mathbb{G}_{\mu/\nu}\):
\[\mathbb{G}_{\mu/\nu}(\mathrm{Pl}_{t})=\frac{q^{m(m+1)/2}}{(q-1)^{m }}\cdot\left(\frac{1}{2\pi\mathtt{i}}\right)^{m}\oint_{C_{1}}\frac{dy_{1}}{y_{1 }}\dots\oint_{C_{m}}\frac{dy_{m}}{y_{m}}\\ \times\prod_{1\leqslant i<j\leqslant m}\left(\frac{y_{j}-y_{i}}{y _{j}-qy_{i}}\right)f_{\tilde{\mu}}(1^{m};y_{1}^{-1},\dots,y_{m}^{-1})g_{\nu}( \lambda;y_{1},\dots,y_{m})\prod_{j=1}^{m}e^{ty_{j}}. \tag{5.9}\]
In what follows, we shall further restrict to \(t\in\mathbb{R}_{\geqslant 0}\), where \(t\) could be viewed as playing the role of continuous time.
### Skew Cauchy identity for LLT polynomials
Up until now we dealt with coloured compositions of arbitrary colour profile \(\lambda=(\lambda_{1},\dots,\lambda_{n})\). Throughout the rest of the paper we shall restrict our attention to the case \(\lambda_{i}=N\) for all \(1\leqslant i\leqslant n\), where \(N\) is some given positive integer; this means that each colour within a coloured composition \(\mu\) is represented exactly \(N\) times. We denote the corresponding set of coloured compositions as follows:
\[\mathcal{S}_{N^{n}}=\Big{\{}\mu=\Big{(}\mu_{1}^{(1)}<\dots<\mu_{N}^{(1)} \Big{|}\mu_{1}^{(2)}<\dots<\mu_{N}^{(2)}\Big{|}\dots\Big{|}\mu_{1}^{(n)}<\dots <\mu_{N}^{(n)}\Big{)}\Big{\}}.\]
One element of \(\mathcal{S}_{N^{n}}\) plays a special role; this is the element in which all parts of a coloured composition are as small as they can be. We assign this element the notation \(\Delta\):
\[\Delta=(0,1,\dots,N-1|0,1,\dots,N-1|\dots|0,1,\dots,N-1)\in\mathcal{S}_{N^{n}}. \tag{5.10}\]
Whenever the lower coloured composition \(\nu\) in an LLT polynomial \(\mathbb{G}_{\mu/\nu}\) is set equal to \(\Delta\), we employ the lighter notation
\[\mathbb{G}_{\mu/\Delta}(x_{1},\dots,x_{p})\equiv\mathbb{G}_{\mu}(x_{1},\dots, x_{p}).\]
**Definition 5.4**.: For any coloured composition \(\mu=\Big{(}\mu_{1}^{(1)}<\dots<\mu_{N}^{(1)}\Big{|}\dots\Big{|}\mu_{1}^{(n)}< \dots<\mu_{N}^{(n)}\Big{)}\in\mathcal{S}_{N^{n}}\) we define the statistic
\[\psi(\mu)=\frac{1}{2}\sum_{1\leqslant i<j\leqslant n}\ \sum_{a\in\mu^{(i)}}\ \sum_{b\in\mu^{(j)}}\mathbf{1}_{a>b}.\]
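As a small computational aside (not needed for the proofs), the statistic \(\psi\) is a direct triple count over pairs of colour blocks; here is a minimal sketch, assuming a coloured composition is represented as a list of its \(n\) colour blocks.

```python
def psi(mu):
    """Statistic of Definition 5.4: half the number of triples
    (i < j, a in mu^(i), b in mu^(j)) with a > b, where `mu` is a list
    of n colour blocks, each an increasing list of parts."""
    n = len(mu)
    count = sum(1 for i in range(n) for j in range(i + 1, n)
                for a in mu[i] for b in mu[j] if a > b)
    return count / 2

# Example: for Delta with n = 3 colours and N = 2 (every block equal to (0, 1)),
# psi(Delta) = (1/2) * binom(3, 2) * binom(2, 2) = 3/2.
assert psi([[0, 1], [0, 1], [0, 1]]) == 1.5
```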
**Theorem 5.5**.: _Fix two positive integers \(p\) and \(N\), and two alphabets \((x_{1},\dots,x_{p})\) and \((y_{1},\dots,y_{N})\). Let \(\nu\in\mathcal{S}_{N^{n}}\) be a coloured composition. The LLT polynomials (5.4) satisfy the Cauchy summation identity_
\[\sum_{\mu\in\mathcal{S}_{N^{n}}}q^{-2\psi(\mu)}\mathbb{G}_{\mu/\nu}(x_{1}, \dots,x_{p})\mathbb{G}_{\mu}(y_{1},\dots,y_{N})=\prod_{i=1}^{p}\prod_{j=1}^{N} \frac{1}{(x_{i}y_{j};q)_{n}}\cdot q^{-2\psi(\nu)}\mathbb{G}_{\nu}(y_{1},\dots,y _{N}), \tag{5.11}\]
_where \(\mathbb{G}_{\mu}(y_{1},\dots,y_{N})\equiv\mathbb{G}_{\mu/\Delta}(y_{1},\dots, y_{N})\) and \(\mathbb{G}_{\nu}(y_{1},\dots,y_{N})\equiv\mathbb{G}_{\nu/\Delta}(y_{1},\dots,y_{N})\). This holds either as a formal power series, or as a numeric equality as long as \(|q|<1\) and \(|x_{i}y_{j}|<1\) for all \(i,j\)._
Proof.: This was originally obtained in [13, Theorem 35]. For a formulation in terms of the vertex model setup of the current text, we refer to [1, Corollary 9.4.1] and [1, Proposition 6.12].
We make a small but important adjustment to the Cauchy identity (5.11). For any \(N\geqslant 1\), introduce the set of coloured signatures
\[\mathcal{S}(N)=\left\{\mu=\left(\mu^{(1)}|\mu^{(2)}|\cdots|\mu^{(n)}\right) \right\}, \tag{5.12}\]
where for all \(1\leqslant i\leqslant n\) the components
\[\mu^{(i)}=\left(\cdots<\mu_{-1}^{(i)}<\mu_{0}^{(i)}<\mu_{1}^{(i)}<\cdots<\mu_ {N}^{(i)}\right) \tag{5.13}\]
are left-infinite strict signatures such that \(\mu_{k}^{(i)}\neq k-1\) for only finitely many \(k\in\mathbb{Z}\). We continue to denote by \(\Delta\) the unique element in \(\mathcal{S}(N)\) in which all signature parts are minimal. For any fixed \(\nu\in\mathcal{S}(N)\), one then has that
\[\sum_{\mu\in\mathcal{S}(N)}q^{-2\psi(\mu,\nu)}\mathbb{G}_{\mu/\nu}(x_{1}, \ldots,x_{p})\mathbb{G}_{\mu}(y_{1},y_{2},\ldots)=\prod_{i=1}^{p}\prod_{j \geqslant 1}\frac{1}{(x_{i}y_{j};q)_{n}}\mathbb{G}_{\nu}(y_{1},y_{2},\ldots), \tag{5.14}\]
in which the size of the alphabet \((y_{1},y_{2},\ldots)\) is now infinite.16 Here \(\mathbb{G}_{\mu}(y_{1},y_{2},\ldots)\equiv\mathbb{G}_{\mu/\Delta}(y_{1},y_{2},\ldots)\) and \(\mathbb{G}_{\nu}(y_{1},y_{2},\ldots)\equiv\mathbb{G}_{\nu/\Delta}(y_{1},y_{2},\ldots)\) as previously, and
Footnote 16: In particular, this will allow us to Plancherel-specialize this alphabet.
\[\psi(\mu,\nu)=\frac{1}{2}\sum_{1\leqslant i<j\leqslant n}\ \sum_{a\in\mu^{(i)}}\ \sum_{b\in\mu^{(j)}}\mathbf{1}_{a>b>m}-\frac{1}{2}\sum_{1\leqslant i<j\leqslant n }\ \sum_{a\in\nu^{(i)}}\ \sum_{b\in\nu^{(j)}}\mathbf{1}_{a>b>m}, \tag{5.15}\]
with \(m\) chosen to be any integer such that \(\mu_{k}^{(i)}=\nu_{k}^{(i)}=k-1\) for all \(k\leqslant m\) and \(1\leqslant i\leqslant n\). Equation (5.14) holds as an identity of formal power series, which converges if \(|q|<1\) and \(|x_{i}y_{j}|<1\) for all \(i,j\).
The claim (5.14) is established by taking (5.11) with \(N\) becoming arbitrarily large, and applying (5.5) and (5.6) appropriately to convert the indices of all functions to members of the set (5.12). It is easily verified that the quantity \(\psi(\mu)-\psi(\nu)\) is invariant under such paddings and shifts, and may be written in the form (5.15).
### Markov kernels
We proceed to introduce probability measures from the skew Cauchy identity (5.11). Normalizing so that the right hand side of its extended form (5.14) is equal to \(1\), we have
\[\sum_{\mu\in\mathcal{S}(N)}\prod_{i=1}^{p}\prod_{j\geqslant 1}(x_{i}y_{j};q)_{n }\cdot q^{-2\psi(\mu,\nu)}\mathbb{G}_{\mu/\nu}(x_{1},\ldots,x_{p})\frac{ \mathbb{G}_{\mu}(y_{1},y_{2},\ldots)}{\mathbb{G}_{\nu}(y_{1},y_{2},\ldots)}=1. \tag{5.16}\]
In view of this sum-to-unity property, the summands in (5.16) may be viewed as probabilities of transitioning from an initial coloured signature \(\nu\in\mathcal{S}(N)\) to a final one \(\mu\in\mathcal{S}(N)\). Many choices of the parameters \((x_{1},\ldots,x_{p})\) and \((y_{1},y_{2},\ldots)\) are possible, leading to a variety of interesting distributions, but in this work we focus on one particular choice; namely, we let \((x_{1},\ldots,x_{p})=(1,\ldots,1)\equiv 1^{p}\) and take the \(\mathrm{Pl}_{t}\) specialization of the alphabet \((y_{1},y_{2},\ldots)\). Under this choice, (5.16) becomes
\[\sum_{\mu\in\mathcal{S}(N)}\exp\left(-\frac{p(1-q^{n})}{1-q}t\right)\mathbb{G} _{\mu/\nu}(1^{p})q^{-2\psi(\mu,\nu)}\frac{\mathbb{G}_{\mu}(\mathrm{Pl}_{t})}{ \mathbb{G}_{\nu}(\mathrm{Pl}_{t})}=1.\]
From this we introduce the Markov kernel \(\mathbb{P}_{t,p}:V(\mathcal{S}(N))\to V(\mathcal{S}(N))\) with matrix elements given by
\[\mathbb{P}_{t,p}(\nu\to\mu)=q^{-2\psi(\mu,\nu)}\exp\left(-\frac{p(1-q^{n})}{1- q}t\right)\mathbb{G}_{\mu/\nu}(1^{p})\frac{\mathbb{G}_{\mu}(\mathrm{Pl}_{t})}{ \mathbb{G}_{\nu}(\mathrm{Pl}_{t})}, \tag{5.17}\]
where \(V(\mathcal{S}(N))\) denotes the complex linear span of the elements of \(\mathcal{S}(N)\). Abusing notation slightly, whenever we write \(\mathbb{P}_{t,p}(\nu)\) for some \(\nu\in\mathcal{S}(N)\), this means a random coloured signature \(\mu\in\mathcal{S}(N)\) sampled from the distribution (5.17).
_Remark 5.6_.: Throughout the rest of the text, we shall only be concerned with evaluating the kernel (5.17) on coloured signatures \(\mu,\nu\) such that \(\mu_{k}^{(i)}=\nu_{k}^{(i)}=k-1\) for all \(k\leqslant 0\) and \(1\leqslant i\leqslant n\). When coloured signatures have such a property, by slight abuse of notation we continue to write \(\mu,\nu\in\mathcal{S}_{N^{n}}\subset\mathcal{S}(N)\) and shall still refer to these objects as coloured compositions.
Below we collect some elementary facts about the Markov kernel (5.17).
**Proposition 5.7**.: _For any integer \(p\geqslant 1\), real parameter \(t\in\mathbb{R}_{\geqslant 0}\) and coloured composition \(\mu\in\mathcal{S}_{N^{n}}\), we have_
\[\mathbb{P}_{t,p}(\Delta\to\mu)=q^{-2\psi(\mu)+\binom{n}{2}\binom{N}{2}}\exp \left(-\frac{p(1-q^{n})}{1-q}t\right)\mathbb{G}_{\mu}(1^{p})\mathbb{G}_{\mu}( \mathrm{Pl}_{t}). \tag{5.18}\]
Proof.: This is just the \(\nu=\Delta\) case of (5.17), noting that \(\mathbb{G}_{\Delta}=1\) and \(\psi(\Delta)=\frac{1}{2}\binom{n}{2}\binom{N}{2}\).
**Proposition 5.8**.: _For any two integers \(p_{1},p_{2}\geqslant 1\) and real parameter \(t\in\mathbb{R}_{\geqslant 0}\), the maps \(\mathbb{P}_{t,p_{1}}\) and \(\mathbb{P}_{t,p_{2}}\) compose according to the rule_
\[\mathbb{P}_{t,p_{1}}\circ\mathbb{P}_{t,p_{2}}=\mathbb{P}_{t,p_{1}+p_{2}}. \tag{5.19}\]
Proof.: For fixed \(\lambda,\nu\in\mathcal{S}_{N^{n}}\) one computes
\[\sum_{\mu\in\mathcal{S}_{N^{n}}}\mathbb{P}_{t,p_{1}}(\nu\to\mu)\mathbb{P}_{t,p _{2}}(\mu\to\lambda)\]
\[=q^{-2(\psi(\lambda)-\psi(\nu))}\exp\left(-\frac{(p_{1}+p_{2})(1-q^{n})}{1-q}t\right)\frac{\mathbb{G}_{\lambda}(\mathrm{Pl}_{t})}{\mathbb{G}_{\nu}(\mathrm{Pl}_{t})}\sum_{\mu\in\mathcal{S}_{N^{n}}}\mathbb{G}_{\mu/\nu}(1^{p_{1}})\mathbb{G}_{\lambda/\mu}(1^{p_{2}})\]

\[=q^{-2(\psi(\lambda)-\psi(\nu))}\exp\left(-\frac{(p_{1}+p_{2})(1-q^{n})}{1-q}t\right)\frac{\mathbb{G}_{\lambda}(\mathrm{Pl}_{t})}{\mathbb{G}_{\nu}(\mathrm{Pl}_{t})}\mathbb{G}_{\lambda/\nu}(1^{p_{1}+p_{2}})=\mathbb{P}_{t,p_{1}+p_{2}}(\nu\to\lambda),\]
where we have used the branching rule for LLT polynomials (see [1, Remark 9.1.1]) to produce the second equality.
In view of the composition property (5.19), the Markov kernel \(\mathbb{P}_{t,p}\) may be regarded as the composition of \(p\) kernels \(\mathbb{P}_{t,1}\). Starting from the trivial state \(\Delta\in\mathcal{S}_{N^{n}}\), we may either act directly with \(\mathbb{P}_{t,p}\) to obtain a random coloured composition \(\lambda^{[p]}=\mathbb{P}_{t,p}(\Delta)\), distributed according to (5.18), or we may act \(p\) times with \(\mathbb{P}_{t,1}\), producing a chain of \(p\) random coloured compositions
\[\Delta\xrightarrow{\mathbb{P}_{t,1}}\lambda^{[1]}\xrightarrow{\mathbb{P}_{t, 1}}\lambda^{[2]}\xrightarrow{\mathbb{P}_{t,1}}\cdots\xrightarrow{\mathbb{P}_{ t,1}}\lambda^{[p]}. \tag{5.20}\]
Our goal in the following section is to study the asymptotic behaviour of the distribution of the whole sequence \((\lambda^{[1]},\dots,\lambda^{[p]})\), as \(t\to\infty\), with \(p\) kept finite.
## 6. Asymptotics
In this section we carry out an asymptotic analysis of the Markov kernel (5.17) with \(p=1\), as \(t\to\infty\); this analysis proceeds in several steps. We begin by rewriting coloured compositions in terms of a pair of vectors \(\vec{\ell}\) and \(\vec{c}\) in Section 6.1; \(\vec{\ell}\) encodes the coordinates of the parts in a coloured composition, while \(\vec{c}\) encodes the colour sequencing of its parts. In Section 6.2, we specify a particular time-dependent scaling of the coordinates associated to \(\mu\) and \(\nu\) within the function \(\mathbb{P}_{t,1}\). We also impose certain interlacing constraints on the coordinates of \(\mu\) and \(\nu\); for finite \(t\), these constraints prohibit certain coloured compositions on which the measure is non-zero, but it later transpires that as \(t\to\infty\) these forbidden compositions naturally occur with vanishingly small probability, allowing us to omit them from our considerations.
Having fixed our choice of scaling and our interlacing assumptions, we proceed to the analysis of the individual factors in the measure (5.17). Section 6.5 deals with the factor \(\mathbb{G}_{\mu/\nu}(1)\), which can be analysed by direct combinatorial means, while Section 6.6 deals with the factors \(\mathbb{G}_{\mu}(\mathrm{Pl}_{t})\) and \(\mathbb{G}_{\nu}(\mathrm{Pl}_{t})\), which are analysed by the steepest descent method applied to the integral formula (5.9). Our final formula is presented in Section 6.7; we show that in the \(t\to\infty\) limit being studied, the measure (5.17) degenerates into the product of transition densities for \(n\) independent GUE corners processes, multiplied by a discrete measure that is valued on colour sequences.
### Coordinate and colour sequence notation
**Definition 6.1**.: To every coloured composition \(\mu=\left(\mu_{1}^{(1)}<\cdots<\mu_{N}^{(1)}|\cdots|\mu_{1}^{(n)}<\cdots<\mu_{N}^{ (n)}\right)\in\mathcal{S}_{N^{n}}\) we associate three vectors \(\vec{\ell}=(\ell_{1},\ldots,\ell_{nN})\in\mathbb{N}^{nN}\), \(\vec{c}=(c_{1},\ldots,c_{nN})\in[1,n]^{nN}\), \(\vec{b}=(b_{1},\ldots,b_{nN})\in[1,N]^{nN}\) satisfying the relation
\[\ell_{i}=\mu_{b_{i}}^{(c_{i})},\qquad\forall\ 1\leqslant i\leqslant nN,\]
and satisfying the properties **(a)** \(\ell_{i}\leqslant\ell_{i+1}\) for all \(1\leqslant i<nN\); **(b)** \(c_{i}<c_{i+1}\) if \(\ell_{i}=\ell_{i+1}\), for all \(1\leqslant i<nN\); **(c)** \(b_{i}\neq b_{j}\) if \(c_{i}=c_{j}\), for all \(1\leqslant i<j\leqslant nN\).
More informally, \(\vec{\ell}\) is the unique vector obtained by sorting the parts of \(\mu\) in increasing order; we refer to it as the _coordinate vector_ of \(\mu\). The vector \(\vec{c}\) records the colours of the parts of \(\mu\) once it has been sorted in increasing order, with an increasing criterion imposed on these colours in the case of ties; we refer to it as the _colour sequence_ of \(\mu\). The vector \(\vec{b}\) has been introduced only for the purpose of making our definitions unambiguous, and plays no role in the rest of the paper.
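For concreteness, the passage from a coloured composition to its coordinate vector and colour sequence is just a stable sort of the parts with ties broken by colour; a minimal sketch, again assuming the list-of-blocks representation used above:

```python
def coordinates_and_colours(mu):
    """Coordinate vector and colour sequence of Definition 6.1.

    `mu` is a list of n colour blocks (colours 1..n), each an increasing
    list of parts.  Parts are sorted in increasing order, ties broken by
    increasing colour; the auxiliary vector b is omitted here.
    """
    tagged = sorted((part, colour + 1)
                    for colour, block in enumerate(mu)
                    for part in block)
    ell = [part for part, _ in tagged]
    c = [colour for _, colour in tagged]
    return ell, c

# Example with n = 2 colours and N = 2 parts per colour:
ell, c = coordinates_and_colours([[0, 3], [0, 2]])
assert ell == [0, 0, 2, 3] and c == [1, 2, 2, 1]
```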
### Starting assumptions and scaling
Throughout the rest of this section we will be concerned with the analysis of the Markov kernel \(\mathbb{P}_{t,1}\left(\nu\to\mu\right)\) as given by (5.17), with \(\nu=0\cup\lambda^{[m]}\) and \(\mu=\lambda^{[m+1]}\), where we have chosen \(\lambda^{[m]}\in\mathcal{S}_{m^{n}}^{+}\) and \(\lambda^{[m+1]}\in\mathcal{S}_{(m+1)^{n}}^{+}\) (we remind the reader that the meaning of these notations is given by Definition 3.2). Under such choices, the kernel (5.17) becomes
\[\mathbb{P}_{t,1}\left(0\cup\lambda^{[m]}\to\lambda^{[m+1]}\right)=q^{-2\left( \psi\left(\lambda^{[m+1]}\right)-\psi\left(0\cup\lambda^{[m]}\right)\right)} \exp\left(-\frac{1-q^{n}}{1-q}t\right)\mathbb{G}_{\lambda^{[m+1]}/0\cup\lambda ^{[m]}}(1)\frac{\mathbb{G}_{\lambda^{[m+1]}}(\mathrm{Pl}_{t})}{\mathbb{G}_{0 \cup\lambda^{[m]}}(\mathrm{Pl}_{t})}.\]
Noting that
\[\mathbb{G}_{0\cup\lambda^{[m]}}(\mathrm{Pl}_{t})\equiv\mathbb{G}_{0\cup \lambda^{[m]}/\Delta}(\mathrm{Pl}_{t})=\mathbb{G}_{\lambda^{[m]}-1}(\mathrm{ Pl}_{t}),\]
where \(\lambda^{[m]}-1\) means subtraction of \(1\) from every part of \(\lambda^{[m]}\), we have that
\[\mathbb{P}_{t,1}\left(0\cup\lambda^{[m]}\to\lambda^{[m+1]}\right)=q^{-2\left( \psi\left(\lambda^{[m+1]}\right)-\psi\left(0\cup\lambda^{[m]}\right)\right)} \exp\left(-\frac{1-q^{n}}{1-q}t\right)\mathbb{G}_{\lambda^{[m+1]}/0\cup\lambda ^{[m]}}(1)\frac{\mathbb{G}_{\lambda^{[m+1]}}(\mathrm{Pl}_{t})}{\mathbb{G}_{ \lambda^{[m]}-1}(\mathrm{Pl}_{t})}. \tag{6.1}\]
We shall make some assumptions concerning the coloured compositions \(\lambda^{[m]}\in\mathcal{S}_{m^{n}}^{+}\) and \(\lambda^{[m+1]}\in\mathcal{S}_{(m+1)^{n}}^{+}\) appearing within this formula. Following Definition 6.1 we represent them in terms of their corresponding coordinate vectors and colour sequences:
\[\lambda^{[m]}\leftrightarrow\left(\ell_{1}^{[m]},\ldots,\ell_{nm}^{[m]}\Big{|}c _{1}^{[m]},\ldots,c_{nm}^{[m]}\right),\qquad\lambda^{[m+1]}\leftrightarrow\left( \ell_{1}^{[m+1]},\ldots,\ell_{n(m+1)}^{[m+1]}\Big{|}c_{1}^{[m+1]},\ldots,c_{n(m+ 1)}^{[m+1]}\right), \tag{6.2}\]
and we work directly with these vectors in what follows. Our first assumption is that the coordinates \(\{\ell_{i}^{[m]}\}_{1\leqslant i\leqslant nm}\) and \(\{\ell_{j}^{[m+1]}\}_{1\leqslant j\leqslant n(m+1)}\) are strictly increasing and obey the interlacing constraints
\[\ell_{j(m+1)+i}^{[m+1]}<\ell_{jm+i}^{[m]}<\ell_{j(m+1)+i+1}^{[m+1]},\qquad \forall\ i\in[1,m],\quad j\in[0,n-1]. \tag{6.3}\]
Informally, this means that the coordinates \(\{\ell_{i}^{[m]}\}_{1\leqslant i\leqslant nm}\) and \(\{\ell_{j}^{[m+1]}\}_{1\leqslant j\leqslant n(m+1)}\) are each grouped into \(n\) bundles of equal size, and coordinates within those bundles interlace; see Figure 1.
We will subsequently see that (6.1) depends on the coordinates \(\{\ell_{i}^{[m]}\}_{1\leqslant i\leqslant nm}\) and \(\{\ell_{j}^{[m+1]}\}_{1\leqslant j\leqslant n(m+1)}\) analytically. Our second assumption will be that these coordinates are analytically continued to real values, by setting
\[\ell_{i}^{[k]}\mapsto q^{n-\lceil i/k\rceil}t+(q^{n-\lceil i/k\rceil}t)^{\frac{ 1}{2}}x_{i}^{[k]},\qquad 1\leqslant i\leqslant nk,\qquad k\in\{m,m+1\}, \tag{6.4}\]
with \(\lceil i/k\rceil\) denoting the ceiling function, and where
\[\left(x_{1}^{[m]}<\cdots<x_{nm}^{[m]}\right)\in\mathbb{R}^{nm},\qquad\left(x_{1}^ {[m+1]}<\cdots<x_{n(m+1)}^{[m+1]}\right)\in\mathbb{R}^{n(m+1)} \tag{6.5}\]
are sequences of reals that obey the interlacing constraints
\[x_{j(m+1)+i}^{[m+1]}<x_{jm+i}^{[m]}<x_{j(m+1)+i+1}^{[m+1]},\qquad\forall\ i\in[1,m ],\quad j\in[0,n-1]. \tag{6.6}\]
Note that (6.6) is simply the translation of the earlier interlacing constraint (6.3) to the real variables that now parametrize our coordinates.
We note that there exist choices of the coordinates \(\{\ell_{j}^{[m+1]}\}_{1\leqslant j\leqslant n(m+1)}\) which violate the constraints (6.3) and yet have non-zero weight in the measure (6.1). We refer to such choices as _unfavourable coordinates_. Our main result will be to show that, under the scaling (6.4), unfavourable coordinates occur with probability tending to \(0\) as \(t\to\infty\). We do this by showing that as \(t\to\infty\) the quantity (6.1) weakly converges to the product of a continuous transition density \(\rho_{\mathrm{GUE}}\left(x^{[m]}\to x^{[m+1]}\right)\) valued on interlacing real sequences (6.5) and a discrete transition probability \(\mathbb{P}_{\mathrm{col}}\left(c^{[m]}\to c^{[m+1]}\right)\) valued on colour sequences (6.2). In demonstrating that the resulting quantity integrates to unity, we prove that (6.4) captures the correct law of large numbers of the coordinates, with \(\rho_{\mathrm{GUE}}\left(x^{[m]}\to x^{[m+1]}\right)\) providing the fluctuations.
### Main result
**Definition 6.2** (GUE corners process).: The Gaussian Unitary Ensemble (GUE) of rank \(m\) is the collection of \(m\times m\) Hermitian matrices \(M=(M_{ij})_{i,j=1}^{m}\), where \(M=(X+X^{*})/2\) and \(X=(X_{ij})_{i,j=1}^{m}\) denotes an \(m\times m\) matrix of i.i.d. complex Gaussian random variables \(X_{ij}\sim\mathcal{N}(0,1)+\mathrm{i}\mathcal{N}(0,1)\). For all \(1\leqslant k\leqslant m\), write the eigenvalues of the \(k\times k\) top-left sub-matrix of \(M\) as \(\theta_{1}^{[k]}\leqslant\cdots\leqslant\theta_{k}^{[k]}\). The joint law of the eigenvalues \(\theta_{i}^{[j]}\), \(1\leqslant i\leqslant j\), \(j\in[1,m]\) is called the _GUE corners process_ of rank \(m\).
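A minimal numerical sketch of this definition (using NumPy, with the normalization exactly as stated above) samples one realization of the corners process and checks the deterministic interlacing of consecutive levels:

```python
import numpy as np

def sample_gue_corners(m, rng=None):
    """Sample the GUE corners process of rank m (Definition 6.2).

    Builds M = (X + X^*)/2 with X_ij ~ N(0,1) + i N(0,1) and returns the
    sorted eigenvalue vectors of the k x k top-left corners, k = 1..m.
    """
    rng = np.random.default_rng() if rng is None else rng
    X = rng.standard_normal((m, m)) + 1j * rng.standard_normal((m, m))
    M = (X + X.conj().T) / 2
    return [np.sort(np.linalg.eigvalsh(M[:k, :k])) for k in range(1, m + 1)]

corners = sample_gue_corners(4, np.random.default_rng(0))
# Consecutive levels interlace: theta^[k] separates theta^[k+1].
for k in range(3):
    lower, upper = corners[k], corners[k + 1]
    assert np.all(upper[:-1] <= lower) and np.all(lower <= upper[1:])
```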
Following [1, Theorem 20.1], one has the following explicit formula for the density of the GUE corners process:
**Proposition 6.3**.: _The array \(\theta_{i}^{[j]}\), \(1\leqslant i\leqslant j\), \(j\in[1,m]\) has joint density_
\[\rho\left(\theta_{i}^{[j]}=x_{i}^{[j]},1\leqslant i\leqslant j\leqslant m \right)=\mathbf{1}_{x^{[1]}\prec\cdots\prec x^{[m]}}\left(\frac{1}{2\pi} \right)^{m/2}\prod_{1\leqslant i<j\leqslant m}(x_{j}^{[m]}-x_{i}^{[m]})\prod _{i=1}^{m}e^{-\frac{1}{2}\left(x_{i}^{[m]}\right)^{2}} \tag{6.7}\]
_with respect to the \(m(m+1)/2\)-dimensional Lebesgue measure._
Proposition 6.3 implies (see also [1, Equation (20.2)]) the conditional probability density for the eigenvalues \(\theta_{i}^{[m+1]}\), \(1\leqslant i\leqslant m+1\) of the \((m+1)\times(m+1)\) top-left sub-matrix, given those of the \(m\times m\) one:
\[\rho\left(\theta_{i}^{[m+1]}=x_{i}^{[m+1]},1\leqslant i\leqslant m+1\Big{|}\theta_{i}^{[m]}=x_{i}^{[m]},1\leqslant i\leqslant m\right)\\ =\mathbf{1}_{x^{[m+1]}\succ x^{[m]}}\frac{1}{(2\pi)^{1/2}}\frac{\prod_{1\leqslant i<j\leqslant m+1}\left(x_{j}^{[m+1]}-x_{i}^{[m+1]}\right)\cdot\prod_{i=1}^{m+1}e^{-\frac{1}{2}\left(x_{i}^{[m+1]}\right)^{2}}}{\prod_{1\leqslant i<j\leqslant m}\left(x_{j}^{[m]}-x_{i}^{[m]}\right)\cdot\prod_{i=1}^{m}e^{-\frac{1}{2}\left(x_{i}^{[m]}\right)^{2}}},\]
and for notational compactness, we shall write
\[\rho_{\mathrm{GUE}}\left(x^{[1]}\prec\cdots\prec x^{[m]}\right) :=\rho\left(\theta_{i}^{[j]}=x_{i}^{[j]},1\leqslant i\leqslant j \leqslant m\right),\] \[\rho_{\mathrm{GUE}}\left(x^{[m]}\to x^{[m+1]}\right) :=\rho\left(\theta_{i}^{[m+1]}=x_{i}^{[m+1]},1\leqslant i \leqslant m+1\Big{|}\theta_{i}^{[m]}=x_{i}^{[m]},1\leqslant i\leqslant m \right).\]
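For later numerical experimentation, the conditional density \(\rho_{\mathrm{GUE}}(x^{[m]}\to x^{[m+1]})\) displayed above can be coded directly; a minimal sketch:

```python
import numpy as np

def rho_gue_transition(x_m, x_m1):
    """Conditional GUE corners density of level m+1 given level m.

    `x_m` and `x_m1` are increasing real vectors of lengths m and m+1.
    Returns 0 unless x_m1 interlaces x_m (x_m1[i] < x_m[i] < x_m1[i+1]).
    """
    x_m, x_m1 = np.asarray(x_m, float), np.asarray(x_m1, float)
    m = len(x_m)
    if not (np.all(x_m1[:m] < x_m) and np.all(x_m < x_m1[1:])):
        return 0.0

    def vandermonde(v):
        return np.prod([v[j] - v[i]
                        for i in range(len(v)) for j in range(i + 1, len(v))])

    def gauss(v):
        return np.exp(-0.5 * np.sum(v ** 2))

    return (vandermonde(x_m1) * gauss(x_m1)
            / (np.sqrt(2 * np.pi) * vandermonde(x_m) * gauss(x_m)))
```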
Figure 1. A schematic representation of successive application of the Markov kernel \(\mathbb{P}_{t,1}^{+}\) to the empty state \(\Delta\), in the case \(n=3\); each vertical unit step corresponds to such an application. As \(t\to\infty\), the paths drift into bundles located a distance \(q^{n-i}t\) from the origin, for \(1\leqslant i\leqslant n\). The \(i\)-th bundle tends to a GUE corners process centred at \(q^{n-i}t\), with fluctuations on the order of \((q^{n-i}t)^{1/2}\).
**Theorem 6.4**.: _In the asymptotic regime described by (6.4), the Markov kernel (6.1) weakly converges to a product of \(n\) independent probability measures with densities in the GUE corners process, multiplied by a factor that depends only on the colour sequences (6.2):_
\[\mathbb{P}_{t,1}\left(0\cup\lambda^{[m]}\to\lambda^{[m+1]}\right)\\ \to\prod_{i=1}^{n}\rho_{\mathrm{GUE}}\left(x_{(i-1)m+1}^{[m]}, \ldots,x_{im}^{[m]}\to x_{(i-1)(m+1)+1}^{[m+1]},\ldots,x_{i(m+1)}^{[m+1]}\right) dx^{[m+1]}\cdot\mathbb{P}_{\mathrm{col}}\left(c^{[m]}\to c^{[m+1]}\right) \tag{6.8}\]
_as \(t\to\infty\), where \(dx^{[m+1]}\) denotes the \(n(m+1)\)-dimensional Lebesgue measure. The final multiplicative factor in (6.8) is given explicitly by equation (7.23) below, and defines a discrete transition probability in a process on colour sequences:_
\[\sum_{c^{[m+1]}}\mathbb{P}_{\mathrm{col}}\left(c^{[m]}\to c^{[m+1]}\right)=1, \tag{6.9}\]
_where the sum is taken over all \(c^{[m+1]}=\left(c_{1}^{[m+1]},\ldots,c_{n(m+1)}^{[m+1]}\right)\in[1,n]^{n(m+1)}\)._
**Corollary 6.5**.: _Let \(\mathbb{P}_{t,N}(\Delta\to\lambda^{[1]}\to\cdots\to\lambda^{[N]})\) denote the joint distribution of coloured compositions \(\lambda^{[1]},\ldots,\lambda^{[N]}\) generated by \(N\) applications of the kernel (6.1) to the trivial state \(\Delta\). In the asymptotic regime described by (6.4), we have the following weak convergence of measures:_
\[\mathbb{P}_{t,N}\left(\Delta\to\lambda^{[1]}\to\cdots\to\lambda^{[ N]}\right)\\ \to\prod_{i=1}^{n}\rho_{\mathrm{GUE}}\left((x^{[1]})_{i}\prec(x^ {[2]})_{i}\prec\cdots\prec(x^{[N]})_{i}\right)dx^{[1,N]}\cdot\mathbb{P}_{ \mathrm{col}}\left(c^{[1]}\prec c^{[2]}\prec\cdots\prec c^{[N]}\right)\]
_as \(t\to\infty\), with \(dx^{[1,N]}=\prod_{i=1}^{N}dx^{[i]}\) denoting the \(nN(N+1)/2\)-dimensional Lebesgue measure. Here we have introduced the shorthand_
\[\left(x^{[k]}\right)_{i}=\left(x_{(i-1)k+1}^{[k]},\ldots,x_{ik}^{[k]}\right), \qquad\forall\ 1\leqslant i\leqslant n,\ \ 1\leqslant k\leqslant N,\]
_and \(\mathbb{P}_{\mathrm{col}}(c^{[1]}\prec c^{[2]}\prec\cdots\prec c^{[N]})\) is a joint distribution on colour sequences given explicitly by (7.24) below._
The remainder of the paper is devoted to the proof of this theorem. Throughout the rest of Section 6, we exhibit the splitting of the Markov kernel (6.1) as shown on the right hand side of (6.8); the proof of the sum-to-unity property (6.9) is deferred to Section 7.
### Functions \(\psi(\lambda^{[m+1]})\) and \(\psi(0\cup\lambda^{[m]})\)
We begin by studying the exponents \(\psi(\lambda^{[m+1]})\) and \(\psi(0\cup\lambda^{[m]})\) that appear within (6.1). Under the set of assumptions (6.4)-(6.5), the coordinates \(\{\ell_{i}^{[m]}\}_{1\leqslant i\leqslant nm}\) and \(\{\ell_{j}^{[m+1]}\}_{1\leqslant j\leqslant n(m+1)}\) are strictly increasing. This makes the computation of \(\psi(\lambda^{[m+1]})\) and \(\psi(0\cup\lambda^{[m]})\) particularly simple; one easily sees that
\[2\psi\left(\lambda^{[m+1]}\right)=\mathrm{inv}\left(c^{[m+1]}\right),\qquad 2 \psi\left(0\cup\lambda^{[m]}\right)=\mathrm{inv}\left((1,...,n)\cup c^{[m]} \right)=\mathrm{inv}\left(c^{[m]}\right)+m\binom{n}{2}, \tag{6.10}\]
where \((1,...,n)\cup c^{[m]}\) means concatenation of the two participating vectors.
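Equation (6.10) rests on the observation that, for strictly increasing coordinates, \(2\psi\) of a coloured composition equals the inversion number of its colour sequence; this is easy to confirm numerically, as in the following minimal sketch (with the same list-of-blocks representation as before):

```python
def inv(c):
    """Number of inversions of a colour sequence c."""
    return sum(1 for p in range(len(c)) for q in range(p + 1, len(c))
               if c[p] > c[q])

def two_psi(mu):
    """Twice the statistic psi of Definition 5.4, for mu given as a list
    of colour blocks."""
    return sum(1 for i in range(len(mu)) for j in range(i + 1, len(mu))
               for a in mu[i] for b in mu[j] if a > b)

# Strictly increasing coordinates, n = 2 colours:
mu = [[1, 4], [2, 7]]                    # colour sequence sorted by part: 1,2,1,2
assert two_psi(mu) == inv([1, 2, 1, 2])  # both equal 1
```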
### Factor \(\mathbb{G}_{\lambda^{[m+1]}/0\cup\lambda^{[m]}}(1)\)
Next, we analyse the quantity \(\mathbb{G}_{\lambda^{[m+1]}/0\cup\lambda^{[m]}}(1)\) within (6.1). It is given by the one-row partition function
\[\mathbb{G}_{\lambda^{[m+1]}/0\cup\lambda^{[m]}}(1)=\text{[one-row partition function whose boundary data encode $0\cup\lambda^{[m]}$ and $\lambda^{[m+1]}$]} \tag{6.11}\]
where within the area marked \(\lambda^{[k]}\), \(k\in\{m,m+1\}\), the vector \(\boldsymbol{e}_{c_{i}^{[k]}}\) is present at coordinate \(\ell_{i}^{[k]}\) (in other words, a path of colour \(c_{i}^{[k]}\) is present at position \(\ell_{i}^{[k]}\)), for all \(1\leqslant i\leqslant kn\). Following (5.3), we have assumed the vertex weights
\[\text{[vertex diagram with incoming states $(\boldsymbol{A},\boldsymbol{B})$, outgoing states $(\boldsymbol{C},\boldsymbol{D})$, and weight]}\qquad\mathbf{1}_{\boldsymbol{C}+\boldsymbol{D}\in\{0,1\}^{n}}\cdot q^{\varphi(\boldsymbol{D},\boldsymbol{C})+\varphi(\boldsymbol{D},\boldsymbol{D})}, \tag{6.12}\]
where the function \(\varphi\) is as defined in (4.3). Let us study each of the factors appearing in (6.12) individually. First, we note that the indicator function \(\mathbf{1}_{\boldsymbol{C}+\boldsymbol{D}\in\{0,1\}^{n}}\) prevents two paths of the same colour from traversing a vertex. Second, the factor \(q^{\varphi(\boldsymbol{D},\boldsymbol{D})}\) assigns one power of \(q\) for every pair of colours which pass through edge \(\boldsymbol{D}\) of a vertex; there are \(\binom{|\boldsymbol{D}|}{2}\) such pairs. Finally, the factor \(q^{\varphi(\boldsymbol{D},\boldsymbol{C})}\) assigns one power of \(q\) to each pair of colours \((i,j)\) passing through edges \((\boldsymbol{D},\boldsymbol{C})\), respectively, with \(i<j\).
Now we examine the contribution of each of the factors in the weights (6.12), when they are multiplied together to form the one-row partition function (6.11). Multiplying all indicator functions \(\mathbf{1}_{\boldsymbol{C}+\boldsymbol{D}\in\{0,1\}^{n}}\) yields the property that paths of the same colour do not intersect; at the level of the coloured compositions \(\lambda^{[m]}\) and \(\lambda^{[m+1]}\), this translates into the condition that
\[\lambda_{j}^{[m+1](i)}<\lambda_{j}^{[m](i)}<\lambda_{j+1}^{[m+1](i)},\qquad \forall\ 1\leqslant i\leqslant n,\quad 1\leqslant j\leqslant m,\]
which we denote simply by writing \(c^{[m]}\prec c^{[m+1]}\).
Multiplying all factors \(q^{\varphi(\boldsymbol{D},\boldsymbol{D})}\) requires us to compute the total number of paths \(d_{i}\) going through the \(i\)-th horizontal edge of the partition function (6.11), for all \(i\geqslant 1\). The total contribution from these factors is then
\[\prod_{i\geqslant 1}q^{\binom{d_{i}}{2}}=\prod_{j=1}^{n}q^{\binom{j}{2}p_{j}},\]
where \(p_{j}\) counts the number of horizontal edges in (6.11) that are occupied by \(j\) paths. It is clear that the set \(\{p_{j}\}_{1\leqslant j\leqslant n}\) depends only on the values of the coordinates \(\{\ell_{i}^{[m]}\}_{1\leqslant i\leqslant nm}\) and \(\{\ell_{i}^{[m+1]}\}_{1\leqslant i\leqslant n(m+1)}\), and not on the colour sequences \(\{c_{i}^{[m]}\}_{1\leqslant i\leqslant nm}\) and \(\{c_{i}^{[m+1]}\}_{1\leqslant i\leqslant n(m+1)}\). In view of the interlacing (6.3) of the coordinates \(\{\ell_{i}^{[m]}\}_{1\leqslant i\leqslant nm}\), \(\{\ell_{i}^{[m+1]}\}_{1\leqslant i\leqslant n(m+1)}\), we may routinely compute \(p_{j}\) for all \(1\leqslant j\leqslant n\). We find that, _cf._ Figure 1,
\[p_{n} =\sum_{k=1}^{m+1}\ell_{k}^{[m+1]}-\sum_{k=1}^{m}\ell_{k}^{[m]},\] \[p_{n-j} =\sum_{k=1}^{m+1}\left(\ell_{j(m+1)+k}^{[m+1]}-\ell_{(j-1)(m+1)+k }^{[m+1]}\right)-\sum_{k=1}^{m}\left(\ell_{jm+k}^{[m]}-\ell_{(j-1)m+k}^{[m]} \right),\qquad\forall\ j\in[1,n-1].\]
We then have
\[\sum_{j=1}^{n}\binom{j}{2}p_{j} =\sum_{j=1}^{n}\left[\binom{n-j+1}{2}-\binom{n-j}{2}\right]\sum_ {k=1}^{m+1}\ell_{(j-1)(m+1)+k}^{[m+1]}-\sum_{j=1}^{n}\left[\binom{n-j+1}{2}- \binom{n-j}{2}\right]\sum_{k=1}^{m}\ell_{(j-1)m+k}^{[m]}\] \[=\sum_{j=1}^{n}(n-j)\sum_{k=1}^{m+1}\ell_{(j-1)(m+1)+k}^{[m+1]}- \sum_{j=1}^{n}(n-j)\sum_{k=1}^{m}\ell_{(j-1)m+k}^{[m]}\]
as the total exponent of \(q\) coming from factors of the form \(q^{\varphi(\boldsymbol{D},\boldsymbol{D})}\).
Finally we need to examine the contribution from all factors \(q^{\varphi(\boldsymbol{D},\boldsymbol{C})}\), when multiplying the weights of all vertices in the row (6.11). In direct contrast to the factors \(q^{\varphi(\boldsymbol{D},\boldsymbol{D})}\), this contribution only depends on the colour sequences \(\{c_{i}^{[m]}\}_{1\leqslant i\leqslant nm}\) and \(\{c_{i}^{[m+1]}\}_{1\leqslant i\leqslant n(m+1)}\), and not on the coordinates \(\{\ell_{i}^{[m]}\}_{1\leqslant i\leqslant nm}\) and
\(\{\ell_{i}^{[m+1]}\}_{1\leqslant i\leqslant n(m+1)}\). Rather than attempting to write down an explicit formula for this contribution, we express it in terms of the following diagram17:
Footnote 17: This diagram is not a partition function in the traditional sense, however it turns out to be quite expedient for our subsequent needs.
In the first step, we use the action (3.20) of the Hecke generators to express the function \(f_{\tilde{\lambda}^{[m]}-1}\) in terms of \(f_{\ell^{[m]}-1}\), noting that \(\ell^{[m]}\) is just obtained by sorting the parts of \(\tilde{\lambda}^{[m]}\) in increasing order. We have
\[\mathbb{G}_{\lambda^{[m]}-1}(\mathrm{Pl}_{t}) =\frac{q^{nm(nm+1)/2}}{(q-1)^{nm}}\cdot\left(\frac{1}{2\pi\hat{ \mathtt{i}}}\right)^{nm}\oint_{C_{1}}\frac{dy_{1}}{y_{1}}\cdots\oint_{C_{nm}} \frac{dy_{nm}}{y_{nm}}\] \[\times\prod_{1\leqslant i<j\leqslant nm}\left(\frac{y_{j}-y_{i}} {y_{j}-qy_{i}}\right)(T_{\sigma}\cdot f_{\ell^{[m]}-1})(1^{nm};y_{1}^{-1}, \ldots,y_{nm}^{-1})g_{\Delta}(m^{n};y_{1},\ldots,y_{nm})\prod_{j=1}^{nm}e^{ty_{ j}}, \tag{6.16}\]
where we have denoted \(T_{\sigma}=T_{a_{1}}\ldots T_{a_{p}}\) with \(T_{a}\) given by (3.18), and where \((a_{1},\ldots,a_{p})\in[1,nm)^{p}\) is a minimal-length word such that
\[\mathfrak{s}_{a_{1}}\cdots\mathfrak{s}_{a_{p}}\cdot\ell^{[m]}=\tilde{\lambda} ^{[m]}. \tag{6.17}\]
Using the property (3.27) of Hecke generators, and the fact that the product \(\prod_{j=1}^{nm}e^{ty_{j}}\) is symmetric with respect to \((y_{1},\ldots,y_{nm})\), we may recast this as
\[\mathbb{G}_{\lambda^{[m]}-1}(\mathrm{Pl}_{t}) =\frac{q^{nm(nm+1)/2}}{(q-1)^{nm}}\cdot\left(\frac{1}{2\pi\hat{ \mathtt{i}}}\right)^{nm}\oint_{C_{1}}\frac{dy_{1}}{y_{1}}\cdots\oint_{C_{nm}} \frac{dy_{nm}}{y_{nm}}\] \[\times\prod_{1\leqslant i<j\leqslant nm}\left(\frac{y_{j}-y_{i}} {y_{j}-qy_{i}}\right)f_{\ell^{[m]}-1}(1^{nm};y_{1}^{-1},\ldots,y_{nm}^{-1})( \tilde{T}_{\sigma}\cdot g_{\Delta})(m^{n};y_{1},\ldots,y_{nm})\prod_{j=1}^{nm }e^{ty_{j}}, \tag{6.18}\]
where we have denoted \(\tilde{T}_{\sigma}=\tilde{T}_{a_{p}}\ldots\tilde{T}_{a_{1}}\) with \(\tilde{T}_{a}\) given by (3.19), and where the word \((a_{1},\ldots,a_{p})\) is specified as in (6.17).
Finally, we note that if the coordinates \(\ell^{[m]}\) may be reordered to yield \(\tilde{\lambda}^{[m]}\) as in (6.17), it also follows that the corresponding colour sequence \(c^{[m]}\) reorders according to the rule
\[\mathfrak{s}_{a_{1}}\cdots\mathfrak{s}_{a_{p}}\cdot c^{[m]}=(1^{m},2^{m}, \ldots,n^{m}),\]
or equivalently, \(\mathfrak{s}_{a_{p}}\cdots\mathfrak{s}_{a_{1}}\cdot(1^{m},2^{m},\ldots,n^{m})= c^{[m]}\). Using this relation in (6.18), together with the action (3.23) of Hecke generators on the function \(g_{\Delta}(m^{n};y_{1},\ldots,y_{nm})\), we recover the formula
\[\mathbb{G}_{\lambda^{[m]}-1}(\mathrm{Pl}_{t})=\frac{q^{nm(nm+1)/ 2}}{(q-1)^{nm}}\cdot\left(\frac{1}{2\pi\hat{\mathtt{i}}}\right)^{nm}\oint_{C_ {1}}\frac{dy_{1}}{y_{1}}\cdots\oint_{C_{nm}}\frac{dy_{nm}}{y_{nm}}\\ \times\prod_{1\leqslant i<j\leqslant nm}\left(\frac{y_{j}-y_{i}} {y_{j}-qy_{i}}\right)\prod_{i=1}^{nm}y_{i}^{-\ell^{[m]}_{i}+1}g_{\Delta}^{c^{[ m]}}(m^{n};y_{1},\ldots,y_{nm})\prod_{j=1}^{nm}e^{ty_{j}}, \tag{6.19}\]
where \(g_{\Delta}^{c^{[m]}}(m^{n};y_{1},\ldots,y_{nm})\) denotes a permuted-boundary function of the form (3.15), and where we have also used the fact that \(f_{\ell^{[m]}-1}(1^{nm};y_{1}^{-1},\ldots,y_{nm}^{-1})\) factorizes as in (3.10) with \(s=0\), since \(\ell^{[m]}\) is increasing. The formula (6.19) explicitly separates the coordinates \(\ell^{[m]}\) and the colour sequence \(c^{[m]}\); in view of the analytic dependence on the former, we now use it to carry out steepest descent asymptotics.
#### 6.6.2. \(t\to\infty\) asymptotics via steepest descent
In this section we compute the \(t\to\infty\) asymptotics of the quantities
\[\tilde{H}_{\lambda^{[m]}}(t):=\prod_{j=1}^{n}\prod_{k=1}^{m}q^{(n-j)\ell^{[m]}_{(j-1)m+k}}\cdot\mathbb{G}_{\lambda^{[m]}-1}(\mathrm{Pl}_{t}), \tag{6.20}\] \[H_{\lambda^{[m+1]}}(t):=\prod_{j=1}^{n}\prod_{k=1}^{m+1}q^{(n-j)\ell^{[m+1]}_{(j-1)(m+1)+k}}\cdot\mathbb{G}_{\lambda^{[m+1]}}(\mathrm{Pl}_{t}), \tag{6.21}\]
under the assumption that the coordinates \(\{\ell_{i}^{[m]}\}_{1\leqslant i\leqslant nm}\) and \(\{\ell_{i}^{[m+1]}\}_{1\leqslant i\leqslant n(m+1)}\) scale as (6.4). Note that, by virtue of (6.10) and the expression (6.14), the Markov kernel (6.1) may be expressed as
\[\mathbb{P}_{t,1}\left(0\cup\lambda^{[m]}\to\lambda^{[m+1]}\right)\\ =\mathbf{1}_{c^{[m]}\prec c^{[m+1]}}\cdot q^{\mathrm{inv}(c^{[m] })-\mathrm{inv}(c^{[m+1]})+m\binom{n}{2}}\Upsilon\left(c^{[m]};c^{[m+1]} \right)\frac{H_{\lambda^{[m+1]}}(t)}{\tilde{H}_{\lambda^{[m]}}(t)}\exp\left(- \frac{1-q^{n}}{1-q}t\right) \tag{6.22}\]
where the colour sequences \(c^{[m]}\), \(c^{[m+1]}\) are independent of \(t\); this means that the \(t\to\infty\) asymptotics of our Markov kernel is indeed recovered by analysis of (6.20) and (6.21).
We lighten our notation by writing \(\ell_{i}^{[m]}\equiv\ell_{i}\) and \(x_{i}^{[m]}\equiv x_{i}\) for all \(1\leqslant i\leqslant nm\). Distributing the \(q\)-dependent prefactor in (6.20) within the integral (6.19), we have
\[\tilde{H}_{\lambda^{[m]}}(t)=\frac{q^{nm(nm+1)/2}}{(q-1)^{nm}} \cdot\left(\frac{1}{2\pi\mathrm{i}}\right)^{nm}\oint_{C_{1}}dy_{1}\cdots \oint_{C_{nm}}dy_{nm}\\ \times\prod_{i=1}^{nm}\left(\frac{Q_{i}}{y_{i}}\right)^{\ell_{i} }e^{ty_{i}}\prod_{1\leqslant i<j\leqslant nm}\left(\frac{y_{j}-y_{i}}{y_{j}-qy _{i}}\right)g_{\Delta}^{c^{[m]}}(m^{n};y_{1},\ldots,y_{nm}), \tag{6.23}\]
where we have defined the vector \(\vec{Q}\in\mathbb{C}^{nm}\) by
\[\vec{Q}=(Q_{1},\ldots,Q_{nm})=\underbrace{(q^{n-1},\ldots,q^{n-1})}_{m\text{ times}}\cup\cdots\cup\underbrace{(q,\ldots,q)}_{m\text{ times}}\cup\underbrace{(1,\ldots,1)}_{m\text{ times}}. \tag{6.24}\]
Using the formula (6.4) and the notation (6.24), the coordinates \(\ell_{i}\) are written as \(\ell_{i}=Q_{i}t+(Q_{i}t)^{\frac{1}{2}}x_{i}\) for all \(1\leqslant i\leqslant nm\). Making use of this, the univariate factors in the integrand of (6.23) read
\[\left(\frac{Q_{i}}{y_{i}}\right)^{\ell_{i}}e^{ty_{i}}=\exp\left[ty_{i}-\ell_{i }\log y_{i}+\ell_{i}\log Q_{i}\right]=\exp\left[t(y_{i}-Q_{i}\log y_{i}+Q_{i} \log Q_{i})+O(t^{1/2})\right],\quad\text{as }\ t\to\infty. \tag{6.25}\]
The \(t\to\infty\) behaviour of (6.23) may now be recovered from steepest descent analysis applied to each of the \(nm\) integrals. Neglecting for the moment the \(O(t^{1/2})\) term above (which gives a sub-leading contribution to the \(t\to\infty\) behaviour), we evaluate the critical point18 of the function \(y_{i}-Q_{i}\log y_{i}+Q_{i}\log Q_{i}\), which is found to be \(y_{i}=Q_{i}\). Computing the corresponding Taylor series about this point, we have that
Footnote 18: The critical point is the value where the first derivative with respect to \(y_{i}\) vanishes.
\[\left(\frac{Q_{i}}{y_{i}}\right)^{\ell_{i}}e^{ty_{i}}=\exp\left[t\left(Q_{i}+ \frac{(y_{i}-Q_{i})^{2}}{2Q_{i}}+O(y_{i}-Q_{i})^{3}\right)+O(t^{1/2})\right], \quad\text{as }\ t\to\infty,\]
in a neighbourhood of the point \(y_{i}=Q_{i}\). Following standard steepest descent analysis, the dominant contribution to the \(t\to\infty\) asymptotics of the integral (6.23) is obtained by deforming each contour \(C_{i}\) to pass through \(Q_{i}\)19, and reducing the resulting contour integrals to line integrals over small segments \(D_{i}\subset C_{i}\) travelling through \(Q_{i}\) and traversed in the direction where the function \(\frac{(y_{i}-Q_{i})^{2}}{2Q_{i}}\) has zero imaginary part. Accordingly20, we may write
Footnote 19: We require that each integration contour \(C_{i}\) may be freely deformed to a contour passing through the corresponding critical point \(Q_{i}\), along which the real part of the exponent decreases as one travels away from \(Q_{i}\). One choice that meets this requirement is to take the \(C_{i}\) to be concentric circles of radii \(Q_{i}\), for all \(1\leqslant i\leqslant nm\).
Footnote 20: We will be slightly informal here, omitting the proofs of the tail estimates justifying the below approximations. The latter are fairly standard, and have already been applied numerous times in the literature.
\[\tilde{H}_{\lambda^{[m]}}(t)\sim\frac{q^{nm(nm+1)/2}}{(q-1)^{nm} }\cdot\left(\frac{1}{2\pi\mathrm{i}}\right)^{nm}\int_{Q_{1}-\mathrm{i}\epsilon}^ {Q_{1}+\mathrm{i}\epsilon}dy_{1}\cdots\int_{Q_{nm}-\mathrm{i}\epsilon}^{Q_{ nm}+\mathrm{i}\epsilon}dy_{nm}\\ \prod_{i=1}^{nm}\left(\frac{Q_{i}}{y_{i}}\right)^{\ell_{i}}e^{ty_{ i}}\prod_{1\leqslant i<j\leqslant nm}\left(\frac{y_{j}-y_{i}}{y_{j}-qy_{i}} \right)g_{\Delta}^{c^{[m]}}(m^{n};y_{1},\ldots,y_{nm}),\quad\text{as }\ t\to\infty, \tag{6.26}\]
where \(\epsilon\) is a small positive real number. Now switching to the local variables \(y_{i}=Q_{i}+z_{i}t^{-\frac{1}{2}}\), the univariate factors in (6.26) become
\[\left(\frac{Q_{i}}{y_{i}}\right)^{\ell_{i}}e^{ty_{i}}\Bigg{|}_{y_{i}=Q_{i}+z_{i }t^{-\frac{1}{2}}}=\exp\left[t(Q_{i}+z_{i}t^{-\frac{1}{2}})-(Q_{i}t+(Q_{i}t)^{ \frac{1}{2}}x_{i})\log(Q_{i}+z_{i}t^{-\frac{1}{2}})+(Q_{i}t+(Q_{i}t)^{\frac{1}{ 2}}x_{i})\log Q_{i}\right],\]
and using the fact that
\[\log(Q+\epsilon)=\log(Q)+\frac{\epsilon}{Q}-\frac{1}{2}\frac{\epsilon^{2}}{Q ^{2}}+O(\epsilon^{3}),\quad\text{as}\ \ \epsilon\to 0,\]
we have
\[\left(\frac{Q_{i}}{y_{i}}\right)^{\ell_{i}}e^{ty_{i}}\Bigg{|}_{y_{ i}=Q_{i}+z_{i}t^{-\frac{1}{2}}}\] \[=\exp\left[t(Q_{i}+z_{i}t^{-\frac{1}{2}})-(Q_{i}t+(Q_{i}t)^{ \frac{1}{2}}x_{i})\left(\log Q_{i}+\frac{z_{i}t^{-\frac{1}{2}}}{Q_{i}}-\frac{ 1}{2}\frac{z_{i}^{2}t^{-1}}{Q_{i}^{2}}+O(t^{-\frac{3}{2}})\right)+(Q_{i}t+(Q_ {i}t)^{\frac{1}{2}}x_{i})\log Q_{i}\right]\] \[=\exp\left[Q_{i}t-\frac{x_{i}z_{i}}{Q_{i}^{\frac{1}{2}}}+\frac{z_ {i}^{2}}{2Q_{i}}+O(t^{-\frac{1}{2}})\right],\quad\text{as}\ \ t\to\infty.\]
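The expansion just obtained is straightforward to confirm numerically; the following minimal sketch (with arbitrarily chosen illustrative values of \(Q\), \(x\) and a complex \(z\)) checks that the difference between the exact exponent of (6.25) in the local variable and its claimed limit decays like \(t^{-1/2}\).

```python
import numpy as np

def exact_exponent(t, Q, x, z):
    """log of (Q/y)^ell * e^{t y} at y = Q + z t^{-1/2}, with
    ell = Q t + (Q t)^{1/2} x, as in (6.25)."""
    y = Q + z / np.sqrt(t)
    ell = Q * t + np.sqrt(Q * t) * x
    return t * y - ell * np.log(y) + ell * np.log(Q)

def limiting_exponent(t, Q, x, z):
    return Q * t - x * z / np.sqrt(Q) + z ** 2 / (2 * Q)

Q, x, z = 0.4, 1.3, 0.7 + 0.2j
for t in [1e2, 1e4, 1e6]:
    err = abs(exact_exponent(t, Q, x, z) - limiting_exponent(t, Q, x, z))
    # The error decays like t^{-1/2}: multiplying by sqrt(t) stays bounded.
    print(t, err, err * np.sqrt(t))
```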
Turning to other terms in the integral (6.26), we have
\[\frac{y_{j}-y_{i}}{y_{j}-qy_{i}}\Bigg{|}_{\vec{y}=\vec{Q}+\vec{z}t^{-\frac{1}{ 2}}}=\frac{Q_{j}-Q_{i}+z_{j}t^{-1/2}-z_{i}t^{-1/2}}{Q_{j}-qQ_{i}+z_{j}t^{-1/2} -qz_{i}t^{-1/2}},\qquad 1\leqslant i<j\leqslant nm. \tag{6.27}\]
Using the fact that \(Q_{i}=q^{n-\lceil i/m\rceil}\), we see that \(Q_{j}-qQ_{i}\) in the denominator of (6.27) is always nonzero; on the other hand, \(Q_{j}-Q_{i}\) in the numerator vanishes whenever \(\lceil i/m\rceil=\lceil j/m\rceil\) (and is nonzero otherwise). The \(t\to\infty\) behaviour of (6.27) then splits into two cases:
\[\frac{y_{j}-y_{i}}{y_{j}-qy_{i}}\Bigg{|}_{\vec{y}=\vec{Q}+\vec{z}t^{-\frac{1} {2}}}\sim\frac{1}{q^{n-\lceil j/m\rceil}-q^{n-\lceil i/m\rceil+1}}\times \left\{\begin{array}{ll}t^{-1/2}(z_{j}-z_{i}),&\lceil i/m\rceil=\lceil j/m \rceil,\\ \\ q^{n-\lceil j/m\rceil}-q^{n-\lceil i/m\rceil},&\lceil i/m\rceil\neq\lceil j/m \rceil,\end{array}\right.\]
and multiplying these factors over all indices \(1\leqslant i<j\leqslant nm\), we have
\[\prod_{1\leqslant i<j\leqslant nm}\left(\frac{y_{j}-y_{i}}{y_{j} -qy_{i}}\right)\Bigg{|}_{\vec{y}=\vec{Q}+\vec{z}t^{-\frac{1}{2}}}\sim t^{- \frac{n}{2}\binom{m}{2}}(1-q)^{-n\binom{m}{2}}\prod_{j=1}^{n}q^{(j-n)\binom{m }{2}}\\ \times\prod_{i=0}^{n-1}\prod_{1\leqslant j<k\leqslant m}(z_{im+k }-z_{im+j})\prod_{1\leqslant i<j\leqslant n}\left(\frac{q^{n-j}-q^{n-i}}{q^{n-j }-q^{n-i+1}}\right)^{m^{2}}.\]
After accounting for telescopic cancellations in the final product of this expression, it may be written as
\[\prod_{1\leqslant i<j\leqslant n}\left(\frac{q^{n-j}-q^{n-i}}{q^{n-j}-q^{n-i+1 }}\right)^{m^{2}}=\left[\frac{(1-q)^{n}}{(q;q)_{n}}\right]^{m^{2}}\]
and accordingly we have that
\[\prod_{1\leqslant i<j\leqslant nm}\left(\frac{y_{j}-y_{i}}{y_{j}-qy_{i}}\right)\Bigg{|}_{\vec{y}=\vec{Q}+\vec{z}t^{-\frac{1}{2}}}\sim t^{-\frac{n}{2}\binom{m}{2}}(1-q)^{-n\binom{m}{2}}q^{-\binom{n}{2}\binom{m}{2}}\left[\frac{(1-q)^{n}}{(q;q)_{n}}\right]^{m^{2}}\prod_{i=0}^{n-1}\prod_{1\leqslant j<k\leqslant m}(z_{im+k}-z_{im+j}),\]
as \(t\to\infty\). Under the change to local variables, the remaining piece of the integral (6.26) (which is polynomial in the variables \(y_{1},\ldots,y_{nm}\)) becomes
\[g_{\Delta}^{c^{[m]}}(m^{n};y_{1},\ldots,y_{nm})\Big{|}_{\vec{y}=\vec{Q}+\vec{z}t^{-\frac{1}{2}}}\sim g_{\Delta}^{c^{[m]}}(m^{n};\vec{Q}),\qquad\text{as}\ \ t\to\infty.\]
Combining everything (including the factor \(t^{-nm/2}\) arising from the change of integration variables \(dy_{i}=t^{-1/2}dz_{i}\)), we read off the \(t\to\infty\) asymptotic behaviour of \(\tilde{H}_{\lambda^{[m]}}(t)\):
\[\tilde{H}_{\lambda^{[m]}}(t)\sim t^{-nm/2}\exp\left[\frac{1-q^{n}}{ 1-q}mt\right]t^{-\frac{n}{2}\binom{m}{2}}(1-q)^{-n\binom{m}{2}}q^{-\binom{n}{2} \binom{m}{2}}\left[\frac{(1-q)^{n}}{(q;q)_{n}}\right]^{m^{2}}g_{\Delta}^{c^{[m ]}}(m^{n};\vec{Q})\] \[\times\frac{q^{nm(nm+1)/2}}{(q-1)^{nm}}\cdot\left(\frac{1}{2\pi \hat{\mathtt{i}}}\right)^{nm}\int_{-\mathfrak{i}\infty}^{\mathfrak{i}\infty}dz_ {1}\cdots\int_{-\mathfrak{i}\infty}^{\mathfrak{i}\infty}dz_{nm}\prod_{i=1}^{nm }\exp\left[-\frac{x_{i}z_{i}}{Q_{i}^{1/2}}+\frac{z_{i}^{2}}{2Q_{i}}\right] \prod_{i=0}^{n-1}\prod_{1\leqslant j<k\leqslant m}(z_{im+k}-z_{im+j}). \tag{6.28}\]
This simplifies to yield
\[\tilde{H}_{\lambda^{[m]}}(t) \sim(-1)^{nm}t^{-\frac{n}{2}\binom{m+1}{2}}q^{\binom{nm+1}{2}- \frac{1}{2}\binom{m+1}{2}\binom{n}{2}+m\binom{n}{2}}\exp\left[\frac{1-q^{n}}{ 1-q}mt\right]\frac{(1-q)^{n\binom{m}{2}}}{(q;q)_{n}^{m^{2}}}g_{\Delta}^{c^{[m ]}}(m^{n};\vec{Q})\] \[\times\left(\frac{1}{2\pi\hat{\mathtt{i}}}\right)^{nm}\int_{- \mathfrak{i}\infty}^{\mathfrak{i}\infty}dz_{1}\cdots\int_{-\mathfrak{i}\infty }^{\mathfrak{i}\infty}dz_{nm}\prod_{i=1}^{nm}\exp\left[-x_{i}z_{i}+\frac{z_{i} ^{2}}{2}\right]\prod_{i=0}^{n-1}\prod_{1\leqslant j<k\leqslant m}(z_{im+k}-z_{ im+j}), \tag{6.29}\]
where we have rescaled the integration variables \(z_{i}\mapsto Q_{i}^{1/2}z_{i}\) to obtain the final formula.
Repeating these steps, one easily finds that
\[H_{\lambda^{[m+1]}}(t)=\left(q^{-m\binom{n}{2}}\tilde{H}_{\lambda^{[m]}}(t) \right)_{m\mapsto m+1}, \tag{6.30}\]
which is to be interpreted as taking the right hand side of (6.29), dividing it by \(q^{m\binom{n}{2}}\), and then replacing all instances of \(m\) by \(m+1\) in the obvious way.
The removal of the factor \(q^{m\binom{n}{2}}\) requires the following justification. From (6.20) and (6.21), we see that \(\tilde{H}_{\lambda^{[m]}}(t)\) depends on the function \(\mathbb{G}_{\lambda^{[m]}-1}(\mathrm{Pl}_{t})\) rather than \(\mathbb{G}_{\lambda^{[m]}}(\mathrm{Pl}_{t})\), the latter being the desired quantity that leads to \(H_{\lambda^{[m+1]}}(t)\) after the \(m\mapsto m+1\) relabelling. Consulting the integral formula (6.19), we see that the only difference between \(\mathbb{G}_{\lambda^{[m]}-1}(\mathrm{Pl}_{t})\) and \(\mathbb{G}_{\lambda^{[m]}}(\mathrm{Pl}_{t})\) is that the integrand used for the former contains an extra factor of \(\prod_{i=1}^{nm}y_{i}\) compared with that of the latter. Carrying through steepest descent analysis of \(\mathbb{G}_{\lambda^{[m]}}(\mathrm{Pl}_{t})\) therefore results in an overall factor \(\prod_{i=1}^{nm}Q_{i}=q^{m\binom{n}{2}}\) less compared with the calculations above, which is the reason that we divide out this factor in (6.30).
#### 6.6.3. Factorization into GUE corners
Up to the multiplicative terms in the first line, equation (6.29) reveals the factorization of our starting integral (6.23) into \(n\) identical \(m\)-dimensional integrals of the form
\[I(x_{1},\ldots,x_{m}) =\left(\frac{1}{2\pi\hat{\mathtt{i}}}\right)^{m}\int_{\mathfrak{i}\cdot\mathbb{R}}dz_{1}\cdots\int_{\mathfrak{i}\cdot\mathbb{R}}dz_{m}\prod_{1\leqslant i<j\leqslant m}(z_{j}-z_{i})\prod_{i=1}^{m}e^{-x_{i}z_{i}+\frac{1}{2}z_{i}^{2}},\] \[=\prod_{i=1}^{m}e^{-\frac{1}{2}x_{i}^{2}}\left(\frac{1}{2\pi\hat{\mathtt{i}}}\right)^{m}\int_{\mathfrak{i}\cdot\mathbb{R}}dz_{1}\cdots\int_{\mathfrak{i}\cdot\mathbb{R}}dz_{m}\prod_{1\leqslant i<j\leqslant m}(z_{j}-z_{i})\prod_{i=1}^{m}e^{\frac{1}{2}(z_{i}-x_{i})^{2}}. \tag{6.31}\]
It is possible to explicitly evaluate the integral (6.31), as we now show. Replacing the Vandermonde factor in (6.31) by its determinant form and using the multilinearity of the determinant, we recover
\[I(x_{1},\ldots,x_{m}) =\prod_{i=1}^{m}e^{-\frac{1}{2}x_{i}^{2}}\left(\frac{1}{2\pi\hat{\mathtt{i}}}\right)^{m}\int_{\mathfrak{i}\cdot\mathbb{R}}dz_{1}\cdots\int_{\mathfrak{i}\cdot\mathbb{R}}dz_{m}\det_{1\leqslant i,j\leqslant m}(z_{i}^{j-1})\prod_{i=1}^{m}e^{\frac{1}{2}(z_{i}-x_{i})^{2}},\] \[=\prod_{i=1}^{m}e^{-\frac{1}{2}x_{i}^{2}}\det_{1\leqslant i,j\leqslant m}\left(\frac{1}{2\pi\hat{\mathtt{i}}}\int_{\mathfrak{i}\cdot\mathbb{R}}z^{j-1}e^{\frac{1}{2}(z-x_{i})^{2}}dz\right). \tag{6.32}\]
Making the change of integration variables \(\mathfrak{i}u=z-x_{i}\) within the second line of (6.32), we have that
\[I(x_{1},\ldots,x_{m})=\prod_{i=1}^{m}e^{-\frac{1}{2}x_{i}^{2}}\det_{1\leqslant i,j\leqslant m}\left(\frac{1}{2\pi}\int_{-\infty+\mathfrak{i}x_{i}}^{\infty+ \mathfrak{i}x_{i}}(\mathfrak{i}u+x_{i})^{j-1}e^{-\frac{1}{2}u^{2}}du\right).\]
Expanding the factor \((\mathtt{i}u+x_{i})^{j-1}\) as a polynomial in \(x_{i}\), this becomes
\[I(x_{1},\ldots,x_{m})=\prod_{i=1}^{m}e^{-\frac{1}{2}x_{i}^{2}}\det_{1\leqslant i,j\leqslant m}\left(\frac{x_{i}^{j-1}}{2\pi}\int_{-\infty+ix_{i}}^{\infty+ix_{i }}e^{-\frac{1}{2}u^{2}}du+O(x_{i}^{j-2})\right),\]
and the polynomial term of the form \(O(x_{i}^{j-2})\) can be removed by elementary column transformations. The final result is thus
\[I(x_{1},\ldots,x_{m})=\left(\frac{1}{2\pi}\right)^{\frac{m}{2}}\prod_{i=1}^{m} e^{-\frac{1}{2}x_{i}^{2}}\prod_{1\leqslant i<j\leqslant m}(x_{j}-x_{i}).\]
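As a numerical sanity check of this evaluation (not part of the argument), one can compare the closed form with a direct quadrature of (6.31) for \(m=2\), parametrizing the contours as \(z_{j}=\mathrm{i}u_{j}\) with \(u_{j}\in\mathbb{R}\); the grid parameters below are illustrative choices.

```python
import numpy as np

def I_numeric(x1, x2, L=10.0, N=801):
    """Evaluate (6.31) for m = 2 by quadrature along z_j = i*u_j, u_j real.

    With z_j = i*u_j one has dz_1 dz_2/(2*pi*i)^2 = du_1 du_2/(2*pi)^2 and
    the Vandermonde factor becomes i*(u_2 - u_1)."""
    u = np.linspace(-L, L, N)
    du = u[1] - u[0]
    u1, u2 = np.meshgrid(u, u, indexing="ij")
    integrand = 1j * (u2 - u1) * np.exp(-1j * x1 * u1 - u1 ** 2 / 2
                                        - 1j * x2 * u2 - u2 ** 2 / 2)
    return integrand.sum() * du ** 2 / (2 * np.pi) ** 2

def I_closed_form(x1, x2):
    return (1 / (2 * np.pi)) * np.exp(-(x1 ** 2 + x2 ** 2) / 2) * (x2 - x1)

x1, x2 = -0.3, 1.1
assert abs(I_numeric(x1, x2) - I_closed_form(x1, x2)) < 1e-8
```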
### Final formula
We are now in a position to write the full asymptotic behaviour of the Markov kernel \(\mathbb{P}_{t,1}(0\cup\lambda^{[m]}\to\lambda^{[m+1]})\) as \(t\to\infty\). Using (6.22) with \(\tilde{H}_{\lambda^{[m]}}(t)\) given by

\[\tilde{H}_{\lambda^{[m]}}(t)\sim(-1)^{nm}t^{-\frac{n}{2}\binom{m+1}{2}}q^{\binom{nm+1}{2}-\frac{1}{2}\binom{m+1}{2}\binom{n}{2}+m\binom{n}{2}}\] \[\times\exp\left[\frac{1-q^{n}}{1-q}mt\right]\frac{(1-q)^{n\binom{m}{2}}}{(q;q)_{n}^{m^{2}}}g_{\Delta}^{c^{[m]}}(m^{n};\vec{Q}^{[m]})\prod_{i=0}^{n-1}I\left(x_{im+1}^{[m]},\ldots,x_{(i+1)m}^{[m]}\right)\]
and \(H_{\lambda^{[m+1]}}(t)\) given by (6.30), we obtain
\[\mathbb{P}_{t,1}(0\cup\lambda^{[m]}\to\lambda^{[m+1]})\sim\mathbf{1}_{c^{[m]}\prec c^{[m+1]}}\Upsilon\left(c^{[m]};c^{[m+1]}\right)q^{\mathrm{inv}(c^{[m]})-\mathrm{inv}(c^{[m+1]})}\cdot(-1)^{n}t^{-\frac{n}{2}(m+1)}\times\\ q^{\binom{nm+n+1}{2}-\binom{nm+1}{2}-\frac{1}{2}(m+1)\binom{n}{2}}\frac{(1-q)^{nm}}{(q;q)_{n}^{2m+1}}\cdot\frac{g_{\Delta}^{c^{[m+1]}}\left((m+1)^{n};\vec{Q}^{[m+1]}\right)}{g_{\Delta}^{c^{[m]}}\left(m^{n};\vec{Q}^{[m]}\right)}\prod_{i=1}^{n}\frac{I\left(x_{(i-1)(m+1)+1}^{[m+1]},\ldots,x_{i(m+1)}^{[m+1]}\right)}{I\left(x_{(i-1)m+1}^{[m]},\ldots,x_{im}^{[m]}\right)}, \tag{6.33}\]
as \(t\to\infty\). Recall from (6.4) that there is a factor of \((q^{n-\lceil i/(m+1)\rceil}t)^{1/2}\) present in the change of variables from \(\ell_{i}^{[m+1]}\in\mathbb{Z}\) to \(x_{i}^{[m+1]}\in\mathbb{R}\), for all \(1\leqslant i\leqslant n(m+1)\). In order to obtain transition densities valid on the scale of the \(x_{i}^{[m+1]}\) variables, we must multiply the above formula by the product of all such factors; namely, by
\[t^{\frac{n}{2}(m+1)}\prod_{i=1}^{n(m+1)}(q^{n-\lceil i/(m+1)\rceil})^{1/2}=t^ {\frac{n}{2}(m+1)}q^{\frac{1}{2}(m+1)\binom{n}{2}}.\]
We then read off the result
\[\mathbb{P}_{t,1}(0\cup\lambda^{[m]}\to\lambda^{[m+1]})\to\prod_{i=1}^{n}\rho_ {\mathrm{GUE}}\left(x_{(i-1)m+1}^{[m]},\ldots,x_{im}^{[m]}\to x_{(i-1)(m+1)+1}^ {[m+1]},\ldots,x_{i(m+1)}^{[m+1]}\right)dx^{[m+1]}\\ \times\mathbf{1}_{c^{[m]}\sim c^{[m+1]}}(-1)^{n}\Upsilon\left(c^{[m ]};c^{[m+1]}\right)q^{\mathrm{inv}(c^{[m]})-\mathrm{inv}(c^{[m+1]})}q^{\binom{ nm+n+1}{2}-\binom{nm+1}{2}}\frac{(1-q)^{nm}}{(q;q)_{n}^{2m+1}}\frac{g_{\Delta}^{c^{[m+1]}} \left((m+1)^{n};\vec{Q}^{[m+1]}\right)}{g_{\Delta}^{c^{[m]}}\left(m^{n};\vec{ Q}^{[m]}\right)} \tag{6.34}\]
as \(t\to\infty\). The convergence in (6.34) is uniform provided that the \(x^{[m]}\) and \(x^{[m+1]}\) parameters are chosen to vary over compact subsets of \(\mathbb{R}\). This completes the first part of the proof of Theorem 6.4; it remains to show that the factors present in the second line of (6.34) constitute a valid probability distribution on colour sequences.
## 7. Distribution on colour sequences
In the previous section we showed (see (6.34) above) that the Plancherel-specialized LLT Markov kernel (6.1) splits, under the \(t\to\infty\) asymptotic regime studied, into a product of \(n\) independent GUE corners processes multiplied by a further factor valued on colour sequences. Our aim in this section is to show that this extra factor constitutes a discrete probability measure on colour sequences; in showing this, we demonstrate that the right hand side of (6.34) integrates to unity, validating the fact that the set of coloured compositions to which we have restricted our attention captures the full asymptotic behaviour as \(t\to\infty\).
Our primary task will be to better understand the ratio \(g_{\Delta}^{c^{[m+1]}}\Big{/}g_{\Delta}^{c^{[m]}}\) appearing in (6.34). To that end, we begin by defining a family of partition functions that are related to the functions \(g_{\Delta}^{c^{[m]}}\) via an explicit symmetry.
### Partition function \(Z\)
Fix two integers \(n,m\geqslant 1\) and a vector \(i^{[m]}=(i_{1},\ldots,i_{nm})\in[1,n]^{nm}\) such that for all \(1\leqslant k\leqslant n\) we have \(|\{a:i_{a}=k\}|=m\). We define the following partition function in the model (2.3):
(7.1) [diagrammatic definition of the lattice partition function \(Z\left(x_{1},\ldots,x_{nm};i^{[m]}\right)\); diagram omitted here]
where each vertex in the \(a\)-th row of the lattice is assigned rapidity parameter \(z=x_{a}\), for \(1\leqslant a\leqslant nm\). We may represent the partition function (7.1) algebraically, as follows:
\[Z\left(x_{1},\ldots,x_{nm};i^{[m]}\right)=\left\langle\boldsymbol{e}_{[1,n]} \right|^{\otimes m}\mathcal{D}_{i_{1}}(x_{1})\ldots\mathcal{D}_{i_{nm}}(x_{nm })\left|\boldsymbol{e}_{0}\right\rangle^{\otimes m},\]
where we recall the row operator definition \(\mathcal{D}_{i}(x)=T_{0,i}^{\rightarrow}(x;m-1)\) from Section 4.4. The partition functions thus defined may be related to those of (3.15), via the following symmetry:
**Proposition 7.1**.: _Recall the definition (5.10) of the trivial element \(\Delta\in\mathcal{S}_{m^{n}}\). For all vectors \(i^{[m]}\in[1,n]^{nm}\) we have that_
\[g_{\Delta}^{i^{[m]}}(m^{n};x_{1},\ldots,x_{nm};s)=(-s)^{n\binom{m}{2}}\cdot Z \left(x_{1}^{-1},\ldots,x_{nm}^{-1};i^{[m]}\right)\Big{|}_{q\to q^{-1},s \mapsto s^{-1}} \tag{7.2}\]
_where the variables \(q\), \(s\) are replaced by their reciprocals in the final partition function._
Proof.: This is an immediate consequence of the symmetry (2.6) between the vertex weights used to define (3.15) and (7.1).
**Corollary 7.2**.: _The \(s=0\) and_
\[(x_{1},\ldots,x_{nm})=\underbrace{(q^{n-1},\ldots,q^{n-1})}_{\text{$m$ times}}\cup\cdots\cup\underbrace{(q,\ldots,q)}_{\text{$m$ times}}\cup\underbrace{(1,\ldots,1)}_{\text{$m$ times}}\equiv\vec{Q}^{[m]} \tag{7.3}\]
_specializations of (7.2) are given by_
\[g_{\Delta}^{i^{[m]}}\left(m^{n};\vec{Q}^{[m]}\right)=\lim_{s\to \infty}(-s)^{-n\binom{m}{2}}\cdot Z\left(\vec{Q}^{[m]};i^{[m]}\right)\Big{|}_ {q\to q^{-1}}. \tag{7.4}\]
### Expansion formula
**Theorem 7.3**.: _Fix a vector \(i^{[m]}=(i_{1},\ldots,i_{nm})\in[1,n]^{nm}\) such that \(|\{a:i_{a}=k\}|=m\) for all \(1\leqslant k\leqslant n\). Then there exist explicit rational functions in \(q\), denoted \(\Theta\left(i^{[m]};j^{[m+1]}\right)\), such that the following expansion formula holds:_
\[g_{\Delta}^{i^{[m]}}\left(m^{n};\vec{Q}^{[m]}\right)=\sum_{j^{[m+1]}}\Theta \left(i^{[m]};j^{[m+1]}\right)g_{\Delta}^{j^{[m+1]}}\left((m+1)^{n};\vec{Q}^{ [m+1]}\right), \tag{7.5}\]
_where the sum is over vectors \(j^{[m+1]}=\left(j_{1},\ldots,j_{n(m+1)}\right)\in[1,n]^{n(m+1)}\) such that \(|\{a:j_{a}=k\}|=m+1\) for all \(1\leqslant k\leqslant n\)._
The proof of this theorem is split over the subsequent three subsections. In view of the relation (7.4), all properties of the functions \(g_{\Delta}^{i^{[m]}}\left(m^{n};\vec{Q}^{[m]}\right)\) and \(g_{\Delta}^{i^{[m+1]}}\left((m+1)^{n};\vec{Q}^{[m+1]}\right)\) may be deduced from those of \(Z\left(\vec{Q}^{[m]};i^{[m]}\right)\equiv Z\left(i^{[m]}\right)\) and \(Z\left(\vec{Q}^{[m+1]};j^{[m+1]}\right)\equiv Z\left(j^{[m+1]}\right)\); we adopt this approach in our proof of (7.5).
### Partition function \(\tilde{Z}\)
Our strategy for proving (7.5) is to define another type of partition function, similar to (7.1), and calculate it in two different ways; the two different evaluations effectively yield the left and right hand sides of (7.5). To that end, for all vectors \(i^{[m]}=(i_{1},\ldots,i_{nm})\in[1,n]^{nm}\) we introduce
(7.6) [diagrammatic definition of the partition function \(\tilde{Z}\left(u;x_{1},\ldots,x_{nm};i^{[m]}\right)\); the additional boundary vertices carry the parameters \((su;r)\), and the diagram is omitted here]
which allows us to determine that
\[(s^{2}u;q)_{n}^{m+1}\cdot\tilde{Z}\left(u;x_{1},\ldots,x_{nm};i^{[m]}\right)= \alpha\cdot u^{n}\cdot\prod_{k=1}^{nm}(su-qx_{k}) \tag{7.8}\]
where \(\alpha\) is independent of \(u\) but may depend on all other parameters. To determine \(\alpha\), we seek an appropriate choice for the parameter \(u\) in (7.8). Our choice is motivated by studying the vertex in the top-left corner of the partition function (7.6); this vertex is of the form \(\tilde{L}_{su}^{(r,s)}(\boldsymbol{A},\boldsymbol{e}_{0};\boldsymbol{e}_{0},\boldsymbol{A})\), for some vector \(\boldsymbol{A}\).
From the explicit form of this vertex weight we find that
\[\lim_{u\to s^{-2}q^{-n+1}}(s^{2}u;q)_{n}\cdot\tilde{L}_{su}^{(r,s)}(\boldsymbol{A},\boldsymbol{e}_{0};\boldsymbol{e}_{0},\boldsymbol{A})=\boldsymbol{1}_{\boldsymbol{A}=\boldsymbol{e}_{[1,n]}}\cdot q^{-(n-1)n}r^{-2n}(r^{2};q)_{n};\]
it follows that if we set \(u=s^{-2}q^{-n+1}\) in the left hand side of (7.8), this produces a freezing of the top row and leftmost column in the partition function (7.6):
\[\lim_{u\to s^{-2}q^{-n+1}}(s^{2}u;q)_{n}^{m+1}\cdot\tilde{Z}\left(u;x_{1},\ldots,x_{nm};i^{[m]}\right)=\lim_{u\to s^{-2}q^{-n+1}}\left((s^{2}u;q)_{n}^{m+1}\times\text{[frozen lattice diagram]}\right) \tag{7.9}\]
where coloured lines flowing through the leftmost column and top row represent the vector \(\boldsymbol{e}_{[1,n]}\). The quantity (7.9) splits into several pieces; apart from the partition function \(Z(x_{1},\ldots,x_{nm};i^{[m]})\) which emerges in the bottom-right corner, there is a contribution from the vertex in the top-left corner, the remaining
vertices in the top row, and the remaining vertices in the leftmost column. This leads us to the expression
\[\lim_{u\to s^{-2}q^{-n+1}}(s^{2}u;q)_{n}^{m+1}\cdot\tilde{Z}\left(u;x_{1},\ldots,x_{nm};i^{[m]}\right)=\cdots\]

[Diagrammatic steps omitted: the frozen top-left vertex, top row and leftmost column each contribute explicit multiplicative factors, and the evaluation culminates in equation (7.11), which expresses the relevant specialization of \(\tilde{Z}\) in terms of \(Z\left(\vec{Q}^{[m]};i^{[m]}\right)\). A second evaluation of \(\tilde{Z}\) proceeds step by step over the colours, beginning with the largest.]
We denote the resulting operator by \(\mathcal{D}_{J(k-1)}(q^{n-k+1};q^{-(k-1)/2})\). Repeating this process for all \(k\in[1,n]\), beginning with \(k=n\) and reducing \(k\) by \(1\) at each step, it is straightforward to derive the following expansion:
\[\tilde{Z}\left(s^{-1};\vec{Q}^{[m]};i^{[m]}\right)\Big{|}_{r=q^{-n /2}}=\left[\frac{(1-q)^{n}}{(q;q)_{n}}\right]^{m}\\ \times\sum_{j^{[m+1]}}\Psi\left(i^{[m]};j^{[m+1]}\right)\left\langle \boldsymbol{e}_{[1,n]}\right|^{\otimes m+1}\prod_{k=1}^{n}\left[\mathcal{D}_{j _{(k-1)(m+1)+1}}(q^{n-k})\cdots\mathcal{D}_{j_{k(m+1)}}(q^{n-k})\right]| \boldsymbol{e}_{0}\rangle^{\otimes m+1} \tag{7.12}\]
where the sum is taken over all vectors \(j^{[m+1]}=(j_{1},\ldots,j_{n(m+1)})\in[1,n]^{n(m+1)}\), and with the coefficients \(\Psi\left(i^{[m]};j^{[m+1]}\right)\) given by a one-row partition function in which colours are conserved in the direction of arrow flow, and in which a weight of \(q\) is assigned to each event where a path of colour \(c\) passes above a path of colour \(i\), with \(c>i\). Recasting (7.12) in terms of the family of partition functions (7.1), we recover the expansion
\[\tilde{Z}\left(s^{-1};\vec{Q}^{[m]};i^{[m]}\right)\Big{|}_{r=q^{-n /2}}=\left[\frac{(1-q)^{n}}{(q;q)_{n}}\right]^{m}\times\sum_{j^{[m+1]}} \mathbf{1}_{i^{[m]}\prec j^{[m+1]}}\Psi\left(i^{[m]};j^{[m+1]}\right)Z\left( \vec{Q}^{[m+1]};j^{[m+1]}\right). \tag{7.13}\]
### Comparing evaluations of \(\tilde{Z}\)
Comparing equations (7.11) and (7.13), we have shown that
\[Z\left(\vec{Q}^{[m]};i^{[m]}\right)=(-1)^{n(m+1)}s^{-n(m+1)}q^{- \binom{n}{2}(m+1)}\\ \times\left(\frac{(1-q)^{n}}{(s^{2};q)_{n}}\right)^{m}\left(\frac {(s;q)_{n}}{(q;q)_{n}}\right)^{2m+1}\sum_{j^{[m+1]}}\mathbf{1}_{i^{[m]} \prec j^{[m+1]}}\Psi\left(i^{[m]};j^{[m+1]}\right)Z\left(\vec{Q}^{[m+1]};j^{[m +1]}\right).\]
Inverting \(q\) in the previous equation and multiplying both sides by \((-s)^{-n\binom{m}{2}}\), we obtain
\[(-s)^{-n\binom{m}{2}}Z\left(\vec{Q}^{[m]};i^{[m]}\right)\Big{|} _{q\to q^{-1}}=q^{\binom{n}{2}(m+1)}(-s)^{-n}\left(\frac{(1-q^{-1})^{n}}{(s^ {2};q^{-1})_{n}}\right)^{m}\left(\frac{(s;q^{-1})_{n}}{(q^{-1};q^{-1})_{n}} \right)^{2m+1}\\ \times\sum_{j^{[m+1]}}\mathbf{1}_{i^{[m]}\prec j^{[m+1]}}\Psi \left(i^{[m]};j^{[m+1]}\right)\Big{|}_{q\to q^{-1}}(-s)^{-n\binom{m+1}{2}}Z \left(\vec{Q}^{[m+1]};j^{[m+1]}\right)\Big{|}_{q\to q^{-1}},\]
and we now take the limit \(s\to\infty\) using (7.4):
\[g_{\Delta}^{i^{[m]}}\left(m^{n};\vec{Q}^{[m]}\right)=\frac{(q^{-1}-1)^{nm}}{(q^{-1};q^{-1})_{n}^{2m+1}}\sum_{j^{[m+1]}}\mathbf{1}_{i^{[m]}\prec j^{[m+1]}}\Psi\left(i^{[m]};j^{[m+1]}\right)\Big{|}_{q\to q^{-1}}g_{\Delta}^{j^{[m+1]}}\left((m+1)^{n};\vec{Q}^{[m+1]}\right). \tag{7.14}\]
Finally, rearranging the factors in (7.14), one recovers
\[g_{\Delta}^{i^{[m]}}\left(m^{n};\vec{Q}^{[m]}\right)=(-1)^{n}q^{ \binom{nm+n+1}{2}-\binom{nm+1}{2}}\frac{(1-q)^{nm}}{(q;q)_{n}^{2m+1}}\\ \times\sum_{j^{[m+1]}}\mathbf{1}_{i^{[m]}\prec j^{[m+1]}}\Psi \left(i^{[m]};j^{[m+1]}\right)\Big{|}_{q\to q^{-1}}g_{\Delta}^{j^{[m+1]}} \left((m+1)^{n};\vec{Q}^{[m+1]}\right).\]
This completes the proof of Theorem 7.3, with the coefficients in (7.5) identified as
\[\Theta\left(i^{[m]};j^{[m+1]}\right)=\mathbf{1}_{i^{[m]}\prec j^{[m+1]}}(-1)^{ n}q^{\binom{nm+n+1}{2}-\binom{nm+1}{2}}\frac{(1-q)^{nm}}{(q;q)_{n}^{2m+1}} \Psi\left(i^{[m]};j^{[m+1]}\right)\Big{|}_{q\to q^{-1}}. \tag{7.15}\]
### Completing the proof of Theorem 6.4
Rearrangement of (7.5), using (7.15), yields the fact that
\[\sum_{j^{[m+1]}}\mathbf{1}_{i^{[m]}\prec j^{[m+1]}}(-1)^{n}q^{\binom{nm+n+1}{2}-\binom{nm+1}{2}}\frac{(1-q)^{nm}}{(q;q)_{n}^{2m+1}}\Psi\left(i^{[m]};j^{[m+1]}\right)\Big{|}_{q\to q^{-1}}\frac{g_{\Delta}^{j^{[m+1]}}\left((m+1)^{n};\vec{Q}^{[m+1]}\right)}{g_{\Delta}^{i^{[m]}}\left(m^{n};\vec{Q}^{[m]}\right)}=1. \tag{7.16}\]
This is precisely the result that we need to complete the proof of Theorem 6.4; all that remains is to match the summand of (7.16) with the second line of equation (6.34). We observe that all factors match perfectly, modulo the following proposition that takes care of the factors that are not yet manifestly equal:
**Proposition 7.4**.: _Fix two colour sequences \(c^{[m]}\in[1,n]^{nm}\) and \(c^{[m+1]}\in[1,n]^{n(m+1)}\) which satisfy the constraints \(|\{a:c_{a}^{[m]}=k\}|=m\) and \(|\{a:c_{a}^{[m+1]}=k\}|=m+1\) for all \(1\leqslant k\leqslant n\). Assuming also that \(c^{[m]}\prec c^{[m+1]}\), the following relation holds:_
\[\Psi\left(c^{[m]};c^{[m+1]}\right)\Upsilon\left(c^{[m]};c^{[m+1]}\right)q^{ \mathrm{inv}(c^{[m]})-\mathrm{inv}(c^{[m+1]})}=1. \tag{7.17}\]
Proof.: This statement is equivalent to [1, Lemma 10.1.2], but rather than making a detailed match with that result, we give a standalone proof. As in [1], we shall proceed by induction on \(n\).
The \(n=1\) case of (7.17) is trivial; in that case we must have \(c^{[m]}=1^{m}\) and \(c^{[m+1]}=1^{m+1}\), which yields
\[\Psi\left(c^{[m]};c^{[m+1]}\right)=\Upsilon\left(c^{[m]};c^{[m+1]}\right)=q^{ \mathrm{inv}(c^{[m]})-\mathrm{inv}(c^{[m+1]})}=1.\]
We shall take as our inductive assumption that (7.17) is valid for \(n=p-1\), for given \(p\geqslant 2\). Then for any \(c^{[m]}\in[1,p-1]^{(p-1)m}\) and \(c^{[m+1]}\in[1,p-1]^{(p-1)(m+1)}\), we also have
\[\Psi\left(c^{[m]}\cup p^{m};c^{[m+1]}\cup p^{m+1}\right)\Upsilon\left(c^{[m]} \cup p^{m};c^{[m+1]}\cup p^{m+1}\right)q^{\mathrm{inv}(c^{[m]}\cup p^{m})- \mathrm{inv}(c^{[m+1]}\cup p^{m+1})}=1. \tag{7.18}\]
Indeed, one can verify that appending a bundle of (maximal) colours \(p\) to both \(c^{[m]}\) and \(c^{[m+1]}\) does not affect either of the partition functions \(\Psi\) and \(\Upsilon\), neither does it affect the value of the statistic \(\mathrm{inv}\). Equation (7.18) then constitutes a solution of (7.17) at \(n=p\); to prove that (7.17) holds generally at \(n=p\), we seek "local moves" that can be applied to \(c^{[m]}\cup p^{m}\) and \(c^{[m+1]}\cup p^{m+1}\) to bring them to generic colour sequences in \([1,p]^{pm}\) and \([1,p]^{p(m+1)}\), respectively. These local moves will have the property that they preserve the interlacing property of the colour sequences, and applying them to a solution of (7.17) will yield a new solution.
The first two local moves that one requires are jumps across bundles:
(7.19) [diagram of the first jump-across-bundles move omitted]

(7.20) [diagram of the second jump-across-bundles move omitted]
where arrows at the bottom of the diagram indicate colours in \(c^{[m]}\), while those at the top indicate colours in \(c^{[m+1]}\). All marked colours \(a,b,c,p\) are assumed to be distinct (were they not distinct in the second case, the interlacing property of colours would be violated in at least one of the pictures shown), with \(p\) being the largest. All colours remain fixed under these moves, apart from \(a\) and \(p\), which switch places. Let \(\Psi_{\mathsf{L}/\mathsf{R}}\) and \(\Upsilon_{\mathsf{L}/\mathsf{R}}\) denote the contributions to the functions \(\Psi\) and \(\Upsilon\) coming only from the indicated colours and marked regions of the diagrams, on the left/right hand side of both (7.19) and (7.20).
In the case of (7.19), one finds that
\[\Psi_{\mathsf{L}}\cdot\Upsilon_{\mathsf{L}}=1,\qquad\Psi_{\mathsf{R}}\cdot \Upsilon_{\mathsf{R}}=q,\]
but since \(\operatorname{inv}(c^{[m+1]})\) also increases by \(1\) under the move (7.19), we find that (7.17) is preserved. In a similar vein, in the case of (7.20) one obtains
\[\Psi_{\mathsf{L}}\cdot\Upsilon_{\mathsf{L}}=q\cdot q^{\mathbf{1}_{b>c}+ \mathbf{1}_{b>a}+\mathbf{1}_{c>a}},\qquad\Psi_{\mathsf{R}}\cdot\Upsilon_{ \mathsf{R}}=q^{\mathbf{1}_{b>a}+\mathbf{1}_{c>a}}\cdot q^{\mathbf{1}_{b>c}},\]
and since \(\operatorname{inv}(c^{[m]})\) also increases by \(1\) under the move (7.20), we again find that (7.17) is preserved.
The remaining two local moves needed are jumps within bundles:
(7.21) [diagram of the first jump-within-bundles move omitted]

(7.22) [diagram of the second jump-within-bundles move omitted]
where all marked colours \(a,b,p\) are assumed to be distinct, with \(p\) being the largest. As previously, all colours remain fixed under these moves apart from \(a\) and \(p\), which exchange their places. We again let \(\Psi_{\mathsf{L}/\mathsf{R}}\) and \(\Upsilon_{\mathsf{L}/\mathsf{R}}\) denote the contributions to the functions \(\Psi\) and \(\Upsilon\) coming only from the indicated colours and marked regions of the diagrams, on the left/right hand side of both (7.21) and (7.22).
In the case of (7.21), we get
\[\Psi_{\mathsf{L}}\cdot\Upsilon_{\mathsf{L}}=q^{\mathbf{1}_{a>b}}\cdot q, \qquad\Psi_{\mathsf{R}}\cdot\Upsilon_{\mathsf{R}}=q\cdot q^{1+\mathbf{1}_{a>b }},\]
with the discrepancy between left and right hand sides cured by the fact that \(\operatorname{inv}(c^{[m+1]})\) increases by \(1\) under the move in question; hence (7.21) preserves solutions of (7.17). Finally, in the case of (7.22) one finds that
\[\Psi_{\mathsf{L}}\cdot\Upsilon_{\mathsf{L}}=q\cdot q^{\mathbf{1}_{a<b}}, \qquad\Psi_{\mathsf{R}}\cdot\Upsilon_{\mathsf{R}}=q^{\mathbf{1}_{a<b}}\cdot 1;\]
left and right hand sides match after accounting for the fact that \(\operatorname{inv}(c^{[m]})\) is increased by \(1\) under this move. Therefore, (7.22) also preserves solutions of (7.17).
This suffices to prove (7.17) generally at \(n=p\), since we have already exhibited one solution (7.18), and it is clear that successive application of the four local moves generates all possible colour sequences. The proof of (7.17) is completed, and with it, the proof of Theorem 6.4.
### Probability distribution on interlacing triangular arrays
The results of this section allow us to conclude that the quantity
\[\mathbb{P}_{\mathrm{col}}\left(c^{[m]}\to c^{[m+1]}\right)\] \[:=\mathbf{1}_{c^{[m]}\prec c^{[m+1]}}(-1)^{n}q^{\binom{nm+n+1}{2} -\binom{nm+1}{2}}\frac{(1-q)^{nm}}{(q;q)_{n}^{2m+1}}\Psi\left(c^{[m]};c^{[m+1] }\right)\Big{|}_{q\to q^{-1}}\frac{g_{\Delta}^{c^{[m+1]}}\left((m+1)^{n}; \vec{Q}^{[m+1]}\right)}{g_{\Delta}^{c^{[m]}}\left(m^{n};\vec{Q}^{[m]}\right)} \tag{7.23}\]
defines a transition probability on colour sequences; this allows one to construct a discrete-time Markov process living on _interlacing triangular arrays_, as we define below.
**Definition 7.5** (Interlacing triangular array).: Fix integers \(n\geqslant 1\), \(N\geqslant 1\). For all \(1\leqslant i\leqslant n\), \(1\leqslant j\leqslant k\leqslant N\), fix positive integers \(T_{j,k}^{(i)}\in[1,n]\) subject to two constraints:
**(a)** For each \(k\in[1,N]\), the collection \(\{T_{j,k}^{(i)}\}_{1\leqslant i\leqslant n,1\leqslant j\leqslant k}\) is equal to \(\{1^{k}\}\cup\{2^{k}\}\cup\cdots\cup\{n^{k}\}\), with the equality being at the level of sets;
**(b)** Let the _horizontal coordinate_ of the integer \(T_{j,k}^{(i)}\) be defined as \(c(i,j,k)=iN+j-(N+k)/2\). If \(T_{j,k}^{(i)}=T_{j^{\prime},k}^{(i^{\prime})}=a\in[1,n]\) and \(c(i,j,k)<c(i^{\prime},j^{\prime},k)\) for some \(i,j,i^{\prime},j^{\prime}\) and \(1<k\leqslant N\), then we assume that there exists \(i^{\prime\prime},j^{\prime\prime}\) such that \(T_{j^{\prime\prime},k-1}^{(i^{\prime\prime})}=a\) and \(c(i,j,k)<c(i^{\prime\prime},j^{\prime\prime},k-1)<c(i^{\prime},j^{\prime},k)\); this is the _interlacing_ property of our array.
We refer to such a collection of positive integers as an _interlacing triangular array_ of _rank_\(n\) and _height_\(N\). Let \(\mathcal{T}_{N}(n)\) denote the set of all interlacing triangular arrays of rank \(n\) and height \(N\).
_Remark 7.6_.: Every interlacing triangular array in \(\mathcal{T}_{N}(n)\) is in one-to-one correspondence with a string \(c^{[1]}\prec c^{[2]}\prec\cdots\prec c^{[N]}\) of interlacing colour sequences \(c^{[k]}\in[1,n]^{nk}\). The colour sequence \(c^{[k]}\) is obtained simply by reading off the \(k\)-th row of the interlacing triangular array.
**Example 7.7** (\(n=2\), \(N=3\)).: A permissible interlacing triangular array of rank \(2\) and height \(3\):
Recall that the integers in the array are written collectively in the form \(T_{j,k}^{(i)}\). In this picture, the index \(1\leqslant i\leqslant 2\) increases from left to right and labels individual triangular arrays; the index \(1\leqslant k\leqslant 3\) labels the row in question, and the index \(1\leqslant j\leqslant k\) is used to label diagonals in each triangular array.
Reading the numbers in the \(k\)-th row we recover the elements of the set \(\{1^{k}\}\cup\{2^{k}\}\), which is constraint **(a)**; also, the interlacing constraint **(b)** holds separately both for the coordinates of the numbers \(1\) and \(2\).
In this example one has \(c^{[1]}=(2,1)\), \(c^{[2]}=(2,1,2,1)\), \(c^{[3]}=(2,1,1,2,2,1)\).
**Example 7.8** (\(n=3\), \(N=3\)).: Under the translation red=1, green=2 and blue=3, Figure 1 corresponds with the following interlacing triangular array:
In this example one has \(c^{[1]}=(2,3,1)\), \(c^{[2]}=(2,1,3,3,2,1)\), \(c^{[3]}=(2,1,2,3,3,1,3,2,1)\).
**Corollary 7.9**.: _Let \(T\in\mathcal{T}_{N}(n)\) be an interlacing triangular array generated by \(N\) successive applications of the Markov kernel (7.23). Then the array \(T_{j,k}^{(i)}\), \(1\leqslant i\leqslant n\), \(1\leqslant j\leqslant k\leqslant N\), has joint distribution_
\[\mathbb{P}_{\mathrm{col}}\left(T_{j,k}^{(i)}=c_{(i-1)k+j}^{[k]};1\leqslant i\leqslant n,\ 1\leqslant j\leqslant k\leqslant N\right)\\ =\mathbf{1}_{c^{[1]}\prec\cdots\prec c^{[N]}}(-1)^{nN}q^{\binom{nN+1}{2}}\frac{(1-q)^{n\binom{N}{2}}}{(q;q)_{n}^{N^{2}}}g_{\Delta}^{c^{[N]}}\left(N^{n};\vec{Q}^{[N]}\right)\prod_{i=1}^{N}\Psi\left(c^{[i-1]};c^{[i]}\right)\Big{|}_{q\to q^{-1}}. \tag{7.24}\]
### Explicit calculations
In this subsection we document some explicit calculations in the case \(n=2\), based on direct application of the formula (7.23). All factors appearing in (7.23) are straightforwardly computed, with the exception of the functions \(g_{\Delta}^{c^{[m]}}\) and \(g_{\Delta}^{c^{[m+1]}}\). To evaluate the latter, we make use of the following factorization result, which follows from [1, Proposition 11.6.1]:
**Proposition 7.10**.: _Choosing \(c^{[m]}\) to be the increasing colour sequence \(1^{m}\cup\dots\cup n^{m}\in[1,n]^{nm}\), one has the following formula:_
\[g_{\Delta}^{1^{m}\cup\dots\cup n^{m}}(m^{n};x_{1},\dots,x_{nm})=q^{-m^{2}{n \choose 2}}(1-q^{-1})^{nm}\prod_{k=0}^{n-1}\prod_{1\leqslant i<j\leqslant m}(q ^{-1}x_{mk+j}-x_{mk+i}). \tag{7.25}\]
The initial condition (7.25), used in conjunction with the exchange relation (3.23), allows \(g_{\Delta}^{c^{[m]}}\) and \(g_{\Delta}^{c^{[m+1]}}\) to be efficiently implemented on a computer. We are then in a position to explicitly evaluate the transition probabilities (7.23); see Figure 2.
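For illustration, the initial condition (7.25) is straightforward to code; the sketch below is a SymPy illustration only (the exchange relation (3.23), needed to reach general colour sequences, is not reproduced here) and evaluates (7.25), for instance at the specialization (7.3).

```
import sympy as sp

def g_delta_increasing(n, m, xs, q):
    # Initial condition (7.25) for the increasing colour sequence 1^m u ... u n^m.
    # xs is the list (x_1, ..., x_{nm}), indexed from 0 here.
    pref = q ** (-m ** 2 * sp.binomial(n, 2)) * (1 - q ** -1) ** (n * m)
    prod = sp.Integer(1)
    for k in range(n):
        for i in range(1, m + 1):
            for j in range(i + 1, m + 1):
                prod *= (q ** -1 * xs[m * k + j - 1] - xs[m * k + i - 1])
    return sp.simplify(pref * prod)

q = sp.Symbol('q')
n, m = 2, 2
# Specialization (7.3): m copies of q^{n-1}, ..., m copies of q, m copies of 1.
Q = [q ** (n - 1 - k) for k in range(n) for _ in range(m)]
print(g_delta_increasing(n, m, Q, q))
```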
### A positivity conjecture
Studying the probabilities that appear in Figure 2, one notices that they are always positive polynomials in \(q\) over a common denominator that is easily predicted. Analysis of examples for \(n\geqslant 3\) reveals that this structure appears to hold generally. This leads us to formulate the following positivity conjecture:
**Conjecture 7.11**.: _Fix integers \(m,n\geqslant 1\) and a colour sequence \(c^{[m]}\in[1,n]^{nm}\). Let \(\mathbb{P}_{\mathrm{col}}(c^{[m]})\) denote the probability of arriving at the colour sequence \(c^{[m]}\) after \(m\) applications of the Markov kernel (7.23) to the trivial sequence \(c^{[0]}=\emptyset\). Then one has that_
\[\mathbb{P}_{\mathrm{col}}\left(c^{[m]}\right)=\mathcal{P}\left(c^{[m]}\right) \cdot\left(\prod_{i=1}^{n}\frac{1-q}{1-q^{i}}\right)^{m^{2}}\quad\text{where }\ \mathcal{P}\left(c^{[m]}\right)\in\mathbb{N}[q].\]
## Appendix A Interlacing triangles and graph colourings
In this appendix we turn to the problem of enumerating the number of elements in the set \(\mathcal{T}_{N}(n)\) from Definition 7.5 (that is, computing the size of the support of \(\mathbb{P}_{\mathrm{col}}\)). This turns out to be a triviality for \(n=1\) and \(n=2\); for \(n=3\) and \(n=4\) we are able to conjecture a relation between the cardinality of \(\mathcal{T}_{N}(n)\) and certain graph colourings. An elegant bijective proof of the \(n=3\) conjecture was already given in [GG]. The \(n=4\) case remains open.
For \(n=1\), an interlacing triangular array consists of a single triangle filled with the number \(1\); as there is only one such arrangement, it follows that \(|\mathcal{T}_{N}(1)|=1\) for all \(N\geqslant 1\).
For \(n=2\), we have the following elementary result:
**Proposition A.1**.: _For all \(N\geqslant 1\), \(|\mathcal{T}_{N}(2)|=2^{N}\)._
Proof.: The only triangular arrays in rank \(2\) which respect the interlacing constraint **(b)** are those in which numbers remain constant along diagonals in the left triangle, and along anti-diagonals in the right; further, once we choose the numbers assigned to diagonals in the left triangle, this completely determines the right one, in view of constraint **(a)** (and the fact that numbers remain constant along its anti-diagonals). Hence there are exactly \(2^{N}\) possibilities.
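These small counts can also be checked by brute force directly from Definition 7.5. The following sketch is an illustration only: it enumerates interlacing triangular arrays row by row and can be used to check \(|\mathcal{T}_{N}(2)|=2^{N}\) for small \(N\), as well as the value \(|\mathcal{T}_{1}(3)|=6\) appearing in Example A.4 below.

```
from itertools import permutations

def coords(n, N, k):
    # Twice the horizontal coordinates, 2*c(i,j,k) = 2*(i*N + j) - (N + k),
    # listed in the order i = 1..n, j = 1..k used to flatten a row.
    return [2 * (i * N + j) - (N + k) for i in range(1, n + 1) for j in range(1, k + 1)]

def interlaces(prev_row, row, n, N, k):
    # Constraint (b) of Definition 7.5 between rows k-1 and k; it suffices to
    # check consecutive equal-colour occurrences in row k.
    c_prev, c_cur = coords(n, N, k - 1), coords(n, N, k)
    for a in range(1, n + 1):
        cur = sorted(c for c, v in zip(c_cur, row) if v == a)
        prev = [c for c, v in zip(c_prev, prev_row) if v == a]
        if any(not any(lo < c < hi for c in prev) for lo, hi in zip(cur, cur[1:])):
            return False
    return True

def all_rows(n, k):
    # All arrangements of the multiset {1^k} u ... u {n^k}  (constraint (a)).
    return set(permutations([a for a in range(1, n + 1) for _ in range(k)]))

def count_arrays(n, N):
    # |T_N(n)| by dynamic programming over rows.
    current = {r: 1 for r in all_rows(n, 1)}
    for k in range(2, N + 1):
        nxt = {}
        for r in all_rows(n, k):
            total = sum(m for p, m in current.items() if interlaces(p, r, n, N, k))
            if total:
                nxt[r] = total
        current = nxt
    return sum(current.values())

print([count_arrays(2, N) for N in (1, 2, 3)])  # expected 2**N by Proposition A.1
print(count_arrays(3, 1))                       # expected 6, cf. Example A.4
```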
For \(n=3\) and \(n=4\) we have no direct results around enumeration. We do, however, make two conjectures relating the cardinality of the set \(\mathcal{T}_{N}(n)\) to colourings of certain families of graphs.
**Definition A.2**.: Given a graph \(G\) and an integer \(m\geqslant 1\), an \(m\)-colouring of \(G\) is an assignment of a label \(l\in[1,m]\) to each vertex \(v\in G\) such that any two vertices \(v\) and \(v^{\prime}\) joined by an edge receive distinct labels \(l\neq l^{\prime}\).
Figure 2. The result of calculations in the case \(n=2\). Each node at level \(m\) corresponds to a colour sequence \(c^{[m]}\); the function written below it is the probability \(\mathbb{P}_{\mathrm{col}}(c^{[m]})\) of arriving at the colour sequence \(c^{[m]}\) after \(m\) applications of the Markov kernel (7.23) to the trivial sequence \(c^{[0]}=\emptyset\). Connected nodes indicate colour sequences that interlace; the resulting graph is a complete binary tree, meaning that each colour sequence \(c^{[m]}\) has unique ancestry back to the root \(\emptyset\). This uniqueness property ceases to hold for \(n\geqslant 3\).
**Conjecture A.3**.: _Let \(G_{N}^{\triangle}\) denote the triangular graph_
_where the number of vertices along one side of the triangle is equal to \(N+1\). Let \(\mathfrak{g}_{N}^{\triangle}(4)\) denote the number of \(4\)-colourings of \(G_{N}^{\triangle}\).21 We conjecture that_
Footnote 21: The sequence \(\mathfrak{g}_{N}^{\triangle}(4)\) appears as A153467 in the Online Encyclopaedia of Integer Sequences; [https://oeis.org/A153467](https://oeis.org/A153467).
\[4\cdot|\mathcal{T}_{N}(3)|=\mathfrak{g}_{N}^{\triangle}(4),\qquad\forall\ N\geqslant 1.\]
**Example A.4** (\(n=3\), \(N=1\)).: For \(n=3\) and \(N=1\), triangular tuples just correspond with arrangements of \(\{1,2,3\}\) along a line; hence \(|\mathcal{T}_{1}(3)|=|\mathfrak{S}_{3}|=6\). On the other hand, the graph \(G_{1}^{\triangle}\) is a single triangle; fixing its top vertex to have label \(1\) (which means an undercounting by an overall factor of \(4\)), it admits \(3\cdot 2=6\) possible \(4\)-colourings, and indeed \(4\cdot|\mathcal{T}_{1}(3)|=\mathfrak{g}_{1}^{\triangle}(4)\).
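The colouring counts entering the conjectures can likewise be checked for very small graphs by plain enumeration of label assignments, as in Definition A.2. In the sketch below, the construction of \(G_{N}^{\triangle}\) as a standard triangular grid graph is an assumption on our part (the defining figure is omitted above); for \(N=1\) it is simply a triangle, reproducing \(\mathfrak{g}_{1}^{\triangle}(4)=24\).

```
from itertools import product

def count_colourings(vertices, edges, m):
    # Number of assignments of labels 1..m such that adjacent vertices differ
    # (Definition A.2); plain enumeration, fine for small graphs.
    total = 0
    for labels in product(range(1, m + 1), repeat=len(vertices)):
        lab = dict(zip(vertices, labels))
        if all(lab[u] != lab[v] for u, v in edges):
            total += 1
    return total

def triangular_grid(N):
    # Assumed realization of G_N^triangle: vertices (r, c) with 0 <= c <= r <= N,
    # edges between horizontally adjacent vertices and from each vertex to the
    # two vertices directly below it.
    vertices = [(r, c) for r in range(N + 1) for c in range(r + 1)]
    edges = []
    for (r, c) in vertices:
        if c + 1 <= r:
            edges.append(((r, c), (r, c + 1)))
        if r + 1 <= N:
            edges.append(((r, c), (r + 1, c)))
            edges.append(((r, c), (r + 1, c + 1)))
    return vertices, edges

V, E = triangular_grid(1)            # a single triangle
print(count_colourings(V, E, 4))     # -> 24 = 4 * |T_1(3)|
```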
**Conjecture A.5**.: _Let \(G_{N}^{\times}\) denote the graph_
_where two vertices share an edge if they are connected via a king move on the chessboard (that is, they are connected via a unit horizontal, vertical, or diagonal step), and the number of vertices along one side of the square is equal to \(N+1\). Let \(\mathfrak{g}_{N}^{\times}(5)\) denote the number of \(5\)-colourings of \(G_{N}^{\times}\).22 We conjecture that_
Footnote 22: The sequence \(\mathfrak{g}_{N}^{\times}(5)\) appears as A068294 in the Online Encyclopaedia of Integer Sequences; [https://oeis.org/A068294](https://oeis.org/A068294).
\[5\cdot|\mathcal{T}_{N}(4)|=\mathfrak{g}_{N}^{\times}(5),\qquad\forall\ N \geqslant 1.\]
2309.13874 | DDTSE: Discriminative Diffusion Model for Target Speech Extraction | Diffusion models have gained attention in speech enhancement tasks, providing
an alternative to conventional discriminative methods. However, research on
target speech extraction under multi-speaker noisy conditions remains
relatively unexplored. Moreover, the superior quality of diffusion methods
typically comes at the cost of slower inference speed. In this paper, we
introduce the Discriminative Diffusion model for Target Speech Extraction
(DDTSE). We apply the same forward process as diffusion models and utilize the
reconstruction loss similar to discriminative methods. Furthermore, we devise a
two-stage training strategy to emulate the inference process during model
training. DDTSE not only works as a standalone system, but also can further
improve the performance of discriminative models without additional retraining.
Experimental results demonstrate that DDTSE not only achieves higher perceptual
quality but also accelerates the inference process by 3 times compared to the
conventional diffusion model. | Leying Zhang, Yao Qian, Linfeng Yu, Heming Wang, Hemin Yang, Long Zhou, Shujie Liu, Yanmin Qian | 2023-09-25T04:58:38Z | http://arxiv.org/abs/2309.13874v2 | # Diffusion Conditional Expectation Model
###### Abstract
Target Speech Extraction (TSE) is a crucial task in speech processing that focuses on isolating the clean speech of a specific speaker from complex mixtures. While discriminative methods are commonly used for TSE, they can introduce distortion in terms of speech perception quality. On the other hand, generative approaches, particularly diffusion-based methods, can enhance speech quality perceptually but suffer from slower inference speed. We propose an efficient generative approach named Diffusion Conditional Expectation Model (DCEM) for TSE. It can handle multi- and single-speaker scenarios in both noisy and clean conditions. Additionally, we introduce Regenerate-DCEM (R-DCEM) that can regenerate and optimize speech quality based on pre-processed speech from a discriminative model. Our method outperforms conventional methods in terms of both intrusive and non-intrusive metrics and demonstrates notable strengths in inference efficiency and robustness to unseen tasks. Audio examples are available online1.
Footnote 1: The first author conducted this work during internship at Microsoft
Leying Zhang\({}^{1,2,\dagger}\), Yao Qian\({}^{2}\), Linfeng Xu\({}^{1}\), Heming Wang\({}^{3}\), Xinkai Wang\({}^{1}\),
Hemin Yang\({}^{2}\), Long Zhou\({}^{2}\), Shujie Liu\({}^{2}\), Yanmin Qian\({}^{1}\), Michael Zeng\({}^{2}\)

\({}^{1}\)Shanghai Jiao Tong University, China \({}^{2}\)Microsoft, USA \({}^{3}\)The Ohio State University, USA

Index terms: target speech extraction, diffusion model, deep generative model, speech enhancement
Footnote 2: [https://vivain556123.github.io/decm](https://vivain556123.github.io/decm)
Footnote 3: © 20XX IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.
## 1 Introduction
The cocktail party effect, also known as "selective hearing", is the ability to focus on a single speaker or conversation in a noisy environment [1, 2]. Target Speech Extraction (TSE) aims to emulate human capability by isolating the clean speech of the target speaker from a noisy mixture. It serves as a valuable tool for enhancing downstream tasks like speech recognition and speaker verification, attracting significant research interests [3, 4].
Discriminative and generative models are two widely used approaches for TSE task. The former learns the best mapping between inputs and outputs, while the latter learns the target distribution, allowing multiple valid estimates [5]. TSE primarily relies on discriminative methods [6, 7]. Despite the many advances gained from past research, discriminative methods still suffer from producing unnatural speech [8, 9].
Generative methods are capable of generating natural and diverse speech, especially diffusion models attracting much attention [10, 11]. Previous work has explored the use of diffusion models on speech-processing tasks, but dealing with multi-speaker scenarios and accelerating inference efficiency remain key challenges. For example, SGMSE+ [5, 9, 12] can only be applied to single-speaker scenarios, DiffSep [13] only handles fixed number of speakers without background noise, and DiffTSE [14] only focuses on clean multi-speaker scenarios. Additionally, these methods are based on the score-based generative model (SGM), so the inference speed is still unsatisfactory [15].
To address the constraints of limited application scenarios and inference efficiency, we introduce the Diffusion Conditional Expectation Model (DCEM), which parameterizes the model to directly predict clean data. To consolidate DCEM with conventional models, we propose the Regenerate-DCEM (R-DCEM) inference strategy, which empowers our model to handle not just end-to-end TSE, but also the optimization of speech from existing results.
Through extensive experiments, we observe that the proposed DCEM achieves higher perceptual quality than discriminative methods and is robust to unseen tasks. Meanwhile, R-DCEM consistently outperforms discriminative models and exhibits potential as a plug-in. Experiments demonstrate the efficiency of our approaches, achieving 2x faster inference than other diffusion-based methods.
Our contributions can be summarized as follows: (1) We propose DCEM, which efficiently generates speech with high perceptual quality; (2) We propose the R-DCEM inference strategy that allows the model to optimize discriminative methods in a plug-in fashion; (3) We evaluate model performance under three complex scenarios, and analyze its robustness against other generative methods; (4) We accelerate the inference process and alleviate the conflict between speech quality and inference speed.
## 2 Related Work
Generative models, especially diffusion models have attracted much attention [10, 11]. Recent studies using diffusion models for speech enhancement can be categorized into two types. The first approach is to use fully generative method. For example, SGMSE+ [5, 9, 12], DiffSep [13] and DiffTSE [14] employ score-based generative model on the complex Short-Time Fourier Transform (STFT) domain. The second approach is to refine the existing model with generative methods. StoRM [8] and Diffinter [16] introduce the guidance of pre-processed speech during training of diffusion-based model to refine perceptual speech quality.
In contrast, our proposed methods provide a different way of training by directly estimating the clean speech instead of the gradient of the data distribution. Besides, our method not only serves as a fully generative TSE model, but can also refine multiple types of existing models without further training or adaptation.
## 3 Methodology
To solve the quality-speed dilemma of TSE task, we introduce Diffusion Conditional Expectation Model (DCEM). Our model takes a mixture \(\mathbf{y}\), containing multiple speakers and noise, as input, and aims to obtain clean speech \(\mathbf{x}_{0}\) of target speaker. To handle multi-speaker scenarios, we collect an enrollment speech from the target speaker and extract the speaker embedding \(s\) for model conditioning.
### Training and Inference
Our diffusion model contains two processes: the forward process and the conditional expectation based reverse process. The forward process \(q(\mathbf{x}_{t}|\mathbf{x}_{0},\mathbf{y})\) is fixed to a Markov chain defined in Eq.1 that gradually adds Gaussian noise to the clean data \(\mathbf{x}_{0}\) with timestep \(t\in\{0,...,T\}\). We let the forward process turn the clean speech \(\mathbf{x}_{0}\) into a noisy mixture \(\mathbf{y}\) to mimic speech corruption as in [5, 9, 12].
\[q(\mathbf{x}_{t}|\mathbf{x}_{0},\mathbf{y})=\mathcal{N}(\mathbf{x}_{t};\mu(\mathbf{x}_{0},\mathbf{y},t),\sigma(t)^{2}) \tag{1}\] \[\mu(\mathbf{x}_{0},\mathbf{y},t)=e^{-\gamma t}\mathbf{x}_{0}+(1-e^{-\gamma t})\mathbf{y} \tag{2}\] \[\sigma(t)^{2}=\frac{\sigma_{\min}^{2}\left((\sigma_{\max}/\sigma_{\min})^{2t}-e^{-2\gamma t}\right)\log\left(\sigma_{\max}/\sigma_{\min}\right)}{\gamma+\log\left(\sigma_{\max}/\sigma_{\min}\right)} \tag{3}\]
The reverse process reverses the transformation from a speech with Gaussian noise \(\mathbf{x}_{t}\) to clean speech with timestep \(t\in\{T,...,0\}\), conditioning on noisy mixture \(\mathbf{y}\) and speaker embedding \(s\). The reverse process in prior works is to solve stochastic differential equations (SDEs) by predicting the conditional score [5, 9, 13, 14]. However, due to high dimensionality, it lacks global approximation, which can degrade speech quality [17, 18]. To address this, we employ the model \(f_{\theta}\) to directly predict the clean speech \(\mathbf{x}_{0}\) by minimizing the conditional expectation as follows:
\[\min_{\theta}\mathbb{E}_{(\mathbf{x}_{0},\mathbf{x}_{t})\sim q(\mathbf{x}_{0} )q(\mathbf{x}_{t}|\mathbf{x}_{0},\mathbf{y}),s}\left[\|\mathbf{x}_{0}-f_{ \theta}(\mathbf{x}_{t},s)\|_{2}^{2}\right]\]
With this training objective, the overall training process is divided into two stages: the first training stage and the second continual-learning stage. Algorithm 1 displays the first stage, where at each step \(\mathbf{x}_{t}\) is first obtained through the forward process defined in Eq. 1, and then the clean speech estimate \(\hat{\mathbf{x}}_{0t}\) is predicted by the model. We denote by \(\hat{\mathbf{x}}_{0t}\) the predicted clean speech at timestep \(t\). \(\mu\) and \(\sigma\) are defined in Eq. 2 and Eq. 3 respectively. We incorporate \(\lambda(t)>0\) to control the weight of the loss at different timesteps [10, 17], and \(d\) measures the L2 distance. The second stage of the training method is explained in Section 3.2.
```
1:repeat
2: Sample \(\mathbf{x}_{0},\mathbf{y}\), \(t\sim\mathcal{U}[0,1]\), \(\mathbf{z}\sim\mathcal{N}(0,\mathbf{I})\)
3: Update \(\mathbf{x}_{t}\leftarrow\mu(\mathbf{x}_{0},\mathbf{y},t)+\sigma(t)^{2}\mathbf{z}\)
4: Update \(\hat{\mathbf{x}}_{0t}\gets f_{\theta}\left(\mathbf{x}_{t},s\right), \lambda(t)\leftarrow\left(e^{t}-1\right)^{-1}\)
5: Take gradient descent step on \(\nabla_{\theta}(\lambda(t)d(\hat{\mathbf{x}}_{0t},\mathbf{x}_{0}))\)
6:until converged
```
**Algorithm 1** Training stage 1
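A minimal, self-contained sketch of one training step of Algorithm 1 is given below. It is illustrative only: a toy MLP stands in for the actual network \(f_{\theta}\), the tensors are random placeholders, the hyperparameters follow Section 4, and the noise is sampled with standard deviation \(\sigma(t)\) as prescribed by Eq. 1.

```
import torch

# Illustrative stand-in for f_theta: the real model is a U-Net-style network on
# complex spectrograms; a small MLP keeps this sketch self-contained.
dim, emb_dim = 16, 4
gamma, sigma_min, sigma_max = 1.5, 0.05, 0.5          # values from Section 4
net = torch.nn.Sequential(torch.nn.Linear(dim + emb_dim, 64),
                          torch.nn.ReLU(),
                          torch.nn.Linear(64, dim))
opt = torch.optim.Adam(net.parameters(), lr=1e-4)

def sigma(t):
    # Standard deviation from Eq. (3).
    log_ratio = torch.log(torch.tensor(sigma_max / sigma_min))
    var = sigma_min ** 2 * (torch.tensor(sigma_max / sigma_min) ** (2 * t)
                            - torch.exp(-2 * gamma * t)) * log_ratio / (gamma + log_ratio)
    return torch.sqrt(var)

def train_step(x0, y, s):
    t = torch.rand(())                                                  # t ~ U[0, 1]
    mu = torch.exp(-gamma * t) * x0 + (1 - torch.exp(-gamma * t)) * y   # Eq. (2)
    xt = mu + sigma(t) * torch.randn_like(x0)                           # sample from Eq. (1)
    x0_hat = net(torch.cat([xt, s], dim=-1))                            # f_theta(x_t, s)
    lam = 1.0 / (torch.exp(t) - 1.0)                                    # lambda(t) = (e^t - 1)^(-1)
    loss = lam * torch.mean((x0_hat - x0) ** 2)                         # weighted L2 distance d
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()

x0, y, s = torch.randn(dim), torch.randn(dim), torch.randn(emb_dim)
print(train_step(x0, y, s))
```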
During inference, without knowing the ground truth \(\mathbf{x}_{0}\), we deduce the reverse process as shown in Algorithm 2. Initially, we sample \(\mathbf{x}_{T}\) from a normal distribution centered on \(\mathbf{y}\), and predict the target sample \(\hat{\mathbf{x}}_{0T}\). At each step, we first update the value of \(\mathbf{x}_{t}\) given \(\hat{\mathbf{x}}_{0t+1}\) and \(\mathbf{y}\), and then predict the conditional expectation of the target sample \(\hat{\mathbf{x}}_{0t}\). This process repeats \(T\) times. Finally, the clean audio signal is obtained by performing iSTFT on \(\hat{\mathbf{x}}_{00}\).
To utilize the randomness and diversity of the generative model in TSE tasks, we employ an ensemble inference strategy: we repeat the inference process ten times with different random seeds to obtain diverse estimates, and then sum and normalize them to obtain the final waveform.
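Algorithm 2 itself is described only in prose above; the following sketch (illustrative only; the callable `f` and the noise schedule `sigma_fn` are placeholder assumptions) implements that reverse process together with the ensemble strategy.

```
import torch

def dcem_inference(f, y, s, sigma_fn, T=10, gamma=1.5, ensemble=10):
    # Reverse process described above: sample x_T around y, predict x0_hat, then
    # alternate  x_t <- mu(x0_hat, y, t) + sigma(t) z  and  x0_hat <- f(x_t, s)
    # over a time grid decreasing linearly from 1 to 0.
    ts = torch.linspace(1.0, 0.0, T + 1)
    outs = []
    for _ in range(ensemble):                        # ensemble inference with fresh noise
        x0_hat = f(y + sigma_fn(ts[0]) * torch.randn_like(y), s)
        for t in ts[1:]:
            mu = torch.exp(-gamma * t) * x0_hat + (1 - torch.exp(-gamma * t)) * y
            x_t = mu + sigma_fn(t) * torch.randn_like(y)
            x0_hat = f(x_t, s)
        outs.append(x0_hat)
    return torch.stack(outs).mean(dim=0)             # sum and normalize; iSTFT follows downstream

# Exercise the loop with a dummy predictor and a constant noise schedule.
dummy_f = lambda x_t, s: 0.5 * x_t
y, s = torch.randn(16), torch.randn(4)
print(dcem_inference(dummy_f, y, s, sigma_fn=lambda t: 0.1).shape)
```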
### Mimetic continual learning
Comparing training and inference in Algorithm 1 and 2, we observe a gap due to the substitution of the ground truth \(\mathbf{x}_{0}\) with the predicted \(\hat{\mathbf{x}}_{0t+1}\) (in blue). Therefore, we introduce mimetic continual learning (MCL) as the second stage of training, which contains three different strategies to mimic the inference process. As shown in Algorithm 3, the first strategy mimics the first inference step from \(\mathbf{x}_{T}\) to \(\mathbf{x}_{T-1}\). The second builds on the first strategy by mimicking the prediction of \(\mathbf{x}_{t-1}\) given \(\mathbf{x}_{t}\). The third is consistent with the first training stage. In Algorithm 3, \(d\) measures the sum of SI-SDR between predicted speech and target speech and their L2 distance.
```
1:for epoch in 0...N do
2: Sample \(\mathbf{x}_{0},\mathbf{y}\), \(t\sim\mathcal{U}[0,1]\), \(\mathbf{z}\sim\mathcal{N}(0,\mathbf{I}),p\sim\mathcal{U}[0,100]\)
3:if\(p\in[0,\text{epoch}]\)then
4: Sample \(\mathbf{x}_{t}\sim\mathcal{N}(\mathbf{y},\sigma(t)^{2})\), Update \(\hat{\mathbf{x}}_{0t}\gets f_{\theta}\left(\mathbf{x}_{t},s\right)\)
5:elseif\(p\in[\text{epoch},\text{epoch}\times 2]\)then
6: Sample \(\mathbf{x}_{t}\sim\mathcal{N}(\mathbf{y},\sigma(t)^{2})\)
7: Update \(\hat{\mathbf{x}}_{0t}\gets f_{\theta}\left(\mathbf{x}_{t},s\right), \mathbf{x}_{t}^{\prime}\leftarrow\mu(\hat{\mathbf{x}}_{0t}^{\prime},\mathbf{y},t)+\sigma(t)^{2}\mathbf{z}\)
8: Update \(\hat{\mathbf{x}}_{0t}\gets f_{\theta}\left(\mathbf{x}_{t}^{\prime},s\right)\)
9:else
10: Update \(\mathbf{x}_{t}\leftarrow\mu(\mathbf{x}_{0},\mathbf{y},t)+\sigma(t)^{2}\mathbf{z}\)
11: Update \(\hat{\mathbf{x}}_{0t}\gets f_{\theta}\left(\mathbf{x}_{t},s\right), \lambda(t)\leftarrow\left(e^{t}-1\right)^{-1}\)
12:endif
13: Take gradient descent step on \(\nabla_{\theta}(\lambda(t)d(\hat{\mathbf{x}}_{0t},\mathbf{x}_{0}))\)
14:endfor
```
**Algorithm 3** Training stage 2
### Regeneration for discriminative model
To enable our model to not only perform TSE in an end-to-end mode but also to further optimize on the basis of pre-processed speech, we propose the Regenerate-DCEM (R-DCEM) inference strategy, which leverages DCEM to regenerate \(\mathbf{x}_{gen}=\text{DCEM}(\mathbf{x}_{dis},\mathbf{y},s)\), given the discriminative model's output \(\mathbf{x}_{dis}\). The new inference process is shown in Algorithm 4. Compared with the original inference process, we only perform the last \(N\) steps and replace the predicted \(\hat{\mathbf{x}}_{0}\) with \(\mathbf{x}_{dis}\), where \(N=2\) by default. R-DCEM can be used as a non-intrusive speech quality optimizer that can adapt to various discriminative models without any training.
```
1:\(\hat{\mathbf{x}}_{0T-N+1}=\mathbf{x}_{dis}\)
2:for\(t=T-N,...,0\)do
3: Sample \(\mathbf{z}\sim\mathcal{N}(0,\mathbf{I})\)
4: Update \(\mathbf{x}_{t}\leftarrow\mu(\hat{\mathbf{x}}_{0t+1},\mathbf{y},t)+\sigma(t)^{2} \mathbf{z},\hat{\mathbf{x}}_{0t}\gets f_{\theta}\left(\mathbf{x}_{t},s\right)\)
5:endfor
6:return\(\text{iSTFT}(\hat{\mathbf{x}}_{00})\)
```
**Algorithm 4** R-DCEM inference
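The last-\(N\)-steps regeneration can be sketched in the same style as the inference loop above (illustrative only; the time-grid handling and the placeholder names `f` and `sigma_fn` are assumptions of this sketch, not the exact implementation).

```
import torch

def r_dcem(f, x_dis, y, s, sigma_fn, T=10, N=2, gamma=1.5):
    # Regenerate-DCEM: start from the discriminative estimate x_dis and run only
    # the last N reverse steps of the ordinary inference loop.
    ts = torch.linspace(1.0, 0.0, T + 1)[T - N + 1:]   # the final N time points
    x0_hat = x_dis
    for t in ts:
        mu = torch.exp(-gamma * t) * x0_hat + (1 - torch.exp(-gamma * t)) * y
        x_t = mu + sigma_fn(t) * torch.randn_like(y)
        x0_hat = f(x_t, s)
    return x0_hat                                      # iSTFT applied downstream
```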
The network takes the target speaker embedding \(s\) as input and operates on both the real and imaginary parts of the complex spectrogram. We modify its residual block, incorporating a FiLM mechanism consisting of a single linear layer to perceive the target speaker embedding \(s\)[19]. We also concatenate \(s\) with the hidden feature within the U-Net, positioned before the self-attention layer.
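A minimal sketch of the FiLM-style speaker conditioning described above is shown below (illustrative only; the layer names, tensor shapes, and placement inside the residual block are assumptions, not the exact implementation).

```
import torch

class SpeakerFiLM(torch.nn.Module):
    # Speaker-conditioned FiLM: a single linear layer maps the embedding s to
    # per-channel scale and shift applied to the hidden feature map h.
    def __init__(self, emb_dim, channels):
        super().__init__()
        self.proj = torch.nn.Linear(emb_dim, 2 * channels)

    def forward(self, h, s):
        # h: (batch, channels, freq, time);  s: (batch, emb_dim)
        scale, shift = self.proj(s).chunk(2, dim=-1)
        return h * scale[:, :, None, None] + shift[:, :, None, None]

film = SpeakerFiLM(emb_dim=256, channels=64)
h, s = torch.randn(2, 64, 128, 100), torch.randn(2, 256)
print(film(h, s).shape)    # torch.Size([2, 64, 128, 100])
```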
## 4 Experimental Setup
**Data:** We use Libri2Mix in 16 kHz min mode [20] and choose the train-360 subset for training. mix_both and mix_single are used in the multi- and single-speaker scenarios respectively. We denote the first speaker in the mixture as the target speaker. During training, the enrollment speech is the ground-truth target speech; during inference, it is a different utterance of the target speaker. All data are transformed to the STFT domain using the same settings as [5].
**Settings:** We set the hyperparameters in Eqs. 2 and 3 to \(\gamma=1.5,\sigma_{min}=0.05,\sigma_{max}=0.5\). We use the Adam optimizer and an exponential moving average with a decay of 0.999. We train the first stage with a learning rate of 1e-4 for 500 epochs. The second training stage lasts 12 epochs, with the learning rate decreased to 5e-5. The total number of inference steps is 10, with timesteps linearly decreased from 1 to 0. In R-DCEM, we only perform the last 2 steps out of the total of 10. We select the best-performing model on 20 randomly chosen samples from the dev set for evaluation. In total, we train two models using our proposed method. The first model is trained on multi-speaker noisy data and tested in the multi-speaker noisy, multi-speaker clean, and single-speaker noisy scenarios. The second model is trained and tested on single-speaker noisy data.
**Baselines:** We compare our methods with discriminative and generative baselines. We reproduce DPCCN1[6] and DiffSep2[13] for the multi-speaker scenario; DiffSep is cascaded with a speaker verification model [21] to realize TSE. We reproduce DCCRN3[22] and SGMSE+4[5] for the single-speaker scenario. We use a ResNet34 model pre-trained on VoxCeleb2 both to extract speaker embeddings and as the speaker verification model applied after the separation model [21].
Footnote 1: [https://github.com/jyhan03/dpccn](https://github.com/jyhan03/dpccn)
Footnote 2: [https://github.com/fakufaku/diffusion-separation](https://github.com/fakufaku/diffusion-separation)
Footnote 3: [https://github.com/asteroid-team/asteroid](https://github.com/asteroid-team/asteroid)
**Evaluation metrics:** We evaluated both intrusive and non-intrusive metrics, i.e. with or without clean reference signal [23]. Intrusive metrics include wide-band Perceptual Evaluation of Speech Quality (PESQ) [24], Extended Short-Time Objective Intelligibility (ESTOI) [25], Scale-invariant(SI-) Signal-to-Distortion Ratio (SDR) and Signal-to-Artifact Ratio (SAR) [26]. OVRL, SIG, BAK, and DNSMOS [27] are non-intrusive metrics to assess speech quality without clean reference. All metrics are the higher the better.
## 5 Results and Analysis
### Multi-speaker scenario
We first investigate multi-speaker scenarios in both noisy and clean conditions. Table 1 compares discriminative model DPCCN [6], cascade diffusion-based separation and speaker verification model DiffSep+SV [13], our proposed DCEM and the R-DCEM methods in intrusive and non-intrusive metrics. D, G, D+G denote discriminative, generative, and the combination of discriminative and generative methods respectively. DCEM and R-DCEM share the same model and differ only in the inference process.
#### 5.1.1 Noisy Scenario
Comparing DCEM with the discriminative baseline DPCCN, we observe that our method surpasses DPCCN in OVRL, SIG, BAK, and DNSMOS, indicating high perceptual speech quality. However, even though the two are comparable in terms of ESTOI, DCEM is still lower than DPCCN in SI-SDR. We notice that DPCCN achieves an SI-SDR greater than 10 dB for 63% of the samples, compared to 51% for DCEM and 15% for DiffSep. For samples with SI-SDR less than -10 dB, the proportion is only 3.1% for DPCCN compared to 5.1% for DCEM.
On the other hand, DCEM outperforms DiffSep+SV significantly, which demonstrates the capability to address challenges arising from an unknown number of speakers and ambient noise. Notably, inference with R-DCEM using DPCCN predictions achieves the best performance among all models, affirming the effectiveness of our proposed methods.
#### 5.1.2 Clean scenario
Consistent with the results in the noisy scenario, higher non-intrusive metrics indicate that DCEM avoids speech distortion and enhances speech quality more effectively in the clean scenario than the discriminative DPCCN. Compared with DiffSep+SV, DCEM achieves
| Eval | Model | Type | PESQ | ESTOI | SI-SDR | SI-SAR | OVRL | SIG | BAK | DNSMOS |
|---|---|---|---|---|---|---|---|---|---|---|
| Noisy | Mixture | / | 1.08 | 0.40 | -2.0 | **42.4** | 1.63 | 2.33 | 1.66 | 2.71 |
| Noisy | DPCCN | D | 1.74 | 0.73 | 9.3 | 9.8 | 2.93 | 3.36 | 3.62 | 3.58 |
| Noisy | DiffSep+SV | G | 1.32 | 0.60 | 4.8 | 5.7 | 2.78 | 3.42 | 3.23 | 3.63 |
| Noisy | DCEM | G | 1.60 | 0.71 | 7.6 | 8.4 | **3.28** | **3.52** | **4.11** | 3.74 |
| Noisy | R-DCEM | D+G | **1.88** | **0.75** | **9.7** | 10.2 | 3.19 | 3.46 | 4.02 | **3.80** |
| Clean | Mixture | / | 1.15 | 0.54 | 0.0 | **43.7** | 2.65 | 3.38 | 3.10 | 3.41 |
| Clean | DPCCN | D | 2.22 | 0.83 | 13.1 | 13.6 | 3.05 | 3.43 | 3.77 | 3.73 |
| Clean | DiffSep+SV | G | 1.85 | 0.79 | 9.6 | 10.2 | 3.14 | 3.49 | 3.88 | 3.83 |
| Clean | DCEM | G | 1.79 | 0.78 | 9.9 | 11.0 | **3.30** | 3.53 | **4.14** | 3.79 |
| Clean | R-DCEM | D+G | **2.27** | **0.85** | **13.3** | 14.0 | 3.29 | **3.53** | 4.10 | **3.91** |

Table 1: Performance comparison in the multi-speaker scenario. All metrics are the higher the better.
Figure 1: Model architecture
similar performance, but in terms of SI-SDR distributions, DCEM has 73% of samples with SI-SDR over 10 dB, while DiffSep+SV has only 56%, as shown in Figure 2. Besides, DiffSep rarely generates samples with SI-SDR below -10 dB because of the speaker discriminatory nature of the speaker verification model. However, in DCEM, 5.5% of samples fall into this range, with 89% of them having the same gender, implying the need for further investigation into addressing complex scenarios like same-gender speaker confusion.
### Single-speaker scenario
In single-speaker scenarios, speech extraction can be performed without requiring additional enrollment speech. We directly extract speaker embedding \(s\) from noisy speech \(\mathbf{y}\) thanks to the speaker extractor's noise robustness, enabling broader application.
Table 2 shows that the proposed DCEM not only outperforms another diffusion-based model SGMSE+, but also further closes the gap with discriminative method DCCRN [22, 5]. Even in the mismatch scenario with multi-speaker training data and single-speaker testing, our model achieves results similar to the matching scenario, indicating generalization and robustness to unseen data and tasks. We can also employ enrollment speech, as in multi-speaker scenarios, but with marginal performance gain.
We use the output of the discriminative DCCRN as the starting point for R-DCEM speech regeneration. Table 2 shows that both models trained on multi- and single-speaker data lead to the best results. Complementary to the average results above, we present in Figure 3 violin plots of the distribution of DNSMOS scores between R-DCEM and its corresponding discriminative model (DPCCN for multiple speakers and DCCRN for single speakers). We observe consistent performance enhancement across various discriminative models and scenarios, implying the potential for R-DCEM to serve as a versatile plug-in to further ameliorate the performance of conventional approaches.
### Ablation study
Ablation studies on the multi-speaker clean scenario reveal the impact of the second stage of training (MCL) and the Ensemble strategy in Section 3. Table 3 indicates that disabling MCL leads to declines in both intrusive and non-intrusive assessments, emphasizing the significance of maintaining consistency between training and inference. Furthermore, MCL effectively mitigates speaker confusion with a relative reduction of 34% in the number of samples with SI-SDR below -10dB, thus improving the overall model performance.
We observe that the ensemble strategy is beneficial for intrusive metrics, but causes a slight decrease in non-intrusive speech quality. This suggests that averaging speech with diversity may introduce undesired distortion despite its ability to avoid incidental speech defects such as speaker confusion.
### Inference Speed
Typical diffusion models require numerous timesteps for optimal quality, a limitation unaddressed in previous work. Our DCEM significantly reduces inference time, enabling end-to-end target speech extraction in just 10 steps with a Real-Time-Factor (RTF) of 0.41, which is 4.3x and 2.2x faster than SGMSE and SGMSE+ [9]. Our DCEM also exhibits significantly higher inference speed compared with cascaded DiffSep+SV, which has a similar framework to SGMSE+ [13]. More practically, when given pre-processed speech of discriminative method, R-DCEM can be completed in just 2 steps resulting in an RTF of 0.095, which not only narrows the inference speed gap with traditional models but also ensures suitability for real-world applications.
## 6 Conclusion
We proposed the Diffusion Conditional Expectation Model (DCEM), an efficient and robust generative model for target speech extraction, and further extended it to Regenerate-DCEM (R-DCEM), which can serve as a generative optimizer for pre-existing models. Our experiments demonstrate consistent improvements in speech quality across various scenarios. These advancements not only bridge the inference speed gap with traditional models but also showcase robust generalization capabilities. In the future, we will further explore the potential of diffusion models by investigating conditioning mechanisms, model structures, and related design choices.
| Model | Type | PESQ | ESTOI | SI-SDR | OVRL | DNSMOS |
|---|---|---|---|---|---|---|
| Noisy speech | / | 1.16 | 0.56 | 3.5 | 1.75 | 2.63 |
| DCCRN | D | 2.03 | 0.81 | 13.3 | 2.98 | 3.64 |
| SGMSE+ | G | 1.99 | 0.82 | 11.1 | 3.12 | 3.60 |
| DCEM | G | 2.03 | **0.83** | 12.6 | **3.33** | **3.84** |
| R-DCEM | D+G | **2.24** | **0.83** | **13.7** | 3.15 | 3.77 |
| DCEM\({}^{\dagger}\) | G | 2.01 | 0.82 | 12.2 | 3.25 | 3.75 |
| R-DCEM\({}^{\dagger}\) | D+G | 2.20 | **0.83** | **13.7** | 3.18 | 3.80 |

\({}^{\dagger}\) This model is trained on multi-speaker data.

Table 2: Performance comparison in the single-speaker noisy scenario
Figure 3: DNSMOS comparison between R-DCEM and corresponding discriminative model. The higher the better.
| Model | PESQ | ESTOI | SI-SDR | OVRL | DNSMOS |
|---|---|---|---|---|---|
| DCEM | **1.79** | **0.78** | **9.9** | 3.30 | 3.79 |
| \(-\)MCL | 1.77 | 0.75 | 8.8 | 3.24 | 3.71 |
| \(-\)Ensemble | 1.77 | 0.76 | 8.9 | **3.35** | **3.84** |
| \(-\)MCL, Ensemble | 1.76 | 0.73 | 7.6 | 3.29 | 3.75 |

Table 3: Ablation study in the multi-speaker clean scenario
Figure 2: SI-SDR distribution of DCEM and DiffSep+SV in multi-speakers clean scenario with gender annotation. |
2305.19726 | Learning Representations without Compositional Assumptions | This paper addresses unsupervised representation learning on tabular data
containing multiple views generated by distinct sources of measurement.
Traditional methods, which tackle this problem using the multi-view framework,
are constrained by predefined assumptions that assume feature sets share the
same information and representations should learn globally shared factors.
However, this assumption is not always valid for real-world tabular datasets
with complex dependencies between feature sets, resulting in localized
information that is harder to learn. To overcome this limitation, we propose a
data-driven approach that learns feature set dependencies by representing
feature sets as graph nodes and their relationships as learnable edges.
Furthermore, we introduce LEGATO, a novel hierarchical graph autoencoder that
learns a smaller, latent graph to aggregate information from multiple views
dynamically. This approach results in latent graph components that specialize
in capturing localized information from different regions of the input, leading
to superior downstream performance. | Tennison Liu, Jeroen Berrevoets, Zhaozhi Qian, Mihaela van der Schaar | 2023-05-31T10:36:10Z | http://arxiv.org/abs/2305.19726v1 | # Learning Representations without Compositional Assumptions
###### Abstract
This paper addresses unsupervised representation learning on tabular data containing multiple views generated by distinct sources of measurement. Traditional methods, which tackle this problem using the multi-view framework, are constrained by predefined assumptions that assume feature sets share the same information and representations should learn globally shared factors. However, this assumption is not always valid for real-world tabular datasets with complex dependencies between feature sets, resulting in localized information that is harder to learn. To overcome this limitation, we propose a data-driven approach that learns feature set dependencies by representing feature sets as graph nodes and their relationships as learnable edges. Furthermore, we introduce LEGATO, a novel hierarchical graph autoencoder that learns a smaller, latent graph to aggregate information from multiple views dynamically. This approach results in latent graph components that specialize in capturing localized information from different regions of the input, leading to superior downstream performance.
Machine Learning, ICML
## 1 Introduction
Tabular datasets encountered in the real world often contain distinct feature sets, or views, that originate from different sources of measurement. For instance, the UK Biobank (Bycroft et al., 2018) contains measurements of sociodemographic factors, heart and lung function, genomic data, and electronic health records, each providing information on a different aspect of a patient's medical state, but also dependent on one another to form a holistic medical context.
While different feature sets can be consolidated into a single table, doing so can result in suboptimal learning performance due to heterogeneity among feature sets and the loss of valuable relational information. A common approach is then _multi-view learning_(Xu et al., 2013), which examines each feature set separately and integrates information from multiple views to learn representations. This task can be difficult, particularly when labels for supervision are not available, which can help disambiguate the dependencies between views and task-relevant information. In the unsupervised learning setting, models rely on data assumptions and inductive biases to learn good representations automatically (Locatello et al., 2019).
Existing multi-view learning methods often rely on _compositional assumptions_, which assume that information is distributed and should be aggregated in predetermined patterns. The classic multi-view inductive bias assumes that views provide similar task-relevant information (Yan et al., 2021), guiding how information is aggregated, with the goal of learning robust and generalized representations that are invariant across views (Federici et al., 2019). These assumptions have been widely used in image, text, and speech domains, such as audio-visual speech recognition (Huang and Kingsbury, 2013) and image-caption models (Radford et al., 2021), where the settings are more controlled and the number of views is limited. In these domains, systematic and aligned data collection ensures maximal information overlap between feature sets, making inter-view relationships and information aggregation strategies known in advance.
However, these assumptions may not hold for tabular multi-view data, especially those collected _in-the-wild_(ITW), where relationships between feature sets are significantly more opaque. Examples of this include electronic health records (Johnson et al., 2023), biobanks (Nagai et al., 2017), and stock market data (Xu and Cohen, 2018). In these datasets, information is more likely to exist in localized clusters of views in unknown patterns, rather than being globally present in all views (Xu et al., 2013). This is particularly true when dealing with tabular problems that typically have more than two feature sets. Our findings indicate that compositional assumptions are inadequate when learning on tabular data collected in-the-wild, failing to capture localized information in representations.
To overcome this challenge, we propose a method to model relationships between feature sets and dynamically aggre
gate potentially localized information. We represent feature sets as graph nodes and their relationships as learnable edges. Furthermore, we introduce the Latent Graph \(\Delta\)uToencoder (LEGATO), a novel graph neural network that learns a smaller, latent graph. This architecture innovates on existing autoencoder architectures that learn compact node embeddings, but do so on the same topology as the input graph. Our method learns a smaller _graph_, which is crucial, as it allows for end-to-end learning of information aggregation strategies without relying on predefined assumptions. We term the latent graph a _decomposable representation_ to emphasize that, by design, it can be decomposed into node representations that specialize in aggregating information from different regions of the input. We evaluate the effectiveness of our method by testing its ability to transfer to downstream tasks, as a good representation should facilitate subsequent problem-solving.
**Contributions. 1.** We identify the challenges associated with learning representations from heterogeneous tabular feature sets collected in real-world settings and showcase the limitations of existing unsupervised learning methods that heavily rely on predefined compositional assumptions. **2.** Instead of relying on predefined assumptions, we propose a novel approach that treats feature sets as graphs to capture dependencies, which to the best of our knowledge, is a novel way to represent multi-view data. **3.** We introduce LEGATO, a novel graph autoencoder architecture that learns a smaller latent graph. This smaller graph induces a decomposable representation by dynamically aggregating localized information in a hierarchical manner. We conduct simulation studies to demonstrate the effectiveness of our model in learning data-driven aggregation strategies. Moreover, we showcase the superior downstream performance of our method on multiple real-world datasets.
## 2 Problem Definition
### Notation
In this paper, we use the terms "feature sets" and "views" interchangeably. We consider \(K\) different feature sets, depicting one instance \(X=\{X^{k}:k\in[K]\}\). Each \(X^{k}\) is sampled from a space \(\mathcal{X}^{k}\subseteq\mathbb{R}^{d^{k}}\), and \(\mathcal{X}=\mathcal{X}^{1}\times\cdots\times\mathcal{X}^{K}\). With \(X\) the random variable, we have \(x=\{x^{k}:k\in[K]\}\) as its realization. For each \(x^{k}\), we have a \(d\)-dimensional view embedding \(h^{k}\in\mathcal{H}^{k}\subseteq\mathbb{R}^{d}\) produced using an encoder function \(g^{k}:\mathcal{X}^{k}\rightarrow\mathcal{H}^{k}\).\({}^{1}\) Correspondingly, \(f^{k}:\mathcal{H}^{k}\rightarrow\mathcal{X}^{k}\) denotes the view decoder function. We are agnostic to the exact architecture of \(g^{k}(\cdot)\) and \(f^{k}(\cdot)\) for generality. We have access to a dataset \(\mathcal{D}=\{x_{i}\}_{i=1}^{N}\), with \(N\) iid samples. We use superscript to indicate the view and subscript for the sample, such that \(x_{i}^{k}\) is the \(k^{th}\) view of the \(i^{th}\) sample. When the context is clear, we may drop the subscript to declutter exposition.
Footnote 1: We assume the embedding dimension is \(d\) for all views for notation convenience, but this restriction is not necessary.
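Concretely, the per-view encoders \(g^{k}\) can be instantiated as small MLPs; the following sketch (the widths and activations are illustrative choices, since we are agnostic to the exact architecture) produces the view embeddings that later form the node feature matrix.

```python
import torch
import torch.nn as nn

class ViewEncoders(nn.Module):
    """Independent encoders g^k, one per feature set, mapping R^{d_k} -> R^d."""

    def __init__(self, view_dims, d: int = 32):
        super().__init__()
        self.encoders = nn.ModuleList(
            nn.Sequential(nn.Linear(dk, 64), nn.ReLU(), nn.Linear(64, d))
            for dk in view_dims
        )

    def forward(self, views):
        # views[k]: [batch, d_k]  ->  H0: [batch, K, d] node feature matrix
        return torch.stack([g(x) for g, x in zip(self.encoders, views)], dim=1)

# K = 4 feature sets of different widths
# enc = ViewEncoders([100, 20, 50, 8], d=32)
# H0 = enc([torch.randn(16, dk) for dk in [100, 20, 50, 8]])   # [16, 4, 32]
```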
### Challenges of Learning In-The-Wild
**Compositional assumptions.** Compositional assumptions are two-fold: they reflect beliefs on how information is shared between feature sets, and how information should be aggregated in a representation. The multi-view assumption is the predominant compositional assumption made in existing works--it posits that important information co-occurs in all available views, leading to a focus on maximizing mutual information among multiple view representations (Federici
Figure 1: **Dynamically aggregating information**. Solid lines represent information sharing and dashed lines represent aggregation. Existing methods (1a) assume views share the same information and aggregate information globally. In comparison, our method (1b) learns dependencies and aggregation strategy in a data-driven manner.
Figure 2: **Effect of view correlation on learning (K=6). When views are globally correlated, higher correlation improves performance for all models. When local correlation increases, the performance of existing methods deteriorates as they fail to learn localized information.**
et al., 2019). This approach has been successfully applied in many domains, especially image, text, and speech, where it is known apriori (e.g. through careful data collection) that the semantically meaningful variations exist in all signals (Vrigkas et al., 2015; Radford et al., 2021). By learning shared information, these methods improve the robustness and generalization of multi-view representations. More recently, methods have also considered the possibility that each view may contain unique information not present in other views (Xu et al., 2013), with the aim of retaining both view-specific and globally shared information.
**Multi-view data collected in-the-wild.** Tabular feature sets collected in-the-wild (ITW) present a different challenge, as information is rarely presented in known patterns across different views. We argue that tabular multi-view data found in the real world are characterized by two main features: \(\blacktriangleright\)**Localized information** - where different sources of information are concentrated in localized subsets of views, as opposed to the globally shared information assumed by existing methods, and \(\blacktriangleright\)**A larger number of views** - resulting in more complex dependencies and localized clusters of information. We provide further discussions and detailed case studies on these feature sets and their characteristics in Appendix A.
These characteristics make the representation learning task more challenging. Existing methods use the multi-view inductive bias to infer a common representation \(z\) (and optionally a set of view-specific representations \(\{z_{i}\}_{i=1}^{K}\)), leading to a global aggregation of information, as visualized in Figure 1(a). However, these assumptions are inadequate to address problems ITW, which contain localized information that manifests in unknown ways. Additionally, the large number of possible view combinations (combinatorial in \(K\)) makes it infeasible to explicitly consider different local aggregation patterns. Our method, as depicted in Figure 1(b), addresses this challenge by proposing a novel approach to dynamically learn dependencies and aggregate information, without relying on predefined assumptions.
**Learning challenges.** Given the learning capacity and expressiveness of modern neural networks, it is natural to wonder whether incorrectly specified compositional assumptions are truly detrimental in practice. While representations may be biased, one might hope that they can still implicitly learn all localized sources of information. We empirically show that this is not the case in a simulation study (described in Section 5.1), where the downstream task is to predict the latent variables that generated each view. Existing methods perform better as the global correlation between latent variables increases (Figure 2(a)). This is intuitive because views contain more information about latent variables in other views, which can be effectively learned using the multi-view inductive bias. However, when latent variables are only locally correlated (Figure 2(b)), increased correlation does not lead to improved performance. This is because higher correlation only provides locally useful information, which is overlooked when incorrect compositional assumptions are used.
## 3 Proposed Method
We propose a framework for _learning_ information aggregation patterns from data without predefined compositional assumptions. This requires accounting for localized information sharing between views, which can be naturally represented using graphs. Our method makes two contributions: first, we learn the view dependencies as edges in a graph, which, to the best of our knowledge, is a novel way to represent multi-view data. Second, we introduce LEGATO, a novel graph autoencoder that learns a smaller latent graph. The latent graph produces a decomposable representation that aggregates localized information. To complete the autoencoder, the latent graph is unpooled to reconstruct each view individually, with the hierarchical process trained end-to-end. Our proposed method is illustrated in Figure 3.
### Learning the Multi-view Graph
We define an initial graph on view embeddings, where nodes represent views and edges represent the inter-view relationships, i.e. \(G^{(0)}\coloneqq\left(H^{(0)},A^{(0)}\right)\). \(A^{(0)}\in[0,1]^{K\times K}\) is the adjacency matrix between \(K\) views and \(H^{(0)}\in\mathbb{R}^{K\times d}\) is the node feature matrix, where the \(k^{th}\) row is the view embedding \(h^{k}\). We are agnostic to the view encoder-decoder architecture and first obtain view embeddings \(h^{k}=g^{k}(x^{k})\) independently for each view \(k\in[K]\).
The adjacency matrix \(A^{(0)}\) is rarely known. In the most general setting, every node can be connected to every other node, ignoring localized structure. This reflects the multi-view inductive bias, which assumes that each view shares the same information with all other views (as shown in Figure 1(a)). Clearly, this is not the case ITW, as certain views will only share information locally with other views.
We propose to learn the localized graph structure. Specifically, \(\texttt{GRAPHLEARNER}:\mathbb{R}^{K\times d}\rightarrow[0,1]^{K\times K}\), which takes as input the view embeddings and returns the adjacency matrix. We first apply a non-linear transformation to each view embedding:
\[e_{i}=\texttt{LeakyReLU}\left(W[h_{i}\|1_{i}]\right) \tag{1}\]
where \(W\in\mathbb{R}^{d^{\prime}\times f}\) applies a linear transformation, followed by a \(\texttt{LeakyReLU}(\cdot)\) activation. We encode view information for view \(i\) through the concatenation operation \(\|\) of \(h_{i}\) and the one-hot encoding \(1_{i}\) to obtain a \(d^{\prime}\)-dimensional input. Then, we compute the inner product between views, normalized by the sigmoid function \(\sigma(\cdot)\) :
\[A^{(0)}_{ij}=\sigma(e_{i}^{T}\cdot e_{j}) \tag{2}\]
The normalized coefficients take values \(\in[0,1]\) to represent the dependence between views. Note that the mechanism is invariant to the ordering of inputs and that \(A^{(0)}\) is a symmetric matrix. We additionally apply a threshold function to \(A^{(0)}\), where entries \(<0.1\) are considered uninformative and zeroed out. As we want informative local neighbors to be found, we add a regularization term \(\mathcal{L}_{spar}=\frac{1}{NK^{2}}\sum_{i=1}^{N}\lVert A_{i}^{(0)}\rVert_{1}\), where \(\lVert\cdot\rVert_{1}\) denotes the \(p=1\) matrix norm. This term encourages sparsity in the adjacency matrix and reduces the learning of spurious dependencies between views (e.g. by learning a fully-connected graph).
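A minimal PyTorch sketch of this GRAPHLEARNER (Equations 1 and 2, together with the threshold and the sparsity penalty) is given below; the hidden width \(d^{\prime}\), the LeakyReLU slope, and the batching conventions are illustrative assumptions rather than the exact implementation.

```python
import torch
import torch.nn as nn

class GraphLearner(nn.Module):
    """Learns a symmetric adjacency matrix in [0,1]^{KxK} from view embeddings."""

    def __init__(self, num_views: int, d: int, d_prime: int = 32, threshold: float = 0.1):
        super().__init__()
        self.K, self.threshold = num_views, threshold
        self.W = nn.Linear(d + num_views, d_prime, bias=False)  # acts on [h_i || one_hot(i)]
        self.act = nn.LeakyReLU(0.2)

    def forward(self, H0: torch.Tensor) -> torch.Tensor:
        # H0: [batch, K, d]
        one_hot = torch.eye(self.K, device=H0.device).expand(H0.size(0), -1, -1)
        e = self.act(self.W(torch.cat([H0, one_hot], dim=-1)))          # Eq. (1)
        A = torch.sigmoid(torch.bmm(e, e.transpose(1, 2)))              # Eq. (2), symmetric
        return torch.where(A < self.threshold, torch.zeros_like(A), A)  # drop weak edges

def sparsity_loss(A: torch.Tensor) -> torch.Tensor:
    """L_spar: entrywise L1 norm of A, averaged over the batch and normalised by K^2."""
    return A.abs().sum(dim=(1, 2)).mean() / (A.size(1) ** 2)
```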
**A distinction.** We emphasize that our goal is not _relational inference_, which seeks to infer relationships between views from observation data (Kipf et al., 2018; Hajiramezanali et al., 2020; Hasanzadeh et al., 2021). In this problem, a correctly recovered relational structure _is_ the object of inference. This stands in stark contrast to our work, where a partially correct structure is satisfactory, as our main purpose is to aggregate information while considering local dependencies. As we shall show later, even learning a partially correct structure can greatly improve the learned representations.
### LEGATO: Latent Graph Autoencoder
After learning an initial adjacency matrix, the next step is to aggregate information shared between views in a latent graph. To do this, we leverage the intuition that views with similar information should be aggregated together. We introduce LEGATO, a hierarchical procedure that learns a latent graph while pooling essential information (Cai et al., 2021). In more detail, we transform \(G^{(0)}\) through a pooling step to obtain a latent graph \(G^{(z)}\). This transformation pools information shared between views, so that each latent node aggregates localized information. Next, we take an unpooling step to reconstruct the graph \(\hat{G}^{(0)}\), and the entire hierarchical model is trained end-to-end as an autoencoder.
We use _graph neural networks_ (GNN) to learn the latent graph representation (Gilmer et al., 2017; Zhou et al., 2020). However, existing graph autoencoders are unsuitable for our purposes. The latent graphs in existing works learn compact node embeddings on the same graphical structure as the input graph, where similarity objectives are used to encourage embeddings of topologically connected nodes to be more similar (Kipf & Welling, 2016; Simonovsky and Komodakis, 2018). In contrast, the latent graph learned in LEGATO is a smaller, pooled graph that aggregates information from input views with stronger dependencies. We provide an overview of GNN methods and elaborate on related graph autoencoder methods in Appendix B.
**Graph pooling.** The latent graph \(G^{(z)}\coloneqq(H^{(z)},A^{(z)})\) is a pooled graph with \(K^{\prime}<K\) nodes. Here, \(A^{(z)}\in[0,1]^{K^{\prime}\times K^{\prime}}\) and \(H^{(z)}\in\mathbb{R}^{K^{\prime}\times r}\), where each row is a \(r\)-dimensional latent node embedding. We propose a graph pooling operation \((H^{(z)},A^{(z)})=\text{POOL}(H^{(0)},A^{(0)})\) by adapting the DiffPool algorithm (Ying et al., 2018). In our experiments, we set \(K^{\prime}=K/2\), which was found to be a robust setting. Additionally, we note that by setting \(K^{\prime}=1\), we can perform global aggregation, similar to existing methods.
**Pooling strategy.** The pooling strategy is learned through a separate network that considers localized dependencies and view embeddings. This is different from traditional compositional assumptions that predefine the pattern of aggregation. Specifically, we learn a pooling matrix \(P\in[0,1]^{K\times K^{\prime}}\) in an input-dependent way by considering both view embeddings in \(H^{(0)}\) and view dependencies in \(A^{(0)}\). Intuitively, views that are dependent on each other likely contain similar information and should be aggregated together. To operationalize this insight, we learn the pooling matrix through a GNN:
\[P=\texttt{softmax}\left(\text{GNN}_{pool}(A^{(0)},H^{(0)})\right) \tag{3}\]
The \(\texttt{softmax}(\cdot)\) is applied in a _row_-wise fashion. Consequently, \(P\) indicates how information should be aggregated, where \(P_{ij}\) describes the contribution of the \(i^{th}\) view in the multi-view graph to the \(j^{th}\) node in the latent graph.
Figure 3: **High-level illustration of LEGATO.** The latent graph dynamically pools information by considering both view embeddings and dependencies. The latent graph returns a decomposable representation for downstream tasks.
**Latent embeddings.** We employ a separate GNN to update view embeddings using neighboring views' embeddings through message passing. This network produces \(Z\in\mathbb{R}^{K\times r}\), where each row now contains the updated \(r\)-dimensional embedding for each view:
\[Z=\text{GNN}_{embed}(A^{(0)},H^{(0)}) \tag{4}\]
By combining the pooling matrix and the transformed embeddings in Equations (3) and (4), we can now define the complete POOL operation. Mathematically, we can obtain the latent graph using the following equations:
\[A^{(z)} =P^{T}A^{(0)}P\in\mathbb{R}^{K^{\prime}\times K^{\prime}} \tag{5}\] \[H^{(z)} =P^{T}Z\in\mathbb{R}^{K^{\prime}\times r} \tag{6}\]
As in Equation (6), the latent embeddings are constructed through a weighted combination of transformed view embeddings using the pooling strategy in \(P\). This reflects the intuition that if a latent node pools information from a set of views, then its embedding should be constructed from those views. Correspondingly, the latent adjacency matrix Equation (5) considers existing connectivity strength in \(A^{(0)}\) and \(P\) to compute a weighted sum of edges between neighboring nodes.
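The POOL operation of Equations (3)-(6) can be sketched as follows, with simple dense message-passing layers standing in for \(\text{GNN}_{pool}\) and \(\text{GNN}_{embed}\); the single-layer propagation and the row normalisation are simplifying assumptions made for brevity.

```python
import torch
import torch.nn as nn

class DenseGCNLayer(nn.Module):
    """One dense message-passing layer on a weighted adjacency matrix."""

    def __init__(self, in_dim: int, out_dim: int):
        super().__init__()
        self.lin = nn.Linear(in_dim, out_dim)

    def forward(self, A: torch.Tensor, H: torch.Tensor) -> torch.Tensor:
        A_hat = A + torch.eye(A.size(-1), device=A.device)                # add self-loops
        A_hat = A_hat / A_hat.sum(dim=-1, keepdim=True).clamp(min=1e-6)   # row-normalise
        return torch.relu(self.lin(torch.bmm(A_hat, H)))

class Pool(nn.Module):
    """DiffPool-style pooling of the K-node multi-view graph into K' latent nodes."""

    def __init__(self, d: int, r: int, k_latent: int):
        super().__init__()
        self.gnn_pool = DenseGCNLayer(d, k_latent)   # assignment logits, Eq. (3)
        self.gnn_embed = DenseGCNLayer(d, r)         # updated embeddings, Eq. (4)

    def forward(self, A0: torch.Tensor, H0: torch.Tensor):
        P = torch.softmax(self.gnn_pool(A0, H0), dim=-1)   # [batch, K, K'], row-wise softmax
        Z = self.gnn_embed(A0, H0)                          # [batch, K, r]
        Az = P.transpose(1, 2) @ A0 @ P                     # Eq. (5): P^T A P
        Hz = P.transpose(1, 2) @ Z                          # Eq. (6): P^T Z
        return Az, Hz, P

# With K = 6 views and K' = 3 latent nodes: Az is [batch, 3, 3] and Hz is [batch, 3, r].
# A mean over the latent nodes, z = Hz.mean(dim=1), gives the vector readout used downstream.
```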
**Orthogonality loss.** In practice, it can be difficult to train the pooling function \(\text{GNN}_{pool}\) using only gradient signal from an unsupervised loss. Instinctively, the function can learn a degenerate assignment where information is evenly pooled in the latent nodes, akin to the degeneracy of clustering (Alguwaizani, 2012). This would achieve the opposite of our desired objective, as we want latent nodes to aggregate different localized information. To alleviate this issue, we introduce an orthogonality regularization:
\[\mathcal{L}_{orth}=\frac{1}{N}\sum_{i=1}^{N}\frac{1}{C}\sum_{k=2}^{K^{\prime} }\sum_{j=1}^{k-1}\left\|\rho\left(h_{i}^{k},h_{i}^{j}\right)\right\|_{1} \tag{7}\]
where \(C=\frac{K^{\prime}\cdot(K^{\prime}-1)}{2}\) is the number of pairwise correlations and \(\rho(\cdot,\cdot)\) is calculated using cosine similarity. This term encourages orthogonality in the embeddings by de-correlating them. This encodes the intuition of decomposable representations, that each component should specialize in aggregating information from different local regions of the input, resulting in better representations for downstream tasks (Mathieu et al., 2019).
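Equation (7) translates directly into a short routine on the latent node embeddings, here assumed to be batched as \([\text{batch},K^{\prime},r]\) with \(K^{\prime}\geq 2\).

```python
import torch
import torch.nn.functional as F

def orthogonality_loss(Hz: torch.Tensor) -> torch.Tensor:
    """L_orth: mean absolute pairwise cosine similarity between the K' latent
    node embeddings of each sample, averaged over the batch."""
    Hn = F.normalize(Hz, dim=-1)                          # unit-norm embeddings
    sim = torch.bmm(Hn, Hn.transpose(1, 2)).abs()         # [batch, K', K'] |cosine|
    kp = Hz.size(1)
    mask = torch.triu(torch.ones(kp, kp, device=Hz.device), diagonal=1)  # strict upper triangle
    return (sim * mask).sum(dim=(1, 2)).mean() / (kp * (kp - 1) / 2)

# Hz = torch.randn(16, 3, 8); loss = orthogonality_loss(Hz)
```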
### Completing the Graph Autoencoder
**Graph unpooling.** The unpooling step decodes the original multi-view input from the pooled latent graph. We define the unpooling step \((\hat{H}^{(0)},\hat{A}^{(0)})=\texttt{UNPOOL}(A^{(z)},H^{(z)})\), where, \(\hat{H}^{(0)}\) and \(\hat{A}^{(0)}\) have the same dimensions as the input multi-view graph. Unpooling is mathematically identical to the pooling steps described in Equations (3) to (6). The intuition is also similar, in that the input graph is reconstructed based on a weighted combination of adjacency patterns and embeddings of the latent nodes. After the unpooling step, the node embeddings are passed through the corresponding view-specific decoders to reconstruct the views \(\hat{x}=\{\hat{x}^{k}\;:\;k\in[K]\}\).
**Training.** It is worth mentioning that multiple pooling and unpooling steps can be stacked, leading to the network gradually operating on more compressed latent graphs. For training the hierarchical model, we specify a reconstruction loss defined on the multi-view graphs:
\[\begin{split}\mathcal{L}_{recon}=&\frac{1}{NK}\sum _{i=1}^{N}\sum_{k=1}^{K}\lVert x_{i}^{k}-\hat{x}_{i}^{k}\rVert_{2}^{2}\\ &+\frac{1}{N}\sum_{i=1}^{N}\lVert A^{(0)}-\hat{A}^{(0)}\rVert_{2 }^{2}\end{split} \tag{8}\]
where the first term is a loss on reconstructed node embeddings and the second term is a loss on the recovered graph structure, together forming the graph reconstruction loss. This loss is combined with the regularization terms to form the training objective: \(\mathcal{L}_{recon}+\alpha\mathcal{L}_{orth}+\beta\mathcal{L}_{spar}\), where \(\mathcal{L}_{spar}\) regularizes the sparsity of the learned multi-view graph to reduce learning of spurious dependencies between views and \(\mathcal{L}_{orth}\) is an orthogonality regularization that decorrelates latent node embeddings. \(\alpha\) and \(\beta\) are the corresponding weighting terms for the two regularization terms. This expression and the hierarchical procedure are fully differentiable and can be trained end-to-end using auto-grad techniques.
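A sketch of how the objective is assembled is given below; `x_hat` and `A0_hat` stand for the unpooled reconstructions produced by the UNPOOL step and the view decoders, `orthogonality_loss` and `sparsity_loss` refer to the helper sketches above, and \(\alpha\), \(\beta\) are tuning weights.

```python
import torch

def reconstruction_loss(x, x_hat, A0, A0_hat):
    """Graph reconstruction term of Eq. (8): view features plus adjacency.

    x, x_hat : lists of K view tensors [batch, d_k] and their reconstructions
    A0, A0_hat : [batch, K, K] learned and reconstructed adjacency matrices
    """
    K = len(x)
    feat = sum(((xk - xh) ** 2).sum(dim=-1).mean() for xk, xh in zip(x, x_hat)) / K
    adj = ((A0 - A0_hat) ** 2).sum(dim=(1, 2)).mean()
    return feat + adj

# total objective, trained end-to-end with auto-grad:
# loss = reconstruction_loss(x, x_hat, A0, A0_hat) \
#        + alpha * orthogonality_loss(Hz) + beta * sparsity_loss(A0)
# loss.backward(); optimizer.step()
```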
### Latent Graph and Decomposable Representations
Existing unsupervised algorithms learn a latent representation that integrates different sources of information shared between views. However, this often results in representations that entangle localized information and are difficult to differentiate for downstream models. Our learned latent graph is decomposable and is expected to better preserve information and make it more amenable for downstream tasks (Lipton, 2018).
**Decomposable representations.** We claim that the latent graph is decomposable, as nodes act as specialized components that extract localized information from different regions in the input, and are encouraged to be orthogonal through \(\mathcal{L}_{orth}\). To make the representation more suitable for downstream models, we include an additional readout step that converts the latent graph into a vector \(\texttt{READOUT}:\mathbb{R}^{K^{\prime}\times r}\times[0,1]^{K^{\prime}\times K ^{\prime}}\rightarrow\mathbb{R}^{r}\). We use mean pooling to aggregate the node embeddings \(z=\frac{1}{K^{\prime}}\sum_{k=1}^{K^{\prime}}h^{k}\) and produce a vector representation that is composed of orthogonal components. Future works can consider more advanced readout strategies, including those
that take into account graph topology (Buterez et al., 2022).
Our approach can be informally compared to convolutional networks that extract localized information from natural images, which contain features in localized patches (LeCun et al., 2010). Importantly, pixels are related in a grid-like pattern and convolutional networks exploit this structure to learn and pool localized information. In our case, the relationships between views are not known a priori. Instead of making predefined assumptions, we model multi-view data as graphs and learn localized dependencies as edge weights. Subsequently, our graph autoencoder facilitates locality in information aggregation to compose representations.
## 4 Related Works
This work proposes a novel graph autoencoder for unsupervised representation learning on tabular multi-view data collected ITW. As such, there are two lines of related works: multi-view learning methods and GNN architectures.
**Multi-view learning.** Many existing methods assume that good bits of information co-occur in multiple views, and aim to extract globally present information. Figure 0(a) depicts the generative view of this assumption. One predominant approach is to obtain a _joint representation_ by integrating view representations onto the same latent space \(z=f\left(g^{1}(x^{1}),\dots,g^{k}(x^{k})\right)\). Ngiam et al. (2011) leveraged stacked autoencoders to obtain joint representation, whereas Srivastava & Salakhutdinov (2012, 2012) used probabilistic graphical models to infer \(z\). More recent works have used variational autoencoders (VAE) (Kingma & Welling, 2013). Suzuki et al. (2016) introduced a joint encoder structure to learn joint representations, whereas Wu & Goodman (2018) and Shi et al. (2019) proposed to combine view representations into a joint representation using product-of-experts (PoE) and mixture-of-experts (MoE) respectively.
Another approach learns _coordinated representations_ by placing regularization \(\phi(\cdot)\) on the correlation structure between representations to create a coordinated latent space, i.e. \(\operatorname*{arg\,max}_{h_{1:K}}\phi(h_{1:K})\). Prominent methods are based on canonical correlation analysis (CCA), which learns a common space where the linear canonical correlation between two views is maximized (Hardoon et al., 2004). Subsequent works have introduced non-linear extensions (Akaho, 2006; Andrew et al., 2013; Wang et al., 2015). These methods rely heavily on pair-wise coordination and cannot efficiently scale to more views. To address this, Benton et al. (2017) generalized CCA-style analysis to more than two views. Recent works have also adopted _self-supervised learning_ (SSL) objectives, which roughly maximize the mutual information between paired views. Federici et al. (2019) employs a mutual information bottleneck (MIB) to only retain mutual information between views. CLIP (Radford et al., 2021) contrastively maximizes (minimizes) cosine similarity of paired (unpaired) image-text samples.
The training objectives in CCA and SSL-based methods explicitly encourage learning of a view-invariant representation. A similar effect is implicit in joint representation methods, which can discard localized variations in shared representation spaces (Daunhawer et al., 2021; Wolff et al., 2022). When employed ITW, this bias towards global information can lead to fine-grained localized information being overlooked. Recent methods have additionally sought to preserve view-specific information (the generative model view of this assumption is presented in Figure 0(b)). MFM (Tsai et al., 2018) factorizes \(z\) into view-specific factors and shared factors but requires label information. Perhaps most similar to our work, Ye et al. (2016) and DMVAE (Lee & Pavlovic, 2021) aim for decomposable representations by explicitly separating shared and view-specific factors. However, both methods still rely on assumptions of global information and additionally, target information that manifests privately in each view. Our work does not require compositional assumptions and is capable of learning appropriate aggregation by accounting for inter-view relationships. We compare representative works in Table 1.
**GNN.** Graph autoencoders map graphs into a representation space to subsequently decode graph information from latent representations. Wang et al. (2016); Simonovsky & Komodakis (2018) embeds a graph into a continuous representation \(z\in\mathbb{R}^{r}\) to ensure topologically close nodes have similar representations. You et al. (2018) focuses on graph
| Method | Objective | Asm | (1) | (2) | (3) |
|---|---|---|---|---|---|
| Suzuki et al. (2016) | Recon | fig 0(a) | ✓ | ✗ | ✗ |
| Wu & Goodman (2018) | Recon | fig 0(a) | ✓ | ✗ | ✗ |
| Zhang et al. (2019) | Recon | fig 0(b) | ✓ | ✗ | ✗ |
| Lee & Pavlovic (2021) | Recon | fig 0(b) | ✓ | ✓ | ✗ |
| Andrew et al. (2013) | CCA | fig 0(a) | ✗ | ✗ | ✗ |
| Wang et al. (2015) | CCA | fig 0(a) | ✗ | ✓ | ✗ |
| Benton et al. (2017) | CCA | fig 0(a) | ✓ | ✗ | ✗ |
| Federici et al. (2019) | MIB | fig 0(a) | ✗ | ✗ | ✗ |
| Radford et al. (2021) | Contrastive | fig 0(a) | ✗ | ✗ | ✗ |
| Tian et al. (2020) | Contrastive | fig 0(a) | ✓ | ✗ | ✗ |
| **LEGATO** | **Recon** | **NA** | ✓ | ✓ | ✗ |

Table 1: **Related works.** Comparison of representative _unsupervised multi-view learning methods_ based on **training objective**, **assumed generative view (Asm)**, and desiderata: **(1)** scales to \(>2\) views, **(2)** learns localized information, and **(3)** dynamically learns aggregation strategy.
Figure 4: _Assumed compositional structure._
generation, recursively learning node embeddings to generate a graph sequentially. Instead of graph embeddings, Kipf and Welling (2016) infers a latent embedding for each node in the input graph. These works focus on learning embeddings on a fixed input graph \(G^{(0)}\), making them unsuitable for our purpose of dynamic and localized information aggregation. Our method is novel in that it hierarchically learns a smaller latent graph \(G^{(z)}\) whose node embeddings represent locally aggregated information. We provide an overview of related architectures in Table 2.
Previous methods have used GNNs for multi-view data by either: **1.** processing each view as a separate graph and using GNN to integrate node representations between graphs (Kim et al., 2020; Ma et al., 2020); or **2.** constructing an instance graph, where nodes represent instances of the data and edges represent relationships between them across views (Wei et al., 2019; Gao et al., 2020). In this work, we are the first to represent views as nodes and learn edge weights to indicate view dependencies.
## 5 Empirical Investigations
Having introduced the challenges of learning from multi-view data ITW and our proposed method to address it, we now turn to quantitatively evaluating our method:
1. **Learning ITW:**_What is the problem?_ Section 5.1 employs a simulation of ITW multi-view data to probe the performances of different compositional assumptions.
2. **Insights:**_How does it work?_ We use interpretability methods to interpret the graphs and latent aggregations.
3. **Performance:**_Does it work?_ Section 5.2 evaluates downstream performance of our method against state-of-the-art benchmarks on real world dataset.
4. **Gains:**_Why does it work?_ We deconstruct our method to investigate its sources of performance gain.
**Benchmarks.** We evaluate our method against 7 state-of-the-art methods, in line with benchmarks found in recent works (Federici et al., 2019; Zhang et al., 2019; Lee and Pavlovic, 2021). We consider two coordinated representation methods: **DCCAE**(Wang et al., 2015) and **DGCCA**(Benton et al., 2017); three joint representation methods: **JMVAE**(Suzuki et al., 2016), **MVAE**(Wu and Goodman, 2018), and **DMVAE**(Lee and Pavlovic, 2021); and one SSL method: **MIB**(Federici et al., 2019). We also include a vanilla **Transformer** model (Vaswani et al., 2017), which takes in a sequence of view embeddings and is pretrained using a reconstruction loss. For all results, we report the mean \(\pm\) std averaged over \(10\) runs. Our implementation can be found at [https://github.com/tennisonliu/LEGATO](https://github.com/tennisonliu/LEGATO) and at the wider lab repository [https://github.com/vanderschaarlab/LEGATO](https://github.com/vanderschaarlab/LEGATO). We provide additional information about implementation details, dataset preprocessing, and hyperparameters tuning in Appendix C.
### Synthetic Simulation
In Section 2.2, we characterized real-world ITW data as having more complex view dependencies, giving rise to clusters of localized information, and a larger number of views. In this subsection, we investigate the effect of these two characteristics on the quality of representations. We consider two view correlation settings, \(\blacktriangleright\)global : all views are globally correlated with each other, and \(\blacktriangleright\)local : views are locally correlated. We construct the following simulation as it is difficult in practice to have natural datasets that possess the required degree of view interaction.
**Simulation setting.** We simulate multi-view data with \(K<10\) views. Each view is generated from a scalar latent variable such that \(z_{k}\sim\mathcal{N}(k,1)\) and \(z_{k}\to x_{k}\;\;\forall\;k\in[K]\). We simulate global correlation between views by computing \(z_{k}\leftarrow(1-w)\cdot z_{k}+w\cdot z_{1}\;\forall\;k\in[K]\), such that information from \(z_{1}\) is shared across all views. Additionally, \(w\) controls how much information is shared, with a larger \(w\) indicating higher degrees of overlap, and \(w=0\) meaning each view is mutually independent. To simulate local correlation, we sample each pair of latent variables from a multivariate normal distribution, i.e.:
\[z_{1},z_{2}\sim\mathcal{N}(\mu,\mathbf{\Sigma}),\;\;\text{where}\;\;\mu=\begin{bmatrix} 1\\ 2\end{bmatrix},\;\mathbf{\Sigma}=\begin{bmatrix}1&w\\ w&1\end{bmatrix}\]
which with \(K\) views would give us \(K/2\) localized clusters, where each cluster of two views is correlated while being mutually independent of other clusters. We generate \(100\)-dimensional feature vectors for each view using a non-linear transformation, \(x_{k}=MLP_{k}(z_{k})\), where \(MLP_{k}(\cdot)\) is a randomly initialized single-layer MLP with Tanh\((\cdot)\) activation. The downstream task is the recovery of view-specific latent variables \(\{z\}_{i=1}^{K}\), which is a good proxy for whether representations learn localized information.
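The two correlation regimes can be generated with a short script; the random-seed handling and the exact initialization of the single-layer MLPs are arbitrary choices for illustration.

```python
import numpy as np

def simulate_views(n: int, K: int, w: float, mode: str = "global", feat_dim: int = 100, seed: int = 0):
    """Generate K views of n samples with globally or locally correlated latents."""
    rng = np.random.default_rng(seed)
    mu = np.arange(1, K + 1, dtype=float)
    Z = rng.normal(loc=mu, scale=1.0, size=(n, K))            # z_k ~ N(k, 1)
    if mode == "global":
        Z = (1 - w) * Z + w * Z[:, [0]]                        # every view mixes in a share w of z_1
    else:                                                      # local: correlate pairs (z_1,z_2), (z_3,z_4), ...
        for j in range(0, K - 1, 2):
            c1, c2 = Z[:, j] - mu[j], Z[:, j + 1] - mu[j + 1]  # centered latents
            Z[:, j + 1] = mu[j + 1] + w * c1 + np.sqrt(1 - w ** 2) * c2
    views = []
    for k in range(K):                                         # x_k = MLP_k(z_k): single layer + Tanh
        W1 = rng.standard_normal((1, feat_dim))
        b1 = rng.standard_normal(feat_dim)
        views.append(np.tanh(Z[:, [k]] @ W1 + b1))             # 100-dimensional view features
    return Z, views

# Z, views = simulate_views(n=1000, K=6, w=0.5, mode="local")
```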
**Results.** We consider \(w\) in range \(\{0.0,0.25,0.50,0.75\}\) and
| Method | Enc/Dec | Lat Rep | Aim |
|---|---|---|---|
| Wang et al. (2016) | MLP/MLP | \(z\in\mathbb{R}^{r}\) | Node embeddings |
| Kipf & Welling (2016) | GNN/Sim | \(A,H^{(z)}\) | Node embeddings |
| You et al. (2018) | RNN/DP | \(A,H^{(z)}\) | Graph generation |
| Simonovsky and Komodakis (2018) | GNN/MLP | \(z\in\mathbb{R}^{r}\) | Graph generation |
| De Cao and Kipf (2018) | GNN/MLP | \(z\in\mathbb{R}^{r}\) | Graph generation |
| **LEGATO** | **GNN/GNN** | \(A^{(0)}\), \(H^{(0)}\) | **Information aggregation** |

Table 2: **GNN methods.** Overview of representative _graph autoencoder_ architectures based on encoder/decoder (Enc/Dec) architecture, latent representation (Lat Rep) and aim. Sim = similarity measure, DP = decision process.
\(K\) in range \(\{2,4,6,8,10\}\). We plotted the effect of view correlation \(w\) and the number of presented views \(K\) on representation quality by evaluating the mean \(MSE\) in Figures 2 and 5 respectively. As we previously noted, higher global correlation improves the performance of all models, as each view contains more information about all other views. However, increased local correlation is found to decrease the performance of existing methods, which are biased by their compositional assumptions to overlook localized information in favor of globally present factors. Additionally, while views are globally correlated, a larger number of views lead to better performance. In contrast, when views are locally correlated, performances of conventional methods deteriorate quickly as more localized clusters emerge. In comparison, our work is the only one that can effectively learn localized information with higher degrees of local correlation and a larger number of views.
**Model inspection.** We investigate the inner workings of our proposed method and the learned multi-view graph and latent graph embeddings. We visualize the learned dependencies in the multi-view graphs in Figure 6(a) and use Integrated Gradients (Sundararajan et al., 2017) to visualize the contribution of each view to the latent node embeddings in Figure 6(b). We note that, while our method is not designed for _relational inference_, it can dynamically learn dependencies between views. Additionally, we see the specialization of latent nodes to aggregate information from different regions of the input, where each node focuses on extracting information from more correlated views.
### Overall Performance
**Datasets.** We now move on to evaluate our method on three real-world datasets. \(\blacktriangleright\)**TCGA (Tomczak et al., 2015) is a multi-omics dataset containing \(7295\) cancer cell lines with \(4\) views: mRNA expressions, DNA methylation, microRNA expressions, and reverse-phase protein array. The downstream task is to predict one-year mortality from cancer. \(\blacktriangleright\)**UK Biobank**(Sudlow et al., 2015) is a large population-based medical database. We extract a lung mortality dataset containing \(9\) views based on the given feature categorizations.2 The views include patient demographics, view and lifestyle factors, physical measures, recorded medical conditions, biomarkers, physical measures, geographical information, treatment history, and family/heredity conditions. The downstream task is the binary classification of lung cancer mortality. \(\blacktriangleright\)**UCI-MFS**(van Breukelen et al., 1998) is more representative of a traditional multi-view task, where views share similar information. Here, all views contain hand-crafted features extracted from images of handwriting. The downstream task is to predict the handwritten numerals (\(0\)-\(9\)). We describe dataset characteristics and pre-processing in Appendix C.
Figure 5: **Effect of \(K\) on learning (w=0.5).** When views are globally correlated, more views lead to better performance. When local correlation increases, performance worsens as more localized clusters of information emerge.
Figure 6: **Model inspection (K=6).** Our method dynamically learns view dependencies and latent nodes (components) specialize in aggregating localized information.
| Method | TCGA (AUROC ↑) | UK Biobank (AUROC ↑) | UCI-MFS (ACC ↑) |
|---|---|---|---|
| DCCAE | 0.673 ± 0.047 | 0.624 ± 0.041 | 0.742 ± 0.034 |
| DGCCA | 0.620 ± 0.073 | 0.669 ± 0.058 | 0.688 ± 0.031 |
| JMVAE | 0.695 ± 0.034 | 0.718 ± 0.043 | 0.825 ± 0.057 |
| MVAE | 0.656 ± 0.039 | 0.715 ± 0.059 | 0.818 ± 0.042 |
| DMVAE | 0.676 ± 0.029 | 0.688 ± 0.049 | **0.825 ± 0.043** |
| MIB | 0.620 ± 0.083 | 0.696 ± 0.067 | 0.813 ± 0.036 |
| Transformer | 0.679 ± 0.080 | 0.711 ± 0.064 | **0.825 ± 0.029** |
| NoHier | 0.652 ± 0.036 | 0.710 ± 0.041 | 0.782 ± 0.034 |
| NoGraph | 0.696 ± 0.032 | 0.698 ± 0.030 | 0.794 ± 0.046 |
| NoReg | 0.688 ± 0.039 | 0.679 ± 0.032 | 0.801 ± 0.037 |
| **LEGATO** | **0.703 ± 0.051** | **0.720 ± 0.038** | 0.824 ± 0.030 |

Table 3: **Downstream classification results** on three multi-view datasets. Bold indicates the best performance.
**Ablation study.** Our method is designed with a number of characteristics in mind. Having empirically demonstrated strong overall results, an immediate question is how important these characteristics are for performance. Specifically, we consider the sources of gain from **(a)**_hierarchical graph pooling_ (**NoHier**), we consider removing the pooling layer, relying simply on GCN layers, **(b)**_multi-view graph learning_ (**NoGraph**), we replace the learned input graph with a fully-connected graph, and **(c)**_orthogonality regularization_ (**NoReg**), we remove the orthogonality regularization.
**Results.** We report downstream classification performance in Table 3. We first analyze the performance on **TCGA** and **UK Biobank**, which are more representative of tasks found _in-the-wild_, with more complex view dependencies and a higher number of views. We note that in these settings, LEGATO achieves superior performance, being particularly suited for learning the complex dependencies between views and aggregating localized information. We additionally find that joint representation methods perform better than their coordinated counterparts, likely as the emphasis on shared information aggregation is implicit rather than explicitly enforced in CCA and SSL methods. Next, we investigate performance on **UCI-MFS**, which is more representative of traditional multi-view tasks. Here, we find that our model performs on par with state-of-the-art methods. This is likely because the multi-view assumption holds true, empowering baseline methods (e.g. **DMVAE**, **Transformer**) that exploit the multi-view inductive bias. On our ablation settings, we observe all three aspects are crucial for performance, with a notable \(8\%\) performance gain over a GCN network with no latent graph learning. Similarly, orthogonality regularization improves model performance by encouraging orthogonal components. We note that this is more crucial on ITW datasets with more views, as this regularization better encourages the learning of localized information.
## 6 Discussion
Existing multi-view methods make compositional assumptions on the existence of global information, often neglecting localized information when deployed on tabular data ITW. In this work, we represent multi-view data as graphs and their dependencies as learnable edge weights. Moreover, we propose LEGATO, a novel autoencoder that learns a latent graph as a decomposable representation, where each of the latent components specializes in learning different aspects of localized information. Our method empirically demonstrated its effectiveness in learning representations on traditional multi-view tasks but excelled on ITW multi-view datasets with more complex localized dependencies. **Future works.** We see several directions for future research. One avenue is the development of better GNN or attention mechanisms tailored to capture localized dependencies more effectively. Additionally, investigating advanced optimization strategies, regularization techniques, and loss functions that account for the specific challenges of multi-view learning in tabular data could lead to improved model performance and generalization. Lastly, while we used an unsupervised reconstruction loss, we believe that the incorporation of more advanced semi- and self-supervised objectives can better leverage unlabeled data to enhance representation learning.
## Acknowledgements
We thank the anonymous ICML reviewers as well as members of the van der Schaar lab for many insightful comments and suggestions. Tennison Liu would like to thank AstraZeneca for their sponsorship and support. Jeroen Berrevoets thanks W.D. Armstrong Trust for their support. This work is also supported by the National Science Foundation (NSF, grant number 1722516) and the Office of Naval Research (ONR).
|
2310.20311 | Steady water-waves with arbitrary surface-pressure: Their recovery from
bottom-pressure measurements | Equations relating the pressure at a horizontal seabed, the free-surface
profile and the surface-pressure are derived for two-dimensional irrotational
steady water waves with arbitrary pressure at the free surface. Special cases
include gravity, capillary, flexural and wind waves. Without approximations, we
show that the free-surface recovery from the bottom-pressure requires the
resolution of only one first-order ordinary differential equation independent
of the surface-pressure, thus providing a new general recovery method valid for
a broad class of water waves. Another equation provides an explicit expression
for the surface-pressure as a function of the bottom-pressure and of the
free-surface. Thus, if unknown, the surface-pressure can be also recovered if
one extra measurement is available. This new recovery procedure is illustrated
analytically for the linear approximation of a flexural-capillary-gravity wave,
and numerically for fully nonlinear capillary-gravity waves. | Didier Clamond, Joris Labarbe | 2023-10-31T09:37:02Z | http://arxiv.org/abs/2310.20311v1 | # Steady water-waves with arbitrary surface-pressure:
###### Abstract.
Equations relating the pressure at a horizontal seabed, the free-surface profile and the surface-pressure are derived for two-dimensional irrotational steady water waves with arbitrary pressure at the free surface. Special cases include gravity, capillary, flexural and wind waves. Without approximations, we show that the free-surface recovery from the bottom-pressure requires the resolution of only one first-order ordinary differential equation independent of the surface-pressure, thus providing a new general recovery method valid for a broad class of water waves. Another equation provides an explicit expression for the surface-pressure as a function of the bottom-pressure and of the free-surface. Thus, if unknown, the surface-pressure can be also recovered if one extra measurement is available. This new recovery procedure is illustrated analytically for the linear approximation of a flexural-capillary-gravity wave, and numerically for fully nonlinear capillary-gravity waves.
_E-mail addresses_: [email protected], [email protected]
## 1. Introduction
In this paper, we present equations relating the surface-wave profile, the surface-pressure and the bottom-pressure. This study includes any surface-pressure describing various physical effects, such as capillarity, flexural elasticity, wind stress, etc.
Methods for recovering pure gravity (i.e., with constant pressure at the free surface) irrotational waves from bottom pressure gauges have long been proposed. These methods either solve the problem exactly or under various simplifications; see [10], [15], [6], [17] and the references therein for reviews and details. Recently, [9] showed that an exact recovery is also possible in the presence of constant vorticity. However, to the present authors' knowledge, the recovery of capillary, flexural and wind waves (among many other situations of physical interest) has never been attempted. These phenomena involve different non-constant surface-pressures that can be very complicated (especially for capillary and flexural waves), and the surface-pressure is generally a function of the free surface profile that is unknown _a priori_. Hence, compared to the case with constant surface-pressure (i.e., pure gravity waves) treated in the references cited above, considering varying surface-pressure is a major additional complication, requiring a new method of resolution for the wave recovery problem.
In this short paper, we describe a new general recovery method valid for any surface-pressure. This is possible because the free-surface recovery from the bottom-pressure requires the resolution of only one first-order ordinary differential equation independent of the surface-pressure.
Once known, the surface-profile and the bottom-pressure yield an explicit relation for the surface-pressure. Thus, the surface-profile and the surface-pressure are both determined from the bottom-pressure, but modulo an unknown scalar parameter (e.g., the Bernoulli constant), so one extra relation is required to close the problem. This can be obtained either by an extra measurement or by the knowledge of the physical effects at the free-surface (i.e., knowing an equation the surface-pressure must satisfy).
The paper is organised as follows. Section 2 is devoted to the physical assumptions and the resulting fundamental equations. Equations for the free-surface and the surface-pressure recovery from the bottom-pressure are derived in section 3. The recovery procedure is illustrated analytically and numerically in sections 4 and 5, respectively. Finally, section 6 outlines some conclusions and perspectives.
## 2. Preliminaries
In the frame of reference moving with a traveling wave of permanent shape, the flow beneath the wave is a steady two-dimensional irrotational motion of an inviscid fluid. Note that the wave phase velocity \(c\) is a non-zero constant in any other Galilean frame of reference. Let \((x,y)\) be a Cartesian coordinate system moving with the wave, \(x\) being the horizontal coordinate and \(y\) being the upward vertical one, and let \((u(x,y),\,v(x,y))\) be the velocity field in this moving frame of reference. We denote by \(y=-d\), \(y=\eta(x)\) and \(y=0\) the equations of the bottom, the free surface and the mean water level, respectively. The latter equation expresses that \(\langle\eta\rangle=0\) for a smooth \((2\pi/k)\)-periodic wave profile \(\eta\), where \(\langle\cdot\rangle\) is the Eulerian average operator over one period, i.e.
\[\langle\eta\rangle\stackrel{{\rm def}}{{=}}\frac{k}{2\pi}\int_{ -\pi/k}^{\pi/k}\eta(x){\rm d}x=0. \tag{1}\]
For solitary and more general aperiodic waves, the same averaging operator applies taking the limit \(k\to 0^{+}\).
The flow is governed by the balance between the restoring gravity force, the inertia of the system and a surface-pressure. With constant density \(\rho>0\) and acceleration due to gravity \(g>0\) directed downward, the kinematic and dynamic equations are, for \((x,y)\in\mathds{R}\times[-d,\eta(x)]\)[19],
\[u_{x}+v_{y}=0,\quad v_{x}-u_{y}=0,\quad uu_{x}+vu_{y}=-P_{x}/\rho,\quad uv_{x} +vv_{y}=-P_{y}/\rho-g,\qquad(2\,a,\,b,\,c,\,d)\]
where \(P(x,y)\) denotes the hydrodynamical pressure.
The flat bottom and steady free surface being impermeable, we have
\[v_{\rm b}=0,\qquad v_{\rm s}=u_{\rm s}\eta_{x}, \tag{3}\]
with \(\eta_{x}\stackrel{{\rm def}}{{=}}{\rm d}\eta/{\rm d}x\) and where subscripts 'b' and 's' denote, respectively, restrictions at the bottom and at the free surface, e.g. \(u_{\rm b}(x)=u(x,-d)\), \(v_{\rm s}(x)=v(x,\eta(x))\). The pressure at the free surface is
\[P_{\rm s}=P_{\rm atm}+\rho p_{\rm s}\qquad\mbox{at}\quad y=\eta(x), \tag{4}\]
where \(P_{\rm atm}\) is a constant atmospheric pressure and \(p_{\rm s}\) is a varying pressure (divided by the density). For instance, one can consider a prescribed surface pressure such as a Gaussian distribution of magnitude \(p_{0}\) and variance \(\lambda\)[18]
\[p_{\rm s}=p_{0}\exp\bigl{[}-x^{2}/(2\lambda)\bigr{]}\,, \tag{5}\]
or capillary and flexural effects such that [14, 16]
\[p_{\rm s}=-\frac{{\rm d}}{{\rm d}x}\left\{\frac{\tau\eta_{x}}{(1+\eta_{x}^{2} )^{1/2}}-\frac{D\eta_{xxx}}{(1+\eta_{x}^{2})^{5/2}}+\frac{5D\eta_{x}\eta_{xx}^ {2}}{2\left(1+\eta_{x}^{2}\right)^{7/2}}\right\}, \tag{6}\]
\(\tau\) being a surface tension coefficient and \(D\) a rigidity parameter (both divided by the fluid density). Other phenomena can of course be considered, as well as their combination. Without loss of generality, we take \(\langle p_{\rm s}\rangle=0\) since \(\langle p_{\rm s}\rangle\) can be absorbed into the definition of \(P_{\rm atm}\). It is thus convenient to introduce the normalised relative pressure
\[p(x,y)\stackrel{{\rm def}}{{=}}\bigl{[}P(x,y)-P_{\rm atm}\bigr{]} \bigr{/}\rho,\qquad(x,y)\in\mathds{R}\times[-d,\eta(x)]. \tag{7}\]
The flow being irrotational, the dynamical (Euler) equations (2c-d) can be integrated into a Bernoulli equation
\[2(p+gy)+u^{2}+v^{2}=B,\qquad(x,y)\in\mathds{R}\times[-d,\eta(x)], \tag{8}\]
where \(B\) is a Bernoulli constant. From equations (1)-(4) and (8), one gets [6, 9]
\[B=\left\langle u_{\mathrm{s}}^{2}+v_{\mathrm{s}}^{2}\right\rangle=\left\langle u _{\mathrm{b}}^{2}\right\rangle, \tag{9}\]
yielding the, here important, relation
\[\left\langle p_{\mathrm{b}}\right\rangle=gd. \tag{10}\]
Finally, equations (2_a-b_) imply that the complex velocity \(w\stackrel{{\mathrm{def}}}{{=}}u-\mathrm{i}v\) is a holomorphic function of the complex coordinate \(z\stackrel{{\mathrm{def}}}{{=}}x+\mathrm{i}y\), an interesting feature exploited below.
## 3. Equations for the free-surface and surface-pressure recoveries
For free-surface and surface-pressure recoveries, we present here a simple derivation of equations generalising those of [3] and [6].
### General equations
The function \(w^{2}\) being holomorphic, its real and imaginary parts satisfy the Cauchy-Riemann relations
\[\partial_{y}\big{(}u^{2}-v^{2}\big{)}-\partial_{x}(2uv)=0,\qquad\partial_{x} \big{(}u^{2}-v^{2}\big{)}+\partial_{y}(2uv)=0.\]
Integrating over the water column and using the boundary conditions, these relations yield after some elementary algebra
\[p_{\mathrm{b}}-p_{\mathrm{s}}-gh=\frac{\mathrm{d}}{\mathrm{d}x}\int_{-d}^{ \eta}uv\mathrm{d}y,\qquad\left(p_{\mathrm{s}}+g\eta\right)\frac{\mathrm{d} \eta}{\mathrm{d}x}=\frac{\mathrm{d}}{\mathrm{d}x}\int_{-d}^{\eta}\frac{u^{2}- v^{2}+B}{2}\mathrm{d}y.\]
Taylor expansions around \(y=-d\) can be written [5, 11, 13]
\[u^{2}-v^{2} =\cos[(y+d)\partial_{x}]\,u_{\mathrm{b}}^{2}=-2\cos[(y+d)\partial_{x}]\,(p_{\mathrm{b}}-gd), \tag{13}\] \[2uv =-\sin[(y+d)\partial_{x}]\,u_{\mathrm{b}}^{2}=2\sin[(y+d)\partial_{x}]\,(p_{\mathrm{b}}-gd) \tag{14}\]
or in complex form
\[w(z)^{2}=\exp[\mathrm{i}(y+d)\partial_{x}]\,u_{\mathrm{b}}(x)^{2}=u_{\mathrm{b }}(z+\mathrm{i}d)^{2}=B+2gd-2p_{\mathrm{b}}(z+\mathrm{i}d). \tag{15}\]
(For any real function \(F(x)\) continuable in the complex plane, \(F(x+\mathrm{i}h)=\exp[\mathrm{i}h\partial_{x}]\,F(x)\) is the Taylor expansion around \(h=0\).) Hence, with \(h\stackrel{{\mathrm{def}}}{{=}}d+\eta\), we have
\[\int_{-d}^{\eta}uv\mathrm{d}y =\left[1-\cos(h\partial_{x})\right]\partial_{x}^{-1}\left(p_{\mathrm{b}}-gd\right), \tag{16a}\] \[\int_{-d}^{\eta}\frac{u^{2}-v^{2}+B}{2}\mathrm{d}y =-\sin(h\partial_{x})\,\partial_{x}^{-1}(p_{\mathrm{b}}-gd), \tag{16b}\]
so equations (12) yield
\[p_{\mathrm{s}}+g\eta =\partial_{x}\cos(h\partial_{x})\,\partial_{x}^{-1}\left(p_{\mathrm{b}}-gd\right)=\left[\cos(h\partial_{x})-\eta_{x}\sin(h\partial_{x})\right](p_{\mathrm{b}}-gd), \tag{17}\] \[(B-p_{\mathrm{s}}-g\eta)\eta_{x} =\partial_{x}\sin(h\partial_{x})\,\partial_{x}^{-1}\left(p_{\mathrm{b}}-gd\right)=\left[\sin(h\partial_{x})+\eta_{x}\cos(h\partial_{x})\right](p_{\mathrm{b}}-gd). \tag{18}\]
After one integration, equation (18) becomes
\[B\eta-\tfrac{1}{2}g\eta^{2}-\partial_{x}^{-1}\left(p_{\mathrm{s}}\eta_{x} \right)=\sin(h\partial_{x})\,\partial_{x}^{-1}(p_{\mathrm{b}}-gd). \tag{19}\]
With the special surface pressure (5) the term \(\partial_{x}^{-1}p_{\mathrm{s}}\eta_{x}\) cannot be obtained in closed form, but with (6) we have
\[\partial_{x}^{-1}\left(p_{\mathrm{s}}\eta_{x}\right)=\frac{\tau}{\left(1+ \eta_{x}^{2}\right)^{1/2}}\,-\,\tau+\frac{D\eta_{x}\eta_{xxx}-3D\eta_{xx}^{2}} {\left(1+\eta_{x}^{2}\right)^{5/2}}+\frac{5D\eta_{xx}^{2}}{2\left(1+\eta_{x}^{ 2}\right)^{7/2}}+\mathrm{constant}, \tag{20}\]
where the integration constant must be determined by the mean level condition (1), i.e., imposing
\[\left\langle\tfrac{1}{2}g\eta^{2}+\partial_{x}^{-1}\left(p_{\mathrm{s}}\eta_{x }\right)+\sin(h\partial_{x})\,\partial_{x}^{-1}(p_{\mathrm{b}}-gd)\right\rangle=0. \tag{21}\]
Note that the value of the integration constant in \(\partial_{x}^{-1}(p_{\rm b}-gd)\) does not matter here because this constant vanishes after application of the pseudo-differential operator \(\sin(h\partial_{x})\).
Equations (17), (18) and (19) are generalisations for \(p_{\rm s}\neq 0\) of the relations derived by [6, eq. 3.5-3.6] and by [3, eq. 4.4] when \(p_{\rm s}=0\). (This is obvious introducing the holomorphic function \(\mathfrak{P}(z)\stackrel{{\rm def}}{{=}}p_{\rm b}(z+{\rm i}d)\) and \(\mathfrak{Q}(z)\stackrel{{\rm def}}{{=}}\int\left[\mathfrak{P}( z)-gd\right]{\rm d}z\).)
### Generic equation for the free-surface recovery
When \(p_{\rm s}=0\) (pure gravity waves), \(\eta\) can be obtained from \(p_{\rm b}\) solving the ordinary differential equation (18) [6] or, more easily, solving the algebraic equation (19) [3]. When \(p_{\rm s}\neq 0\) is a function of \(x\) and/or \(\eta\), such as (5) and (6), in general (19) is a complicated highly-nonlinear high-order integro-differential equation for \(\eta\) due to the term \(\partial_{x}^{-1}\left(p_{\rm s}\eta_{x}\right)\) (see relation (20) for an example of practical interest). This is not a problem for recovering the free surface \(\eta\) from the bottom-pressure \(p_{\rm b}\) because the surface pressure \(p_{\rm s}\) can be eliminated between (17) and (18), yielding
\[B\eta_{x}=\left[\left(1-\eta_{x}^{2}\right)\sin(h\partial_{x})+2\eta_{x}\cos (h\partial_{x})\right](p_{\rm b}-gd), \tag{22}\]
or in complex form -- introducing \(\widetilde{\mathfrak{P}}(z)\stackrel{{\rm def}}{{=}}p_{\rm b}(z +{\rm i}d)-gd\) --
\[B\eta_{x}=\left(1-\eta_{x}^{2}\right)\operatorname{Im}\bigl{\{}\widetilde{ \mathfrak{P}}_{\rm s}\bigr{\}}+2\eta_{x}\operatorname{Re}\bigl{\{}\widetilde{ \mathfrak{P}}_{\rm s}\bigr{\}}, \tag{23}\]
which is a (nonlinear) first-order ordinary differential equation for \(\eta\). Equation (23) being algebraically quadratic in \(\eta_{x}\), it can be solved explicitly for \(\eta_{x}\); thus one gets
\[\operatorname{Re}\bigl{\{}\widetilde{\mathfrak{P}}_{\rm s}\bigr{\}}-\eta_{x} \operatorname{Im}\bigl{\{}\widetilde{\mathfrak{P}}_{\rm s}\bigr{\}}=\tfrac{1 }{2}B\pm\tfrac{1}{2}|B-2\widetilde{\mathfrak{P}}_{\rm s}|. \tag{24}\]
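For the reader's convenience, the intermediate algebra can be spelled out: viewing (23) as a quadratic equation in \(\eta_{x}\) and applying the quadratic formula gives
\[\operatorname{Im}\bigl\{\widetilde{\mathfrak{P}}_{\rm s}\bigr\}\,\eta_{x}^{2}+\left(B-2\operatorname{Re}\bigl\{\widetilde{\mathfrak{P}}_{\rm s}\bigr\}\right)\eta_{x}-\operatorname{Im}\bigl\{\widetilde{\mathfrak{P}}_{\rm s}\bigr\}=0\quad\implies\quad 2\,\eta_{x}\operatorname{Im}\bigl\{\widetilde{\mathfrak{P}}_{\rm s}\bigr\}=-\left(B-2\operatorname{Re}\bigl\{\widetilde{\mathfrak{P}}_{\rm s}\bigr\}\right)\pm\bigl|B-2\widetilde{\mathfrak{P}}_{\rm s}\bigr|,\]
since \(\left(B-2\operatorname{Re}\bigl\{\widetilde{\mathfrak{P}}_{\rm s}\bigr\}\right)^{2}+4\operatorname{Im}\bigl\{\widetilde{\mathfrak{P}}_{\rm s}\bigr\}^{2}=\bigl|B-2\widetilde{\mathfrak{P}}_{\rm s}\bigr|^{2}\); rearranging then yields (24).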
Since the free surface is flat if the bottom pressure is constant (and because \(B>0\)), the minus sign must be chosen. Moreover, since the condition (9) rewritten in terms of \(\widetilde{\mathfrak{P}}\) yields \(B=\langle|B-2\widetilde{\mathfrak{P}}_{\rm s}|\rangle\), the average of the right-hand side of (24) is zero, and so is that of the left-hand side.
Equation (24) is _a priori_ not suitable if \(\eta\) is (nearly) not differentiable (limiting waves). It is thus more efficient to solve its antiderivative
\[\operatorname{Re}\bigl{\{}\mathfrak{Q}_{\rm s}\bigr{\}}-K=\tfrac{1}{2}\, \partial_{x}^{-1}\left(B-\Bigl{|}B-2\widetilde{\mathfrak{P}}_{\rm s}\Bigr{|} \right), \tag{25}\]
where \(K\) is an integration constant and where \(\mathfrak{Q}(z)\stackrel{{\rm def}}{{=}}q_{\rm b}(z+{\rm i}d)\), with \(q_{\rm b}(x)\stackrel{{\rm def}}{{=}}\partial_{x}^{-1}(p_{\rm b}-gd)\). Assuming \(\langle q_{\rm b}\rangle\stackrel{{\rm def}}{{=}}0\) (without loss of generality), it yields \(\partial_{x}\operatorname{Re}\bigl\{\mathfrak{Q}_{\rm s}\bigr\}=\operatorname{Re}\bigl\{\widetilde{\mathfrak{P}}_{\rm s}\bigr\}-\eta_{x}\operatorname{Im}\bigl\{\widetilde{\mathfrak{P}}_{\rm s}\bigr\}\) and \(\langle(1+{\rm i}\eta_{x})\mathfrak{Q}_{\rm s}\rangle=0\). The right-hand side of (25) being the antiderivative of a zero-average quantity, we conveniently choose \(\langle\partial_{x}^{-1}\bigl(B-|B-2\widetilde{\mathfrak{P}}_{\rm s}|\bigr)\rangle\stackrel{{\rm def}}{{=}}0\), hence \(K=\langle\operatorname{Re}\bigl\{\mathfrak{Q}_{\rm s}\bigr\}\rangle\). Thus, a numerical resolution of (25) does not require the computation of \(\eta_{x}\), which is an interesting feature for steep waves.
### Recovery of the surface-pressure
The free-surface \(\eta\) being obtained after the resolution of (24) or (25), the surface-pressure \(p_{\rm s}\) is obtained explicitly at once from (17)
\[p_{\rm s}=\partial_{x}\operatorname{Re}\bigl{\{}\mathfrak{Q}_{\rm s}\bigr{\}} -g\eta=\operatorname{Re}\bigl{\{}\widetilde{\mathfrak{P}}_{\rm s}\bigr{\}}- \eta_{x}\operatorname{Im}\bigl{\{}\widetilde{\mathfrak{P}}_{\rm s}\bigr{\}}-g\eta. \tag{26}\]
Thus, as \(\eta\), \(p_{\rm s}\) is known modulo the Bernoulli constant \(B\). Relation (10) holds as a definition of the mean depth \(d\), leaving us with only one scalar quantity to be determined (i.e., \(B\)).
### Closure relation
In order to fully recover both the free-surface and the surface-pressure, knowing the bottom pressure is not sufficient and one extra piece of information is needed. We consider here two possibilities of practical interest.
A first possibility is when we have access to one independent extra measurement, for instance the mean velocity at the bottom (or elsewhere), the mean pressure somewhere above the seabed, the phase speed, the wave height, etc. In that case, the Bernoulli constant \(B\) is chosen such that the recovered wave matches this measurement. Thus, the free-surface and the surface-pressure can be both fully recovered.
If no extra measurements are available (only the bottom pressure is known), the free-surface can nevertheless be fully recovered with the knowledge of the physical nature of the surface-pressure, for instance given by (5) or (6) (among many other possibilities). The missing parameter can
then be obtained by minimising an error (quadratic or minimax, for example) between the recovered surface-pressure \(p_{\mathrm{s}_{r}}\) obtained from (26) and the theoretical surface-pressure \(p_{\mathrm{s}_{t}}\) given, say, by (6).
### Remarks
The fact that \(p_{\mathrm{s}}\) can be eliminated is not surprising. Indeed, \(p_{\mathrm{b}}\) can also be eliminated between (17) and (18), yielding the equation
\[p_{\mathrm{s}}+g\eta=\partial_{x}\cos(h\partial_{x})\sin(h\partial_{x})^{-1} \left[B\eta-\tfrac{1}{2}g\eta^{2}-\partial_{x}^{-1}\left(p_{\mathrm{s}}\eta_{x }\right)\right], \tag{27}\]
or, after inversion of the pseudo-differential operator,
\[B\eta-\tfrac{1}{2}g\eta^{2}-\partial_{x}^{-1}\left(p_{\mathrm{s}}\eta_{x} \right)=\sin(h\partial_{x})\cos(h\partial_{x})^{-1}\,\partial_{x}^{-1}\,(p_{ \mathrm{s}}+g\eta)\,. \tag{28}\]
Relation (28) with \(p_{\mathrm{s}}=0\) is an Eulerian counterpart of the Babenko [1] equation [4]. A more involved Eulerian equation, somewhat similar to (27) with \(p_{\mathrm{s}}=0\), was derived by [11, eq. 10].
Note that, in its present form, equation (28) is not suitable for accurate numerical computations of \(\eta\) due to the complicated pseudo-differential operator. For this purpose, its integral formulation is better suited [4, §6]. However, equations (27) and (28) are convenient to derive analytic approximations (cf. section 4 where surface recovery is performed analytically for linear flexural-capillary-gravity waves in order to illustrate the procedure).
## 4. Example 1: Recovery of linear flexural-capillary-gravity waves
Here, we illustrate the recovery procedure for an infinitesimal flexural-capillary-gravity wave that is analytically tractable via its linear approximation.
### Linear approximation of a traveling wave
For infinitesimal waves, the surface pressure (6) and the Babenko-like equation (27) are linearised as
\[p_{\mathrm{s}}\approx D\eta_{xxxx}-\tau\eta_{xx},\qquad p_{\mathrm{s}}+g\eta \approx\partial_{x}\cos(d\partial_{x})\sin(d\partial_{x})^{-1}\,B\eta.\]
(\(2\pi/k\))-periodic solutions are thus \(\eta\approx a\cos(kx-\varphi)\) (\(ka\ll 1\) and \(\varphi\) a constant phase shift) with the (linear) dispersion relation
\[B\approx\left(g+\tau k^{2}+Dk^{4}\right)k^{-1}\tanh(kd). \tag{30}\]
The linear approximation of the bottom-pressure can then be obtained as
\[p_{\mathrm{b}}\approx gd+\mathfrak{p}\cos(kx-\varphi),\quad\mathfrak{p}=a\left(g+\tau k^{2}+Dk^{4}\right)\mathrm{sech}(kd)=kaB\,\mathrm{csch}(kd), \tag{31}\]
and the horizontal velocity at the bottom as
\[u_{\mathrm{b}}\approx\pm\sqrt{B}\left[1-B^{-1}\mathfrak{p}\cos(kx-\varphi)\right]\quad\implies\quad\langle u_{\mathrm{b}}\rangle\approx\pm\sqrt{B}. \tag{32}\]
This relation shows that, to this order of approximation, the Bernoulli constant \(B\) can be replaced by \(\left\langle u_{\mathrm{b}}\right\rangle^{2}\). Moreover, the sign of \(\langle u_{\mathrm{b}}\rangle\) gives the direction of propagation. Thus, in terms of parameters measurable at the bottom, the (linearised) free surface is
\[\eta\approx k^{-1}\left\langle u_{\mathrm{b}}\right\rangle^{-2}\mathfrak{p} \sinh(kd)\cos(kx-\varphi). \tag{33}\]
### Free-surface and surface-pressure recoveries
Suppose that the bottom-pressure data can be well approximated by the ansatz (31). A least squares (for example) minimisation between the data and (31) gives \(gd\), \(k\), \(\varphi\) and \(\mathfrak{p}\); these parameters are now definitely known. To first order in \(\eta\), we have
\[\widetilde{\mathfrak{P}}_{\mathrm{s}} \approx\mathfrak{p}\cos(kx-\varphi+\mathrm{i}kd)-\mathrm{i}\mathfrak{p}\sin(kx-\varphi+\mathrm{i}kd)\,k\eta, \tag{34}\] \[\mathfrak{Q}_{\mathrm{s}} \approx k^{-1}\mathfrak{p}\sin(kx-\varphi+\mathrm{i}kd)+\mathrm{i}\mathfrak{p}\cos(kx-\varphi+\mathrm{i}kd)\,\eta, \tag{35}\]
and, for infinitesimal waves, both \(\mathfrak{p}\) and \(\eta\) are small quantities of the same order. Thus, to the leading order, the recovery formula (24) yields
\[\mathfrak{p}\sinh(kd)\sin(kx-\varphi)+B\eta_{x}\approx 0\quad\implies \quad\eta=(kB)^{-1}\mathfrak{p}\sinh(kd)\cos(kx-\varphi), \tag{36}\]
where the resolution is performed under the condition (1). Similarly, to the leading order, the relation (26) yields the surface-pressure
\[p_{\mathrm{s}}\approx\left[\cosh(kd)-g(kB)^{-1}\sinh(kd)\right]\mathfrak{p} \cos(kx-\varphi)=\left[kB\coth(kd)-g\right]\eta. \tag{37}\]
With (36) and (37) the free-surface and the surface-pressure, respectively, are recovered modulo only one yet unknown parameter: the Bernoulli constant \(B\). If, for instance, \(\left\langle u_{\text{b}}\right\rangle\) has also been measured, then we have \(B\approx\left\langle u_{\text{b}}\right\rangle^{2}\) and the solution (33) is recovered. If no extra measurements are available, but if we know that we are dealing with flexural-capillary-gravity waves, the relation (6) should apply. Thus, the quadratic error \(E\) between (6) and (37) is, to the leading order,
\[E\approx\tfrac{1}{2}\mathfrak{p}^{2}\cosh(kd)^{2}\left[1-(g+\tau k^{2}+Dk^{4}) k^{-1}B^{-1}\tanh(kd)\right]^{2}, \tag{38}\]
so this error is minimum if \(B=(g+\tau k^{2}+Dk^{4})k^{-1}\tanh(kd)\), as expected. Alternatively, from the recovered surface pressure \(p_{s_{r}}\) given by (37), we have \(\max(p_{s_{r}})-\min(p_{s_{r}})=2\left(\coth(kd)-g/kB\right)\sinh(kd)\,\mathfrak{p}\), while the theoretical surface-pressure \(p_{s_{t}}\) (6) yields \(\max(p_{s_{t}})-\min(p_{s_{t}})\approx 2(\tau k^{2}+Dk^{4})\sinh(kd)\,\mathfrak{p}/kB\). Equating these two quantities gives the expected dispersion relation.
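As a quick numerical sanity check of the linear recovery, the short sketch below (with arbitrary illustrative parameter values, not data from the paper) verifies that, when \(B\) satisfies the dispersion relation (30), formula (36) returns the original surface amplitude from the bottom-pressure amplitude (31), and formula (37) reproduces the amplitude of the linearised surface-pressure \(D\eta_{xxxx}-\tau\eta_{xx}\).

```python
import numpy as np

# Illustrative parameters (divided by density, as in the text); values are arbitrary.
g, d = 9.81, 1.0            # gravity, mean depth
tau, D = 0.07, 0.002        # surface tension and rigidity coefficients
k, a = 2.0, 0.01            # wavenumber and small wave amplitude (k*a << 1)

B = (g + tau * k**2 + D * k**4) / k * np.tanh(k * d)   # dispersion relation (30)
p_amp = k * a * B / np.sinh(k * d)                     # bottom-pressure amplitude, Eq. (31)

# Free-surface recovery, Eq. (36): amplitude recovered from bottom data and B only.
a_rec = p_amp * np.sinh(k * d) / (k * B)

# Surface-pressure recovery, Eq. (37), compared with the linearised model D*eta_xxxx - tau*eta_xx,
# whose amplitude is (tau*k^2 + D*k^4)*a for eta = a*cos(kx - phi).
ps_rec_amp = (k * B / np.tanh(k * d) - g) * a_rec
ps_model_amp = (tau * k**2 + D * k**4) * a

print(a_rec / a, ps_rec_amp / ps_model_amp)   # both ratios equal 1
```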
## 5. Example 2: Recovery of nonlinear capillary-gravity waves
We now consider the fully nonlinear recovery problem for capillary-gravity waves. Since we do not have experimental data for this problem, we first compute a travelling wave from which we extract the bottom pressure numerically. The algorithm used for such a computation is an adaptation of the method described in [4, 12] when arbitrary pressure is present at the free surface. Once computed, this accurate numerical solution is taken as data for the bottom pressure to reconstruct the wave profile, the surface pressure and various hydrodynamic parameters.
Figure 1. Recovery of a nonlinear capillary-gravity wave with period \(L/d=6\pi\), Froude number square \(B/gd=1.01568\) and Bond number \(\tau/gd^{2}=1/3\). (a): Bottom pressure treated as a “measurement” for the recovery procedure. (b,c): Respectively, recovered surface pressure and profile (blue circles) versus the exact solution (red line).
Following [3], we start by expanding the pressure data in truncated Fourier series (collocated at a set of equispaced points) and perform analytic continuation in the complex plane [3]
\[\widetilde{\mathfrak{P}}(z)=p_{\mathrm{b}}(z+\mathrm{i}d)-gd\approx\sum_{|n|>0} ^{N}\mathfrak{p}_{n}\mathrm{e}^{\mathrm{i}nk(z+\mathrm{i}d)}=\sum_{|n|>0}^{N} \mathfrak{p}_{n}\mathrm{e}^{-nkd}\mathrm{e}^{\mathrm{i}nkz}. \tag{39}\]
From the above definition, we compute the anti-derivative at the surface
\[\mathfrak{Q}_{\mathrm{s}}(x)=\int_{0}^{x}\widetilde{\mathfrak{P}}_{\mathrm{s} }(x^{\prime})\mathrm{d}x^{\prime}\approx\sum_{|n|>0}^{N}\frac{\mathrm{i} \mathfrak{p}_{n}}{nk}\frac{\mathrm{e}^{-nka}-\mathrm{e}^{\mathrm{i}nk(x+ \mathrm{i}\eta)}}{\mathrm{e}^{nkd}}. \tag{40}\]
We note that \(N=256\) is sufficient to accurately resolve the Fourier spectrum (up to computer precision) of the bottom pressure data. Once the holomorphic functions are computed, we solve expression (25) by imposing the total height of the wave as a closure relation within the built-in iterative solver fsolve from Matlab [9]. As an initial guess, we use the linear approximation given by (36). The algorithm only takes a few seconds to run on a classical desktop and achieves a tolerance criterion of \(\epsilon<10^{-12}\) on the residual.
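To make the procedure concrete, a minimal Python sketch of a similar recovery pipeline is given below. It is only an illustration of Eqs. (25), (26), (36) and (39), not the authors' Matlab implementation: the bottom-pressure signal, the wavelength and the value of \(B\) (treated as if supplied by an extra measurement) are placeholder choices, and the de-noising threshold and the use of SciPy's fsolve are simplifying assumptions.

```python
import numpy as np
from numpy.fft import fft
from scipy.optimize import fsolve

# --- Illustrative setup: in practice p_b_data comes from pressure gauges. ---
g, d = 9.81, 1.0                          # gravity and mean depth, <p_b> = g d by Eq. (10)
L = 6 * np.pi * d                         # wavelength, assumed identified from the data
k = 2 * np.pi / L
N = 256
x = np.arange(N) * L / N
p_b_data = g * d + 0.05 * g * d * np.cos(k * x)   # toy "measured" bottom pressure
B = (g / k) * np.tanh(k * d)              # Bernoulli constant, assumed known from an extra measurement

n = np.fft.fftfreq(N) * N                 # integer Fourier mode numbers
c = fft(p_b_data - g * d) / N             # coefficients of p_b - g d, cf. Eq. (39)
c[np.abs(c) < 1e-12 * np.abs(c).max()] = 0.0   # de-noise: the continuation amplifies spurious modes

def series(coeffs, z):
    """Evaluate sum_n coeffs_n * exp(i n k (z + i d)) at complex surface points z = x + i eta."""
    return (coeffs[None, :] * np.exp(1j * np.outer(z + 1j * d, n * k))).sum(axis=1)

inv_ink = np.where(n != 0, 1.0 / np.where(n != 0, 1j * n * k, 1.0), 0.0)

def residual(eta):
    """Collocated form of Eq. (25) with K = <Re{Q_s}> and a zero-mean spectral antiderivative."""
    z = x + 1j * eta
    P_s = series(c, z)                    # P~ at the free surface
    Q_s = series(c * inv_ink, z)          # its antiderivative (integration constant absorbed in K)
    lhs = np.real(Q_s)
    lhs -= lhs.mean()
    f = B - np.abs(B - 2 * P_s)           # integrand of the right-hand side
    fh = fft(f - f.mean()) / N
    rhs = 0.5 * np.real((fh * inv_ink)[None, :] * np.exp(1j * np.outer(x, n * k))).sum(axis=1)
    return lhs - rhs

eta0 = np.sinh(k * d) / (k * B) * (p_b_data - g * d)   # linear initial guess, Eq. (36)
eta = fsolve(residual, eta0, xtol=1e-12)

# Surface pressure from Eq. (26): p_s = Re{P~_s} - eta_x Im{P~_s} - g eta.
P_s = series(c, x + 1j * eta)
eta_x = np.real(((1j * n * k) * (fft(eta) / N))[None, :] * np.exp(1j * np.outer(x, n * k))).sum(axis=1)
p_s = np.real(P_s) - eta_x * np.imag(P_s) - g * eta
```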
We present in Figures 1 and 2 two examples of nonlinear capillary-gravity waves. The first possesses a surface tension coefficient with critical Bond number \(\mathrm{Bo}\stackrel{{\mathrm{def}}}{{=}}\tau/(gd^{2})=1/3\), whereas the second is subject to strong capillary effects with \(\mathrm{Bo}=2\). The first configuration displayed in Figure 1 is in rather shallow water, with Froude number squared \(\mathrm{Fr}^{2}\stackrel{{\mathrm{def}}}{{=}}B/gd=1.01568\). As clearly demonstrated in panels (b) and (c), both highlight excellent agreement between the recovered surface pressure and wave profile and the reference solutions. For the first case, the numerical errors between the recovered (r) and theoretically predicted (t) fields are as follows: \(||\eta_{r}-\eta_{t}||_{\infty}=6.5289\times 10^{-9}\), \(||p_{s_{r}}-p_{s_{t}}||_{\infty}=2.8887\times 10^{-8}\) and \(|B_{r}-B_{t}|=2.5233\times 10^{-7}\). Similarly, the second case represents waves over a significantly deep layer, where the inverse
Figure 2. Same panels as Figure 1 for the period \(L/d=2\pi\), the Froude number square \(B/gd=2.28113\) and the Bond number \(\tau/gd^{2}=2\).
problem is essentially more difficult to solve as it is mathematically ill-posed. Nevertheless, it also shows remarkable agreement in the recovered data. Regarding numerical errors for this case, we obtain \(||\eta_{r}-\eta_{t}||_{\infty}=1.3882\times 10^{-9}\), \(||p_{s_{r}}-p_{s_{t}}||_{\infty}=1.8098\times 10^{-8}\) and \(|B_{r}-B_{t}|=2.1645\times 10^{-7}\). We note that the Froude number square is \(B/gd=2.28113\) in this case.
These recoveries were obtained assuming no _a priori_ knowledge of the physical nature of the surface pressure, but assuming that the total wave height has been measured in addition to the bottom pressure. When, instead of the total wave height, we considered, say, the mean horizontal velocity at the bottom, we were also able to recover both the free-surface and surface-pressure, with similar accuracy for \(\eta\) (\(\sim 10^{-8}\)) and \(B\) (\(\sim 10^{-10}\)).
With knowledge of the physical nature of the surface pressure, we were also able to recover the free surface without extra measurements besides the bottom pressure. This is obtained by minimising an error between the reconstructed and theoretical surface pressure as explained in section 3.4. Our preliminary numerical investigations seem to indicate that the choice of the error to minimise plays a role in the speed and accuracy of the recovery procedure. A thorough numerical investigation of this optimisation problem is well beyond the scope of this short paper, whose purpose is a proof of concept attesting to the possibility of recovering both the free-surface and the surface-pressure.
## 6. Discussion
We derived expressions for free-surface and surface-pressure recoveries, assuming the physical effects at the free surface or considering additional measurements. Then, we illustrated the practical procedure with a fast and simple numerical algorithm. The method proposed here is more general in substance than previous studies by [6, 3, 8], and can be generalised to incorporate linear shear currents along the lines of [9]. This approach can further be extended to accommodate overhanging waves (existing in presence of capillary and/or vorticity) as recently shown by [12].
So far, we have considered recovery procedures from bottom pressure measurements, but similar relations could be derived considering the pressure at another depth, as well as other measured physical quantities. Further extensions to configurations with non-permanent wave motions or arbitrary vorticity, for example, are also of great interest, but present technical challenges beyond the scope of this current work.
In this short paper, we demonstrated the possibility of recovering the free-surface with arbitrary surface-pressure, and we briefly illustrated the procedure with a few examples. We did not address the (difficult) question of uniqueness of the free-surface from a given bottom-pressure. Indeed, for instance, capillary-gravity waves are not unique for identical physical parameters [2, 7]. Although the recovery from bottom measurements is a slightly different problem, this example indicates that the question of uniqueness is important, both theoretically and practically, and it should be the subject of future investigations.
**Funding.** Joris Labarbe has been supported by the French government, through the UCA\({}^{\rm jedi}\)_Investments in the Future_ project managed by the National Research Agency (ANR) with the reference number ANR-15-IDEX-01.
**Declaration of interests.** The authors report no conflict of interest.
|
2310.20477 | Exploring Practitioner Perspectives On Training Data Attribution
Explanations | Explainable AI (XAI) aims to provide insight into opaque model reasoning to
humans and as such is an interdisciplinary field by nature. In this paper, we
interviewed 10 practitioners to understand the possible usability of training
data attribution (TDA) explanations and to explore the design space of such an
approach. We confirmed that training data quality is often the most important
factor for high model performance in practice and model developers mainly rely
on their own experience to curate data. End-users expect explanations to
enhance their interaction with the model and do not necessarily prioritise but
are open to training data as a means of explanation. Within our participants,
we found that TDA explanations are not well-known and therefore not used. We
urge the community to focus on the utility of TDA techniques from the
human-machine collaboration perspective and broaden the TDA evaluation to
reflect common use cases in practice. | Elisa Nguyen, Evgenii Kortukov, Jean Y. Song, Seong Joon Oh | 2023-10-31T14:10:30Z | http://arxiv.org/abs/2310.20477v2 | # Exploring Practitioner Perspectives On Training Data Attribution Explanations
###### Abstract
Explainable AI (XAI) aims to provide insight into opaque model reasoning to humans and as such is an interdisciplinary field by nature. In this paper, we interviewed 10 practitioners to understand the possible usability of training data attribution (TDA) explanations and to explore the design space of such an approach. We confirmed that training data quality is often the most important factor for high model performance in practice and model developers mainly rely on their own experience to curate data. End-users expect explanations to enhance their interaction with the model and do not necessarily prioritise but are open to training data as a means of explanation. Within our participants, we found that TDA explanations are not well-known and therefore not used. We urge the community to focus on the utility of TDA techniques from the human-machine collaboration perspective and broaden the TDA evaluation to reflect common use cases in practice.
## 1 Introduction
The suite of explainable AI (XAI) encompasses models and explanation methods that aim at uncovering the rationale behind black-box model behaviour for humans [1]. XAI methods are usually attribution methods, which can be categorised into feature and instance attribution. While the former finds explanations for model predictions within the features of an input (e.g. SHAP [2]), the latter explains model predictions at the instance level (e.g. Influence functions [3]).
This study focuses on an instance attribution approach called training data attribution (TDA). TDA gives insight by attributing model behaviour to training samples [4, 5]. The ground truth attribution of the model prediction on test sample \(z\) to a training sample \(z_{j}\) is the change in loss after leave-one-out retraining:
\[\mathrm{TDA}(z_{j},z):=\mathcal{L}(z;\theta_{\setminus j})-\mathcal{L}(z;\theta) \tag{1}\]
where the model parameters \(\theta\) are trained with the loss \(\mathcal{L}\). As such, TDA views the model as an output of the learning algorithm and attributes model behaviour to parts of the training set.
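To make Eq. (1) concrete, the following minimal sketch computes ground-truth TDA by brute-force leave-one-out retraining. It is only an illustration: the synthetic dataset, the logistic-regression model and all variable names are arbitrary choices for this example and are not tied to any specific TDA method or library.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import log_loss

# Synthetic data: 200 training samples plus one test sample z (illustrative only).
X, y = make_classification(n_samples=201, n_features=10, random_state=0)
X_train, y_train, x_test, y_test = X[:200], y[:200], X[200:201], y[200:201]

def test_loss(X_tr, y_tr):
    """Train on (X_tr, y_tr) and return the loss L(z; theta) on the test sample."""
    model = LogisticRegression(max_iter=1000, random_state=0).fit(X_tr, y_tr)
    prob = model.predict_proba(x_test)
    return log_loss(y_test, prob, labels=model.classes_)

base_loss = test_loss(X_train, y_train)          # L(z; theta)

# Ground-truth TDA of every training sample z_j: change in test loss after leave-one-out retraining.
tda = np.array([
    test_loss(np.delete(X_train, j, axis=0), np.delete(y_train, j)) - base_loss
    for j in range(len(X_train))
])

# Positive TDA: removing z_j increases the loss on z, so z_j was "helpful" for this prediction.
print("Most influential training indices:", np.argsort(tda)[::-1][:5])
```

Retraining once per training sample is exactly the cost that the approximation methods discussed in Section 2 aim to avoid.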
Explanations of machine learning (ML) models are sociotechnical in nature [6]. Efforts in human-centred XAI emphasise this side of XAI and aim at a deeper understanding of the explainee because it is essential for the effective adoption of XAI in practice [7]. Yet, we find that the human factor of XAI is underexplored for TDA.
To address this gap, we present a qualitative interview study with ML practitioners in application areas of high-risk systems according to Annex III of the EU AI Act [8] (e.g. healthcare, employment, law enforcement). ML applications in such areas will require assessment throughout their product lifecycle. We therefore expect XAI to be particularly relevant in such areas.
By interviewing practitioners, we take a human-centered perspective which gives us an impression of how ML models and explanation methods are put into practice and how practitioners view
the idea of TDA. Through an inductive thematic analysis, we find: (1) End-users are interested in training data attribution methods that could facilitate human-machine collaboration. Model developers find value in methods that enable them to improve the dataset quality. (2) Though the idea of TDA is generally positively perceived, within our participant pool, TDA is not utilised. XAI tools are only used as out-of-the-box functionality. We therefore anticipate that TDA tools can deliver practical value if they are easy to implement.
## 2 Related Work
Interview studies provide insights into human factors in explainable AI (XAI) and can inform the design of human-centred XAI technology [9]. Previous work has conducted semi-structured interviews with XAI practitioners of different technical expertise to study how people understand the problem of XAI [10], people's preferences regarding interactivity of explanations [11] and user needs, intentions and perceptions of explanations [12]. They found that user needs and XAI vocabulary vary across users [10] but interactivity [11] and actionability [12] are desired. These studies result in concrete recommendations about XAI in practice, i.e. a call for consistent vocabulary to facilitate clear communication and progress in XAI [10], the case for interactive dialogue systems [11] and the need for considering an explanation's actionability in the design process [12]. However, they base their studies mainly on feature attribution explanations while our work focuses on training data attribution (TDA) explanations. We therefore expand on existing literature about user perspectives on XAI.
TDA captures a training sample's attribution to a model decision on a test sample through the counterfactual change in a model's output on a test sample when a training sample is removed from the dataset (cf. Eq. 1). As computing TDA directly is prohibitive due to retraining costs, several methods exist [3, 13, 14, 15, 16] which focus on accurately approximating TDA. Applications of TDA methods are focused on topics from data-centric AI i.e. aiming at model improvement by improving the data (e.g. cleaning faulty data labels [17] or detecting model biases [18, 19]). We find that the study of user needs and perspectives is underexplored for TDA. Our study presents a first step in addressing this gap.
## 3 Interview methodology
This study aims to explore practical perspectives on training data attribution (TDA). Since we study subjective experiences, we opt for a qualitative analysis through interviews. We conducted semi-structured interviews to balance the interview structure and the freedom of conversational flow [20] and analysed the transcripts in an inductive thematic analysis (cf. Figure 1). 1
Footnote 1: The IRB approval process is currently ongoing. We expect a decision in November 2023.
Participants.We define inclusion criteria to ensure participants align with our research aims: They should (1) have at least one month of experience in working with ML systems and (2) work in a high-risk application area according to the EU AI Act [8] (e.g. health care, law enforcement, employment; full list in Appendix A). This criterion serves to focus our studies on application areas that are likely to be subject to further regulations and governance in the future [1, 21]. Recruiting participants poses a challenge, especially in high-risk application areas. Hence, we use purposive sampling [22] and approach potential participants from the authors' network individually. We recruit 10 participants from diverse domains and degrees of experience (cf. Table 1).
Figure 1: Interview and data analysis process.
Interview process.The interviews were conducted during June - September 2023, either in person or remotely via video call. All interviews are one-on-one conversations in English, except with P10 in German. The participants were first briefed on the objective of the study and data processing using the informed consent form (cf. Appendix B). Upon receiving informed consent, we started the interview recording. Overall, the interviews lasted between 30 to 60 minutes. In each interview, the following topics are addressed (full interview guide in Appendix C):
* **Job-related information.** Perspectives may vary between different domains and levels of seniority as well as experience with the ML tool.
* **Interviewee's workflow with ML systems.** By asking about the workflow with the ML tool, we wish to understand the patterns of usage and challenges participants encounter.
* **Perspectives on training data.** Since we investigate TDA explanations, we explicitly ask participants about the role training data plays in their tasks.
* **Perspectives on data-driven XAI.** We address the participant's perspectives on XAI and particularly on TDA.
Interview transcription.The interviews are first transcribed automatically using Whisper [23] and then cleaned up manually. The transcript is then pseudonymised. We translated P10's German transcript to English using DeepL [24].
Analysis.We analyse the transcripts through an inductive thematic analysis by two coders (cf. Figure 1). The analysis is iterative: The interview transcript of P1 is first analysed jointly in an initial coding workshop. Afterwards, coders independently code five transcripts, extending on the themes and codes found in the initial analysis. During an intermediate coding workshop, agreements and disagreements between the coder's themes and codes are discussed. The workshop resulted in a new, merged definition of themes and codes which are used for the remaining transcripts. At the intermediate coding workshop, the interrater agreement is 77.3% measured by the percentage of agreement participants coded to themes. The final coding workshop serves the same purpose - after both coders reviewed the remaining transcripts, the overlap and gaps are discussed and the final themes are agreed upon. The final interrater agreement is 80.3%. Full analysis instructions in Appendix D.
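For concreteness, one plausible reading of this percentage-agreement metric is sketched below; the per-theme participant sets assigned by each coder are invented toy data, and the exact definition used in the study may differ.

```python
def percentage_agreement(coder_a: dict, coder_b: dict) -> float:
    """Each argument maps a theme to the set of participant IDs coded to it."""
    themes = coder_a.keys() | coder_b.keys()
    agreed = sum(len(coder_a.get(t, set()) & coder_b.get(t, set())) for t in themes)
    total = sum(len(coder_a.get(t, set()) | coder_b.get(t, set())) for t in themes)
    return 100.0 * agreed / total if total else 100.0

# Invented toy codings for illustration only.
coder_a = {"trust calibration": {"P1", "P6", "P10"}, "data quality": {"P2", "P3", "P4"}}
coder_b = {"trust calibration": {"P1", "P6"}, "data quality": {"P2", "P3", "P4", "P8"}}
print(f"{percentage_agreement(coder_a, coder_b):.1f}%")   # 71.4% for this toy example
```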
## 4 Findings
The result of the thematic analysis is shown in Figure 2. We identified six main themes which are related to the current use of ML systems, perspectives towards explainable AI (XAI) and training data attribution (TDA). Two groups of interviewees have provided noticeably different perspectives - end-users and model developers. We thus discuss their perspectives separately.
### End-user perspective
An end-user makes use of ML tools and is not involved in the model-building process. We find that end-users often face challenges related to trust calibration when using ML systems and identify a possible use of TDA in facilitating human-machine collaboration.
\begin{table}
\begin{tabular}{l l l l l l} \hline \hline ID & Country of work & Domain & Type & Job experience/with ML & Type of ML \\ \hline P1 & Germany & HR & End-user & 3 yrs/1 mo. & Chatbot \\ P2 & USA & AV & Developer & 2 yrs/7 yrs & Prediction model \\ P3 & Netherlands & TC & Developer & 3 yrs/5 yrs & Prediction model \\ P4 & Finland & CV & Developer & 4 yrs/6 yrs & Prediction model \\ P6 & Switzerland & Health & End-user & 2 yrs/2 yrs & Prediction model \\ P7 & Netherlands & Health & Developer & 1 yr/3 yrs & Prediction model \\ P8 & Belgium & Health & Developer & 2 yrs/6 yrs & Prediction model \\ P9 & Pakistan & Health & Developer & 5 yrs/2 yrs & Prediction model \\ P10 & Germany & HR & End-user & 3 yrs/1 yr & Chatbot \\ P11 & Germany & Health & End-user & 10 yrs/6 yrs & Clustering, Chatbot \\ \hline \hline \end{tabular}
\end{table}
Table 1: Participant information. HR = Human resources, AV = autonomous vehicles, TC = telecommunications, CV = Computer vision for automation. P5 did not meet the inclusion criteria.
**Role of ML system.** End-users use ML systems for work assistance and decision support. Chatbots generally fill the role of a work assistant that "[takes work off of the participant's hands and makes their work easier]" (P10) and is "available around the clock" (P10). Participants use chatbot systems to improve their writing in English (P1, P11), to search for information where previously they would "[ask] Google" (P1), or to ideate research ideas (P11). Moreover, P10 trusts their company-internal chatbot enough to reflect simple employee questions to the chatbot. As decision support systems, ML systems deliver information that acts as a basis for decisions taken by end-users, e.g. diagnosis support (P6).
Workflow with ML system.End-users rely on ML systems when they deliver helpful suggestions. If the ML system generates unhelpful results (P6, P10), users take over and turn away from ML support. We also find that ML systems often lack feedback loops, particularly when ML systems are purchased as a product from the market, leading end-users to voice concerns mainly when bugs accumulate (P6).
Role of training data.End-user participants were unaware of which training data may have been used to train the models (P1, P10). P6 (a medical doctor) mentions that they would be curious about training data but "[it] is a luxury that [requires time]", highlighting the practical constraint of time pressure.
The greatest challenge of ML system end-users is trust calibration.Our findings agree with Kim et al. [12]: It is unclear how much and when a system can be trusted. P1 sometimes finds themselves in a dilemma in which they wish to learn something from the chatbot, but are unable to calibrate their trust in the response due to missing knowledge: "I don't know everything regarding this topic. I [don't even] know what he's replying to me." (P1). P10 also mentions inadequate know-how in ML system usage as a challenge: "the employees often don't manage to ask the chatbot the right questions".
Use of XAI in practice.Not all participants have used XAI. Some participants were unaware of XAI since explanations are not a part of the ML tools they use (P1, P10, P11). P6 reports that XAI tools they used in radiology images so far (i.e. heatmaps) do not deliver a full answer to the why question, as counterfactual information is missing: "[If] I just get an overall highlight in these basal lung regions and the prediction that is atelectasis, I still don't know why this is atelectasis but not pleural effusion or consolidation." Moreover, P6 highlights their time constraints: They would hardly be able to look at explanations even when available. Therefore, the end-user's challenges in using XAI are not only a lack of awareness and availability but also limited time.
Perspectives on data-centric XAI.End-users are not familiar with the idea of TDA explanations. When asked for their opinion about the concept of TDA, chatbot users (P1, P10, P11)
Figure 2: Theme overview as a result of the thematic coding process. Themes directly related to training data and TDA are highlighted in orange.
were interested in training samples which give additional information related to their request or samples which help them improve their interaction with the model and ask better questions ("if [the chatbot] can also sometimes formulate: [for stupid questions] you [cannot expect a good answer because] [...] then maybe you understand [how to ask the question better] and can ask it more precisely again" - P10). P6 would be interested in samples similar to the test sample to calibrate their trust. P6 also emphasised that explanations can only be helpful if there is time to spend on an explanation. Back-and-forth interactions with the system are "absolutely unrealistic" (P6). The above findings agree with the insights in Kim et al. [12]: End-users want explanations that help them improve collaboration with the ML system. End-users wish to overcome the challenge of trust calibration and showed a positive sentiment towards the idea of TDA.
### Model developer perspectives
Model developers are concerned with the building of ML systems. We find that model developers often face challenges related to data quality and identify potential use cases for TDA.
Role of ML systems.Model developers work on decision-support (P3, P5, P7, P8, P9) and automation systems (P2, P4). They build ML systems according to the customer's needs. P3 uses ML systems to identify and explain the contributing factors to product issues: "If we can predict it, we can also have an idea what are the factors mostly creating this phenomenon."
Workflow with ML systems and the role of training data.Developers and end-users collaborate closely for building and evaluating ML models, where bugs are reported to the developers by the end-users (P2, P3, P4, P9). This shows a clear separation of domain knowledge: "Because personally, I cannot know if the model is doing the correct thing [...] business have to tell me" (P3). The model-building workflow is focused on data and developers spend a considerable amount of time with data curation (P2, P4, P3, P7, P8, P9). Participants explicitly stated that they use standard model architectures and the majority of the work is dataset curation (P3, P4, P8): "[What] drives your model is your data. [...] [If] it's already an established problem, you're probably not going to do better than an algorithm that's already been laid out to solve that problem for you." (P8). Data quality checks are a set part of the data preprocessing pipeline (P2, P3, P4, P8). P2 and P9 reported that they first assess data quality before inspecting the model in debugging. Furthermore, P2 explained that collecting more data is a common way to overcome model shortcomings in autonomous driving. This shows that development work in practice is centred around data. Consequently, model developer participants consistently view training data as the most important variable in a model, e.g. "[we] [...] believe that [...] the models can only be as good as the [...] data that you feed in." (P4).
Challenges in working with ML systems.Data quality issues are often the root cause of model malfunction. Participants report distribution shifts (P3, P4), data collection artefacts like missing data or labels (P2, P4, P3, P7, P8, P9), wrong labels (P2, P4), wrong data formats due to aggregation of different data sources (e.g. dates being interpreted as integers or wrong ordering of temporal data) (P8, P9), and historical data (P9). Issues with data quality impact model validation; for example, participants encounter difficulties due to absent labels. Furthermore, P2 mentions that the validation itself is a challenge due to multiple requirements that the ML system should fulfil. P2 also sees a challenge in the stochastic nature of ML models: "[The] same data set, same model, you train multiple times, you can get [different] results." In addition, memory and compute constraints are relevant to P2 and P4 as they work with ML systems on the edge. Our analysis shows that data plays a substantial role both in the challenges faced by model developers and in the development process itself.
Use of XAI in practice.Participants use XAI for different purposes, most commonly as a tool for model development (P2, P3, P8). As such, XAI tools offer explanations for per-example debugging of e.g. wrong predictions or act as a sanity check for model reasoning. Furthermore, P8 states that XAI tools are useful in getting customer buy-in and convincing the customers of the model's decision suggestion. P3 described the use of XAI as a tool to understand phenomena represented by the ML model: "[Building] the model, the whole purpose is to get some explainability. Because [...] we know that [a problem is] happening and predicting doesn't really add value. But if we can predict it, we can also have an idea what are the factors mostly
creating this phenomenon." While XAI and therefore explanations have different purposes, we note that participants use XAI tools mainly as an out-of-the-box functionality. P3 and P8 reported using a SHAP [2] library, whereas P2 visualises attention maps. We find that implementation thresholds must be low for the adoption of XAI in practice.
Perspective on (data-centric) XAI.Within our participants, we find that model developers are not familiar with TDA explanations. However, when asked about their intuition on what important training data could be, participants talked about out-of-distribution samples (P3, P8), mislabelled samples (P2), and samples close and far from the model's decision boundary (P7, P8). Developers seek to understand the data distribution and find ways to improve the data quality, and participants are interested in how TDA enables this. However, some participants specified that the usefulness of XAI depends on certain conditions: P3 and P8, who use explanations to present models to their business, state that in their experience, model performance must be high for explanations to serve their purpose. Additionally, P8 mentions that finding an individual training sample is unlikely to be informative in a large dataset and that relevant data on a "collection level" would be more interesting. Our analysis shows that the idea of TDA is positively perceived by model developers. Furthermore, TDA as a data-centric XAI approach could fit well into the work of a model developer, which is strongly centred around the data itself.
## 5 Implications for future TDA research
Status quo of TDA research.Training data attribution (TDA) explains model behaviour by finding relevant training data to a model prediction, where "relevant" is defined by the change in loss after leave-one-out retraining (LOO) (cf. Eq. 1) [4, 5]. As mentioned in section 2, recent TDA research is focused on studying efficient and accurate approximations of Eq. 1 (e.g. [16]) or the application of TDA methods to particular use cases in data-centric AI (e.g. [18]). The human factor in TDA is underexplored and our study takes a first step in addressing this gap.
Some of the ideas from our study are actively researched.Our analysis of participants' ML workflow and perspectives on XAI has shed light on the required features for TDA methods. Some are being actively studied in the community: P8 mentions that the attribution of a single training sample is unlikely to be informative, which has been studied in e.g. [25, 26] and motivated TDA approaches like [27, 28]. Also, model developers' intuition that mislabeled data are important training data is addressed in TDA research through existing evaluations using mislabel identification tasks as in Koh and Liang [3].
Some are yet to be studied further.Other perspectives could add to TDA research: Participants mention several types of data quality issues beyond mislabels, such as missing data (P3, P8, P9), wrong data formats (P8, P9), distribution shifts (P3, P4), which are currently not often considered in evaluation. Furthermore, questions related to TDA in human-machine collaboration, like interaction and usability (P1, P6, P10, P11), are not explored in TDA research.
Future directions in TDA research.It is important to consider the user and human factors in the development of XAI technology like TDA, whether it addresses model developers or end-users [6]. We find that participants are generally unaware of TDA and therefore do not apply it even in suitable use cases. To improve accessibility, TDA researchers should understand and address user needs better. This includes, for example, expanding the current evaluation practices to cover diverse use cases. Practical constraints like time pressure (P6) and low implementation thresholds (P3, P8) should also be actively formulated as one of the research goals in the future.
## 6 Conclusion
We present a qualitative interview study with ML practitioners from various high-risk application areas to investigate the human factor of training data attribution (TDA) explanations. Through an inductive thematic analysis, we find that priorities and perspectives differ between end-users and developers but the idea of gaining insights into the model through training data is positively perceived overall. Our research reveals possible research directions in TDA to bridge the gap from research to practice: TDA for human-machine collaboration and expanding the evaluation of TDA to diverse data-centric use cases. Further, we highlight that simple and intuitive implementations of TDA methods are key.
## Acknowledgments and Disclosure of Funding
The authors thank the International Max Planck Research School for Intelligent Systems (IMPRS-IS) for supporting Elisa Nguyen. This work was supported by the Tubingen AI Center.
|
2309.05964 | Massive Access of Static and Mobile Users via Reconfigurable Intelligent
Surfaces: Protocol Design and Performance Analysis | The envisioned wireless networks of the future entail the provisioning of
massive numbers of connections, heterogeneous data traffic, ultra-high spectral
efficiency, and low latency services. This vision is spurring research
activities focused on defining a next generation multiple access (NGMA)
protocol that can accommodate massive numbers of users in different resource
blocks, thereby, achieving higher spectral efficiency and increased
connectivity compared to conventional multiple access schemes. In this article,
we present a multiple access scheme for NGMA in wireless communication systems
assisted by multiple reconfigurable intelligent surfaces (RISs). In this
regard, considering the practical scenario of static users operating together
with mobile ones, we first study the interplay of the design of NGMA schemes
and RIS phase configuration in terms of efficiency and complexity. Based on
this, we then propose a multiple access framework for RIS-assisted
communication systems, and we also design a medium access control (MAC)
protocol incorporating RISs. In addition, we give a detailed performance
analysis of the designed RIS-assisted MAC protocol. Our extensive simulation
results demonstrate that the proposed MAC design outperforms the benchmarks in
terms of system throughput and access fairness, and also reveal a trade-off
relationship between the system throughput and fairness. | Xuelin Cao, Bo Yang, Chongwen Huang, George C. Alexandropoulos, Chau Yuen, Zhu Han, H. Vincent Poor, Lajos Hanzo | 2023-09-12T05:18:09Z | http://arxiv.org/abs/2309.05964v1 | # Massive Access of Static and Mobile Users via Reconfigurable Intelligent Surfaces:
###### Abstract
The envisioned wireless networks of the future entail the provisioning of massive numbers of connections, heterogeneous data traffic, ultra-high spectral efficiency, and low latency services. This vision is spurring research activities focused on defining a next generation multiple access (NGMA) protocol that can accommodate massive numbers of users in different resource blocks, thereby, achieving higher spectral efficiency and increased connectivity compared to conventional multiple access schemes. In this article, we present a multiple access scheme for NGMA in wireless communication systems assisted by multiple reconfigurable intelligent surfaces (RISs). In this regard, considering the practical scenario of static users operating together with mobile ones, we first study the interplay of the design of NGMA schemes and RIS phase configuration in terms of efficiency and complexity. Based on this, we then propose a multiple access framework for RIS-assisted communication systems, and we also design a medium access control (MAC) protocol incorporating RISs. In addition, we give a detailed performance analysis of the designed RIS-assisted MAC protocol. Our extensive simulation results demonstrate that the proposed MAC design outperforms the benchmarks in terms of system throughput and access fairness, and also reveal a trade-off relationship between the system throughput and fairness.
Next generation multiple access, reconfigurable intelligent surfaces, MAC efficiency, access fairness.
## I Introduction
With the envisioned demands for access by massive numbers of users, high spectral/energy efficiency (SE/EE), and low-cost services (e.g., virtual/augmented reality (VR/AR), holographic telepresence, etc.) for the forthcoming Sixth Generation (6G) networks, research in future wireless communications continues to focus on the design of next generation multiple access (NGMA). To improve the SE/EE and quality-of-service (QoS), the NGMA approaches have to overcome the limitations of multiple access schemes in current wireless network standards by leveraging the envisioned gains of emerging techniques, such as reconfigurable intelligent surfaces (RISs) and artificial intelligence (AI), as well as new technologies yet to be defined [1, 2, 3]. These technologies enabling or being enabled by NGMA also give impetus to the design of medium access control (MAC) protocols, involving joint communication, control, and computing functionalities, which are expected to be vital for highly efficient massive multiple access [4, 5, 6].
In the development process of multiple access technologies, conventional orthogonal multiple access (OMA) schemes, such as time-division multiple access (TDMA), frequency-division multiple access (FDMA), and code-division multiple access (CDMA), allow each user to transmit in an orthogonal manner, thereby simplifying the transceiver design and avoiding interference among users. Compared to the family of OMA schemes, non-orthogonal multiple access (NOMA) schemes have already been investigated in 5G networks because they bring an additional degree of freedom in the power domain, which helps each user achieve its QoS target [7, 8, 9, 10]. However, both OMA and NOMA schemes have limitations. Specifically, for OMA, the multiple access efficiency is limited by the radio resources and the signaling overhead, while for NOMA, most research thus far has focused on static and broadband users, without considering mobility and randomness [4].
### _Motivation_
Next generation wireless networks are rapidly evolving toward a distributed intelligent communication, sensing, and computing platform, realizing the software-based network functionality paradigm. Efficient NGMA schemes are required to adapt to this trend. In this unifying context, there are several challenges to be addressed by NGMA, with two of the most prominent being the following:
* **Challenge-1:** How to improve the throughput performance of NGMA approaches when operating in complex wireless environments, where there exist both static and mobile users?
* **Challenge-2:** How to achieve improved connectivity via NGMA schemes, while guaranteeing access fairness?
Advances in metamaterials have recently fuelled research on RISs, which can beneficially reconfigure wireless communication environments with the aid of large planar arrays of low-cost reconfigurable elements [11, 12, 13, 14]. RISs are therefore becoming a potential solution to tackle the above-mentioned challenges. In a practical network consisting of both static and mobile users, the throughput performance and connectivity of traditional multiple access schemes are significantly affected by the randomness and mobility of users. For such cases, RISs can be incorporated into the NGMA design to enhance the wireless communication links of static and mobile users simultaneously, and thus improve the throughput and connectivity performance. However, RIS-assisted NGMA approaches face the complex problem of designing the RISs for massive numbers of static and mobile users, which involves multi-user resource allocation and RIS phase configuration optimization.
Motivated by these potential advantages, in this paper we investigate the intricate interplay between the MAC protocol and the RIS configuration, as highlighted in Fig. 1. On the one hand, we demonstrate that the low complexity of the RIS phase-profile configuration and implementation significantly improves the MAC efficiency. On the other hand, our efficient MAC protocol supports the coexistence of static and mobile users in conjunction with our low-complexity RIS configuration, thereby supporting a massive number of users via multiple RISs. Based on these compelling features, such an efficient NGMA scheme is indeed eminently suitable for next-generation wireless communication systems.
### _State-of-the-Art_
Recently, RISs have introduced some significant changes and new opportunities for wireless communications [15, 16, 17, 18]. This new paradigm results in the migration from traditional wireless connections to "intelligent-and-reconfigurable connections".
#### I-B1 RIS Configuration in Wireless Communications
As a newly proposed paradigm going beyond massive multiple-input multiple-output (MIMO), RISs, featuring low-cost, ultra-thin, light-weight, and low-power-consumption hardware structures, provide a transformative means of turning wireless environments into programmable smart entities. In the context of RIS-aided communications, the authors of [19, 20, 21, 22, 23, 24] focused significant attention on the configuration of RISs. Explicitly, Wu _et al._ studied the problem of joint active and passive beamforming [19]. To achieve high energy efficiency, Huang _et al._ investigated an RIS-assisted downlink multi-user system by jointly optimizing the transmit power and the passive beamforming [20]. To increase the sum rate, Guo _et al._ studied an RIS-aided multi-user multiple-input single-output downlink system by jointly designing the beamforming and RIS phase shifts [21]. To assess the effect of RIS phase shifts on the data rate, Zhang _et al._ jointly optimized the number of RIS phase shifts and the RIS reflection beamforming for RIS-assisted communication systems [22]. Li _et al._ jointly designed the trajectory and the RIS reflect beamforming for UAV communications [23]. Abeywickrama _et al._ investigated a practical RIS phase shift model, and jointly designed the transmit beamforming and the RIS reflect beamforming [24]. In addition, the physical-layer security of RIS-assisted systems was analyzed in [25], while deep learning techniques in RIS-aided systems were explored in [26, 27, 28] and were further investigated, in place of conventional optimization methods, for RIS-assisted aerial-terrestrial communications [29, 30].
#### I-B2 MAC Protocol for RIS-Assisted Communications
With the physical-layer technological breakthroughs on RISs, an enormous amount of research effort has focused on RIS-assisted multi-user communication systems, and especially on the MAC protocols required for system-level improvement. To date, both distributed and centralized MAC protocols have been proposed for RIS-assisted communications. To be specific, TDMA-based schemes were presented to enable multiple users' communications via RISs on the same frequency in different time slots. For example, Hu _et al._ designed a frame-based MAC protocol for an RIS-assisted sensing system to achieve accurate posture recognition [31]. Bai _et al._ proposed a TDD transmission protocol for RIS-aided mobile edge computing (MEC) systems [32]. Cao _et al._ proposed a frame-based MAC protocol to converge RIS and MEC into space information networks [33], and Yang _et al._ extended these investigations to an RIS-assisted intelligent spectral learning system [34].
FDMA-based schemes were also adopted, letting multiple users communicate via RISs in the same time slot on non-overlapping frequency channels; for example, Yang _et al._ proposed a practical OFDM-based transmission protocol for an RIS-enhanced communication system [35]. Jung _et al._ investigated an RIS-aided transmission protocol combining TDD with OFDMA to achieve user scheduling and power control [36]. Moreover, SDMA-based schemes were used to support communications among users via RISs either in a unique angular direction or by spatial multiplexing [37]. Furthermore, NOMA schemes were conceived for enhancing the multiple access performance of RIS-assisted multi-user communications [38, 39, 40, 41]. In contrast
Fig. 1: The interplay of MAC protocol and RIS configuration.
to these centralized MAC protocols, Cao _et al._ designed a distributed MAC protocol for RIS-assisted multi-user systems that accounts for the mobility and randomness of users [6, 42]. Additionally, efficient AI-based RIS-assisted MAC protocols have been investigated in [6, 30].
### _Contributions and Organizations_
The major contributions are summarized as follows:
* **Framework design:** To improve the throughput performance of NGMA, we propose an RIS-assisted multiple access framework. In the proposed framework, the static and mobile users can communicate with the base station (BS) via RISs through different multiple access schemes at a low cost.
* **MAC protocol:** To achieve high connectivity and access fairness for NGMA, we design a MAC protocol that integrates schedule-based and contention-based schemes into a single frame. By implementing different RIS configurations for different types of users, we achieve highly efficient RIS-assisted multiple access, while accounting for randomness and mobility.
* **Analysis and optimization:** We first analyze the system throughput performance of the proposed MAC protocol. Then, we formulate a joint optimization problem to maximize the system throughput, while guaranteeing the fairness of users. To solve the formulated problem, we decompose the original problem into two sub-problems: the MAC design problem and the RIS phase configuration problem, and then an alternating optimization technique is adopted to solve them.
* **Performance evaluation:** We evaluate the proposed MAC protocol in terms of system throughput and fairness. Simulation results reveal a trade-off relationship between the system throughput and fairness, and demonstrate that our MAC design outperforms benchmarks in terms of system throughput and access fairness.
The rest of this paper is organized as follows. We propose a multiple access framework for RIS-assisted multi-user communications in Section II. We then design a MAC protocol for the proposed framework in Section III. Next, we analyze the system performance of the designed MAC in Section IV. Furthermore, we formulate a joint optimization problem that includes the MAC protocol and the RIS configuration to maximize the system performance, and we also solve the formulated mixed-integer nonlinear programming (MINLP) problem in Section V. Simulation results are discussed in Section VI. Finally, conclusions are drawn in Section VII.
_Notations:_ As per the traditional notation, a bold letter indicates a vector or matrix. \(\max\{\cdot\}\) and \(\min\{\cdot\}\) represent the maximum value and the minimum value, respectively. The amplitude of a complex number \(x\) is denoted by \(|x|\). The main notation we use is listed in Table I.
## II Considered Multiple Access framework
In this section, we first introduce the system scenario of RIS-assisted multi-user wireless communications in Section II-A, and then we present a multiple access framework in Section II-B.
### _System Scenario_
We explicitly consider the different mobility profiles of users in practical scenarios (e.g., in smart industries where fixed sensors and mobile robots co-exist), and we improve the efficiency of the MAC protocol and reduce the complexity of the RIS configuration in our multi-user communication scenario by exploiting these mobility profiles, as illustrated in Fig. 2. In contrast to [43, 44], users having similar mobility profiles are grouped together to enhance the interplay of the RIS configuration and the MAC protocol. In our scenario, we consider \(K\) existing users, \(Z\) new mobile users, \(M\) RISs each having \(N\) reflecting elements, and a BS, where the \(M\) RISs are employed to assist the communications of the \(K+Z\) users with the BS over \(C\) sub-channels. Here, we define the existing users and new mobile users as follows.
* _Existing users:_ These are the users who are already supported by the network. There are two types of existing users: static users without mobility and mobile users who may move out of the BS's coverage area.
* _New mobile users:_ These are users who are just joining
the network in the current frame due to mobility.

TABLE I: List of Main Notation

| **Notation** | **Definition** |
| --- | --- |
| \(\mathcal{K}\) | The set of \(K\) existing users |
| \(\mathcal{M}\) | The set of \(M\) RISs |
| \(\mathcal{N}\) | The set of \(N\) reflecting elements on one RIS |
| \(\mathcal{C}\) | The set of \(C\) sub-channels |
| \(\mathcal{J}\) | The set of \(J\) data slots on each sub-channel |
| \(\mathcal{X}\) | The set of \(X\) static users |
| \(\mathcal{Y}\) | The set of \(Y\) mobile users |
| \(Z\) | The number of new mobile users |
| \(U_{k}\) | The \(k\)-th user |
| \(R_{m}\) | The \(m\)-th RIS |
| \(\mathbf{g}_{km}\) | The vector of the reflected path between \(U_{k}\) and \(R_{m}\) |
| \(\mathbf{h}_{km}\) | The vector of the reflected path between \(R_{m}\) and the BS |
| \(r_{k}\) | The direct path between \(U_{k}\) and the BS |
| \(D_{cj}\) | The \(j\)-th data slot on sub-channel \(c\) |
| \(u_{k}\) | The mobility profile of \(U_{k}\) |
| \(a_{km}\) | The state of \(R_{m}\) for the \(X\) static and \(Y\) mobile users |
| \(t_{kj}\) | The state of \(D_{cj}\) for the \(X\) static users |
| \(\mathcal{S}_{s}\) | The throughput of the scheduled transmissions |
| \(\mathcal{S}_{c}\) | The throughput of the contended transmissions |
| \(\mathcal{S}_{o}\) | The overall throughput |
| \(\alpha\) | The ratio of the scheduled transmission period |
| \(\beta\) | The ratio of the contended transmission period |
| \(t_{0}\) | The duration of the pilot period |
| \(t_{1}\) | The duration of the computing period |
| \(t_{2}\) | The duration of the transmission period |
| \(t\) | The duration of a data slot |
| \(\mathcal{T}\) | The set of \(t_{0}\), \(t_{1}\), and \(t_{2}\) |
| \(s_{k}\) | The transmit signal of \(U_{k}\) |
| \(w_{k}\) | The additive white Gaussian noise |
| \(\mathbf{\Theta}_{km}\) | The RIS reflection-coefficient matrix of \(R_{m}\) for \(U_{k}\) |
| \(\mathbf{\Psi}\) | The matrix of RIS phase shifts |
| \(\boldsymbol{\theta}_{km}\) | The vector of phase shifts on \(R_{m}\) for \(U_{k}\) |
| \(\text{SNR}_{km}\) | The SNR at the BS from \(U_{k}\) via \(R_{m}\) |
| \(\rho_{k}^{2}\) | The transmit power of \(U_{k}\) |
| \(B\) | The total bandwidth |
The set of existing users, RISs, reflecting elements and sub-channels are denoted by \(\mathcal{K}=\{1,\ldots,k,\ldots,K\}\), \(\mathcal{M}=\{1,\ldots,m,\ldots,M\}\), \(\mathcal{N}=\{1,\ldots,n,\ldots,N\}\), and \(\mathcal{C}=\{1,\ldots,c,\ldots,C\}\), respectively. We denote the \(k\)th user as \(U_{k},\ k\in\bar{\mathcal{K}}\), where \(\bar{\mathcal{K}}=\{1,\ldots,k,\ldots,K+Z\}\), while we represent the \(m\)th RIS as \(R_{m},\ m\in\mathcal{M}\). Each user is equipped with a single antenna and the BS is equipped with multiple antennas. Each RIS is equipped with a controller connected to the BS. The vector of reflected path between \(U_{k}\) and \(R_{m}\) is denoted by \(\mathbf{g}_{km}\in\mathbb{C}^{N\times 1}\), the vector of reflected path between \(R_{m}\) and BS for \(U_{k}\) is denoted by \(\mathbf{h}_{km}\in\mathbb{C}^{1\times N}\), and the direct path between \(U_{k}\) and BS is denoted by \(r_{k}\), where \(k\in\bar{\mathcal{K}},m\in\mathcal{M}\). In the system considered, a quasi-static fading channel model and perfect channel state information (CSI) are assumed1. Each RIS is assumed to be equipped with passive elements and operate in the non-overlapping frequency domain2. Additionally, we assume that one RIS can be used by one user at a time, and each user has the same payload. We also assume that the BS knows the network state of all existing users3.
Footnote 1: Channel estimation in RIS-assisted wireless systems is an ongoing field of research with approaches ranging from cascade channel estimation via passive RISs to compressed sensing channel estimation via RISs with minimal and basic sensing capabilities [45, 46].
Footnote 2: Compared with the desired reflected signal, the interference power caused by reflections via the remaining RISs in non-overlapping frequency bands is relatively low [47], and can be ignored.
Footnote 3: A new mobile user will be regarded as an existing user in the next frame once it has joined in the current frame, and the BS will update the value of \(K\) based on the dynamic network.
### _Framework_
A multiple access framework is conceived for RIS-assisted communications to exploit the symbiotic interplay between the MAC protocol and the RIS configuration. The proposed multiple access framework, shown in Fig. 2, combines two aspects: 1) the MAC protocol, and 2) the configuration mode of multiple RISs. These two interact with each other according to the dynamic wireless environment. The MAC protocol is designed and optimized on a frame-by-frame basis, and each frame is divided into three periods: a pilot period, \(t_{0}\); a computing period, \(t_{1}\); and a transmission period, \(t_{2}\). The latter consists of the scheduled and the contended transmission periods, whose proportions are \(\alpha\) and \(\beta\), respectively. In a frame, based on the pilot transmissions and the optimization computed, the proposed MAC protocol switches between the scheduled and the contended modes (i.e., static users are scheduled to communicate with the BS via the \(M\) RISs, while mobile users are allowed to contend for communications with the BS via the same RISs). If only static or only mobile users have to be served in a frame, the proposed MAC protocol degenerates into a pure scheduled mode or a pure contended mode, respectively. The ratio \(\beta/\alpha\) is optimized by the BS during the computing period. In addition, centralized optimization is used for the scheduled transmissions, while distributed optimization is used for the contended transmissions. In contrast to the conventional RIS-aided MAC design methods [42], the proposed multiple access framework incorporates the RIS configuration into the MAC protocol, thereby improving the MAC efficiency as well as reducing the complexity of the RIS configuration.
Specifically, the proposed multiple access framework has to achieve the following two challenging goals:
* For the MAC protocol, the low-complexity RIS configuration helps with improving the system throughput performance of the MAC protocol, while guaranteeing the fairness of users.
* For the RIS configuration, an efficient MAC protocol can decrease the RIS configuration complexity and improve the RIS utilization, and thus the system throughput performance of the MAC protocol can be further enhanced.
## III Proposed RIS-Assisted MAC Protocol
Based on the proposed multiple access framework and its design objectives, in this section, we present our scheme in terms of two aspects: i) MAC protocol in Section III-A, and ii) RIS configuration in Section III-B. Then, an intuitive example is illustrated in Section III-C.
### _MAC Protocol_
Based on the proposed multiple access framework in Section II-B, a MAC protocol that integrates the centralized and distributed implementations into a frame is presented, as illustrated in Fig. 3. The pilot period is further divided into \(K\) pilot slots according to the number of existing users. Based on the received pilot transmissions, the BS estimates the CSI of static users and calculates their RIS configuration and MAC protocol parameters during the computing period. Assume that the scheduled transmission period contains \(J\) data slots, and
Fig. 2: The multiple access framework for RIS-assisted communications by exploiting users’ mobility profiles, where \(K+Z\) users communicate with a BS via \(M\) RISs. During a frame, based on pilot transmission and computation, static users communicate with the BS by scheduling \(M\) RISs, while mobile users communicate with the BS using these RISs by contention. The ratio of the transmission period of two types is \(\beta/\alpha\), which is optimized by the BS during the computing period.
the \(j\)th data slot on sub-channel \(c\) is denoted as \(D_{cj},\ c\in\mathcal{C},\ j\in\mathcal{J}\), where \(\mathcal{J}=\{1,\ldots,j,\ldots,J\}\) is the set of data slots on each sub-channel. The BS then tightly coordinates the multiple access of the static users, after which the static users transmit their data to the BS via the RISs according to the scheduling results. During the contended transmission period, the mobile users compute the RIS configuration and contend for access based on their CSI.
Specifically, after the pilot transmissions and computing, the static users are scheduled, while the mobile users (i.e., the existing mobile users and the new mobile users in the current frame) are allowed to contend. In this context, the implementation of the RIS-assisted MAC protocol goes through the following four steps.
**Step 1: Pilot transmissions**. Based on the synchronization of the BS, during the pilot period, \(K\) existing users transmit their pilots to the BS for their RIS-assisted transmissions.
**Step 2: Computing and feedback**. During the computing period, the BS has to carry out the computations as follows.
* _User classification_. After receiving the pilot transmission of users, the BS classifies these users according to the known network information. The type of a user is defined as \(u_{k},\forall k\in\mathcal{K}\), which is denoted by \[u_{k}=\begin{cases}1,&\text{if $U_{k}$ is static user, $\forall k\in\mathcal{K}$},\\ 0,&\text{if $U_{k}$ is mobile user, $\forall k\in\mathcal{K}$}.\end{cases}\] (1) Thus, the number of static users is denoted as \(X=\sum_{k=1}^{K}u_{k}\). Let \(\mathcal{X}=\{1,\ldots,k,\ldots X\}\) be the set of \(X\) static users.
* _Channel estimation_. According to the user classification, the BS estimates the involved links of static users based on its pilot transmission on each sub-channel, i.e., \(\mathbf{g}_{km}\), \(\mathbf{h}_{km}\), and \(r_{k}\), where \(k\in\mathcal{X},m\in\mathcal{M}\).
* _Resource allocation_. Based on the channel information of static users, the BS computes the MAC protocol parameters and allocates the slots, power, and RIS resources for static users. To be specific, the BS first computes the duration of the scheduled transmission period and that of the contended transmission period, which is depicted in Section IV. Then, the BS allocates \(M\) RISs and \(J\) data slots to static users over \(C\) non-overlapping sub-channels. Since each user is only allowed to use one RIS in a frame, we define the state of \(M\) RISs for \(X\) static users as \[a_{km}=\begin{cases}1,&\text{if $U_{k}\gets R_{m},\ \forall k\in\mathcal{X},m\in\mathcal{M}$},\\ 0,&\text{Otherwise},\ \forall k\in\mathcal{X},m\in\mathcal{M},\end{cases}\] (2) where \(U_{k}\gets R_{m}\) means that \(R_{m}\) is allocated to \(U_{k}\). We then define the state of \(J\) data slots for \(X\) static users as \[t_{kj}=\begin{cases}1,&\text{if $U_{k}\gets D_{cj},\ \forall k\in\mathcal{X},j\in \mathcal{J},c\in\mathcal{C}$},\\ 0,&\text{Otherwise},\ \forall k\in\mathcal{X},j\in\mathcal{J},c\in\mathcal{ C},\end{cases}\] (3) where \(U_{k}\gets D_{cj}\) means that \(D_{cj}\) is allocated to \(U_{k}\), we have \(c=m\) since each RIS is bonded to a sub-channel.
* _RIS phase configuration_. The reflection parameters of \(M\) scheduled RISs are computed at the BS to support the transmission of \(X\) static users. Note that the computation of RIS configuration should be considered with the MAC protocol parameters, such as \(\alpha\), \(\beta\), and \(t_{2}\).
Based on the above computations, the BS feeds back the scheduling information to the static users during the computing period.
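For concreteness, the user-classification bookkeeping of (1) can be sketched in a few lines of Python; the mobility flags below are illustrative placeholders rather than values from the system model.

```python
# Minimal sketch of the user classification in (1): u_k = 1 for a static user,
# u_k = 0 for a mobile user.  The flag values are illustrative placeholders.
u = [1, 1, 0, 1, 0, 1, 1, 0]                              # u_k for the K existing users
K = len(u)
X = sum(u)                                                # number of static users, X = sum_k u_k
static_users = [k for k, u_k in enumerate(u) if u_k == 1]
mobile_users = [k for k, u_k in enumerate(u) if u_k == 0]
print(K, X, static_users, mobile_users)
```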
**Step 3: Scheduled transmission for static users**. During the scheduled transmission period, the BS instructs each RIS controller to configure its reflection parameters and initiates the RIS-assisted transmissions of the scheduled static users.
**Step 4: Contended transmission for mobile users**. As the designated contended transmission period begins, the unscheduled mobile users (i.e., the \(K-X\) existing mobile users and the \(Z\) new mobile users) start their multiple access and compute the RIS configuration by themselves based on the estimated CSI, which each mobile user obtains by sensing each sub-channel. In contrast to the scheduled transmission of static users, the mobile users have to negotiate with the BS for their channel access and RIS configuration, based on the distributed coordination function (DCF) scheme. Here, the outcome of the \(Y\) mobile users contending for the \(M\) RISs can be denoted by
\[a_{km}=\begin{cases}1,&\text{if $U_{k}\to R_{m},\ \forall k\in\mathcal{Y},m\in \mathcal{M}$},\\ 0,&\text{Otherwise},\ \forall k\in\mathcal{Y},m\in\mathcal{M},\end{cases} \tag{4}\]
where \(U_{k}\to R_{m}\) means that \(U_{k}\) contends for \(R_{m}\) successfully. \(\mathcal{Y}=\{1,\ldots,k,\ldots Y\}\) is the set of \(Y\) mobile users, and \(Y=K-X+Z\).
Specifically, the involved four actions at mobile users and the BS are described as follows.
* _Sensing, computing, and backoff_. A mobile user senses the state of the \(C\) sub-channels. Once some sub-channels are sensed to be idle, the mobile user computes the configuration of the unused \(\hat{M}\ (\hat{M}\leq M)\) RISs, occupies an optimal RIS and the corresponding sub-channel, and then starts its backoff after a DCF inter-frame space (DIFS).
Fig. 3: The MAC protocol is a frame-based structure, which integrates the scheduling access and the contention access into a frame.
* _Request_. Once the backoff on the contended sub-channel is finished, the mobile user sends a request-to-send (RTS) packet, including its RIS configuration information, to the BS on its occupied sub-channel.
* _Feedback_. If the requested RIS is available for the mobile user on its occupied sub-channel, the BS allows the RIS controller to configure the RIS reflection parameters, and replies with a clear-to-send (CTS) packet to the mobile user after a short inter-frame space (SIFS).
* _Transmission_. Following the elapse of a SIFS, once the CTS feedback is received at the mobile user, the mobile user then transmits its data to the BS via the configured RIS on its occupied sub-channel.
Given the dynamic switching between the scheduled mode and the contended mode, our MAC protocol is capable of maintaining the target rate via RISs at a low cost. Additionally, the fairness of the static and mobile users having poor channel conditions can be maintained by scheduling and effective contention, respectively.
By implementing the designed MAC protocol, the overall system throughput is calculated by
\[\mathcal{S}_{o}=\frac{t_{2}}{t_{0}+t_{1}+t_{2}}(\alpha\mathcal{S}_{s}+\beta \mathcal{S}_{c}), \tag{5}\]
where \(\mathcal{S}_{s}\) and \(\mathcal{S}_{c}\) indicate the throughput of the scheduled transmissions and that of the contended transmissions, respectively. \(\alpha\) and \(\beta\) are the ratios of the scheduled transmission period and the contended transmission period, respectively, with \(\alpha+\beta=1,\alpha\in[0,1],\beta\in[0,1]\). In addition, \(t_{0},t_{1}\), and \(t_{2}\) denote the pilot, the computing, and the transmission periods, respectively, collected in the set \(\mathcal{T}=\{t_{0},t_{1},t_{2}\}\). Due to the introduction of RISs, the overall system throughput in (5) is affected not only by the MAC protocol but also significantly by the RIS configuration.
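As a minimal numerical illustration of (5), the sketch below evaluates the overall throughput for hypothetical period durations and per-mode throughputs; none of the numbers are taken from the paper.

```python
# Sketch of the overall-throughput expression in (5); all numbers are placeholders.
def overall_throughput(t0, t1, t2, alpha, s_s, s_c):
    """Eq. (5): S_o = t2/(t0+t1+t2) * (alpha*S_s + beta*S_c) with beta = 1 - alpha."""
    beta = 1.0 - alpha
    return t2 / (t0 + t1 + t2) * (alpha * s_s + beta * s_c)

# Example: 1 ms pilot, 0.5 ms computing, 8.5 ms transmission, 60% scheduled period.
print(overall_throughput(t0=1e-3, t1=0.5e-3, t2=8.5e-3,
                         alpha=0.6, s_s=120e6, s_c=40e6))   # bit/s
```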
Introducing RISs enhances the quality of wireless links, thereby attaining the following benefits of the MAC protocol.
* _MAC efficiency improvement_. On the one hand, a low-complexity RIS configuration reduces the computing cost; on the other hand, the access latency of static and mobile users can be reduced by low-complexity scheduling and by fewer contention collisions, respectively. Thus, the efficiency of the MAC protocol is improved.
* _MAC fairness improvement_. Because of the separation of static and mobile users and the low-complexity operation on them, the fairness of users can be enhanced.
### _RIS Configuration_
As illustrated in Fig. 3, the RIS configuration is integrated into the computing period for static users and the contended transmission period for mobile users. By separately configuring RIS phase shifts at the BS and mobile users for the scheduled and contended transmissions, the complexity of the RIS configuration is decreased.
In the system, the user-RIS-BS channel is thus modeled as a composition of two components, namely, the direct path (i.e., the user-BS link) and the reflected path (i.e., the user-RIS-BS path consisting of the user-RIS link and the RIS-BS link). Hence, the signal received at the BS from \(U_{k}\) through both the user-BS and user-RIS-BS channels is given by
\[y_{k}=\underbrace{r_{k}s_{k}}_{\text{direct path}}+\underbrace{\mathbf{h}_{km} \mathbf{\Theta}_{km}\mathbf{g}_{km}s_{k}}_{\text{reflect path}}+w_{k},k\in \bar{\mathcal{K}},\;m\in\mathcal{M}, \tag{6}\]
where \(s_{k}\) represents the transmit signal of \(U_{k}\), and it is an independent random variable with zero mean and unit variance (normalized power). \(w_{k}\) denotes the additive white Gaussian noise (AWGN) at the BS, \(w_{k}\sim\mathcal{CN}(0,\sigma^{2})\). \(r_{k}\) is the direct link when \(U_{k}\) transmits data to the BS.
In (6), \(\mathbf{\Theta}_{km}\) is the matrix of the RIS reflection coefficient of \(U_{k}\), which can be expressed as
\[\mathbf{\Theta}_{km}\!=\!\text{diag}\left(\phi_{km}^{1},\ldots,\phi_{km}^{n}, \ldots,\phi_{km}^{N}\right),k\in\bar{\mathcal{K}},\;m\in\mathcal{M}, \tag{7}\]
where \(\phi_{km}^{n}=\gamma_{km}^{n}e^{j\theta_{km}^{n}}\) is the reflection coefficient of RIS element \(n\) on \(R_{m}\) for \(U_{k}\), \(\{\theta_{km}^{n},\gamma_{km}^{n}\}\) are the phase shift and amplitude reflect coefficient of RIS element \(n\) on \(R_{m}\) for \(U_{k}\). In practice, we assume that a continuous phase shift with a constant amplitude reflection coefficient is applied to each RIS element, i.e., \(|\gamma_{km}^{n}|=1,\theta_{km}^{n}\in[0,2\pi),k\in\bar{\mathcal{K}},\;m\in \mathcal{M},\;n\in\mathcal{N}\). Let \(\mathbf{\Psi}=[\mathbf{\theta}_{11},\ldots,\mathbf{\theta}_{km},\ldots, \mathbf{\theta}_{(K+Z)M}]\in\mathbb{C}^{N\times(K+Z)M}\) denote the matrix of the RIS phase shift, where \(\mathbf{\theta}_{km}\!=\![\theta_{km}^{1},\ldots,\theta_{km}^{n},\ldots, \theta_{km}^{N}]^{T}\in\mathbb{C}^{N\times 1}\) is the vector of the phase shift of \(R_{m}\) that is aligned to \(U_{k}\).
Accordingly, the SNR at the BS from \(U_{k}\) via \(R_{m}\) is expressed as
\[\text{SNR}_{km}=\left|\left(r_{k}+\mathbf{h}_{km}\mathbf{\Theta}_{km}\mathbf{g}_{km}\right)\rho_{k}\right|^{2}/\sigma^{2},\ k\in\bar{\mathcal{K}},\ m\in\mathcal{M}, \tag{8}\]
where \(\rho_{k}^{2}\) is the transmit power of \(U_{k}\).
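A small sketch of the signal model (6)-(8) for a single user-RIS pair is given below; the channels, transmit power, and noise power are random or placeholder values chosen only for illustration.

```python
# Sketch of the received-SNR model in (6)-(8) for one user-RIS pair.
# Channels are drawn at random here purely for illustration.
import numpy as np

rng = np.random.default_rng(0)
N = 64                                                                        # reflecting elements on R_m
g_km = (rng.standard_normal(N) + 1j * rng.standard_normal(N)) / np.sqrt(2)    # U_k -> R_m
h_km = (rng.standard_normal(N) + 1j * rng.standard_normal(N)) / np.sqrt(2)    # R_m -> BS
r_k = (rng.standard_normal() + 1j * rng.standard_normal()) / np.sqrt(2)       # U_k -> BS direct path
rho2, sigma2 = 0.1, 1e-9                                                      # transmit power, noise power

theta = rng.uniform(0, 2 * np.pi, N)          # phase shifts theta_km^n in [0, 2*pi)
Theta = np.diag(np.exp(1j * theta))           # reflection matrix in (7), unit amplitude
snr = np.abs((r_k + h_km @ Theta @ g_km) * np.sqrt(rho2)) ** 2 / sigma2       # Eq. (8)
print(snr)
```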
As a benefit of the proposed MAC protocol, the RIS configuration has the following three advantages.
* _Complexity reduction_. By implementing centralized and distributed operations for static and mobile users instead of harnessing centralized operation for all users, the computational complexity of RISs can be substantially reduced in the computing period and the contended transmission period, respectively.
* _RIS utilization improvement_. Upon considering dynamic wireless environments, each period of a frame can be adjusted to improve the utilization of RISs.
* _Service fairness improvement_. The RISs can serve new mobile users in the contended period of the current frame, thereby providing fairness for the users.
### _An Intuitive Example_
An intuitive implementation example with \(K+1\) users is illustrated in Fig. 4, where \(K+1\) users include \(K\) existing static and mobile users, and a new mobile user that joins the system in the current frame.
TABLE II: Summary of design and optimization

| **Multiple access framework** | | **Static users** | **Mobile users** |
| --- | --- | --- | --- |
| Design | MAC protocol | Scheduled | Contended |
| Design | RIS configuration | Centralized | Distributed |
| Optimization | MAC protocol | \(\alpha^{*},\mathcal{T}^{*}\) | \(\beta^{*},\mathcal{T}^{*}\) |
| Optimization | RIS configuration | \(\mathbf{\Theta}_{km}^{*},k\in\mathcal{X}\) | \(\mathbf{\Theta}_{km}^{*},k\in\mathcal{Y}\) |
In the illustrated example, by implementing the proposed multiple access framework, the \(K\) existing users are classified into two types, i.e., \(X\) static users and \(K-X\) mobile users. During the computing period, the scheduled transmission period and the contended transmission period are optimized based on this classification; the BS then schedules the \(X\) static users and the \(M\) RISs for their RIS-assisted communications in the scheduled transmission period. Next, the existing \(K-X\) mobile users and the new mobile user contend for their RIS-assisted communications during the contended transmission period. Based on the optimized RIS configuration in the two transmission periods, each RIS controller is controlled by the BS to support the static users and the mobile users, respectively. A summary of the design and optimization in the multiple access framework is given in Table II.
## IV Performance Analysis
In this section, we thoroughly analyze the system throughput performance of the proposed multiple access framework, where the RIS-assisted scheduled transmissions are analyzed in Section IV-A, and the RIS-assisted contended transmissions are analyzed in Section IV-B.
### _RIS-Assisted Scheduled Transmissions_
For the proposed multiple access framework, since TDMA and FDMA schemes are employed for the static users, the system throughput of the RIS-assisted scheduled transmissions, \(\mathcal{S}_{s}\), is the sum throughput of the scheduled static users. Thus, \(\mathcal{S}_{s}\) can be expressed as
\[\mathcal{S}_{s}=\frac{1}{\alpha t_{2}}\sum_{k=1}^{X}\sum_{m=1}^{M}\sum_{j=1}^{ J}a_{km}t_{kj}t\frac{B}{C}\log_{2}\left(1+\text{SNR}_{km}\right), \tag{9}\]
where \(B\) is the total bandwidth, \(\sum_{m=1}^{M}a_{km}=1,\forall k\in\mathcal{X}\), and \(\sum_{j=1}^{J}t_{kj}=1,\forall k\in\mathcal{X}\), \(t\) is the duration of a data slot.
According to (8), (9) can be rewritten as
\[\mathcal{S}_{\text{s}}=\frac{Bt}{C\alpha t_{2}}\sum_{k=1}^{X}\sum_{m=1}^{M}\sum_{j=1}^{J}a_{km}t_{kj}\log_{2}\left(1+\frac{\left|(r_{k}+\mathbf{h}_{km}\boldsymbol{\Theta}_{km}\mathbf{g}_{km})\rho_{k}\right|^{2}}{\sigma^{2}}\right), \tag{10}\]
where \(\alpha t_{2}=Jt\), \(\sum_{m=1}^{M}a_{km}=1,\forall k\in\mathcal{X}\), and \(\sum_{j=1}^{J}t_{kj}=1,\forall k\in\mathcal{X}\).
To guarantee the RIS-assisted transmission of each static user, the scheduled transmission period has to meet the following condition,
\[JC\geq X. \tag{11}\]
From (10), it can be observed that the system throughput of the RIS-assisted scheduled transmissions is mainly affected by the configuration of the RISs. Owing to the benefit of the RISs, the data rate of each scheduled static user is increased significantly; the higher data rate thereby improves the system throughput of the RIS-assisted scheduled transmissions for a given value of \(\alpha t_{2}\). In addition, the system throughput of the RIS-assisted scheduled transmissions is also improved by optimizing the MAC protocol parameters (\(\alpha\) and \(t_{2}\)), i.e., by using the minimum time to complete the RIS-assisted transmissions of all static users.
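The scheduled-throughput expression (10) can be evaluated directly once the allocation matrices and SNRs are known; the sketch below uses hypothetical allocations \(a_{km}\), \(t_{kj}\) and random SNR values purely for illustration.

```python
# Sketch of the scheduled throughput in (10).  a_km and t_kj are 0-1 allocation
# matrices and snr holds SNR_km values; all inputs are illustrative placeholders.
import numpy as np

rng = np.random.default_rng(3)
X, M, J = 4, 2, 2
B, C, t = 20e6, 2, 1e-3                               # bandwidth, sub-channels, slot length
alpha_t2 = J * t                                      # scheduled period (alpha*t2 = J*t)
a_km = np.array([[1, 0], [1, 0], [0, 1], [0, 1]])     # one RIS per static user, J users per RIS
t_kj = np.array([[1, 0], [0, 1], [1, 0], [0, 1]])     # one data slot per static user
snr = rng.uniform(5.0, 50.0, size=(X, M))             # SNR_km from (8)

rate = (B / C) * np.log2(1.0 + snr)                   # per (user, RIS) rate in bit/s
per_user = (a_km * rate).sum(axis=1) * t_kj.sum(axis=1)   # inner sums over m and j
S_s = (t / alpha_t2) * per_user.sum()                 # Eq. (10)
print(S_s)
```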
### _RIS-Assisted Contended Transmissions_
Based on the performance analysis of the RIS-assisted scheduled transmissions, the \(X\) static users are scheduled by the BS to use the \(M\) RISs during \(\alpha t_{2}\), while the other \(K-X\) mobile users, as well as the \(Z\) new mobile users, are allowed to contend for the \(M\) RISs during \(\beta t_{2}\). In other words, a total of \(Y=K-X+Z\) mobile users contend for the \(M\) RISs during \(\beta t_{2}\) for their RIS-assisted transmissions. The performance of the RIS-assisted contended transmissions is analyzed as follows.
During the contended transmission period, we assume that each idle sub-channel has an equal probability, \(\frac{1}{C}\), to be the best idle channel sensed by a mobile user, and the probability that \(V_{i}\) mobile users select a given channel, \(P_{V_{i}}\), can be given by
\[P_{V_{i}}\!=\!\left(\!\begin{array}{c}N_{i}\\ V_{i}\end{array}\!\right)\!\left(\!\frac{1}{C}\!\right)^{V_{i}}\left(1\!-\! \frac{1}{C}\right)^{N_{i}\!-\!V_{i}}, \tag{12}\]
where \(N_{i}\) is the number of mobile users that contend for their RIS-assisted transmissions at the \(i\)th time, and \(V_{i}\) is the number of mobile users that select a given channel at the \(i\)th time.
Referring to [48], at the \(i\)th time, when \(V_{i}\) mobile users contend for access on a particular sub-channel, the successful transmission probability (\(P_{i,s}\)), the idle probability (\(P_{i,e}\)), and the failed transmission probability (\(P_{i,c}\)) can be expressed as
\[\left\{\begin{array}{l}P_{i,s}=V_{i}\tau_{i}\left(1-\tau_{i}\right)^{V_{i}- 1},\\ P_{i,e}=(1-\tau_{i})^{V_{i}-1},\\ P_{i,c}=1-P_{i,e}-P_{i,s},\end{array}\right. \tag{13}\]
where \(\tau_{i}\) is the stationary probability that a mobile user transmits a data packet in a random slot at the \(i\)th time, which is expressed as
\[\tau_{i}=\frac{2(1-2p_{i})}{(1-2p_{i})(W+1)+p_{i}W(1-\left(2p_{i}\right)^{l})}. \tag{14}\]
Fig. 4: An intuitive example of the proposed framework.
In (14), \(W\in\left[W_{min},W_{max}\right]\) is the contention window and \(l\) is the backoff stage. \(W_{min}\) and \(W_{max}\) denote the minimum and maximum contention window, respectively. \(p_{i}\) is the collision probability at the \(i\)th time, which is calculated by
\[p_{i}=1-\left(1-\tau_{i}\right)^{V_{i}-1}. \tag{15}\]
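Since (14) and (15) are coupled through \(\tau_{i}\) and \(p_{i}\), they can be solved by a simple damped fixed-point iteration, as in the sketch below; the contention window \(W\), backoff stage \(l\), and \(V_{i}\) are illustrative values.

```python
# Damped fixed-point sketch for the coupled equations (14)-(15).
# W, l and V_i are illustrative; damping is added only for numerical stability.
def tau_p_fixed_point(V_i, W=16, l=5, iters=200):
    """Return (tau_i, p_i) jointly satisfying (14) and (15) for V_i contending users."""
    tau = 0.1                                                   # initial guess
    for _ in range(iters):
        p = 1.0 - (1.0 - tau) ** (V_i - 1)                      # Eq. (15)
        tau_new = 2.0 * (1.0 - 2.0 * p) / (
            (1.0 - 2.0 * p) * (W + 1) + p * W * (1.0 - (2.0 * p) ** l))   # Eq. (14)
        tau = 0.5 * tau + 0.5 * tau_new                         # damped update
    return tau, p

print(tau_p_fixed_point(V_i=8))
```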
Therefore, the probability that there is a successful transmission as \(V_{i}\) mobile users select the particular channel is \(P_{V_{i}}P_{i,s}\). Hence, the successful transmission probability in a given channel at the first time can be expressed as
\[P_{1,C} = \sum_{V_{1}=1}^{N_{1}}P_{V_{1}}P_{V_{1},s}\] \[= \sum_{V_{1}=1}^{N_{1}}\left(\begin{array}{c}N_{1}\\ V_{1}\end{array}\right)\!V_{1}\tau_{1}\left(1\!-\!\tau_{1}\right)^{V_{1}\!-\! 1}\!\left(\frac{1}{C}\right)^{V_{1}}\!\left(1\!-\!\frac{1}{C}\right)^{N_{1}\!- \!V_{1}},\]
where \(N_{1}=Y\). Note that each mobile user selects its best idle channel based on its individual sensing, and then transmits its data to the BS via the RIS on its selected best idle channel. Moreover, by taking the effect of sensing errors into account in the selection of the best idle channel, (16) can be rewritten as follows:
\[P_{1,C}\!=\!\sum_{V_{1}=1}^{Y}\!P_{1,e}\!\left(\begin{array}{c}Y\\ V_{1}\end{array}\right)\!V_{1}\tau_{1}(1-\tau_{1})^{V_{1}\!-\!1}\!\left(\! \frac{1}{C}\!\right)^{V_{1}}\!\left(1\!-\!\frac{1}{C}\!\right)^{Y\!-\!V_{1}}. \tag{17}\]
Referring to [49], the number of mobile users that transmit successfully at the first time can be expressed as
\[\tilde{N_{1}}=\left\lfloor CP_{1,C}\right\rfloor. \tag{18}\]
According to (18), the number of mobile users that have to contend for their RIS-assisted transmissions at the second time can be denoted by
\[N_{2}=Y-\left\lfloor CP_{1,C}\right\rfloor. \tag{19}\]
Then, the number of mobile users that successfully transmit at the first and second times can be expressed as
\[\tilde{N_{2}}=\left\lfloor C\sum_{l=1}^{2}P_{l,C}\right\rfloor, \tag{20}\]
where
\[P_{2,C}\!=\!\sum_{V_{2}=1}^{N_{2}}\!P_{2,e}\!\left(\begin{array}{c}N_{2}\\ V_{2}\end{array}\right)\!V_{2}\tau_{2}(1\!-\!\tau_{2})^{V_{2}\!-\!1}\!\left( \!\frac{1}{C}\!\right)^{V_{2}}\!\left(1\!-\!\frac{1}{C}\!\right)^{N_{2}\!-\!V _{2}}. \tag{21}\]
Similarly, the successful transmission probability at the \(i\)th time and the cumulative number of mobile users that have transmitted successfully up to the \(i\)th time can be respectively expressed as
\[P_{i,C}=\sum_{V_{i}=1}^{N_{i}}P_{i,e}\left(\begin{array}{c}N_{i}\\ V_{i}\end{array}\right)V_{i}\tau_{i}(1-\tau_{i})^{V_{i}-1}\left(\frac{1}{C}\right)^{V_{i}}\left(1-\frac{1}{C}\right)^{N_{i}-V_{i}} \tag{22}\]
and
\[\tilde{N_{i}}=\left\lfloor C\sum_{l=1}^{i}P_{l,C}\right\rfloor. \tag{23}\]
In (22), the number of mobile users that have to contend for access at the \(i\)th time, \(N_{i}\), is denoted by
\[N_{i}=\left\lfloor Y-C\sum_{l=1}^{i-1}P_{l,C}\right\rfloor. \tag{24}\]
Therefore, to guarantee the fairness of each mobile user (i.e., each mobile user can transmit one time), the number of contention times that can successfully meet the requirement of \(Y\) mobile users, \(N_{r}\), is expressed as
\[N_{r}=\sum_{i}I\left(N_{i}\geq 0\right), \tag{25}\]
where
\[I\left(N_{i}\geq 0\right)=\begin{cases}1,&\text{if }N_{i}\geq 0,\\ 0,&\text{otherwise}.\end{cases} \tag{26}\]
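The round-by-round recursion (17)-(26) can be sketched as below: starting from \(N_{1}=Y\), each round computes the per-channel success probability, removes the users counted as successful, and stops once every mobile user has transmitted. The sketch re-implements the fixed point of (14)-(15) in a compact helper, and all parameters are illustrative.

```python
# Sketch of the contention-round recursion (17)-(26); parameters are illustrative.
import math

def _tau(V, W=16, l=5, iters=200):
    """Compact damped fixed point of (14)-(15) for V contending users."""
    tau = 0.1
    for _ in range(iters):
        p = 1.0 - (1.0 - tau) ** (V - 1)
        new = 2.0 * (1.0 - 2.0 * p) / ((1.0 - 2.0 * p) * (W + 1)
                                       + p * W * (1.0 - (2.0 * p) ** l))
        tau = 0.5 * tau + 0.5 * new
    return tau

def contention_rounds(Y, C, W=16, l=5, max_rounds=500):
    """Return N_r of (25): rounds needed until all Y mobile users have transmitted once."""
    cum_P, N_i, rounds = 0.0, Y, 0
    while N_i > 0 and rounds < max_rounds:
        rounds += 1
        P_iC = 0.0
        for V in range(1, N_i + 1):                                                # Eq. (22)
            tau = _tau(V, W, l)
            P_sel = math.comb(N_i, V) * (1 / C) ** V * (1 - 1 / C) ** (N_i - V)    # Eq. (12)
            P_iC += (1 - tau) ** (V - 1) * P_sel * V * tau * (1 - tau) ** (V - 1)
        cum_P += P_iC
        N_i = max(Y - math.floor(C * cum_P), 0)                                    # Eqs. (19), (24)
    return rounds                                                                  # N_r, Eq. (25)

print(contention_rounds(Y=10, C=4))
```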
According to [48], it is known that the time length required by one successful contention and transmission can be expressed as
\[t_{r}\!=\!RTS\!+CTS\!+\!t_{d}\!+\!2SIFS\!+\!DIFS\!+\!2\delta, \tag{27}\]
where \(t_{d}\) is the time length of the payload required by the mobile user. Besides, \(RTS\), \(CTS\), \(SIFS\), and \(DIFS\) are the durations of the request-to-send (RTS), clear-to-send (CTS), short inter-frame space (SIFS), and DCF inter-frame space (DIFS), respectively.
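A one-line evaluation of the time budget (27) is sketched below; the 802.11-style control-frame and spacing durations are illustrative placeholders, not values taken from the paper.

```python
# Sketch of the per-user contention-and-transmission time in (27); all durations
# are illustrative placeholders.
RTS, CTS, SIFS, DIFS = 40e-6, 32e-6, 16e-6, 34e-6    # control frames and inter-frame spaces (s)
delta = 1e-6                                         # propagation delay (s)
t_d = 0.5e-3                                         # payload duration (s)
t_r = RTS + CTS + t_d + 2 * SIFS + DIFS + 2 * delta  # Eq. (27)
print(t_r)
```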
Based on the above analysis, the system throughput of the RIS-assisted contended transmissions, \(\mathcal{S}_{c}\), is the sum throughput of the contended mobile users, which can be expressed as
\[\mathcal{S}_{c}=\frac{1}{\beta t_{2}}\sum_{k=1}^{Y}\sum_{m=1}^{M}a_{km}t_{d} \frac{B}{C}\log_{2}\left(1+\text{SNR}_{km}\right), \tag{28}\]
where \(\sum_{m=1}^{M}a_{km}=1,\forall k\in\mathcal{Y}\).
According to (8), (28) can be rewritten as
\[\mathcal{S}_{c}=\frac{Bt_{d}}{C\beta t_{2}}\sum_{k=1}^{Y}\sum_{m=1}^{M}a_{km}\log_{2}\left(1+\frac{\left|(r_{k}+\mathbf{h}_{km}\mathbf{\Theta}_{km}\mathbf{g}_{km})\rho_{k}\right|^{2}}{\sigma^{2}}\right). \tag{29}\]
To guarantee the RIS-assisted transmission of each mobile user, the contended transmission period has to meet the following condition
\[\beta t_{2}\geq N_{r}t_{r}. \tag{30}\]
According to (30), the parameters of the proposed RIS-assisted MAC protocol can be designed as
\[\beta:\alpha\geq\left(N_{r}t_{r}\right):\left(Jt\right). \tag{31}\]
**Remark 1**.: _Based on the throughput analysis in (10) and (29), the overall throughput, including the scheduled and contended transmissions, can be expressed as in (32), where \(\sum_{m=1}^{M}a_{km}=1,\forall k\in\mathcal{X}\), and \(\sum_{m=1}^{M}a_{km}=1,\forall k\in\mathcal{Y}\). To improve the performance of the proposed joint multiple access framework, the MAC protocol parameters (e.g., \(\mathcal{T}\), \(\alpha,\beta\), and \(\rho_{k}^{2}\)) and the RIS configuration parameters (e.g., \(a_{km}\) and \(\mathbf{\Theta}_{km}\)) can be jointly optimized for static and mobile users._
## V Joint Optimization and Solution
In this section, we first formulate a joint optimization problem in Section V-A. We then decompose the original problem in Section V-B, and we solve each sub-problem in Section V-C and Section V-D. Finally, we discuss the complexity in Section V-E.
### _Problem Formulation_
In this paper, we aim for maximizing the overall system throughput by jointly optimizing the MAC protocol and the RIS configuration parameters. Specifically, the proposed joint optimization is formulated as
\[\mathbb{P}_{0}:\max_{\{\mathcal{T},\alpha,\beta,t_{kj},\rho_{k}^{2},a_{km},\mathbf{\Psi}\}}\ \mathcal{S}_{o} \tag{33}\] \[\mathrm{s.t.}\ t_{0}>0,\ t_{1}>0,\ t_{2}>0,\] (33a) \[\alpha+\beta=1,\ \frac{\beta}{\alpha}\geq\frac{N_{r}t_{r}}{Jt},\ \alpha\in[0,1],\ \beta\in[0,1],\] (33b) \[t_{kj}\in\{0,1\},\ \forall k\in\mathcal{X},\ \forall j\in\mathcal{J},\] (33c) \[\sum_{j=1}^{J}t_{kj}=1,\ \forall k\in\mathcal{X},\] (33d) \[\sum_{k=1}^{X}\rho_{k}^{2}\leq P_{max},\ \forall k\in\mathcal{X},\] (33e) \[\rho_{k}^{2}=\Upsilon,\ \forall k\in\mathcal{Y},\] (33f) \[\frac{B}{C}\log_{2}(1+\text{SNR}_{km})\geq R_{min},\ \forall k\in\bar{\mathcal{K}},\forall m\in\mathcal{M},\] (33g) \[a_{km}\in\{0,1\},\ \forall k\in\bar{\mathcal{K}},\forall m\in\mathcal{M},\] (33h) \[\sum_{m=1}^{M}a_{km}=1,\ \forall k\in\mathcal{X},\] (33i) \[\sum_{k=1}^{X}a_{km}=J,\ \forall m\in\mathcal{M},\] (33j) \[\sum_{m=1}^{M}a_{km}=1,\ \forall k\in\mathcal{Y},\] (33k) \[|\gamma_{km}^{n}|=1,\ \forall n\in\mathcal{N},\ \forall k\in\bar{\mathcal{K}},\ \forall m\in\mathcal{M},\] (33l) \[\theta_{km}^{n}\in[0,2\pi),\ \forall n\in\mathcal{N},\ \forall k\in\bar{\mathcal{K}},\ \forall m\in\mathcal{M}, \tag{33m}\]
where (33a) limits the time lengths of \(t_{0},t_{1}\), and \(t_{2}\). (33b) represents the feasibility of \(\alpha\) and \(\beta\). (33c) indicates that \(t_{kj}\) is a binary variable, where \(t_{kj}=1\) represents that slot \(D_{cj}\) is allocated to \(U_{k}\), and \(t_{kj}=0\) otherwise. (33d) indicates that exactly one data slot is allocated to each static user in a scheduled transmission period. (33e) indicates that the sum transmit power of all static users has to be less than the maximum transmit power, \(P_{max}\). (33f) indicates that the transmit power of each mobile user is fixed to \(\Upsilon\). (33g) indicates that the data rate of \(U_{k}\) has to be higher than the data rate threshold \(R_{min}\). (33h) indicates that \(a_{km}\) is a binary variable, where \(a_{km}=1\) represents that \(R_{m}\) is used by \(U_{k}\), and \(a_{km}=0\) otherwise. (33i) indicates that exactly one RIS is allocated to each static user in a scheduled transmission period. (33j) indicates that each RIS serves \(J\) static users in a scheduled transmission period. (33k) indicates that at most one RIS is used by a mobile user in a contended transmission period. (33l) and (33m) indicate the feasibility of each RIS element's amplitude and phase shift.
We observe that the proposed joint optimization problem \(\mathbb{P}_{0}\) in (33) is an MINLP problem, which is NP-hard and whose globally optimal solution is difficult to obtain using standard optimization approaches. On the one hand, owing to the switching between the scheduled and the contended transmissions, only the static users have to be scheduled by the BS within \(\alpha t_{2}\), while the mobile users contend for the RIS resources randomly within \(\beta t_{2}\). On the other hand, the RIS configuration of each static user is computed at the BS, while the RIS configuration of each mobile user is computed by the user itself once it has contended successfully for an RIS. Hence, the MAC protocol optimization significantly affects the computational complexity of the RIS configuration. Moreover, the RIS configuration also significantly affects the overall system throughput of the designed MAC protocol.
To this end, an alternating optimization method can be invoked as an intuitive approach to solve problem \(\mathbb{P}_{0}\) in (33). Here, we first decompose problem \(\mathbb{P}_{0}\) into two sub-problems: one is the optimization of the MAC protocol, and the other is the optimization of the RIS configuration. We then solve them iteratively to achieve the joint design optimization.
### _Problem Decomposition_
By using the Tammer decomposition method, the original problem \(\mathbb{P}_{0}\) is rewritten as
\[\hat{\mathbb{P}}_{0}:\max_{\{a_{km},\mathbf{\Psi}\}}\left(\max_{\{ \mathcal{T},\alpha,\beta,t_{kj},\rho_{k}^{2}\}}\mathcal{S}_{o}\right) \tag{34}\] \[\mathrm{s.t.}\ \ (33a)-(33m).\]
To solve the equivalent problem \(\hat{\mathbb{P}}_{0}\) in (34), the two transformed sub-problems are described as follows.
#### V-B1 Sub-problem \(\mathbb{P}_{1}\)
MAC protocol optimization with the RIS configuration fixed, to maximize the overall system throughput, i.e.,
\[\mathbb{P}_{1}:\max_{\{\mathcal{T},\alpha,\beta,t_{kj},\rho_{k}^{2}\}}\mathcal{S}_{o} \tag{35}\] \[\mathrm{s.t.}\ \ (33a)-(33g).\]
**Remark 2**.: _In (35), we recast the MAC protocol optimization as a time resource allocation problem and a power resource allocation problem, both of which can be solved at the BS. Thus, sub-problem \(\mathbb{P}_{1}\) can be transformed as_
\[\begin{split}&\hat{\mathbb{P}}_{1}:\max_{\rho_{k}^{2}}\left(\max_{ \{\mathcal{T},\alpha,\beta,t_{kj}\}}\mathcal{S}_{o}\right)\\ &\mathrm{s.t.}\quad(33a)-(33g).\end{split} \tag{36}\]
#### V-B2 Sub-problem \(\mathbb{P}_{2}\)
RIS configuration with the MAC protocol parameters fixed, to maximize the overall system throughput, i.e.,
\[\begin{split}&\mathbb{P}_{2}:\max_{\{a_{km},\mathbf{\Psi}\}}\mathcal{S}_{o} ^{*}\\ &\mathrm{s.t.}\quad(33f)-(33m).\end{split} \tag{37}\]
**Remark 3**.: _In (37), we recast the RIS configuration as the following two problems: i) the scheduled transmission optimization problem at the BS, and ii) the contended transmission optimization problem at each mobile user. Thus, sub-problem \(\mathbb{P}_{2}\) can be transformed as_
\[\begin{split}&\hat{\mathbb{P}}_{2}:\left(\max_{\{a_{km},\mathbf{\Psi}\}} \mathcal{S}_{s}\right)+\left(\max_{\{a_{km},\mathbf{\Psi}\}}\mathcal{S}_{c}\right) \\ &\mathrm{s.t.}\quad(33f)-(33m).\end{split} \tag{38}\]
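The decomposition in (35)-(38) amounts to alternating between the two sub-problems; a bare-bones skeleton is sketched below, where `solve_mac` and `solve_ris` are hypothetical placeholders standing in for Algorithms 1-3 rather than functions defined in the paper.

```python
# Skeleton of the alternating optimization between sub-problems P_1 and P_2.
# solve_mac() and solve_ris() are hypothetical placeholders for Algorithms 1-3.
def alternating_optimization(solve_mac, solve_ris, mac0, ris0, iters=10):
    """Alternately update the MAC variables (T, alpha, beta, t_kj, rho_k^2)
    and the RIS variables (a_km, Psi), each with the other block fixed."""
    mac, ris = mac0, ris0
    for _ in range(iters):
        mac = solve_mac(ris)    # P_1: MAC design with the RIS configuration fixed
        ris = solve_ris(mac)    # P_2: RIS configuration with the MAC parameters fixed
    return mac, ris
```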
### _Solution of The MAC Design Problem_
By inspecting the objective function and constraints of problem \(\mathbb{P}_{0}\) in (33), we note that, when the usage state of the RISs (\(a_{km}\)) and the RIS configuration (\(\mathbf{\Psi}\)) are fixed, the allocation of data slots (i.e., \(t_{kj}\)) does not affect the overall system throughput. Thus, solving \(\hat{\mathbb{P}}_{1}\) is equivalent to solving the following two problems:
\[\begin{split}&\mathbb{P}_{1a}:\ \min_{\{\mathcal{T},\alpha,\beta\}}\ \{t_{0}+t_{1}+t_{2}\}\\ &\mathrm{s.t.}\quad\quad(33a)-(33d),\end{split} \tag{39}\]
and
\[\begin{split}&\mathbb{P}_{1b}:\ \max_{\rho_{k}^{2}}\sum_{k=1}^{X}\!\! \sum_{m=1}^{M}\!\!a_{km}\!\log_{2}\!\left(\!\!1\!+\!\frac{\left|(r_{k}\!+\! \mathbf{h}_{km}\mathbf{\Theta}_{km}\!\mathbf{g}_{km})\rho_{k}\right|^{2}}{\sigma^{ 2}}\!\right)\\ &\mathrm{s.t.}\quad\quad(33e),(33g).\end{split} \tag{40}\]
To solve problem \(\mathbb{P}_{1a}\), \(t_{0}\), \(t_{1}\), and \(t_{2}\) should be minimized. According to the analysis of \(N_{r}\) and \(t_{r}\) in Section IV, to guarantee the fairness of each user (i.e., each user uses one RIS only once in a frame), the optimized \(t_{0}\), \(t_{1}\), and \(t_{2}\) are given by
\[\left\{\begin{array}{l}t_{0}^{*}=Kt_{p},\\ t_{1}^{*}=\mathcal{O}_{min},\\ t_{2}^{*}=J^{*}t+N_{r}t_{r},\end{array}\right. \tag{41}\]
where \(t_{p}\) is the time of one pilot transmission, \(\mathcal{O}_{min}\) is the minimum computational complexity (i.e., the minimum iteration time), and \(J^{*}=\lceil X/C\rceil\) is the smallest number of data slots satisfying (11).
Based on the optimal \(t_{2}^{*}\), the optimal \(\alpha^{*}\) and \(\beta^{*}\) can be expressed as
\[\left\{\begin{array}{l}\alpha^{*}=\frac{J^{*}t}{J^{*}t+N_{r}t_{r}},\\ \beta^{*}=\frac{N_{r}t_{r}}{J^{*}t+N_{r}t_{r}}.\end{array}\right. \tag{42}\]
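A small sketch of the closed-form timing in (41)-(42) follows; the counts and durations are illustrative placeholders, and \(J^{*}=\lceil X/C\rceil\) is taken as the smallest number of data slots satisfying (11).

```python
# Sketch of the closed-form MAC timing in (41)-(42); all values are placeholders.
import math

K, X, C = 12, 8, 4                      # existing users, static users, sub-channels
t_p, t, t_r = 0.1e-3, 1e-3, 0.8e-3      # pilot slot, data slot, one contention cycle (27)
O_min = 0.5e-3                          # minimum computing time
N_r = 6                                 # contention rounds from (25)

J_star = math.ceil(X / C)               # smallest J satisfying (11)
t0 = K * t_p                            # Eq. (41)
t1 = O_min
t2 = J_star * t + N_r * t_r
alpha = J_star * t / t2                 # Eq. (42)
beta = N_r * t_r / t2
print(t0, t1, t2, alpha, beta)
```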
Besides, given \(a_{km}\) and \(\mathbf{\Psi}\), problem \(\mathbb{P}_{1b}\) can be easily solved using existing mathematical tools since it is a strictly convex problem. In the end, the solution of the MAC design problem is summarized in the proposed Algorithm 1.
```
0: Input: \(K\), \(C\), \(M\), \(Z\), \(t_{p}\), \(t\), \(\sigma^{2}\), \(\mathbf{g}_{km}[0]\), \(\mathbf{h}_{km}[0]\), \(\mathbf{\Theta}_{km}[0]\), \(a_{km}[0]\), \(\rho_{k}^{2}[0]\); the maximum iteration number \(L_{1}\); set \(l_{1}=0\);
1: for user \(k\in\mathcal{K}\), confirm \(u_{k}\);
2: end for
3: Obtain \(X\) and \(Y\) according to (1);
4: Solve for the optimal \(\mathcal{T}^{*}\) according to (39);
5: Calculate \(J^{*}\) according to (11);
6: Calculate \(N_{r}\) according to (25);
7: Calculate \(t_{r}\) according to (27);
8: Calculate \(\alpha^{*}\) and \(\beta^{*}\) according to (42);
9: for user \(k\in\mathcal{K}\);
10: repeat
11: With \(\mathbf{g}_{km}[l_{1}]\), \(\mathbf{h}_{km}[l_{1}]\), \(\mathbf{\Theta}_{km}[l_{1}]\), and \(a_{km}[l_{1}]\) fixed, obtain \(\rho_{k}^{2}[l_{1}]\) according to (40);
12: Based on the optimal \(\mathcal{T}^{*}\), \(\alpha^{*}\), \(\beta^{*}\), and \(\rho_{k}^{2}[l_{1}]\), obtain \(\mathbf{\Theta}_{km}[l_{1}+1]\) and \(a_{km}[l_{1}+1]\) according to (43) and (49);
13: Update \(l_{1}\gets l_{1}+1\);
14: until \(l_{1}\geq L_{1}\);
15: \(\rho_{k}^{2*}=\rho_{k}^{2}[l_{1}]\);
16: end for
17: Output: \(\mathcal{T}^{*},\alpha^{*},\beta^{*}\), and \(\rho_{k}^{2*}\).
```
**Algorithm 1** MAC Design Algorithm for Solving \(\mathbb{P}_{1}\) at the BS
### _Solution of The RIS Configuration Problem_
For the sub-problem \(\mathbb{P}_{2}\), since the MAC protocol parameters (\(\mathcal{T},\alpha,\beta,\rho_{k}^{2}\)) are fixed, \(\hat{\mathbb{P}}_{2}\) in (38) can be decomposed as the centralized RIS configuration and the distributed RIS configuration at the BS and mobile users, respectively. The solution of each problem is presented in Algorithms 2 and 3, respectively.
#### V-D1 Centralized RIS Configuration at The BS
\[\begin{split}&\mathbb{P}_{2a}:\max_{\{a_{km},\mathbf{\Psi}\}}\sum_{k=1}^{X}\sum_{m=1}^{M}a_{km}\log_{2}\left(1+\frac{\left|(r_{k}+\mathbf{h}_{km}\mathbf{\Theta}_{km}\mathbf{g}_{km})\rho_{k}\right|^{2}}{\sigma^{2}}\right)\\ &\mathrm{s.t.}\quad(33g)-(33j),(33l),(33m).\end{split} \tag{43}\]
Problem \(\mathbb{P}_{2a}\) is an MINLP problem. To solve problem \(\mathbb{P}_{2a}\), multiple alternating iterations between \(a_{km}\) and \(\mathbf{\Psi}\) are performed at the BS to obtain the RIS allocation and the RIS phase shifts of the \(X\) static users.
1. _Centralized RIS allocation optimization_. With the phase shifts (\(\mathbf{\Psi}\)) of the \(M\) RISs fixed, problem \(\mathbb{P}_{2a}\) can be rewritten as \[\begin{split}&\mathbb{P}_{2a_{1}}:\max_{a_{km}}\sum_{k=1}^{X}\sum_{m=1}^{M}a_{km}\log_{2}\left(1+\frac{\left|(r_{k}+\mathbf{h}_{km}\mathbf{\Theta}_{km}\mathbf{g}_{km})\rho_{k}\right|^{2}}{\sigma^{2}}\right)\\ &\mathrm{s.t.}\quad\quad(33h)-(33j),\end{split}\] (44) where problem \(\mathbb{P}_{2a_{1}}\) is a "\(0-1\)" linear programming problem, which can be solved with existing mathematical tools (a small allocation sketch is given after this list).
2. _Centralized RIS phase shift optimization_. With the RIS allocation (\(a_{km}\)) of the \(X\) static users fixed, problem \(\mathbb{P}_{2a}\) can be rewritten as \[\begin{split}&\mathbb{P}_{2a_{2}}:\max_{\mathbf{\Psi}}\sum_{k=1}^{X}\sum_{m=1}^{M}a_{km}\log_{2}\left(1+\frac{\left|(r_{k}+\mathbf{h}_{km}\mathbf{\Theta}_{km}\mathbf{g}_{km})\rho_{k}\right|^{2}}{\sigma^{2}}\right)\\ &\mathrm{s.t.}\quad\quad(33g),(33l),(33m).\end{split}\] (45)
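Because the constraints of (44) make \(\mathbb{P}_{2a_{1}}\) a transportation-type assignment (each static user takes one of the \(J\) slots offered by some RIS), it can be sketched with a standard assignment solver; the rate matrix below is random, and the use of `scipy.optimize.linear_sum_assignment` is an illustrative choice, not the method prescribed by the paper.

```python
# Sketch of the "0-1" RIS-allocation step P_{2a_1} in (44), cast as an assignment
# problem: each RIS offers J slots, each static user takes exactly one, and the sum
# rate is maximized.  The rate matrix is random, purely for illustration.
import numpy as np
from scipy.optimize import linear_sum_assignment

rng = np.random.default_rng(2)
X, M, J = 8, 4, 2                            # static users, RISs, slots per RIS (X <= M*J)
rate = rng.uniform(1.0, 5.0, size=(X, M))    # log2(1 + SNR_km) for every user/RIS pair

cost = np.repeat(-rate, J, axis=1)           # duplicate each RIS J times, negate to maximize
rows, cols = linear_sum_assignment(cost)
a_km = np.zeros((X, M), dtype=int)
a_km[rows, cols // J] = 1                    # map duplicated columns back to RIS indices
print(a_km, (a_km * rate).sum())
```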
Problem \(\mathbb{P}_{2a_{2}}\) is non-convex; its solution can be obtained at the BS, as highlighted in the following observation.
**Observation 1**.: _When the RIS allocation (\(a_{km}\)) is given, the optimal solution to problem \(\mathbb{P}_{2a_{2}}\) is the one that maximizes the channel gain via the RISs. With this in mind, its optimal solution, \(\mathbf{\Psi}^{*}\), is denoted by \(\mathbf{\Psi}^{*}=\{{\boldsymbol{\theta}_{11}}^{*},\dots,{\boldsymbol{\theta}_{km}}^{*},\dots,{\boldsymbol{\theta}_{KM}}^{*}\}\), where \({\boldsymbol{\theta}_{km}}^{*}=\{\theta_{km}^{1*},\dots,\theta_{km}^{n*},\dots,\theta_{km}^{N*}\}\), and \(\theta_{km}^{n*}=\arg(r_{k})-\arg(h_{km}^{n})-\arg(g_{km}^{n})\), \(\forall k\in\mathcal{X},\forall m\in\mathcal{M},\forall n\in\mathcal{N}\)._
Proof.: Regarding the solution of problem \(\mathbb{P}_{2a_{2}}\), we have the following inequality
\[|r_{k}+\mathbf{h}_{km}{\mathbf{\Theta}_{km}}\mathbf{g}_{km}|^{2}\leq|r_{k}|^{2}+| \mathbf{h}_{km}{\mathbf{\Theta}_{km}}\mathbf{g}_{km}|^{2}. \tag{46}\]
The equality in (46) holds only when the reflected path is phase-aligned with the direct path, i.e., when the RIS reflection coefficients satisfy \(\arg(\mathbf{h}_{km}\mathbf{\Theta}_{km}\mathbf{g}_{km})=\arg(r_{k})\).
To optimize \(\theta_{km}^{n},n\in\mathcal{N}\), we let \(\mathbf{h}_{km}\mathbf{\Theta}_{km}\mathbf{g}_{km}=\mathbf{w}_{km}\mathbf{\Phi}_{km}\), where \(\mathbf{w}_{km}=[w_{km}^{1},\dots,w_{km}^{n},\dots,w_{km}^{N}]\in\mathbb{C}^{1\times N}\), \(w_{km}^{n}=e^{j\theta_{km}^{n}},\forall n\in\mathcal{N}\), and \(\mathbf{\Phi}_{km}=\text{diag}(\mathbf{h}_{km})\mathbf{g}_{km}\). Then, problem \(\mathbb{P}_{2a_{2}}\) can be simplified as
\[\mathbb{P}_{3} :\max_{\mathbf{w}_{km}}\ |\mathbf{w}_{km}\mathbf{\Phi}_{km}|^{2} \tag{47}\] \[\mathrm{s.t.}\ |w_{km}^{n}|=1,\ \ \forall k\in\mathcal{X},\forall m\in\mathcal{M},\forall n\in\mathcal{N}.\]
It can be observed that the optimal phase shift of RIS element \(n\) on \(R_{m}\) for \(U_{k}\) is obtained by setting \(w_{km}^{n*}=e^{j\left(\arg(r_{k})-\arg(h_{km}^{n}g_{km}^{n})\right)}\), and then we have
\[\theta_{km}^{n^{*}}=\arg(r_{k})-\arg(h_{km}^{n})-\arg(g_{km}^{n}), \tag{48}\]
where \(h_{km}^{n}\in\mathbf{h}_{km}\), \(g_{km}^{n}\in\mathbf{g}_{km}\), \(\forall k\in\mathcal{X},\forall m\in\mathcal{M},\forall n\in\mathcal{N}\).
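A quick numerical check of the closed form (48) is sketched below with random channels: aligning every element's phase with the direct path yields the gain \((|r_{k}|+\sum_{n}|h_{km}^{n}g_{km}^{n}|)^{2}\) and dominates any random configuration.

```python
# Sketch verifying the optimal phase configuration (48) on random channels.
import numpy as np

rng = np.random.default_rng(1)
N = 32
g = (rng.standard_normal(N) + 1j * rng.standard_normal(N)) / np.sqrt(2)   # U_k -> R_m
h = (rng.standard_normal(N) + 1j * rng.standard_normal(N)) / np.sqrt(2)   # R_m -> BS
r = rng.standard_normal() + 1j * rng.standard_normal()                    # direct path

theta_opt = np.angle(r) - np.angle(h) - np.angle(g)                       # Eq. (48)
gain_opt = np.abs(r + h @ np.diag(np.exp(1j * theta_opt)) @ g) ** 2

# The optimal gain equals (|r| + sum_n |h_n g_n|)^2 and beats a random phase profile.
assert np.isclose(gain_opt, (np.abs(r) + np.sum(np.abs(h * g))) ** 2)
theta_rnd = rng.uniform(0, 2 * np.pi, N)
assert gain_opt >= np.abs(r + h @ np.diag(np.exp(1j * theta_rnd)) @ g) ** 2
print(gain_opt)
```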
#### V-D2 Distributed RIS Configuration at Mobile Users
\[\mathbb{P}_{2b} :\max_{\{a_{km},\boldsymbol{\theta}_{km}\}}a_{km}\log_{2}\left(1+\frac{|(r_{k}+\mathbf{h}_{km}\mathbf{\Theta}_{km}\mathbf{g}_{km})\rho_{k}|^{2}}{\sigma^{2}}\right) \tag{49}\] \[\mathrm{s.t.}\ \ \ \ \ (33g),(33h),(33k)-(33m).\]
To solve problem \(\mathbb{P}_{2b}\) at each mobile user, the distributed RIS allocation and RIS phase shift problems are solved in an alternating manner.
1. _Distributed RIS allocation optimization_. Since each mobile user contends for its own RIS, with the RIS phase shifts (\(\boldsymbol{\theta}_{km}\)) fixed, problem \(\mathbb{P}_{2b}\) can be rewritten as \[\mathbb{P}_{2b_{1}}:\max_{a_{km}}a_{km}\log_{2}\left(1+\frac{|(r_{k}+\mathbf{h}_{km}\mathbf{\Theta}_{km}\mathbf{g}_{km})\rho_{k}|^{2}}{\sigma^{2}}\right)\] (50) \[\mathrm{s.t.}\ \ \ \ \ (33h),(33k),\]
where \(\mathbb{P}_{\text{2b}_{1}}\) can be easily solved by finding the best RIS link for \(U_{k}\).
2. _Distributed RIS phase shift optimization_. With the RIS allocation (\(a_{km}\)) fixed, problem \(\mathbb{P}_{2b}\) can be rewritten as \[\mathbb{P}_{2b_{2}} :\max_{\boldsymbol{\theta}_{km}}a_{km}\log_{2}\left(1+\frac{|(r_{k}+\mathbf{h}_{km}\mathbf{\Theta}_{km}\mathbf{g}_{km})\rho_{k}|^{2}}{\sigma^{2}}\right)\] (51) \[\mathrm{s.t.}\ \ \ \ (33g),(33l),(33m).\]
Problem \(\mathbb{P}_{2b_{2}}\) is non-convex. Its solution can be obtained at each mobile user, as highlighted in the following observation.
**Observation 2**.: _Once the RIS, \(R_{m}\), is allocated to \(U_{k}\), the optimal solution of problem \(\mathbb{P}_{\text{2b}_{2}}\) is the one that maximizes the channel gain of \(U_{k}\) via \(R_{m}\). Therefore, its optimal solution, \({\mathbf{\theta}_{km}}^{*}\), is denoted by \({\mathbf{\theta}_{km}}^{*}=\{\theta_{km}^{1},\dots,\theta_{km}^{n},\dots\theta_{km }^{N}\}\), where \(\theta_{km}^{n}=\arg(r_{k})-\arg(h_{km}^{n})-\arg(g_{km}^{n})\), \(\forall k\in\mathcal{Y},\forall m\in\mathcal{M},\forall n\in\mathcal{N}\)._
Proof.: The proof of **Observation 2** follows the same lines as that of **Observation 1**.
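As an illustration of the alternating procedure run at a single mobile user, the sketch below alternates the two subproblems \(\mathbb{P}_{\text{2b}_{1}}\) (best-RIS selection) and \(\mathbb{P}_{\text{2b}_{2}}\) (phase alignment as in Observation 2). All channels, sizes, and numerical values are illustrative assumptions of ours; the rate expression follows (49).

```python
import numpy as np

rng = np.random.default_rng(1)
M, N = 3, 8                  # candidate RISs and elements per RIS (illustrative)
sigma2, rho_k = 1e-9, 0.1    # noise power and transmit amplitude (illustrative)
r_k = 1e-4 * (rng.normal() + 1j * rng.normal())                        # direct link
h = 1e-3 * (rng.normal(size=(M, N)) + 1j * rng.normal(size=(M, N)))    # RIS -> BS
g = 1e-3 * (rng.normal(size=(M, N)) + 1j * rng.normal(size=(M, N)))    # user -> RIS

def rate(m, theta_m):
    """Achievable rate of the user via RIS m with phase shifts theta_m, as in (49)."""
    eff = r_k + np.sum(h[m] * np.exp(1j * theta_m) * g[m])
    return np.log2(1.0 + np.abs(eff * rho_k) ** 2 / sigma2)

theta = np.zeros((M, N))                 # start from all-zero phase shifts
for _ in range(5):                       # alternate the two subproblems
    # P_2b1: RIS allocation -- pick the RIS giving the best rate for current phases.
    m_star = int(np.argmax([rate(m, theta[m]) for m in range(M)]))
    # P_2b2: phase-shift update on the selected RIS (Observation 2).
    theta[m_star] = np.angle(r_k) - np.angle(h[m_star]) - np.angle(g[m_star])

print("selected RIS:", m_star, " rate (bit/s/Hz):", rate(m_star, theta[m_star]))
```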
### _Complexity and Performance Improvement_
#### V-E1 Computational Complexity
To solve problem \(\mathbb{P}_{\text{0}}\) in (33), the main computational burden consists of solving the MAC design problem, the centralized RIS configuration problem, and the distributed RIS configuration problem.
* Complexity for solving the MAC design optimization problem: To obtain the optimal MAC protocol parameters, the complexity is determined by the computation of \(\mathcal{T}^{*},\alpha^{*},\beta^{*}\), and \(\rho_{k}^{*}\). The involved computational complexity in Algorithm 1 is \(\mathcal{O}(K+X^{3}L_{1}+M^{2}N^{2}L_{1}+X^{2}M^{2}L_{1})\).
* Complexity for solving the centralized RIS configuration problem: Using the alternating iteration method, the complexity is determined by the computation of the RIS allocation and the RIS phase shifts of the \(X\) static users at the BS. Therefore, the involved computational complexity at the BS in Algorithm 2 is \(\mathcal{O}(M^{2}N^{2}L_{2}+X^{2}M^{2}L_{2})\).
* Complexity for solving the distributed RIS configuration problem: Using the alternating iteration method, the complexity is determined by the computation of the distributed RIS allocation and RIS phase shifts at a mobile user. Therefore, the involved computational complexity at a mobile user in Algorithm 3 is \(\mathcal{O}(\hat{M}^{2}N^{2}L_{3}+\hat{M}^{2}L_{3})\), where \(\hat{M}\) is the number of unused RISs.
It can be seen that the computational complexity at the BS can be reduced since the RIS configuration for static and mobile users is optimized by the BS and each mobile user, respectively.
#### V-B2 MAC Protocol Performance Improvement by RISs
In the proposed multiple access framework, the benefit of RISs is twofold: i) RISs assist each user in improving its data rate, and ii) the different RIS designs for static and mobile users improve channel efficiency owing to the lower computational complexity. Based on these, the MAC protocol performance improvement due to RISs is presented in the following observation.
**Observation 3**.: _For the proposed multiple access framework, denote \(\Delta\mathcal{O}\) as the decrement of computational complexity. The performance improvement brought by decreasing the RIS configuration complexity is given by \(\frac{T+\Delta\mathcal{O}}{T}\), and the improvement brought by the RIS configuration design is bounded by \(\sum_{k=1}^{X}\sum_{m=1}^{M}a_{km}\xi_{km}+\sum_{k=1}^{Y}\sum_{m=1}^{M}a_{km} \chi_{km}\), where \(\xi_{km},\forall k\in\mathcal{X},\forall m\in\mathcal{M}\) and \(\chi_{km},\forall k\in\mathcal{Y},\forall m\in\mathcal{M}\) are the data rate increments of \(U_{k},k\in\mathcal{K}\) obtained from \(R_{m}\)._
In **Observation 3**, \(T\) and \(\Delta\mathcal{O}\) are expressed as
\[\left\{\begin{array}{l}T=t_{0}+t_{1}+t_{2},\\ \Delta\mathcal{O}\approx\mathcal{O}\left(K^{3}L_{1}\right)-\mathcal{O}\left( X^{3}L_{1}\right).\end{array}\right. \tag{52}\]
The computational complexity at the BS in the proposed MAC framework is mainly driven by the number of users whose RISs are configured centrally: without considering mobility profiles it scales as \(\mathcal{O}(K^{3}L_{1})\), whereas with mobility profiles it scales as \(\mathcal{O}(X^{3}L_{1})\). The decrement of computational complexity, \(\Delta\mathcal{O}\), in (52) follows.
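A rough back-of-the-envelope reading of (52), treating the dominant \(\mathcal{O}(\cdot)\) terms as operation counts (the sizes below are illustrative choices of ours, not the paper's):

```python
# Illustrative sizes (ours): K total users, X static users, L1 iterations of Algorithm 1.
K, X, L1 = 200, 100, 50

full_cost = K**3 * L1         # dominant BS term without mobility profiles, O(K^3 L1)
reduced_cost = X**3 * L1      # dominant BS term with mobility profiles,    O(X^3 L1)
delta_O = full_cost - reduced_cost

print(f"Delta_O ~ {delta_O:.2e} operations saved "
      f"({100.0 * delta_O / full_cost:.1f}% fewer at the BS)")
```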
Moreover, \(\xi_{km}\) and \(\chi_{km}\) are calculated as
\[\left\{\xi_{km},\chi_{km}\right\}=B\mathrm{log}_{2}\left(\frac{\kappa+\Delta\kappa}{\kappa}\right), \tag{53}\]
where \(\kappa=\sigma^{2}+r_{k}^{2}\rho_{k}^{2}\), and \(\Delta\kappa=\left|\mathbf{h}_{km}\mathbf{\Theta}_{km}\mathbf{g}_{km}\right|^ {2}\rho_{k}^{2}+2r_{k}\left|\mathbf{h}_{km}\mathbf{\Theta}_{km}\mathbf{g}_{km }\right|\rho_{k}^{2}\).
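As a quick numerical sanity check of (53) (with purely illustrative values, and taking \(r_{k}\) real without loss of generality), \(B\log_{2}((\kappa+\Delta\kappa)/\kappa)\) indeed equals the gap between the RIS-assisted rate and the direct-link rate when the phases are aligned as in (48):

```python
import numpy as np

rng = np.random.default_rng(2)
B, sigma2, rho = 10e6, 1e-9, 0.1                 # bandwidth, noise power, amplitude
r = 1e-4                                          # direct channel, taken real here
h = 1e-3 * (rng.normal(size=8) + 1j * rng.normal(size=8))   # RIS -> BS channel
g = 1e-3 * (rng.normal(size=8) + 1j * rng.normal(size=8))   # user -> RIS channel

# With the phases of (48), the cascaded RIS channel adds coherently to r.
ris = np.sum(np.abs(h) * np.abs(g))

kappa = sigma2 + r**2 * rho**2
delta_kappa = ris**2 * rho**2 + 2.0 * r * ris * rho**2

increment = B * np.log2((kappa + delta_kappa) / kappa)
direct = B * np.log2(1.0 + r**2 * rho**2 / sigma2)
assisted = B * np.log2(1.0 + (r + ris) ** 2 * rho**2 / sigma2)
print(increment, assisted - direct)               # the two numbers agree
```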
## VI Simulation Results
### _Simulation Settings_
#### VI-A1 Network Scenario
We consider a network scenario that consists of a BS, 2 RISs having 128 RIS elements each, and 200 users (comprising 100 static users, 80 mobile users, and 20 new users; the ratio of the different types of users is denoted by \(\epsilon=5:4:1\)). All the users are uniformly distributed in a square area of size \(50\times 50\) (in meters) with the BS located at (\(0\), \(0\), \(100\)) in three-dimensional Cartesian coordinates. From a practical implementation perspective, the location of RISs is generally fixed; hence the RISs are located at (\(25\), \(50\), \(50\)) and (\(50\), \(25\), \(50\)), as illustrated in Fig. 5. Unless stated otherwise, the other simulation parameters are set as follows. A Rician fading channel model is assumed, where the user-RIS and RIS-BS channels benefit from the existence of LoS links having a path loss exponent of 2.2, while the user-BS channels are NLoS links with a path loss exponent of 3.6. The power dissipated at each user is 10 dBm, the noise power is -94 dBm, the number of sub-channels is 2, and the bandwidth of each sub-channel is \(B=10\) MHz. The payload size of each user is \(500\) bytes. The packet size of RTS and CTS is set to \(24\) bytes and \(16\) bytes, respectively. SIFS and DIFS are set to \(10\)\(\mu\)s and \(50\)\(\mu\)s, respectively. The minimum contention window \(W_{0}\) is set to \(15\), and the maximum backoff stage \(m\) is set to \(6\). Furthermore, we assume that each RIS occupies a single sub-channel as a benefit of interference cancellation, and each user is only allowed to use a single RIS to communicate with the BS at a time. Lastly, when the number of users and that of RISs change, the results are evaluated on a frame-by-frame basis.
#### VI-A2 Benchmark Schemes
We consider the following benchmark MAC schemes in the results for comparison.
* The centralized multiple access scheme (Scheme 1): The BS schedules the resources and configures RISs for all users, including the static and the mobile users, without contention.
* The distributed multiple access scheme (Scheme 2): The static and the mobile users contend for their resources and configure RISs by themselves without scheduling.
### _Performance Evaluation_
Figure 6 evaluates the system throughput of three types of RIS-assisted MAC schemes as the number of users increases. Firstly, it is observed that the system throughput of each scheme is improved with the assistance of RISs. This is because the RIS can help each user increase its data rate by improving its link performance. Fig. 6 shows that the system throughput of Scheme 1 and that of the proposed scheme initially increase, but then tend to saturate as the number of users increases. This is because the computation time ratio of each frame is reduced upon increasing the length of each frame. Then, the system throughput of Scheme 2 exhibits a slight decline after saturation due to collisions. Additionally, as the number of users increases, the system throughput of
Fig. 5: Simulation setup of the proposed multiple access framework.
the proposed scheme becomes higher than that of Scheme 1 and Scheme 2. This is because the computational complexity of Scheme 1 and the collisions of Scheme 2 increase with the number of users. However, these two factors are significantly alleviated in the proposed scheme, since different MAC protocols and RIS configurations are used for the static and mobile users.
Fig. 7 evaluates the system throughput of the proposed scheme in terms of the MAC protocol parameters as the number of users increases. As shown in Fig. 7, when the ratio of the different types of users, \(\epsilon\), is set to \((6\!:\!3\!:\!1),(5\!:\!4\!:\!1)\), and \((3\!:\!6\!:\!1)\), the system throughput of the proposed scheme decreases accordingly. This is because the ratio of mobile users increases and that of static users decreases, which leads to a larger \(\beta/\alpha\). In the proposed scheme, as \(\beta/\alpha\) increases, the system throughput decreases, and the decrement gradually becomes more pronounced. A larger \(\beta/\alpha\) means that more time is spent on the contended transmissions; however, this longer time serves the same number of mobile users and carries the same payloads. In other words, the access efficiency of the proposed scheme decreases as \(\beta/\alpha\) increases, mainly because the channel resources are not fully used during the contended transmission period. Additionally, for a given \(\epsilon\), the scheduled transmission period is fixed once the number of static users is known; there therefore exists an optimal \(\beta/\alpha\) that maximizes the system throughput of the proposed scheme while guaranteeing the fairness of users, e.g., the optimal \(\beta/\alpha\) is \(0.93\), \(1.41\), and \(3.31\) for each setting of \(\epsilon\). Also, for each setting of \(\epsilon\), the system throughput has a minor decline as the number of users increases.
Figure 8 evaluates the system throughput of the three schemes in terms of the number of RIS elements and the number of RISs, respectively. Observe in Fig. 8 that the system throughput of each scheme first increases and then saturates as the number of RISs increases. This is because the access efficiency of each scheme can be enhanced by having more RISs, whereas more RISs also increase the computational complexity. Given \(\epsilon=(5\!:\!4\!:\!1)\), compared to Scheme 1 and Scheme 2, the proposed scheme performs better when the number of RISs is less than 4. This is because the proposed scheme decreases the computational complexity for static users, while also decreasing the collisions among mobile users. When the number of RISs is greater than 4, the throughput of Scheme 2 performs best due to the distributed computation and the fewer collisions. Additionally, the throughput of each scheme
Fig. 8: Impact of \(N\) and \(M\) on the system throughput of the three schemes with \(\epsilon\!=\!5\!:\!4\!:\!1\), where the number of users is 200.
Fig. 6: System throughput comparison among the three schemes with RISs or not.
Fig. 7: Impact of \(\epsilon\) on optimal \(\beta/\alpha\) on the system throughput of the proposed scheme.
Fig. 9: Impact of \(\epsilon,N\), and \(M\) on the system throughput of the proposed scheme, where the number of users is 200.
increases as the number of RIS elements increases due to the enhanced link performance.
Figure 9 evaluates the system throughput of the proposed scheme in terms of \(\epsilon\), the number of RIS elements, and the number of RISs, respectively. Fig. 9 shows that the system throughput of the proposed scheme increases as the number of RISs increases for each setting of \(\epsilon\). This is because the time consumed in handling the same traffic payload of all users decreases. Also, as the number of RISs increases, the increment of the system throughput gradually decreases, because more time has to be spent on the RIS configuration. Besides, compared to \(\epsilon=(3\!:\!6\!:\!1)\), a better system throughput is obtained in the setting \(\epsilon=(6\!:\!3\!:\!1)\). This is because more static users are scheduled at a low computational cost and fewer mobile users reduce the collisions. At last, for each setting of \(\epsilon\), the system throughput of the proposed scheme increases as the number of RIS elements increases.
Figure 10 evaluates the proportion of served users via RISs in the three schemes when \(\epsilon\) is set to \((5\!:\!4\!:\!1)\), \((5\!:\!3\!:\!2)\), and \((5\!:\!2\!:\!3)\), respectively. It is observed from Fig. 10 that the proportion of served users gradually decreases in Scheme 1, while it remains unchanged in the proposed scheme and Scheme 2 when the setting of \(\epsilon\) varies from \((5\!:\!4\!:\!1)\) to \((5\!:\!2\!:\!3)\). Compared to Schemes 1 and 2, the proposed scheme serves all users in each case. To be specific, the proposed scheme performs best, and Scheme 2 outperforms Scheme 1 when more new mobile users join in, which can be explained by two factors: 1) in Scheme 1, the new mobile users can never be served by RISs in the current frame due to its centralized scheduling; 2) Scheme 2 cannot serve all users since it incurs more overhead than Scheme 1 within the same transmission time. In Fig. 11, the proportion of served users via RISs with the different settings of \(\epsilon\) and \(\beta/\alpha\) is observed. When the setting of \(\epsilon\) is \((6\!:\!3\!:\!1)\), \((5\!:\!4\!:\!1)\), and \((3\!:\!6\!:\!1)\), the proportion of served users is highest at the optimal \(\beta/\alpha\), valued \(0.93\), \(1.41\), and \(3.31\), respectively. Additionally, if \(\beta/\alpha\) is larger than its optimal value in each setting of \(\epsilon\), the fairness of each user can be achieved; otherwise, fairness cannot be guaranteed. Combined with the results in Fig. 7, we conclude that there is a trade-off between the system throughput and fairness in the proposed scheme, which is achieved by calculating the optimal \(\beta/\alpha\) for a specific \(\epsilon\).
## VII Conclusion
In this paper, we have conceived a multiple access framework for RIS-assisted communications by integrating the MAC protocol and the RIS configuration into a unified framework. The proposed framework improves the MAC efficiency, while reducing the RISs' computational complexity and offering fairness for multiple users. Our MAC protocol allows different types of users to be assigned to RISs and channel resources by scheduling or contention schemes. By exploiting the interplay between the MAC protocol and the RIS configuration, we have investigated the joint optimization problem of both designs to maximize the overall throughput, while guaranteeing fairness for the users. Then, we have adopted the popular alternating iteration method to obtain the problem's solution. Finally, simulation results have been presented to evaluate the MAC protocol in terms of its throughput and fairness. We have seen that, compared to the benchmarks, the proposed scheme achieves better access fairness and system throughput. As the numbers of RISs and users increase, harnessing RISs in our multi-user communications scenario substantially expands the resource allocation search space, hence resulting in increased computing complexity and latency. In this context, an AI-based method that shifts the complexity of online computation to offline training may be a potential technique for further addressing the complexity and latency issues of our RIS configuration and MAC protocol [10, 27, 28].
|
2309.08298 | Biological invasions and epidemics with nonlocal diffusion along a line | The goal of this work is to understand and quantify how a line with nonlocal
diffusion given by an integral enhances a reaction-diffusion process occurring
in the surrounding plane. This is part of a long term programme where we aim at
modelling, in a mathematically rigorous way, the effect of transportation
networks on the speed of biological invasions or propagation of epidemics. We
prove the existence of a global propagation speed and characterise in terms of
the parameters of the system the situations where such a speed is boosted by
the presence of the line. In the course of the study we also uncover unexpected
regularity properties of the model. On the quantitative side, the two main
parameters are the intensity of the diffusion kernel and the characteristic
size of its support. One outcome of this work is that the propagation speed
will significantly be enhanced even if only one of the two is large, thus
broadening the picture that we have already drawn in our previous works on the
subject, with local diffusion modelled by a standard Laplacian. We further
investigate the role of the other parameters, enlightening some subtle effects
due to the interplay between the diffusion in the half plane and that on the
line. Lastly, in the context of propagation of epidemics, we also discuss the
model where, instead of a diffusion, displacement on the line comes from a pure
transport term. | Henri Berestycki, Jean-Michel Roquejoffre, Luca Rossi | 2023-09-15T10:35:04Z | http://arxiv.org/abs/2309.08298v2 | # Biological invasions and epidemics with nonlocal diffusion along a line
###### Abstract
The goal of this work is to understand and quantify how a line with nonlocal diffusion given by an integral enhances a reaction-diffusion process occurring in the surrounding plane. This is part of a long term programme where we aim at modelling, in a mathematically rigorous way, the effect of transportation networks on the speed of biological invasions or propagation of epidemics.
We prove the existence of a global propagation speed and characterise in terms of the parameters of the system the situations where such a speed is boosted by the presence of the line. In the course of the study we also uncover unexpected regularity properties of the model. On the quantitative side, the two main parameters are the intensity of the diffusion kernel and the characteristic size of its support. One outcome of this work is that the propagation speed will significantly be enhanced even if only one of the two is large, thus broadening the picture that we have already drawn in our previous works on the subject, with local diffusion modelled by a standard Laplacian.
We further investigate the role of the other parameters, enlightening some subtle effects due to the interplay between the diffusion in the half plane and that on the line. Lastly, in the context of propagation of epidemics, we also discuss the model where, instead of a diffusion, displacement on the line comes from a pure transport term.
**MSC:** 35K57, 92D25, 92D30, 35B40, 35K40
**Key words:** line of integral diffusion, front propagation, reaction-diffusion, propagation enhancement.
###### Contents
* 1 Biological invasions and epidemics in the presence of a line
* 1.1 Biological invasions
* 1.2 Propagation of epidemics
* 2 Main results
* 2.1 Initial value problems
* 2.2 Biological invasions: steady states and propagation
* 2.3 Further properties of the \(SIRT\) model with nonlocal diffusion
* 3 Initial value problems and a-priori bounds
* 4 The biological invasions model
* 4.1 Steady states and invasion
* 4.2 A benchmark: Fisher-KPP front propagation with nonlocal diffusion
* 4.3 Spreading speed
* 5 The \(SIRT\) model for epidemics along a line with nonlocal diffusion
* 5.1 A benchmark: \(SIR\)-type model with nonlocal diffusion
* 5.2 The influence of \(R_{0}\) and other parameters
* 5.3 Proof of the results on the \(SIRT\) model
* 5.4 The case of pure transport on the road
* 6 Discussion
## 1 Biological invasions and epidemics in the presence of a line
This work is part of a programme aimed at understanding the effect of a line, or a network of lines, on propagation properties in the context of reaction-diffusion. The underlying motivation is to model the effect of a line of transportation, such as a road, a railway, or a waterway, on the spreading of a biological invasion, or the dissemination of epidemics. In the present paper, we examine the case of a nonlocal diffusion on the line.
In [8] and [10] we considered the framework of local diffusion on the line. There, we analysed the case of a line having a diffusion of its own, given by a multiple of the Laplacian (thus associated with random brownian motion of individuals), and coupled with a Fisher-KPP, or diffusive SIR process with diffusion in the adjacent plane. We computed the propagation velocity, and an important outcome was that the overall propagation was increased as soon as the diffusion on the line exceeded a certain explicit threshold. Our results here recover and generalise those of the aforementioned papers.
An even more drastic effect was observed by A.-C. Coulon and the authors of the present paper in [5], where the diffusion was given by a fractional Laplacian. In that case, the fronts propagate exponentially fast in time, just as Fisher-KPP fronts with
such a kind of diffusion (see Cabre-Roquejoffre [11]). Thus, the present work is a further evidence that the line communicates the characteristics of its own diffusion to the whole process, regardless of the type of diffusion in the rest of the plane.
Our analysis is carried out for two distinct models: biological invasions in the context of population dynamics, and the spreading of epidemics. We describe these two frameworks in the following two subsections. They are closely related through a well known transformation that we will recall in Section 3.
### Biological invasions
In the spirit of the system we introduced in [8], we describe the invasion of a species whose dissemination is enhanced by a line of transportation by considering the upper half plane \(\mathbb{R}^{2}_{+}=\mathbb{R}\times(0,+\infty)\), that we call - in a stylised way - "the field", while its boundary \(\mathbb{R}\times\{0\}\) is referred to as "the road". We restrict ourselves to the upper half-plane, rather than considering the whole plane crossed by a line, because the results in the two cases can easily be deduced from one another. We then consider two distinct density functions describing the same species: \(u(t,x)\), \(t>0\), \(x\in\mathbb{R}\), is the density on the road, \(v(t,x,y)\), \(t>0\), \((x,y)\in\mathbb{R}^{2}_{+}\), is the one in the field. The individuals in the field are assumed to diffuse, according to the Laplace operator with a diffusion coefficient \(d>0\), and to proliferate with rate \(f(v)\), that we take to be smooth and of the Fisher-KPP type, i.e.
\[f(0)=f(1)=0,\qquad 0<f(s)\leqslant f^{\prime}(0)s\ \ \text{for all}\ \,s\in(0,1). \tag{1.1}\]
For the sake of definiteness, we extend \(f(s)\) by a negative function for values of \(s>1\).
In contradistinction to what happens in the field, we assume that the diffusion on the road is _nonlocal_, reflecting displacements of larger amplitude (see Turchin [28] for a biological discussion). For this purpose, we let \(K:\mathbb{R}\to\mathbb{R}\) be an even, smooth, nonnegative function with unit mass, supported in \([-1,1]\). For \(L>0\), we set
\[K_{L}(x):=\frac{1}{L}K\Big{(}\frac{x}{L}\Big{)}.\]
Thus \(K_{L}\) is supported in \([-L,L]\) and the mass condition is preserved:
\[\int_{\mathbb{R}}K_{L}(x)dx=1.\]
We define the nonlocal diffusion operator, depending on the parameter \(L>0\), by
\[\mathcal{J}u(x):=\int_{\mathbb{R}}K_{L}(x-x^{\prime})(u(x^{\prime})-u(x))dx^{ \prime}.\]
The parameter \(L\) then represents the order of the distance that individuals travel owing to this nonlocal dispersal.
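For numerical experimentation, a minimal discretisation of \(\mathcal{J}\) can be written as follows (this sketch is ours and not part of the model's analysis; the specific bump chosen for \(K\) and the Riemann-sum quadrature are illustrative assumptions). It checks that \(\mathcal{J}\) annihilates constants and acts as a diffusion on a bump-shaped profile.

```python
import numpy as np

def kernel(L, dx):
    """Discretised K_L(x) = K(x/L)/L, with K a smooth even bump supported in [-1, 1]."""
    z = np.arange(-L + dx, L, dx)                 # interior nodes of the support
    k = np.exp(-1.0 / (1.0 - (z / L) ** 2))       # smooth bump, vanishing at +-L
    return k / (k.sum() * dx)                     # enforce unit mass of K_L

def J(u, L, dx):
    """J u(x_i) = int K_L(x_i - x')(u(x') - u(x_i)) dx', by a simple Riemann sum."""
    k = kernel(L, dx)
    return dx * np.convolve(u, k, mode="same") - u   # subtract u: K_L has unit mass

dx, L = 0.05, 1.0
x = np.arange(-10.0, 10.0, dx)
print(np.abs(J(np.ones_like(x), L, dx))[50:-50].max())   # ~0: J annihilates constants
print(J(np.exp(-x**2), L, dx)[len(x) // 2])              # < 0 at the peak of a bump
```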
The model writes
\[\left\{\begin{array}{rcl}\partial_{t}v-d\Delta v&=&f(v)&(t>0,\ (x,y)\in \mathbb{R}^{2}_{+})\\ -d\partial_{y}v&=&\mu u-\nu v&(t>0,\ x\in\mathbb{R},\ y=0)\\ \partial_{t}u-D\mathcal{J}u&=&\nu v-\mu u&(t>0,\ x\in\mathbb{R},\ y=0),\end{array}\right. \tag{1.2}\]
where \(d,D\) are positive parameters (recall that \(u\) is independent of \(y\)).
In [8] we considered the case where the diffusion on the road was given by \(D\partial_{xx}\) instead of \(D\mathcal{J}\). The main result we derived is the existence of a spreading velocity in the horizontal direction, and the comparison with the classical Fisher-KPP velocity. We will review these results in Section 2.2, and we will relate them with the results we get here on the new Model (1.2).
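To get a feeling for how the road influences the dynamics of (1.2), here is a crude, self-contained explicit time-stepping sketch (ours; the logistic choice \(f(v)=v(1-v)\), the treatment of the exchange condition as a source on the first grid row, and all parameter and grid values are illustrative assumptions with no claim of quantitative accuracy).

```python
import numpy as np

# Illustrative parameters (ours, not the paper's); f(v) = v(1 - v) is logistic.
d, D, mu, nu, L = 1.0, 10.0, 1.0, 1.0, 1.0
dx = dy = 0.2
dt = 0.2 * dx**2 / d                       # crude explicit stability constraint
x = np.arange(-80.0, 80.0, dx)
y = np.arange(0.0, 20.0, dy)
nx, ny = len(x), len(y)

# Kernel K_L: smooth even bump supported in (-L, L), normalised to unit mass.
z = np.arange(-L + dx, L, dx)
K = np.exp(-1.0 / (1.0 - (z / L) ** 2))
K /= K.sum() * dx

u = np.zeros(nx)                           # density on the road
v = np.zeros((ny, nx))                     # density in the field, indexed as v[y, x]
v[:5, nx // 2 - 2 : nx // 2 + 3] = 1.0     # compactly supported initial datum v0

def lap(w):
    """5-point Laplacian with a zero-flux closure on the sides of the truncated box."""
    wp = np.pad(w, 1, mode="edge")
    return (wp[2:, 1:-1] + wp[:-2, 1:-1] + wp[1:-1, 2:] + wp[1:-1, :-2] - 4.0 * w) / dx**2

for _ in range(1500):
    Ju = dx * np.convolve(u, K, mode="same") - u    # nonlocal diffusion J u on the road
    exch = mu * u - nu * v[0]                       # exchange term on {y = 0}
    u = u + dt * (D * Ju - exch)                    # road equation
    vn = v + dt * (d * lap(v) + v * (1.0 - v))      # field equation, f(v) = v(1 - v)
    vn[0] += dt * exch / dy                         # crude account of -d v_y = mu u - nu v
    v = vn

# Rightmost road position where the invasion has reached half of its maximum.
print("approximate front position on the road:", x[np.where(u > 0.5 * u.max())[0].max()])
```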
### Propagation of epidemics
The basic system of epidemiology describes bulk quantities, with total population variables depending only on time, that is, with no spatial dependence. Proposed first by Kermack-McKendrick [21] as an elaborate set of integral equations, this model reduces, when some parameters are assumed to be constant, to the classical \(SIR\) system. We owe it to Kendall to have first envisioned that an epidemic could propagate as a front in space, with a definite speed. Kendall developed this idea in an answer to a study of Bartlett [3], the underlying mechanism being the nonlocal contamination rate (see also [20]). We refer the reader to Ruan [25] for more information on the modelling issues.
In [10] we proposed a new stylized model, built on the ideas of our preceding road-field model [8], to couple a classical \(SIR\)-type model with spatial diffusion in the plane with a diffusion-exchange equation on the \(x\)-axis. The latter models a road on which infected individuals can travel, the diffusion being local and precisely given by \(-D\partial_{xx}\). The contamination process takes place outside the road, where the diffusion process is modelled by a Laplacian \(-d\Delta\). In addition to the classical compartments \(S,I\) and \(R\), this model therefore involves a fourth compartment, \(T\), of traveling infectious population. Hence our reference to this model as the "\(SIRT\)-model".
The main conclusion involved a parameter \(R_{0}\), known as the "Pandemic Threshold" in such models. We showed that it acts as a basic reproduction number: when it is larger than 1, there is front propagation. Moreover, the value of the ratio \(D/d\) determines whether propagation occurs at the usual \(SIR\) speed (as determined, for instance, by Aronson [1], or Diekmann [15]), or rather at a larger speed. We also pointed out situations where the propagation velocity can be quite large, even though the epidemic looks fairly mild when \(R_{0}\) is very close to 1.
In the present paper we examine what happens when the diffusion on the road is governed by a nonlocal operator \(\mathcal{J}\) as in the previous subsection. This translates the fact that individuals have the potential to instantly make large - yet finite - jumps, and will result in a richer set of parameters to analyse. Specifically, we let \(S(t,x,y)\) denote the fraction of susceptible individuals at time \(t\geqslant 0\) and position \((x,y)\) of the domain \(\mathbb{R}^{2}_{+}\) (as before we restrict to the upper half-plane in view of symmetry reasons). We assume that susceptibles do not diffuse, and let \(I(t,x,y)\) be the fraction of infected individuals in the domain, and \(T(t,x)\) (standing for "travelling infected") be the fraction of infected individuals on the \(x\)-axis. The former ones are assumed to diffuse according to standard diffusion with amplitude \(d>0\), whereas the latter ones diffuse according to the nonlocal operator \(\mathcal{J}\) with coefficient \(D>0\).
Thus, the model writes
\[\left\{\begin{array}{rcl}\partial_{t}I-d\Delta I+\alpha I&=&\beta SI&(t>0,\ (x,y)\in \mathbb{R}_{+}^{2})\\ \partial_{t}S&=&-\beta SI&(t>0,\ (x,y)\in\mathbb{R}_{+}^{2})\\ -d\partial_{y}I&=&\mu T-\nu I&(t>0,\ x\in\mathbb{R},\ y=0)\\ \partial_{t}T-D\mathcal{J}T&=&\nu I-\mu T&(t>0,\ x\in\mathbb{R},\ y=0).\end{array}\right. \tag{1.3}\]
Of specific interest to us here will be to determine the spreading velocity for this new model, and to compare it with the speed without the road. We will then study its dependence on the parameters of the system, in particular the ones involved in the nonlocal diffusion: \(D\), \(L\).
We also apply our methods to the analysis of a somewhat different, yet related, framework. Namely, in the context of epidemic propagation we consider the influence of a line with a _transport_ mechanism in one direction. It is of interest in situations where individuals travel away from main towns for instance as they try to move away from contamination. This effect was largely reported in the recent COVID-19 pandemic in several countries. We will see that the transport also enhances the global propagation, but that the value of the spreading speed is, unexpectedly, strictly less than that of the transport.
The paper is organised as follows. Section 2 contains the statements of the main results. In Section 2.1, we perform a preliminary classical transformation that allows one to reduce the \(SIRT\) model (1.3) to a slight perturbation of (1.2). Next we state a result about the well-posed character of the two systems. Section 2.2 is devoted to the model (1.2) for biological invasions, for which we present firstly the Liouville-type result as well as the local-in-space convergence, and next the result about the propagation. Section 2.3 focuses on the specific features of the \(SIRT\) model: the description of the steady state and its behaviour at infinity, and the result on the spreading of the epidemic wave. The results presented in those subsections are proved in Sections 3, 4 and 5 respectively. In Section 6 we discuss our results and emphasise analogies and novelties with respect to previous models.
## 2 Main results
### Initial value problems
The preliminary question is whether the Cauchy Problem for models (1.2) and (1.3) is well-posed. So, we supplement (1.2) with the initial datum
\[(u(0,x),v(0,x,y))=(0,v_{0}(x,y)) \tag{2.1}\]
with \(v_{0}\geqslant 0\) smooth and compactly supported, and (1.3) with the initial datum
\[(S(0,x,y),I(0,x,y),T(0,x))=(S_{0},I_{0}(x,y),0), \tag{2.2}\]
with
\[S_{0}>0\ \ \mbox{constant},\qquad I_{0}\geqslant 0\ \mbox{smooth and compactly supported}.\]
It was noticed in [10] that the cumulative densities of Model (1.3) satisfy a Fisher-KPP type equation with fast diffusion on the line, thus allowing for the treatment proposed in [8]. Actually, this remark dates back, at least, to Aronson [1] for the \(SIR\) model in the whole space. Namely, calling
\[u(t,x):=\int_{0}^{t}T(s,x)ds,\qquad v(t,x,y):=\int_{0}^{t}I(s,x,y)ds,\]
the system rewrites as
\[\left\{\begin{array}{rcl}\partial_{t}v-d\Delta v&=&f(v)+I_{0}(x,y)&(t>0,\ (x,y)\in\mathbb{R}_{+}^{2})\\ -d\partial_{y}v&=&\mu u-\nu v&(t>0,\ x\in\mathbb{R},\ y=0)\\ \partial_{t}u-D\mathcal{J}u&=&\nu v-\mu u&(t>0,\ x\in\mathbb{R},\ y=0),\end{array}\right. \tag{2.3}\]
with, as in [10],
\[f(v):=S_{0}(1-e^{-\beta v})-\alpha v, \tag{2.4}\]
\[I_{0}\not\equiv 0\ \text{non-negative, smooth and compactly supported}, \tag{2.5}\]
and initial datum
\[(u(0,x),v(0,x,y))\equiv(0,0). \tag{2.6}\]
Observe that the non-linearity \(f\) in (2.4) vanishes at \(0\) and, being concave, it satisfies the Fisher-KPP condition \(f(s)\leqslant f^{\prime}(0)s\) for \(s>0\).
**Theorem 2.1**.: _The initial value problems (1.2),(2.1) with \(v_{0}\geqslant 0\) bounded and smooth, and (1.3),(2.2) with \(S_{0}>0\) constant and \(I_{0}\geqslant 0\) bounded and smooth, both have a unique classical, bounded solution. Moreover, first-order-in-time and second-order-in-space derivatives of the solution are globally bounded and Holder continuous._
The existence proof proceeds from fairly usual arguments. Less standard is the uniform bound of the derivatives for system (1.2) and its compact perturbation (2.3), as the line has no particular smoothing effect. It is obtained as a consequence of the maximum principle in narrow domains applied to the equations satisfied by the derivatives. We point out that the first-order estimates for (2.3) are essential for us because the functions \(T,I\) of the original model (1.3) correspond to the time-derivatives of \(u,v\). Theorem 2.1 is proved in Section 3.
### Biological invasions: steady states and propagation
Let us focus on the model for biological invasions. The first issue to understand is the classification of the steady states of the systems and their attractiveness. There is an obvious positive solution, and the game is to show that it is globally attractive.
**Proposition 2.2**.: _The unique non-negative, bounded steady solutions for (1.2) are the constant ones \((u\equiv 0,v\equiv 0)\) and \((u\equiv\frac{\nu}{\mu},v\equiv 1)\)._
_Moreover, any solution \((u(t,x),v(t,x,y))\) to (1.2),(2.1) with \(v_{0}\geqslant 0,\not\equiv 0\) smooth and compactly supported, converges to \((\frac{\nu}{\mu},1)\) as \(t\to+\infty\), locally uniformly in \(x\in\mathbb{R}\), \(y\geqslant 0\)._
Proposition 2.2 is proved in Section 4.1 following the same scheme as for the model with local diffusion on the road considered in [8]. The arguments require uniform regularity of the solutions, which is guaranteed here by Theorem 2.1. In the realm of biological invasions, the statement about the long-time behaviour of the solution is known as the "hair trigger effect" (the terminology dates back to Aronson-Weinberger [2]).
Once the local behaviour of the solution is established, one naturally turns to the study of the propagation. For the classical Fisher-KPP equation
\[v_{t}-d\Delta v=f(v),\quad t>0,\ X\in\mathbb{R}^{N},\]
propagation occurs with an _asymptotic spreading speed_\(c_{{}_{K}}\), which is explicit: \(c_{{}_{K}}=2\sqrt{df^{\prime}(0)}\), see Aronson-Weinberger [2]. The next result identifies an asymptotic spreading speed for (1.2).
**Theorem 2.3**.: _Let \((u(t,x),v(t,x,y))\) be the solution to (1.2),(2.1) with \(v_{0}\geqslant 0,\not\equiv 0\) smooth and compactly supported. Then, there exists \(c_{*}>0\) such that, for all \(\varepsilon>0\), it holds_
\[\sup_{|x|\leqslant(c_{*}-\varepsilon)t}\big{|}(u(t,x),v(t,x,y))-\Big{(}\frac{ \nu}{\mu},1\Big{)}\big{|}\to 0,\ \ \sup_{|x|\geqslant(c_{*}+\varepsilon)t}\big{|}(u(t,x),v(t,x,y))\big{|}\to 0, \tag{2.7}\]
_as \(t\to+\infty\), locally uniformly with respect to \(y\geqslant 0\)._
_In addition, there is a quantity \(D_{*}>0\) such that the spreading speed \(c_{*}\) satisfies_
\[c_{*}\begin{cases}=c_{{}_{K}}&\text{if }D\leqslant D_{*}\\ >c_{{}_{K}}&\text{if }D>D_{*}\end{cases}\qquad\text{with }\ c_{{}_{K}}:=2\sqrt{df^{ \prime}(0)}.\]
_Finally, \(c_{*}/\sqrt{DL^{2}}\) converges to a positive constant as \(DL^{2}\to+\infty\)._
When the diffusion on the line is \(D\partial_{xx}\), we derived the same qualitative result in [8], with \(D_{*}=2d\). In the present case, the threshold \(D_{*}\) is given by the non-algebraic formula (4.11) below. Let us emphasise that, in contrast with the local case, it depends not only on \(d\), but also on \(f^{\prime}(0)\) as well as on the nonlocal kernel \(\mathcal{J}\) and in particular on its range \(L\). The proof of Theorem 2.3 uses the fact that \(\mathcal{J}\) has a compactly supported kernel. We expect the same type of result to hold for kernels decaying sufficiently fast at infinity, but to prove this one should dwell deeper into the arguments developed in the present paper. The last statement of the theorem says that, for large \(D\) and \(L\), the propagation in the horizontal direction really occurs as if the diffusion in the field were also given by the nonlocal operator \(\mathcal{J}\), the speed in that case being given by the formula (4.8) below. Theorem 2.3 is proved in Section 4.3.
### Further properties of the \(Sirt\) model with nonlocal diffusion
In the case of the Model (1.3) for propagation of epidemics, the existence of steady solutions has to be examined at the level of cumulative densities, which satisfy system (2.3) with \(f\) given by (2.4). Recall that the non-linearity \(f\), being concave, fulfils the Fisher-KPP condition \(f(s)\leqslant f^{\prime}(0)s\) for \(s>0\). We compute
\[f^{\prime}(0)=\alpha(R_{0}-1),\quad\text{where }\ R_{0}:=\frac{S_{0}\beta}{ \alpha}.\]
Hence \(f^{\prime}(0)>0\) if and only if \(R_{0}>1\), and in such a case \(f\) has a unique positive zero, that we call \(v_{*}\). As a consequence, when \(R_{0}>1\) the function \(f\) fulfils all the conditions in (1.1) with "1" replaced by \(v_{*}>0\). The quantity \(R_{0}\) can be viewed as the classical _basic reproduction number_, see for instance [19].
System (2.3) is nothing else than (1.2) with the additional source term \(I_{0}\) in the first equation. The Liouville-type result and the stability of the unique positive steady solution hold true for this new system. However, since \(I_{0}\) is non-constant, the positive steady solution is in this case nontrivial. It tends to the constant steady solution to (1.2) at infinity, with a given exponential decay, as stated in the following theorem.
**Theorem 2.4**.: _The problem (2.3)-(2.5) has a unique non-negative, bounded steady solution \((u^{r}_{\infty}(x),v^{r}_{\infty}(x,y))\). Such a solution satisfies_
\[u^{r}_{\infty}(x)=\begin{cases}0&\text{if }R_{0}<1\\ \frac{\nu}{\mu}\,v_{*}&\text{if }R_{0}>1\end{cases}\,+\,e^{-\kappa(x)|x|}, \qquad v^{r}_{\infty}(x,y)=\begin{cases}0&\text{if }R_{0}<1\\ v_{*}&\text{if }R_{0}>1\end{cases}\,+\,e^{-\lambda(x,y)|x|},\]
_where, in the case \(R_{0}>1\), \(v_{*}\) is the unique positive zero of \(f=0\), and \(\kappa,\lambda\) fulfil_
\[\lim_{|x|\to\infty}\kappa(x)=\lim_{|x|\to\infty}\lambda(x,y)=a_{*},\qquad\text{ locally uniformly in }y\geqslant 0,\]
_with_
\[0<a_{*}<\begin{cases}\sqrt{\frac{\alpha}{d}(1-R_{0})}&\text{if }R_{0}<1\\ \sqrt{\frac{-f^{\prime}(v_{*})}{d}}&\text{if }R_{0}>1.\end{cases}\]
_Moreover, the solution \((u,v)\) to (2.3)-(2.6) converges to \((u^{r}_{\infty},v^{r}_{\infty})\) as \(t\to+\infty\), locally uniformly in \(x\in\mathbb{R}\), \(y\geqslant 0\)._
The proof is presented in Section 5.3. As far as the Liouville-type result is concerned, the nonlocal operator \(\mathcal{J}\) does not introduce any substantial difference in the proof, compared with the \(SIRT\) model with local diffusion on the road treated in [10]. The study of the decay, instead, requires one to understand the structure of exponential eigenfunctions for the nonlocal operator.
Theorem 2.4 is a "pandemic threshold result" (c.f. for instance the discussion of Bartlett's paper [3] by Kendall) which exhibits two opposite scenarios according to whether \(R_{0}\) is below or above the value \(1\). Indeed, since the loss of susceptible individuals at a given location \((x,y)\) throughout the whole epidemic course is \(I_{tot}(x,y):=S_{0}-S(+\infty,x,y)\), and one has that \(S=S_{0}e^{-\beta v}\) by the second equation in (1.3), Theorem 2.4 yields
\[I_{tot}(x,y)=S_{0}\big{(}1-e^{-\beta v^{r}_{\infty}(x,y)}\big{)},\]
whence in particular
\[\lim_{|(x,y)|\to\infty}I_{tot}(x,y)=\begin{cases}0&\text{if }R_{0}\leqslant 1\\ S_{0}\big{(}1-e^{-\beta v_{*}}\big{)}&\text{if }R_{0}>1.\end{cases}\]
This means that the epidemic wave spreads throughout the territory if and only if \(R_{0}>1\). Therefore, Model (1.3) displays the same well-known dichotomy as the classical epidemic models.
The next question is then to determine the speed of the epidemic wave when \(R_{0}>1\). Similarly to what happens when the diffusion of the epidemic on the line is given by the Laplacian, considered in [10], the speed of propagation on the half-plane can be enhanced by the faster diffusion on the line.
**Theorem 2.5**.: _Assume that \(R_{0}>1\). Let \((u,v)\) be the solution to (2.3)-(2.6). Then, there exists \(c_{\mbox{\tiny{SIR}}}^{r}>0\) such that, for all \(\varepsilon>0\), it holds_
\[\sup_{|x|\leqslant(c_{\mbox{\tiny{SIR}}}^{r}-\varepsilon)t}\bigl{|}(u(t,x),v( t,x,y))-(u_{\infty}^{r}(x),v_{\infty}^{r}(x,y))\bigr{|}\to 0,\]
\[\sup_{|x|\geqslant(c_{\mbox{\tiny{SIR}}}^{r}+\varepsilon)t}\bigl{|}(u(t,x),v( t,x,y))\bigr{|}\to 0,\]
_as \(t\to+\infty\), locally uniformly with respect to \(y\geqslant 0\)._
_In addition, there is a quantity \(D_{*}>0\) such that the spreading speed \(c_{\mbox{\tiny{SIR}}}^{r}\) satisfies_
\[c_{\mbox{\tiny{SIR}}}^{r}\begin{cases}=c_{\mbox{\tiny{SIR}}}&\text{if }D \leqslant D_{*}\\ >c_{\mbox{\tiny{SIR}}}&\text{if }D>D_{*}\end{cases}\qquad\text{with }\ c_{\mbox{\tiny{SIR}}}:=2 \sqrt{d\alpha(R_{0}-1)}.\]
The quantity \(c_{\mbox{\tiny{SIR}}}=2\sqrt{d\alpha(R_{0}-1)}\) is the speed of propagation for the classical \(SIR\) model with local diffusion on the infected, without the road. The speed \(c_{\mbox{\tiny{SIR}}}^{r}\) and the threshold \(D_{*}\) in the above theorem coincide with the spreading speed \(c_{*}\) and the \(D_{*}\) that are provided by Theorem 2.3 in the case where \(f\) is given by (2.4). This is not surprising, because systems (2.3) and (1.2) only differ for the compactly supported source term \(I_{0}\), which, as one may expect, does not affect the dynamics of the solution far from the origin. This is rigorously proved at the end of Section 5.3.
We conclude with a final feature of Model (1.3): propagation occurs in the form of an epidemic wave. As its proof does not require more elements than [10, Proposition 3.7], we only state it informally and its proof will be left to the interested reader. In the case \(R_{0}>1\), the number of infected at a given location \((x,y)\) peaks around a time \(\tau_{*}(x)\) which satisfies
\[\lim_{|x|\to+\infty}\frac{\tau_{*}(x)}{|x|}=\frac{1}{c_{\mbox{\tiny{SIR}}}^{r }}.\]
In more precise terms, there is a constant \(T_{*}>0\) and a locally bounded, and locally bounded from below function \(I_{*}(y)\), defined for \(y\geqslant 0\), such that
\[T(\tau_{*}(x),x)\geqslant T_{*},\quad I(\tau_{*}(x),x,y)\geqslant I_{*}(y).\]
Moreover, uniformly in \(x\in\mathbb{R}\) and locally uniformly in \(y\geqslant 0\), one has
\[\lim_{t\to+\infty}\biggl{(}T(\tau_{*}(x)+t,x),I(\tau_{*}(x)+t,x, y)\biggr{)} =\lim_{\begin{subarray}{c}t\to+\infty\\ t\leqslant\tau_{*}(x)\end{subarray}}\biggl{(}T(\tau_{*}(x)-t,x),I(\tau_{*}( x)-t,x,y)\biggr{)}\] \[=(0,0).\]
Initial value problems and a-priori bounds
The study of system (1.2) was carried out in [8] in the case where \(\mathcal{J}\) is replaced by \(\partial_{xx}\). Problem (1.2) is almost linear, the only non-linearity being the harmless globally Lipschitz-continuous function \(f\). Moreover, the system displays a monotonic structure, which yields a comparison principle: if \((u^{1}_{0}(x),v^{1}_{0}(x,y))\leqslant(u^{2}_{0}(x),v^{2}_{0}(x,y))\) are two ordered initial data, then the corresponding solutions \((u^{1},v^{1})\), \((u^{2},v^{2})\) satisfy
\[(u^{1}(t,x),v^{1}(t,x,y))\leqslant(u^{2}(t,x),v^{2}(t,x,y)),\qquad\forall t>0, \ (x,y)\in\mathbb{R}^{2}_{+}.\]
The same holds true if \((u^{1},v^{1})\) is a sub-solution and \((u^{2},v^{2})\) is a super-solution, also in the generalised sense 1. One can show the comparison principle following the same arguments as in the proof of [8, Proposition 3.2], which hold true when \(\partial_{xx}\) is replaced by \(\mathcal{J}\). Indeed, those arguments only rely on the following form of ellipticity condition (which is straightforward to check):
Footnote 1: A sub-solution (resp. super-solution) satisfies “\(\leqslant\)” (resp. “\(\geqslant\)”) in the three equations in (1.2); a _generalised_ sub-solution (resp. super-solution) is the maximum (resp. minimum) of a finite number of sub-solutions (resp. super-solutions). We always require (sub,super)solutions to grow at most exponentially in space in order to guarantee the classical maximum principle for linear parabolic equations.
\[u(x)\geqslant u(x_{0})\quad\text{for all }x\in\mathbb{R}\quad\implies \quad\mathcal{J}u(x_{0})\geqslant 0, \tag{3.1}\]
which is the key for the maximum principle to hold, as well as on the existence of a smooth function \(\chi:\mathbb{R}\to[0,+\infty)\) such that
\[\|\chi^{\prime}\|_{\infty},\|\chi^{\prime\prime}\|_{\infty}<+\infty,\quad \lim_{x\to\pm\infty}\chi(x)=+\infty,\qquad|\mathcal{J}\chi|\leqslant 1\]
(which can easily be constructed after noticing that \(|\mathcal{J}\chi|\leqslant CL\|\chi^{\prime}\|_{\infty}\)). The monotonic structure of (1.2) entails that if the initial datum \(v_{0}\) in (2.1) is smooth (we have taken \(u(0,.)\equiv 0\) for convenience) the comparison principle propagates to the derivatives and entails _exponential in time_ a priori bounds for the successive derivatives of \(u\) and \(v\). The same is true for Model (1.3), as a consequence of the fact that the integrated system (2.3) also enjoys exponential in time a priori bounds for its solutions. Therefore, what we need to do is to construct solutions to (1.2),(2.1) and (2.3)-(2.6) and to derive _uniform in time_ estimates for their successive derivatives.
Proof of Theorem 2.1.: Let us first concentrate on the model for biological invasions (1.2),(2.1). We approximate it by adding a small local diffusion \(\varepsilon\partial_{xx}\), \(\varepsilon>0\), on the line:
\[\left\{\begin{array}{rl}\partial_{t}v-d\Delta v=&f(v)\quad(t>0,\ (x,y)\in \mathbb{R}^{2}_{+})\\ -d\partial_{y}v=&\mu u-\nu v\quad(t>0,\ x\in\mathbb{R},\ y=0)\\ \partial_{t}u-\varepsilon\partial_{xx}u-D\mathcal{J}u=&\nu v(t,x,0)-\mu u \quad(t>0,\ x\in\mathbb{R},\ y=0).\end{array}\right. \tag{3.2}\]
By [8], the Cauchy Problem for (3.2),(2.1) is well-posed and enjoys the comparison principle. Let \((u^{\varepsilon}(t,x),v^{\varepsilon}(t,x,y))\) be unique classical solution. Since \(M(\nu/\mu,1)\) is a supersolution to (3.2) for any constant \(M\geqslant 1\), taking \(M\) larger than \(\max(1,\sup v_{0})\) one deduces from the comparison principle that
\[0\leqslant u^{\varepsilon}(t,x)\leqslant M\frac{\nu}{\mu},\quad\ 0\leqslant v^{ \varepsilon}(t,x,y)\leqslant M,\quad\ \ \forall\varepsilon>0.\]
In order to get estimates that are uniform both in \(\varepsilon\) and in \(t\) we proceed as follows. Let us call \((U,V):=(\partial_{x}u^{\varepsilon},\partial_{x}v^{\varepsilon})\), where we have dropped the \(\varepsilon\)'s to alleviate the notation. This pair solves the linear problem
\[\left\{\begin{array}{rl}\partial_{t}V-d\Delta V=&f^{\prime}(v^{ \varepsilon})V\quad(t>0,\ (x,y)\in\mathbb{R}^{2}_{+})\\ -d\partial_{y}V=&\mu U-\nu V\quad(t>0,\ x\in\mathbb{R},\ y=0)\\ \partial_{t}U-\varepsilon\partial_{xx}U-D\mathcal{J}U=&\nu V(t,x,0)-\mu U \quad(t>0,\ x\in\mathbb{R},\ y=0).\end{array}\right. \tag{3.3}\]
Pick any \(\ell>0\), having in mind that we will require it to be small. By interior parabolic regularity, see e.g. [22, Theorem 9.10.1], applied to the Fisher-KPP equation in (3.2) (recall that \(f\) is smooth), for given \(\alpha\in(0,1)\) there is \(C_{\ell}>0\) such that
\[|V(t,x,y)|\leqslant C_{\ell}\big{(}\|v^{\varepsilon}\|_{L^{\infty}(\mathbb{R }_{+}\times\mathbb{R}^{2}_{+})}+\|v_{0}\|_{C^{2+\alpha}(\mathbb{R}^{2}_{+})} \big{)}=:C^{\prime}_{\ell},\qquad\forall t\geqslant 0,\,x\in\mathbb{R},\,y \geqslant\ell.\]
We have that \(C_{\ell},C^{\prime}_{\ell}\) are independent of \(\varepsilon\) because \(v^{\varepsilon}\leqslant M\). We introduce the pair
\[(\bar{U}(x),\bar{V}(x,y))=(1,\frac{\mu}{\nu}\cos\,\Big{(}\frac{\pi y}{4\ell} \Big{)}).\]
By direct computation one checks that it satisfies the third equation of (3.3) exactly, while it is a super-solution to the second one. For the first one we have that
\[-d\Delta\bar{V}-f^{\prime}(v^{\varepsilon})\bar{V}=\ \ \bigg{(}\frac{\pi^{2}}{16 \ell^{2}}-f^{\prime}(v^{\varepsilon})\bigg{)}\bar{V}, \tag{3.4}\]
which is positive for \(y\in(0,\ell)\) as soon as
\[\frac{\pi^{2}}{16\ell^{2}}>\max_{[0,M]}f^{\prime}. \tag{3.5}\]
Therefore, with such a choice of \(\ell\), \((\bar{U},\bar{V})\) is a super-solution to (3.3) and \(-(\bar{U},\bar{V})\) is a sub-solution. Moreover we have
\[\bar{V}(x,\ell)=\frac{\mu}{\nu\sqrt{2}}.\]
Set
\[M^{\prime}:=\sqrt{2}\frac{\nu}{\mu}\,\max\big{(}\|\partial_{x}v_{0}\|_{\infty },C^{\prime}_{\ell}\big{)},\]
whence
\[\forall t\geqslant 0,\ x\in\mathbb{R},\ y\in[0,\ell],\quad M^{\prime}\bar{V} (x,y)\geqslant M^{\prime}\bar{V}(x,\ell)\geqslant\max\bigg{(}\|\partial_{x}v_{ 0}\|_{\infty},|V(t,x,\ell)|\bigg{)}.\]
Recall, on the other hand, that \(u_{0}\equiv 0\leqslant M^{\prime}\bar{U}\). We can then apply the comparison principle between \((U,V)\) and \(\pm M^{\prime}(\bar{U},\bar{V})\) in the strip \(\mathbb{R}\times(0,\ell)\) and derive the following bounds for \(U,V\) in their domains of definition:
\[|U|\leqslant M^{\prime}\bar{U},\qquad|V|\leqslant M^{\prime}\bar{V}.\]
We have thereby shown that the functions \(\partial_{x}u^{\varepsilon}(t,x),\partial_{x}v^{\varepsilon}(t,x,y)\) are bounded uniformly in \(\varepsilon>0\), \(t\geqslant 0\), \(x\in\mathbb{R}\), \(y\geqslant 0\).
A bound for \(\partial_{xx}u^{\varepsilon},\partial_{xx}v^{\varepsilon}\) is derived in a similar way. Indeed, the couple \((\partial_{xx}u^{\varepsilon},\partial_{xx}v^{\varepsilon})\) solves a system quite similar to (3.3), up to the fact that the first equation has the additional inhomogeneous term \(f^{\prime\prime}(v^{\varepsilon})(\partial_{x}v^{\varepsilon})^{2}\), which we know to be bounded independently of \(\varepsilon\). As a consequence, recalling (3.4) and taking \(\ell\) satisfying (3.5), one sees that, for a possibly larger \(M^{\prime}\) than before, the pair \(M^{\prime}(\bar{U},\bar{V})\) is a super-solution to this new system for \(y\in(0,\ell)\).
Now that we know that \(\partial_{xx}u^{\varepsilon}\) is bounded, we can apply on one hand the regularity theory for the oblique derivative problem in (3.3) (see [24, Theorem 5.18]) and infer that the first-order-in-time and second-order-in-space derivatives of \(v^{\varepsilon}\) are globally bounded and Holder continuous, uniformly in \(\varepsilon>0\), \(t\geqslant 0\). On the other hand, we directly derive from the last equation in (3.2) the uniform \(L^{\infty}\) bound for \(\partial_{t}u^{\varepsilon}\).
One can bootstrap the above arguments, thanks to the smoothness of \(f\), and get uniform \(L^{\infty}\) bounds on the \(x\)-derivatives of any order of \(u^{\varepsilon},v^{\varepsilon}\), and also of \(\partial_{t}u^{\varepsilon},\partial_{t}v^{\varepsilon}\). The last equation in (3.2) eventually yields that \(\partial_{t}u^{\varepsilon}\) is uniformly Holder continuous too. These uniform estimates allow us to pass to the limit as \(\varepsilon\to 0\) (up to subsequences) in (3.2) and get a solution to (1.2). The uniform-in-time bounds on the derivatives are inherited by such a solution.
The argument for system (2.3) is analogous, up to the fact that we have the additional term \(I_{0}\) in the right-hand side for \(v\), which does not affect the previous analysis because \(I_{0}\) is bounded and smooth. Once the bounds for all derivatives of \(u\), \(v\) are secured, the bounds for \(S\), \(I\), \(T\) follow.
**Remark 3.1**.: _The boundedness of the derivatives of \((u,v)\) may look rather simple, but it relies on two deep facts. The first one is that the diffusion process in the upper half plane transfers some regularity to the line through the exchange condition. The second is the maximum principle in narrow domains (which is equivalent to the existence of a positive strict super-solution, see [7] for a very general study), which allows \(L^{\infty}\) estimates even though the domain is unbounded in one direction. Using classical regularisation techniques, one can relax the smoothness assumption on the initial datum \(v_{0}\) and just require it to be continuous; in such a case the uniform bounds on the derivatives hold starting from any given positive time \(T\)._
To conclude this section, let us dwell a little more on the issue of the boundedness of derivatives, and compare it to what happens in a classical reaction-diffusion equation with nonlocal diffusion, that is
\[u_{t}-D\mathcal{J}u=f(u),\ \ t>0,\ x\in\mathbb{R}, \tag{3.6}\]
where we assume, to fix ideas, that \(f(0)=f(1)=0\), and that either \(f\) fulfils the Fisher-KPP condition in (1.1), or \(f\) is bistable with positive mass (i.e. \(f^{\prime}(0),f^{\prime}(1)<0\), \(f\) has one zero in \((0,1)\) and \(\int_{0}^{1}f>0\)). The Cauchy Problem for (3.6) with initial data \(0\leqslant u_{0}\leqslant 1\) produces classical solutions, that are bounded but have derivatives that may grow exponentially in time, and moreover there is no regularisation mechanism as in parabolic equations (i.e. when "\(\mathcal{J}\)" is replaced by "\(\Delta\)").
When \(f\) is a bistable non-linearity, let us show that the preservation of regularity for (3.6) occurs in case a travelling wave with positive speed, connecting \(0\) to \(1\), exists.
Let \(\phi(\xi)\) be such a wave. It solves
\[-D\mathcal{J}\phi+c\phi^{\prime}=f(\phi)\quad\text{in}\ \ \mathbb{R}\] \[\phi(-\infty)=1,\quad\phi(+\infty)=0.\]
See Bates, Fife, Ren, Wang [4] for sufficient existence conditions. One readily sees that the positivity of the speed \(c\) yields the regularity of \(\phi\), and it is this exact same property that will trigger the regularity mechanism. Assume, in order to simplify the argument, that the initial datum \(u_{0}\) for (3.6) satisfies \(u_{0}(-\infty)=1\), \(u_{0}(+\infty)=0\) (if \(u_{0}\) were compactly supported one would need \(u_{0}\) to be sufficiently large on a large interval and one would also have to deal with leftwards propagating waves). A word by word adaptation of the celebrated Fife-McLeod argument [16] shows the existence of two positive numbers \(q\) and \(\omega\), as well as two real numbers \(\xi_{1}\geqslant\xi_{2}\) such that
\[\phi(x-ct+\xi_{1})-qe^{-\omega t}\leqslant u(t,x)\leqslant\phi(x-ct+\xi_{2})+ qe^{-\omega t}. \tag{3.7}\]
The function \(U(t,x):=-\partial_{x}u(t,x)\) solves
\[U_{t}-D\mathcal{J}U=f^{\prime}(u)U,\ \ t>0,\ x\in\mathbb{R}, \tag{3.8}\]
and the underlying mechanism of Theorem 2.1 is not present here. As a matter of fact, when there is no diffusion, that is, \(D=0\), \(u(t,x)\) tends to a step function as \(t\to+\infty\), while \(U(t,x)\) grows unboundedly in time, as it becomes a sum of Dirac masses as \(t\to+\infty\). Yet, when \(D>0\), one recovers the boundedness of \(U(t,x)\), but what makes it work is that, for every \(x\) in \(\mathbb{R}\), the function \(f^{\prime}(u(t,x))\) is non-negative for a set of times that has bounded measure. Indeed, we may rewrite (3.8) as
\[U_{t}+(D-f^{\prime}(u(t,x)))U=DK_{L}*U(t,\cdot)=DK_{L}^{\prime}*u(t,\cdot),\]
with the latter term going pointwise to \(0\) and \(f^{\prime}(u(t,x))\to f^{\prime}(1)<0\) as \(t\to+\infty\), due to (3.7). Therefore, the Gronwall Lemma gives, for \(t\) larger than any given \(T>0\), a bound of the form
\[|U(t,x)|\leqslant\sup_{t\geqslant T}|K_{L}^{\prime}*u(t,\cdot)|+C\,{\rm exp}\bigg{(}-tD+\int_{0}^{t}f^{\prime}(u(s,x))\,ds\bigg{)}, \tag{3.9}\]
for some large \(C>0\). Given any \(x\in\mathbb{R}\), estimate (3.7) shows that the time spent by \(u(s,x)\) in the zone \(\{f^{\prime}\geqslant 0\}\) is bounded independently of \(x\): this ensures the uniform boundedness for the exponential in (3.9). This argument, by the way, provides an alternative, and somewhat quicker, proof of a result by Chen [12] asserting that \(u(t,x)\) converges to a travelling wave.
When \(f\) is of the Fisher-KPP type, there are no such bounds as (3.7). The analogue of such bounds is given by Graham [18], but more work is needed to achieve the regularity proof; doing so is outside our scope here. Let us notice that regularity results have been proved for travelling fronts of equations of the type (3.6) that are inhomogeneous in \(t\) or \(x\). Let us for instance mention Coville, Davila, Martinez [13] or Shen, Shen [26] for particular cases of transition fronts for inhomogeneous Fisher-KPP non-linearities. All these results are different from Theorem 2.1 in spirit. This latter theorem presents indeed a new mechanism for the preservation of regularity.
The biological invasions model
### Steady states and invasion
This section is devoted to the proof of Proposition 2.2 concerning the biological invasions model (1.2). It contains two separate statements. The first one is a Liouville-type result asserting that the only non-negative, nontrivial, bounded, steady solution for (1.2) is the constant pair \((u\equiv\nu/\mu,v\equiv 1)\) (which is indeed a solution, as one can directly check). The second one describes the long-time behaviour for the Cauchy problem, locally in space. The proof is a straightforward adaptation of the one for the local problem given in [9], which is based on the comparison principle and a variant of the sliding method. We give it here for the sake of completeness.
Proof of Proposition 2.2.: We simultaneously show the two statements of the Proposition. For \(R>0\) sufficiently large, the principal eigenfunction \(\phi\) of \(-\Delta\) in the ball \(B_{R}\), with Dirichlet boundary condition, satisfies \(-\Delta\phi\leqslant f^{\prime}(0)\phi\), hence \(-\Delta(\delta\phi)\leqslant f(\delta\phi)\) for \(\delta>0\) smaller than some \(\delta_{0}>0\). We extend \(\phi\) by \(0\) outside \(B_{R}\), and we call \(\widetilde{\phi}(x,y):=\phi(x,y-R-1)\), so that its support does not intersect the line \(\{y=0\}\). We deduce that the pair \((0,\delta\widetilde{\phi})\) is a _generalised_ steady sub-solution to (1.2). On the other hand, the constant pair \(M(\nu/\mu,1)\) is a super-solution to (1.2) for any \(M\geqslant 1\). Let \((\underline{u},\underline{v})\) and \((\overline{u},\overline{v})\) be the solutions to (1.2) emerging respectively from the initial data \((0,\delta\widetilde{\phi})\) and \(M(\nu/\mu,1)\), with \(\delta\in(0,\delta_{0}]\) and \(M\geqslant 1\). Since system (1.2) fulfils the comparison principle - c.f. the beginning of Section 3 - one deduces that these solutions are respectively non-decreasing and non-increasing in \(t\), and satisfy \((\underline{u},\underline{v})\leqslant(\overline{u},\overline{v})\) for all \(t>0\) and \((x,y)\in\mathbb{R}^{2}_{+}\). Moreover, using the boundedness of derivatives asserted by Theorem 2.1, one infers that these solutions converge locally uniformly in space, as \(t\to+\infty\), to a steady solution \((\underline{u}_{\infty},\underline{v}_{\infty})\) and \((\overline{u}_{\infty},\overline{v}_{\infty})\) respectively, which satisfy
\[(0,\delta\widetilde{\phi})\leqslant(\underline{u}_{\infty},\underline{v}_{ \infty})\leqslant(\overline{u}_{\infty},\overline{v}_{\infty})\leqslant M \Big{(}\frac{\nu}{\mu},1\Big{)},\quad\forall(x,y)\in\mathbb{R}^{2}_{+}.\]
As a consequence, if we show that the steady states \((\underline{u}_{\infty},\underline{v}_{\infty})\) and \((\overline{u}_{\infty},\overline{v}_{\infty})\) coincide, we would have that any solution with an initial datum lying between \((0,\delta\widetilde{\phi})\) and \(M(\nu/\mu,1)\), for some \(\delta,M>0\), converges as \(t\to+\infty\) to such a steady state, locally uniformly in space. This would immediately yield the Liouville-type result, since any non-negative, bounded, steady solution \((u,v)\not\equiv(0,0)\) to (1.2) satisfies \(v>0\) in \(\mathbb{R}^{2}_{+}\) (because otherwise \(v\equiv 0\) by the elliptic strong maximum principle and thus \(u\equiv 0\) by the second equation in (1.2)) hence \((0,\delta\widetilde{\phi})\leqslant(u,v)\leqslant M(\nu/\mu,1)\) for \(\delta\ll 1\), \(M\gg 1\). Analogously, one would also derive the second statement of Proposition 2.2 thanks to the parabolic strong maximum principle applied to \(v\).
To conclude the proof we then need to show that \((\underline{u}_{\infty},\underline{v}_{\infty})\equiv(\overline{u}_{\infty },\overline{v}_{\infty})\). We know that \((\overline{u}_{\infty},\overline{v}_{\infty})\) is independent of \(x\). We now show that the same is true for \((\underline{u}_{\infty},\underline{v}_{\infty})\), using a variant of the sliding method. Namely, since \(\delta\widetilde{\phi}\leqslant\underline{v}_{\infty}\) in the ball \(B_{R}(0,R+1)\), and the strict inequality holds on its boundary, the elliptic strong maximum principle implies that the inequality is strict in the interior too. Hence we can find \(H>0\) such that \(\delta\widetilde{\phi}(x+h,y)\leqslant\underline{v}_{\infty}(x,y)\) for all \(h\in[-H,H]\) and \((x,y)\in\mathbb{R}^{2}_{+}\). Since the solution emerging from \((0,\delta\widetilde{\phi}(x+h,y))\) converges to \((\underline{u}_{\infty}(x+h),\underline{v}_{\infty}(x+h,y))\) as \(t\to+\infty\) (by the horizontal invariance of the system), it follows from the comparison
principle that \((\underline{u}_{\infty}(x+h),\underline{v}_{\infty}(x+h,y))\leqslant(\underline{u} _{\infty},\underline{v}_{\infty})\) in \(\mathbb{R}^{2}_{+}\). This being true for all \(h\in[-H,H]\), we conclude that \((\underline{u}_{\infty},\underline{v}_{\infty})\) is \(x\)-independent. Finally, since \((\underline{u}_{\infty},\underline{v}_{\infty})\) and \((\overline{u}_{\infty},\overline{v}_{\infty})\) do not depend on \(x\), the term \(\mathcal{J}u\) in (1.2) drops and one ends up in the local case. Namely, one directly applies [9, Proposition 3.1] (with \(\rho=0\)) and infers that \((\underline{u}_{\infty},\underline{v}_{\infty})\equiv(\overline{u}_{\infty}, \overline{v}_{\infty})\).
### A benchmark: Fisher-KPP front propagation with nonlocal diffusion
In order to investigate the propagation for the model (1.2), we start with considering the Fisher-KPP equation alone in the one-dimensional space, with nonlocal diffusion:
\[u_{t}-D\mathcal{J}u=f(u)\quad(t>0,\ x\in\mathbb{R}). \tag{4.1}\]
It is well known (see, for instance, Liang-Zhao [23], Thieme-Zhao [27]) that the speed of propagation for compactly supported initial data is inferred, in this case, from the study of the plane waves of (4.1), linearised around \(u=0\):
\[u_{t}-D\mathcal{J}u=f^{\prime}(0)u\quad(t>0,\ x\in\mathbb{R}) \tag{4.2}\]
Plane waves for (4.2) are sought for in the exponential form
\[u(t,x)=e^{-a(x-ct)},\]
with \(a,c>0\). Direct computation shows that
\[\mathcal{J}u =\int_{\mathbb{R}}K_{L}(x-x^{\prime})\big{(}e^{-(ax^{\prime}+ct)}- e^{-(ax+ct)}\big{)}dx^{\prime}\] \[=u\int_{\mathbb{R}}K_{L}(x-x^{\prime})\big{(}e^{-a(x^{\prime}-x)} -1\big{)}dx^{\prime}\] \[=\varphi_{L}(a)u,\]
where we have set
\[\begin{split}\varphi_{L}(a):&=\int_{\mathbb{R}}K_{L} (x)(e^{ax}-1)dx\\ &=2\int_{0}^{+\infty}K_{L}(x)(\cosh(ax)-1)dx.\end{split} \tag{4.3}\]
The function \(a\mapsto\varphi_{L}(a)\) is analytic, even, nonnegative and vanishes at \(0\). It further satisfies
\[\varphi_{L}^{\prime\prime}(a)=2\int_{0}^{+\infty}K_{L}(x)\cosh(ax)x^{2}dx, \tag{4.4}\]
hence it is strictly convex. Moreover, for all \(\delta\in(0,1)\) it holds that
\[\delta\Big{(}\min_{[1-\delta,1]}K\Big{)}\left(\frac{1}{2}e^{(1-\delta)aL}-1 \right)\leqslant\varphi_{L}(a)\leqslant e^{aL}-1,\]
which shows that \(\varphi_{L}(a)\) grows exponentially as \(a\to+\infty\).
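For concreteness, the properties of \(\varphi_{L}\) stated above can be checked numerically. The following minimal Python sketch is an illustration only: the rescaling \(K_{L}(x)=K(x/L)/L\) and the kernel \(K(x)=\frac{3}{4}(1-x^{2})\) on \([-1,1]\) (for which \(\langle x^{2}K\rangle=1/5\)) are assumptions made for the example, not quantities coming from the analysis; the sketch simply verifies the scaling \(\varphi_{L}(a)=\varphi_{1}(aL)\) and the value \(\varphi_{L}^{\prime\prime}(0)=L^{2}\langle x^{2}K\rangle\).

```python
import numpy as np

# Illustrative choices (assumptions): K(x) = 3/4 (1 - x^2) on [-1, 1] and K_L(x) = K(x/L)/L.
def phi_L(a, L, n=4000):
    # midpoint rule for phi_L(a) = 2 * int_0^L K_L(x) (cosh(ax) - 1) dx
    x = (np.arange(n) + 0.5) * (L / n)
    K_L = 0.75 * (1.0 - (x / L) ** 2) / L
    return 2.0 * np.sum(K_L * (np.cosh(a * x) - 1.0)) * (L / n)

L, a, h = 3.0, 0.7, 1e-3
print(phi_L(a, L), phi_L(a * L, 1.0))            # scaling: phi_L(a) = phi_1(aL)
second_diff = (phi_L(h, L) - 2.0 * phi_L(0.0, L) + phi_L(-h, L)) / h ** 2
print(second_diff, 0.2 * L ** 2)                 # phi_L''(0) = L^2 <x^2 K>  (<x^2 K> = 1/5 here)
```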
The function \(u(t,x)=e^{-a(x-ct)}\) solves (4.2) if and only if
\[c=\frac{D\varphi_{L}(a)+f^{\prime}(0)}{a}. \tag{4.5}\]
We call \(c_{L}(D)\) the minimal value of \(c\) in (4.5) as \(a\) varies on \((0,+\infty)\). By the strict convexity of \(\varphi_{L}\), such a minimal value is attained by a unique \(a=a(D)\). It turns out that the minimum \(c_{L}(D)\) is the asymptotic spreading speed for (4.1), as well as the minimal speed of travelling waves, see for instance [14]. In order to see the important effect of the road on the overall propagation in model (1.2), it is useful to provide an order of magnitude of \(c_{L}(D)\) when \(D\) is large, the other parameters being fixed. The minimiser \(a(D)\) in (4.5) satisfies
\[\frac{f^{\prime}(0)}{D}=a(D)\varphi_{L}^{\prime}(a(D))-\varphi_{L}(a(D))=\int_ {0}^{a(D)}x\varphi_{L}^{\prime\prime}(x)dx. \tag{4.6}\]
This indicates that \(a(D)\to 0\) as \(D\to+\infty\), and more precisely that
\[\frac{f^{\prime}(0)}{D}=\frac{1}{2}a^{2}(D)\big{(}\varphi_{L}^{\prime\prime}( 0)+o(1)\big{)}\quad\text{as}\ \ D\to+\infty. \tag{4.7}\]
Recalling that \(c_{L}\) is given by (4.5) with \(a=a(D)\) satisfying (4.6), one gets
\[c_{L}(D)=D\varphi_{L}^{\prime}(a(D))=Da(D)\big{(}\varphi_{L}^{\prime\prime}(0 )+o(1)\big{)}\quad\text{as}\ \ D\to+\infty,\]
whence, by (4.7),
\[c_{L}(D)=\sqrt{2Df^{\prime}(0)\big{(}\varphi_{L}^{\prime\prime}(0)+o(1)\big{)} }\quad\text{as}\ \ D\to+\infty.\]
Finally, computing
\[\varphi_{L}^{\prime\prime}(0)=\int_{\mathbb{R}}K_{L}(x)x^{2}dx=L^{2}\int_{\mathbb{R}}K(x)x^{2}dx=:L^{2}\langle x^{2}K\rangle,\]
we eventually find
\[c_{L}(D)=\sqrt{2Df^{\prime}(0)\big{(}L^{2}\langle x^{2}K\rangle+o(1)\big{)}} \quad\text{as}\ \ D\to+\infty. \tag{4.8}\]
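As a sanity check of this benchmark computation, the minimisation (4.5) and the large-\(D\) asymptotics (4.8) can be reproduced numerically. The sketch below is illustrative only: the kernel \(K(x)=\frac{3}{4}(1-x^{2})\) on \([-1,1]\), the sample value \(f'(0)=1\) and the grid and quadrature sizes are assumptions made for the example, not part of the model.

```python
import numpy as np

def phi_L(a, L, n=4000):
    # phi_L for the illustrative kernel K(x) = 3/4 (1 - x^2) on [-1, 1], K_L(x) = K(x/L)/L
    x = (np.arange(n) + 0.5) * (L / n)
    K_L = 0.75 * (1.0 - (x / L) ** 2) / L
    return 2.0 * np.sum(K_L * (np.cosh(a * x) - 1.0)) * (L / n)

def c_L(D, L, fprime0):
    # c_L(D) = min_{a>0} (D phi_L(a) + f'(0)) / a, cf. (4.5)
    a_grid = np.linspace(1e-3, 10.0, 4000)
    return min((D * phi_L(a, L) + fprime0) / a for a in a_grid)

fprime0, L = 1.0, 1.0
for D in (10.0, 100.0, 1000.0):
    # second column: sqrt(2 D f'(0) L^2 <x^2 K>), cf. (4.8), with <x^2 K> = 1/5 for this kernel
    print(D, c_L(D, L, fprime0), np.sqrt(2.0 * D * fprime0 * L ** 2 / 5.0))
```

For large \(D\) the two printed values become close, in agreement with (4.8).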
### Spreading speed
We now turn to Theorem 2.3 which asserts the existence of an asymptotic spreading speed \(c_{*}\) for model (1.2). This will be given by the least \(c\) so that the linearised system around \((0,0)\) has plane wave supersolutions (i.e. satisfying the inequalities "\(\geqslant\)" in the three equations) moving with speed \(c\) in the \(x\) direction. The analogous property is proved in [8] when the diffusion on the line is \(-D\partial_{xx}\).
To start with, we linearise the system (1.2) around \(v\equiv 0\) :
\[\left\{\begin{array}{rcl}\partial_{t}u-D\mathcal{J}u&=&\nu v-\mu u&(t>0,\ x \in\mathbb{R},\ y=0)\\ \partial_{t}v-d\Delta v&=&f^{\prime}(0)v&(t>0,\ (x,y)\in\mathbb{R}_{+}^{2})\\ -d\partial_{y}v&=&\mu u-\nu v&(t>0,\ x\in\mathbb{R},\ y=0).\end{array}\right. \tag{4.9}\]
The novelty with respect to [8] is in the nonlocal term \(\mathcal{J}u\) instead of \(\partial_{xx}u\). However, as seen in Section 4.2, exponential functions are also eigenfunctions for such a non-local operator. This is why we look for plane wave solutions for (4.9) as exponential functions, exactly as in the local case:
\[(u(t,x),v(t,x,y))=e^{-a(x-ct)}(1,\gamma e^{-by})\qquad a,\gamma,c>0,\ b\in \mathbb{R}. \tag{4.10}\]
Starting from these pairs, we will construct suitable super and sub-solutions to (1.2).
**Lemma 4.1**.: _Let \(\varphi_{L}\) be defined in (4.3), let \(c_{{}_{K}}:=2\sqrt{df^{\prime}(0)}\) and call_
\[D_{*}:=\frac{2f^{\prime}(0)}{\varphi_{L}\big{(}\frac{c_{{}_{K}}}{2d}\big{)}}. \tag{4.11}\]
_Then the following occur:_
1. _if_ \(D\leqslant D_{*}\) _then system (_4.9_) admits a supersolution in the form (_4.10_) if and only if_ \(c\geqslant c_{{}_{K}}\)_._
2. _if_ \(D>D_{*}\) _then there exists a quantity_ \(c_{*}(D,L)>c_{{}_{K}}\) _such that the system (_4.9_) admits a supersolution in the form (_4.10_) if and only if_ \(c\geqslant c_{*}(D,L)\)_. Moreover_ \(c_{*}(D,L)\) _satisfies_ \[\lim_{DL^{2}\to+\infty}\frac{c_{*}(D,L)}{\sqrt{DL^{2}}}>0.\] (4.12)
Proof.: The third equation of (4.9) rewrites in terms of the parameters in (4.10) as \(\gamma=\mu/(\nu+db)\). This fixes \(\gamma\), and entails that necessarily \(b>-\nu/d\) (observe that also when one deals with super-solutions, it is convenient to take \(\gamma\) so that equality holds in the third equation of (4.9), because increasing \(\gamma\) makes the inequality \(\geqslant\) in the first one more stringent). The plane wave problem for (4.9) then reduces to the following system in the unknowns \(a\) and \(b\):
\[\left\{\begin{array}{rcl}-D\varphi_{L}(a)+ca+\frac{d\mu b}{\nu+db}&=&0\\ -(a^{2}+b^{2})+\frac{ca}{d}&=&\frac{c_{{}_{K}}^{2}}{4d^{2}}\,,\end{array}\right. \tag{4.13}\]
where \(\varphi_{L}(a)\) is given by (4.3) and \(c_{{}_{K}}:=2\sqrt{df^{\prime}(0)}\). Solutions of (4.13) correspond in the \((a,b)\) plane, restricted to \(b>-\nu/d\), to the intersection between the curve \(\Gamma_{1}\), given by the first equation, and the circle \(\Gamma_{2}\) with centre \((\frac{c}{2d},0)\) and radius \(\rho(c):=\frac{\sqrt{c^{2}-c_{{}_{K}}^{2}}}{2d}\), which is nonempty if and only if \(c\geqslant c_{{}_{K}}\).
Let us examine \(\Gamma_{1}\), for given \(c>0\). Recall from Section 4.2 that \(\varphi_{L}\) is analytic, even, vanishes at \(0\) and it is uniformly strictly convex (i.e. \(\inf_{\mathbb{R}}\varphi_{L}^{\prime\prime}>0\)). We find that \(\Gamma_{1}\) is the graph
\[b=G_{1}^{c}(a):=\frac{\nu}{d}\left(\frac{\mu}{\mu+ca-D\varphi_{L}(a)}-1\right), \tag{4.14}\]
which is defined for \(a\in(a_{-}^{\infty}(c,D),a_{+}^{\infty}(c,D))\) (in order to fulfil \(b>-\nu/d\)) where \(a_{-}^{\infty}(c,D)<0<a_{+}^{\infty}(c,D)\) are the solutions to
\[D\varphi_{L}(a_{\pm}^{\infty}(c,D))=ca_{\pm}^{\infty}(c,D)+\mu. \tag{4.15}\]
The function \(G_{1}^{c}(a)\) is analytic, has the two vertical asymptotes \(a=a_{\pm}^{\infty}(c,D)\) and the two zeroes \(a=0\) and \(a=a_{0}(c,D)\), the latter being the unique positive solution of
\[D\varphi_{L}(a_{0}(c,D))=ca_{0}(c,D). \tag{4.16}\]
With respect to the parameter \(c\), the function \(G_{1}^{c}(a)\) is smooth and strictly decreasing for \(a>0\) (and \(a_{\pm}^{\infty}(c,D)\) are increasing).
Plane wave super-solutions to (4.9) correspond to the points \((a,b)\) lying in the intersection of the disk \(\mathcal{E}_{2}\) with boundary \(\Gamma_{2}\) and the region \(\mathcal{E}_{1}\) which is the one with boundary \(\Gamma_{1}\) and bounded \(a\) component. It is readily seen that \(\mathcal{E}_{2}\) is continuously strictly increasing with respect to \(c\), and we have seen that the same is true for \(\mathcal{E}_{1}\cap\{a>0\}\). For \(c<c_{{}_{K}}\) the intersection \(\mathcal{E}_{1}\cap\mathcal{E}_{2}\) is empty because \(\mathcal{E}_{2}\) is. For \(c=c_{{}_{K}}\) the disk \(\mathcal{E}_{2}\) reduces to its centre \((\frac{c_{{}_{K}}}{2d},0)\) and thus \(\mathcal{E}_{1}\cap\mathcal{E}_{2}\neq\emptyset\) if and only if
\[\frac{c_{{ K}}}{2d}\leqslant a_{0}(c_{{ K}},D). \tag{4.17}\]
Therefore, if condition (4.17) holds, plane wave supersolutions exist if and only if \(c\geqslant c_{{ K}}\).
Suppose instead that (4.17) does not hold, that is, the disk \(\mathcal{E}_{2}\) "appears" when \(c=c_{{}_{K}}\) outside the set \(\mathcal{E}_{1}\). Notice that the leftmost point of \(\mathcal{E}_{2}\), i.e. \(\Big{(}\frac{c-\sqrt{c^{2}-c_{{}_{K}}^{2}}}{2d},0\Big{)}\), approaches the origin as \(c\to+\infty\). Therefore, by the monotonicity properties of \(\mathcal{E}_{1}\) and \(\mathcal{E}_{2}\), as \(c\) increases starting from \(c_{{}_{K}}\), there has to be a first value of \(c\) at which \(\mathcal{E}_{1}\) and \(\mathcal{E}_{2}\) intersect, being tangent at some point \((a_{*},b_{*})\), which corresponds to a solution of (4.13). This first \(c\) is the sought-for \(c_{*}(D,L)\). The dichotomy is depicted in Figure 1.
Let us look at condition (4.17) in terms of \(D\). Recall from (4.16) that \(a_{0}(c,D)\) is the unique positive solution of \(\psi(a_{0}(c,D))=c/D\), with \(\psi(a):=\varphi_{L}(a)/a\). The strict convexity of \(\varphi_{L}\) implies that the function \(\psi\) is strictly increasing for \(a>0\) and satisfies \(\psi(0^{+})=0\) and \(\psi(+\infty)=+\infty\), hence \(a_{0}(c,D)=\psi^{-1}(c/D)\) and then (4.17) rewrites as
\[\psi^{-1}\Big{(}\frac{c_{{ K}}}{D}\Big{)}\geqslant\frac{c_{{ K}}}{2d}.\]
Rewriting \(\psi(a):=\varphi_{L}(a)/a\) and \(c_{{ K}}=2\sqrt{df^{\prime}(0)}\) we eventually find that (4.17) is equivalent to \(D\leqslant D_{*}\) with \(D_{*}\) given by (4.11).
In order to conclude the proof of the lemma it only remains to show (4.12). Firstly, we use that \(\varphi_{L}(a)=\varphi_{1}(aL)\) and that \(\varphi_{1}\) is strictly convex to derive from (4.15) the existence of a positive constant \(H\) such that
\[H(a_{\pm}^{\infty}(c,D))^{2}DL^{2}\leqslant ca_{\pm}^{\infty}(c,D)+\mu.\]
This yields
\[a_{+}^{\infty}(c_{*}(D,L),D)\to 0\quad\text{ as }\;DL^{2}\to+\infty. \tag{4.18}\]
Now, having in mind the conclusions (4.7)-(4.8) of the benchmark in Section 4.2, we look for \(a\), \(b\), \(c\) in (4.13) under the form
\[a=\frac{c_{{ K}}\alpha}{\sqrt{2dDL^{2}\langle x^{2}K\rangle}}, \qquad b=\frac{\beta}{2d},\qquad c=c_{{ K}}\sqrt{\frac{DL^{2}\langle x^{2}K\rangle}{2d}}\,w.\]
with \(\alpha,\beta\in\mathbb{R}\), \(w>0\), where \(\langle x^{2}K\rangle:=\int_{\mathbb{R}}K(x)x^{2}dx\). The condition \(b>-\nu/d\) reads \(\beta>-2\nu\). We know from (4.18) that any solution of (4.13) satisfies \(a\to 0\) as \(DL^{2}\to+\infty\), hence we can write \(\varphi_{L}(a)=\frac{1}{2}\varphi_{L}^{\prime\prime}(0)a^{2}+o(1)\). Recalling that \(\varphi_{L}^{\prime\prime}(0)=L^{2}\langle x^{2}K\rangle\), we end up with the following reduced system, as \(DL^{2}\to+\infty\):
\[\left\{\begin{array}{rcl}-f^{\prime}(0)\alpha^{2}+o(1)+2f^{ \prime}(0)w\alpha+\frac{\mu\beta}{2\nu+\beta}&=&0\\ 2w\alpha-\frac{2d}{DL^{2}\langle x^{2}K\rangle}\,\alpha^{2}- \frac{\beta^{2}}{c_{{}_{\!K}}^{2}}&=&1.\end{array}\right. \tag{4.19}\]
Let us neglect for a moment the term \(o(1)\) in the first line and the one in \(\alpha^{2}\) in the second line (whose coefficient tends to \(0\) as \(DL^{2}\to+\infty\)). We get
\[\left\{\begin{array}{rcl}\alpha^{2}-2w\alpha&=&\frac{\mu\beta}{ 2\nu f^{\prime}(0)+f^{\prime}(0)\beta}\\ \alpha&=&\frac{1+\beta^{2}/c_{{}_{\!K}}^{2}}{2w}.\end{array}\right. \tag{4.20}\]
The first equation describes a curve in the \((\beta,\alpha)\) plane with asymptotes \(\alpha=w\pm\sqrt{w^{2}+\mu/f^{\prime}(0)}\), while the second one is a parabola. One sees that the two curves do not intersect for \(w\) small, and do intersect for \(w\) large. There exists then a positive minimal value of \(w\), that we call \(w_{*}(d,\mu,\nu)\), such that the system has a solution. From this, one eventually deduces that the minimal value of \(w\) for which the complete system (4.19) admits a solution converges to \(w_{*}(d,\mu,\nu)\) as \(DL^{2}\to+\infty\). Reverting to the original parameters, we have shown that
\[\lim_{DL^{2}\to+\infty}\frac{c_{*}(D,L)}{\sqrt{DL^{2}}}=\sqrt{2f^{\prime}(0) \langle x^{2}K\rangle}\,w_{*}(d,\mu,\nu), \tag{4.21}\]
that is (4.12).
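The geometric criterion used in the proof (nonemptiness of the super-solution region \(\mathcal{E}_{1}\cap\mathcal{E}_{2}\)) lends itself to a direct numerical computation of \(c_{*}(D,L)\) and of the threshold \(D_{*}\). The sketch below is an illustration only: the kernel \(K(x)=\frac{3}{4}(1-x^{2})\) on \([-1,1]\), the sample parameters \(d=\mu=\nu=f'(0)=1\) and the grid-plus-bisection procedure are assumptions made for the example, not part of the proof.

```python
import numpy as np

def phi_L(a, L, n=2000):
    # illustrative kernel K(x) = 3/4 (1 - x^2) on [-1, 1], K_L(x) = K(x/L)/L
    x = (np.arange(n) + 0.5) * (L / n)
    K_L = 0.75 * (1.0 - (x / L) ** 2) / L
    return 2.0 * np.sum(K_L * (np.cosh(a * x) - 1.0)) * (L / n)

def c_star(D, L, d, mu, nu, fprime0, a_max=6.0, b_max=6.0, m=400):
    # minimal c for which the super-solution inequalities ">=" in (4.13) hold at some (a, b)
    cK = 2.0 * np.sqrt(d * fprime0)
    a = np.append(np.linspace(1e-3, a_max, m), cK / (2.0 * d))   # include the centre of E_2 at c = c_K
    b = np.append(np.linspace(-nu / d + 1e-3, b_max, m), 0.0)
    A, B = np.meshgrid(a, b)
    PHI = np.array([phi_L(ai, L) for ai in a])[np.newaxis, :]
    def exists(c):
        F1 = -D * PHI + c * A + d * mu * B / (nu + d * B)              # first equation, ">= 0"
        F2 = -(A ** 2 + B ** 2) + c * A / d - cK ** 2 / (4.0 * d * d)  # second equation, ">= 0"
        return np.any((F1 >= 0.0) & (F2 >= 0.0))
    if exists(cK):
        return cK                      # case D <= D_*
    lo, hi = cK, 2.0 * cK
    while not exists(hi):              # bracket c_*(D, L), then bisect (exists(c) is monotone in c)
        lo, hi = hi, 2.0 * hi
    for _ in range(50):
        mid = 0.5 * (lo + hi)
        lo, hi = (lo, mid) if exists(mid) else (mid, hi)
    return hi

d = mu = nu = fprime0 = L = 1.0
cK = 2.0 * np.sqrt(d * fprime0)
D_star = 2.0 * fprime0 / phi_L(cK / (2.0 * d), L)        # formula (4.11)
print("D_* =", D_star)
for D in (0.5 * D_star, 2.0 * D_star, 10.0 * D_star):
    print(D, c_star(D, L, d, mu, nu, fprime0))
```

The bisection on \(c\) is legitimate because both inequalities become easier to satisfy as \(c\) increases; the printed values exhibit the dichotomy of Lemma 4.1.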
Next, for \(D>D_{*}\), we derive the existence of some generalised subsolutions, with bounded support, that move with speed slightly smaller than \(c_{*}(D,L)\), where \(D_{*}\) and \(c_{*}(D,L)\) are given in Lemma 4.1.
Figure 1: The minimal \(c\) for the existence of plane wave super-solutions: (a) \(c=c_{{}_{\!K}}\), (b) \(c=c_{*}(D,L)\).
**Lemma 4.2**.: _For \(D>D_{*}\), there is a sequence \(c\nearrow c_{*}(D,L)\) with associated pairs of continuous, nonnegative, not identically equal to \(0\) functions \(u_{c},v_{c}\), compactly supported in \(\mathbb{R}\) and \(\mathbb{R}\times[0,+\infty)\) respectively, such that_
\[k\big{(}u_{c}(x-ct),v_{c}(x-ct,y)\big{)}\]
_is a generalised subsolution to (1.2) for \(k>0\) small enough._
Proof.: The first step is to find a sequence \(c\nearrow c_{*}(D,L)\) such that, for any of such \(c\)'s, the system
\[\left\{\begin{array}{rcll}\partial_{t}u-D\mathcal{J}u&=&\nu v-\mu u&(t>0,\ x \in\mathbb{R},\ y=0)\\ \partial_{t}v-d\Delta v&=&(f^{\prime}(0)-\delta)v&(t>0,\ x\in\mathbb{R},\ 0<y<Y) \\ -d\partial_{y}v&=&\mu u-\nu v&(t>0,\ x\in\mathbb{R},\ y=0)\\ v&=&0&(t>0,\ x\in\mathbb{R},\ y=Y)\end{array}\right. \tag{4.22}\]
with \(\delta>0\) sufficiently small and \(Y>0\) sufficiently large, admits a sign-changing solution of the form
\[\big{(}\widetilde{u}_{c}(x-ct),\widetilde{v}_{c}(x-ct,y)\big{)}.\]
In addition, we will have that the sets where \(\widetilde{u}_{c}\) and \(\widetilde{v}_{c}\) are positive have bounded connected components and satisfy
\[\{x\in\mathbb{R}\ :\ \widetilde{u}_{c}(x)>0\}=(2k\pi\omega_{c}-\pi\omega_{ c}/2,2k\pi\omega_{c}+\pi\omega_{c}/2),\quad k\in\mathbb{Z}, \tag{4.23}\] \[\{x\in\mathbb{R}\ :\ \widetilde{v}_{c}(x,0)>0\}=(2k\pi\omega_{c}+ \vartheta_{c}-\pi\omega_{c}/2,2k\pi\omega_{c}+\vartheta_{c}+\pi\omega_{c}/2),\quad k\in\mathbb{Z}, \tag{4.24}\]
where \(|\vartheta_{c}|\leqslant\pi\omega_{c}\) and \(\omega_{c}\to+\infty\) as \(c\nearrow c_{*}(D,L)\). We postpone this first step to the Appendix.
Having the functions \(\widetilde{u}_{c},\widetilde{v}_{c}\) at hand, one needs to truncate their support. Call \(U_{c}:=(-\pi\omega_{c}/2,\pi\omega_{c}/2)\) and \(V_{c}\) the connected component of \(\{\widetilde{v}_{c}>0\}\) such that \(V_{c}\cap(\mathbb{R}\times\{0\})=(\vartheta_{c}-\pi\omega_{c}/2,\vartheta_{c }+\pi\omega_{c}/2)\), then define
\[u_{c}:=\begin{cases}\widetilde{u}_{c}&\text{in }U_{c}\\ 0&\text{outside}\end{cases},\qquad v_{c}:=\begin{cases}\widetilde{v}_{c}& \text{in }V_{c}\\ 0&\text{outside}\end{cases}.\]
Since \(u_{c}\geqslant\widetilde{u}_{c}\) in \(V_{c}\cap(\mathbb{R}\times\{0\})\), one has that \(v_{c}(x-ct,y)\) is a generalised subsolution of the linear parabolic problem with Robin boundary condition given by the second and third equations in (4.22), with \(u=u_{c}\). Instead, the first equation in (4.22) has to be handled more carefully due to the nonlocal term, since \(\mathcal{J}u_{c}\neq\mathcal{J}\widetilde{u}_{c}\) even in the region \(U_{c}\) where \(u_{c}\equiv\widetilde{u}_{c}\). However, for \(x\in U_{c}\), there holds that
\[\mathcal{J}u_{c}(x) =\int_{(x-L,x+L)\cap U_{c}}K_{L}(x-x^{\prime})\widetilde{u}_{c}( x^{\prime})dx^{\prime}-\widetilde{u}_{c}(x)\] \[=\mathcal{J}\widetilde{u}_{c}(x)-\int_{(x-L,x+L)\setminus U_{c} }K_{L}(x-x^{\prime})\widetilde{u}_{c}(x^{\prime})dx^{\prime},\]
and we have \((x-L,x+L)\subset(-\pi\omega_{c}/2-L,\pi\omega_{c}/2+L)\). If \(c\) is close enough to \(c_{*}(D,L)\) so that \(\pi\omega_{c}\geqslant L\), one has that the latter set is contained in \([-3\pi\omega_{c}/2,3\pi\omega_{c}/2]\), and thus, since \(\widetilde{u}_{c}\leqslant 0\) in \([-3\pi\omega_{c}/2,3\pi\omega_{c}/2]\setminus U_{c}\), we deduce that \(\mathcal{J}u_{c}\geqslant\mathcal{J}\widetilde{u}_{c}\) in \(U_{c}\). We also clearly have \(\mathcal{J}u_{c}\geqslant 0\) outside \(U_{c}\). Summing up, we have that for \(c\) sufficiently close to \(c_{*}(D,L)\), \((u_{c}(x-ct),v_{c}(x-ct,y))\) is a generalised subsolution to (4.22), hence to (1.2) up to multiplication by a small \(k>0\).
Proof of Theorem 2.3.: Let us show that the two limits in (2.7) hold with \(c_{*}:=c_{{}_{K}}\) if \(D\leqslant D_{*}\) and \(c_{*}:=c_{*}(D,L)\) if \(D>D_{*}\), where \(D_{*}\) and \(c_{*}(D,L)\) are given by Lemma 4.1. By symmetry in the \(x\) variable, it is sufficient to derive them for \(x\geqslant 0\). The second limit immediately follows by comparison with the plane waves provided by Lemma 4.1 (which are super-solutions to the nonlinear problem (1.2) thanks to the KPP hypothesis). For the first one, we make use of the sub-solutions \(k(u_{c}(x-ct),v_{c}(x-ct,y))\) provided by Lemma 4.2 in the case \(D>D_{*}\), for \(k\) sufficiently small and \(c<c_{*}\) which can be taken arbitrarily close to \(c_{*}\). In the case \(D\leqslant D_{*}\), since \(c_{*}=c_{{}_{K}}\), the existence of a compactly supported sub-solution \(v_{c}\) moving with a speed \(c<c_{*}\) is standard for the Fisher-KPP equation \(\partial_{t}v-d\Delta v=f(v)\), and thus one can simply neglect the equations on the line and take \(u_{c}\equiv 0\). We then decrease \(k\) if need be in order to have in addition that \(k(u_{c}(x),v_{c}(x,y))\leqslant(u(1,x),v(1,x,y))\) for all \((x,y)\in\mathbb{R}^{2}_{+}\). We deduce, by comparison,
\[\big{(}u(1+\tau,c\tau),v(1+\tau,c\tau,y)\big{)}\geqslant\big{(}u_{c}(0),v_{c}( 0,y)\big{)},\quad\forall\tau\geqslant 0,\;y\geqslant 0.\]
Then, applying Proposition 2.2 to the solution with initial datum \((u_{c},v_{c})\) we get, always by comparison,
\[\liminf_{t\to+\infty}\Big{(}\inf_{\tau\geqslant 0}\big{(}u(1+\tau+t,c\tau),v(1+ \tau+t,c\tau,y)\big{)}\Big{)}\geqslant\Big{(}\frac{\nu}{\mu},1\Big{)},\]
locally uniformly with respect to \(y\geqslant 0\), from which, calling \(s:=1+\tau+t\) we derive
\[\liminf_{t\to+\infty}\Big{(}\inf_{s\geqslant t+1}\big{(}u(s,c(s-t-1)),v(s,c(s- t-1),y)\big{)}\Big{)}\geqslant\Big{(}\frac{\nu}{\mu},1\Big{)}.\]
This yields
\[\liminf_{t\to+\infty}\Big{(}\inf_{x\in[0,c^{\prime}t]}\big{(}u(t,x),v(t,x,y) \big{)}\Big{)}\geqslant\Big{(}\frac{\nu}{\mu},1\Big{)},\]
for any \(0<c^{\prime}<c\). We recall, on the other hand, that \(\limsup_{t\to+\infty}(u,v)\leqslant(\nu/\mu,1)\) uniformly in space, as seen in the proof of Proposition 2.2. The first limit in (2.7) then follows from the fact that \(c^{\prime}\) and \(c\) can be taken arbitrarily close to \(c_{*}\).
Let us dwell a little bit on the quantity \(w_{*}(d,\mu,\nu)\) appearing in (4.21). Computing the curves at \(\beta=0\) one infers that \(w_{*}(d,\mu,\nu)\leqslant 1/2\). Observe that the parameter \(d\) affects \(w_{*}(d,\mu,\nu)\) only through the term \(c_{K}\) in the second equation, which modulates the opening of the parabola. One deduces that \(w_{*}(d,\mu,\nu)\) is decreasing with respect to \(d\) and tends to \(1/2\) as \(d\to 0^{+}\), and to the unique solution \(w\) of the equation \(w+\sqrt{w^{2}+\mu/f^{\prime}(0)}=\frac{1}{2w}\) as \(d\to+\infty\). Notice, however, that the limit as \(d\to+\infty\) has no real meaning for the model (1.2), because the reduction to (4.20) is not justified since the term neglected in the second equation of (4.19) becomes large as \(d\to+\infty\) (and we indeed know that \(c_{*}(D,L)\geqslant c_{{}_{K}}\to+\infty\)).
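The quantity \(w_{*}(d,\mu,\nu)\) can also be approximated numerically from the reduced system (4.20), which makes the monotonicity in \(d\) just discussed visible on examples. In the sketch below, the restriction to \(\beta\geqslant 0\), the sample values \(\mu=\nu=f'(0)=1\) and the grid-plus-bisection procedure are assumptions made for the illustration.

```python
import numpy as np

def w_star(d, mu, nu, fprime0, n=20000, beta_max=50.0):
    # minimal w for which the curve and the parabola of (4.20) intersect (restricting to beta >= 0)
    cK2 = 4.0 * d * fprime0                                   # c_K^2
    beta = np.linspace(0.0, beta_max, n)
    def solvable(w):
        alpha = (1.0 + beta ** 2 / cK2) / (2.0 * w)           # second equation of (4.20)
        g = alpha ** 2 - 2.0 * w * alpha - mu * beta / (fprime0 * (2.0 * nu + beta))
        return np.any(g <= 0.0)                               # first equation attainable
    lo, hi = 1e-6, 0.5                                        # w_* <= 1/2
    for _ in range(60):                                       # bisection in w
        mid = 0.5 * (lo + hi)
        lo, hi = (lo, mid) if solvable(mid) else (mid, hi)
    return hi

for d in (0.25, 1.0, 4.0, 16.0):
    print(d, w_star(d, mu=1.0, nu=1.0, fprime0=1.0))          # decreasing in d
```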
## 5 The \(SIRT\) model for epidemics along a line with nonlocal diffusion
### A benchmark: \(SIR\)-type model with nonlocal diffusion
Consider a standard \(SIR\) model, where the population lives on the real line \(\mathbb{R}\), with an initially homogeneous population of susceptibles, and where the infected may move
according to nonlocal diffusion and are initially confined in a bounded region. This yields the following system for the density of susceptibles \(S(t,x)\) and of infected \(I(t,x):\)
\[\left\{\begin{array}{rcl}\partial_{t}S&=&-\beta SI\qquad\quad(t>0,x\in\mathbb{R })\\ \partial_{t}I-D\mathcal{J}I&=&\beta SI-\alpha I\quad(t>0,x\in\mathbb{R})\end{array}\right.\]
completed with the initial condition \((S,I)(0,x)=(S_{0},I_{0}(x))\), where \(S_{0}\) is a positive constant and \(I_{0}\) is non-negative and compactly supported. The integrated density
\[u(t,x):=\int_{0}^{t}I(s,x)ds\]
solves the following nonlocal equation with non-homogeneous right hand-side:
\[u_{t}-D\mathcal{J}u=f(u)+I_{0}(x), \tag{5.1}\]
with \(f\) given by (2.4). This is the same equation as (4.1) with the addition of the compact perturbation \(I_{0}\). If \(R_{0}:=S_{0}\beta/\alpha\) is larger than \(1\) then \(f\) is of the Fisher-KPP-type. It turns out that \(I_{0}\) does not affect the large time/space dynamic of the solution, and in fact the asymptotic spreading speeds for (4.1) and (5.1) coincide. Namely, the spreading speed for (5.1) is the minimal \(c>0\) such that the transcendental equation (4.5), that now reads \(-D\varphi_{L}(a)+ca=\alpha(R_{0}-1)\), admits a solution \(a>0\).
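For completeness, this spreading speed is obtained by the same minimisation as in Section 4.2, with \(f'(0)\) replaced by \(\alpha(R_{0}-1)\). A minimal numerical sketch, under the same assumed kernel \(K(x)=\frac{3}{4}(1-x^{2})\) on \([-1,1]\) and sample parameter values chosen only for illustration, is the following.

```python
import numpy as np

def phi_L(a, L, n=4000):
    # illustrative kernel K(x) = 3/4 (1 - x^2) on [-1, 1], K_L(x) = K(x/L)/L
    x = (np.arange(n) + 0.5) * (L / n)
    K_L = 0.75 * (1.0 - (x / L) ** 2) / L
    return 2.0 * np.sum(K_L * (np.cosh(a * x) - 1.0)) * (L / n)

def spreading_speed(D, L, alpha, R0):
    # least c > 0 such that -D phi_L(a) + c a = alpha (R0 - 1) has a root a > 0
    a_grid = np.linspace(1e-3, 10.0, 4000)
    return min((D * phi_L(a, L) + alpha * (R0 - 1.0)) / a for a in a_grid)

print(spreading_speed(D=10.0, L=1.0, alpha=1.0, R0=1.5))
```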
### The influence of \(R_{0}\) and other parameters
We now investigate the influence of the parameters of the system (1.3) on the speed \(c_{{}_{\text{\tiny{SIR}}}}^{{}^{T}}\), especially in the regime where \(R_{0}\) is only slightly larger than \(1\). To this end we pass to rescaled variables and unknowns: we denote by \(\mathcal{T}(\tau,\xi)\) and \(\mathcal{I}(\tau,\xi,\zeta)\) the rescaled densities of infected on the road and in the field, and by \(\mathcal{D}\), \(\bar{\mu}\), \(\bar{\nu}\), \(\Lambda\) the corresponding rescaled parameters.
The integrated quantities
\[\mathcal{U}(\tau,\xi)=\int_{0}^{\tau}\mathcal{T}(\sigma,\xi)\,d\sigma,\qquad\mathcal{V}(\tau,\xi,\zeta)=\int_{0}^{\tau}\mathcal{I}(\sigma,\xi,\zeta)\,d\sigma\]
will then solve
\[\left\{\begin{array}{rl}\partial_{\tau}\mathcal{U}-\mathcal{D}\widetilde{ \mathcal{J}}\mathcal{U}=&\bar{\nu}\mathcal{V}(\tau,\xi,0)-\bar{\mu}\mathcal{U }+\mathcal{T}_{0}(\xi)\quad(\tau>0,\ \xi\in\mathbb{R})\\ \partial_{\tau}\mathcal{V}-\Delta\mathcal{V}=&\widetilde{f}(\mathcal{V})+ \mathcal{I}_{0}(\xi,\zeta)\quad(\tau>0,\ \xi\in\mathbb{R},\ \zeta>0)\\ -\partial_{\zeta}\mathcal{V}(\tau,\xi,0)=&\bar{\mu}\mathcal{U}(\tau,\xi)- \bar{\nu}\mathcal{V}(\tau,\xi,0)\quad(\tau>0,\ \xi\in\mathbb{R}).\end{array}\right. \tag{5.2}\]
The integral operator \(\widetilde{\mathcal{J}}\) is given by
\[\widetilde{\mathcal{J}}\mathcal{U}(\xi)=\int_{\mathbb{R}}K(\xi^{\prime})\bigg{(} \mathcal{U}(\xi)-\mathcal{U}(\xi+\Lambda\xi^{\prime})\bigg{)}d\xi^{\prime}\]
The function \(\widetilde{f}\) is given by \(\widetilde{f}(\mathcal{V})=1-e^{-R_{0}\mathcal{V}}-\mathcal{V}\), so that, \(\widetilde{f}^{\prime}(0)=R_{0}-1=\dfrac{w_{{}_{\mathit{SIR}}}^{2}}{4}\). The initial quantities \(\mathcal{I}_{0}\) and \(\mathcal{T}_{0}\) have obvious meanings. The minimal reduced speed for (5.2), that we call \(w_{{}_{\mathit{SIR}}}^{{}^{T}}\), is the least \(w\) so that the system in \(a,b\)
\[\left\{\begin{array}{rl}-\mathcal{D}\Phi_{K}(a)+wa+\dfrac{\bar{\mu}b}{\bar{ \nu}+b}=&0\\ &-(a^{2}+b^{2})+wa=\dfrac{w_{{}_{\mathit{SIR}}}^{2}}{4}\end{array}\right. \tag{5.3}\]
has solutions, with
\[\Phi_{K}(a)=2\int_{0}^{+\infty}K(\xi^{\prime})(\cosh(a\xi^{\prime})-1)d\xi^{ \prime}.\]
We study the case where \(R_{0}\) is only slightly larger than \(1\). So, \(w_{{}_{\mathit{SIR}}}\) is now a small parameter. From the second equation of (5.3) we suspect that \(a\), \(b\) and \(w\) will scale like \(w_{{}_{\mathit{SIR}}}\) and that \(a\) will be much smaller than \(b\). So, we set:
\[a=w_{{}_{\mathit{SIR}}}\bar{a},\ \ b=w_{{}_{\mathit{SIR}}}\bar{b},\ \ w=w_{{}_{\mathit{SIR}}}\bar{w},\]
and we introduce the new parameters
\[\lambda=\dfrac{\bar{\mu}}{\bar{\nu}w_{{}_{\mathit{SIR}}}},\ \ \rho=\dfrac{w_{{}_{\mathit{SIR}}}}{\bar{\nu}}.\]
This leads to the final reduced system (a rigorous justification would be easy, _via_ the Implicit Function Theorem):
\[\left\{\begin{array}{rl}-\dfrac{\mathcal{D}}{R_{0}-1}\Phi_{K}(\Lambda\sqrt{R_{0}-1}\,\bar{a})+\bar{w}\bar{a}+\lambda\bar{b}&=0\\ -\bar{b}^{2}+\bar{w}\bar{a}&=\dfrac{1}{4}.\end{array}\right.\]
The second equation is the standard parabola \(\Gamma_{2,\bar{w}}\)
\[\bar{a}=\dfrac{1}{\bar{w}}\bigg{(}\dfrac{1}{4}+\bar{b}^{2}\bigg{)}=:h(\bar{w},\bar{b}).\]
A final observation will allow us an easy treatment of the first equation. We notice that the second and third terms are expected to be of finite size, so that the first one should also be of finite size. Given that the ratio \(\dfrac{\mathcal{D}}{R_{0}-1}\) is large, \(\Phi_{K}\) should then be evaluated near \(0\), that is, \(\Lambda\sqrt{R_{0}-1}\,\bar{a}\) should be small. And so, we may approximate the first term as
\[\dfrac{\mathcal{D}}{R_{0}-1}\Phi_{K}(\Lambda\sqrt{R_{0}-1}\,\bar{a})\sim M_{1}\mathcal{D}\Lambda^{2}\bar{a}^{2}=K_{0}\dfrac{DL^{2}}{d}\bar{a}^{2},\qquad M_{1}=\int_{0}^{+\infty}\xi^{2}K(\xi)d\xi.\]
And so, with the same analysis as in [10], we obtain the existence of a positive bounded function \(\omega_{\mbox{\tiny{\it SIR}}}^{{}^{T}}(\lambda)\) such that
\[\lim_{DL^{2}\to+\infty,\,w_{\mbox{\tiny{\it SIR}}}\to 0}\sqrt{\dfrac{d}{M_{1}DL^{2}}}\,\dfrac{w_{\mbox{\tiny{\it SIR}}}^{{}^{T}}}{w_{\mbox{\tiny{\it SIR}}}}=\omega_{\mbox{\tiny{\it SIR}}}^{{}^{T}}(\lambda),\quad\lambda=\dfrac{\bar{\mu}}{\bar{\nu}w_{\mbox{\tiny{\it SIR}}}}. \tag{5.4}\]
### Proof of the results on the \(SIRT\) model
We start with the proofs of the Liouville-type result and of the local stability property contained in Theorem 2.4. We can follow exactly the same arguments used in the proofs of [10, Theorem 3.4, 3.5], thanks to the fact that system (2.3) enjoys the comparison principle and the uniform regularity of the solutions, due to Theorem 2.1. We sketch these arguments below for completeness.
Proof of Theorem 2.4 - Liouville-type result and stability.: Let \((u,v)\) be the solution to (2.3)-(2.6). Since its initial datum \((0,0)\) is a sub-solution to (2.3)-(2.5), the comparison principle implies that \((u,v)\) is non-decreasing in \(t\). Observe that one can choose \(M>v_{*}\) large enough so that the constant pair \(M(\nu/\mu,1)\) is a super-solution to (2.3). It follows that \((u(t,x),v(t,x,y))\) converges as \(t\to+\infty\), locally uniformly in space, to a bounded pair \((u_{\infty}^{r}(x),v_{\infty}^{r}(x,y))\), and moreover, by the uniform regularity of solutions, that \((u_{\infty}^{r}(x),v_{\infty}^{r}(x,y))\) is a (stationary) non-negative solution to (2.3). It remains to prove the Liouville-type result. We distinguish the two cases according to \(R_{0}\).
_Case \(R_{0}>1\)_.
In this case \(f\) fulfils all conditions in (1.1) with "1" replaced by \(v_{*}>0\). Hence, since \((u(t,x),v(t,x,y))\) is a super-solution to (1.2), and \(v(t,x,y)>0\) for \(t>0\), \(x\in\mathbb{R}\), \(y>0\) due to the parabolic strong maximum principle (because \(I_{0}\not\equiv 0\)), we infer from Proposition 2.2 that
\[(u_{\infty}^{r}(x),v_{\infty}^{r}(x,y))\geqslant\left(\dfrac{\nu}{\mu},1 \right)v_{*},\]
the right-hand side being the unique positive solution to (1.2). It is then straightforward to see that \(v_{\infty}^{r}(x,y)\to v_{*}\) as \(y\to+\infty\), uniformly in \(x\in\mathbb{R}\). Conversely, taking a diverging sequence \((x_{n})_{n\in\mathbb{N}}\) in \(\mathbb{R}\), the limit of the translations \((u_{\infty}^{r}(x+x_{n}),v_{\infty}^{r}(x+x_{n},y))\) (which exists locally in \((x,y)\) by the uniform regularity of \((u_{\infty}^{r},v_{\infty}^{r})\)) is a positive, stationary solution to (1.2), hence by Proposition 2.2 it coincides with \((\nu/\mu,1)v_{*}\).
It remains to prove the Liouville result. Let \((u_{1},v_{1})\) and \((u_{2},v_{2})\) be two pairs of positive, bounded, stationary solutions to (2.3). Assume by way of contradiction that
\[k:=\max\bigg{(}\sup_{\mathbb{R}}\dfrac{u_{1}}{u_{2}}\,,\,\sup_{\mathbb{R} \times\mathbb{R}^{+}}\dfrac{v_{1}}{v_{2}}\bigg{)}>1.\]
Because of the limits we have just proved, one of the following situations necessarily occurs:
\[\max_{\mathbb{R}}\frac{u_{1}}{u_{2}}=k,\quad\text{or}\quad\max_{\mathbb{R}\times \mathbb{R}^{+}}\frac{v_{1}}{v_{2}}=k.\]
Suppose we are in the latter case. Then, it is readily seen using the concavity of \(f\) that the maximum cannot be achieved in the interior of \(\mathbb{R}\times\mathbb{R}^{+}\). Hence it is achieved at some point \((\bar{x},0)\), and Hopf's lemma yields
\[\partial_{y}(kv_{2}-v_{1})(\bar{x},0)>0.\]
Using the second equation in (2.3), together with \(v_{1}(\bar{x},0)=kv_{2}(\bar{x},0)\), we find that
\[ku_{2}(\bar{x})=-\frac{kd}{\mu}\partial_{y}v_{2}(\bar{x},0)+\frac{k\nu}{\mu}v_ {2}(\bar{x},0)<-\frac{d}{\mu}\partial_{y}v_{1}(\bar{x},0)+\frac{\nu}{\mu}v_{1} (\bar{x},0)=u_{1}(\bar{x}),\]
which contradicts the definition of \(k\). Consider the remaining case:
\[\max_{\mathbb{R}}\frac{u_{1}}{u_{2}}=k>\frac{v_{1}}{v_{2}}.\]
Computing the difference of the equations satisfied by \(ku_{2}\) and \(u_{1}\) at a point \(\bar{x}\) where this maximum is achieved, we derive from (3.1) the contradiction
\[0\leqslant D\mathcal{J}(ku_{2}-u_{1})(\bar{x})=\nu(v_{1}-kv_{2})(\bar{x},0)- \mu(u_{1}-ku_{2})(\bar{x})=\nu(v_{1}-kv_{2})(\bar{x},0)<0.\]
We have thereby shown that \(k\leqslant 1\), that is, \((u_{1},v_{1})\leqslant(u_{2},v_{2})\). Exchanging the roles of the solutions yields the uniqueness result.
_Case \(R_{0}\leqslant 1\)_.
We start with the Liouville-type result for non-negative solutions, with possibly \(I_{0}\equiv 0\). We need to show that, for any two pairs \((u_{1},v_{1})\), \((u_{2},v_{2})\) of non-negative, bounded, stationary solutions to (2.3), it holds that \((u_{1},v_{1})\leqslant(u_{2},v_{2})\). Assume by contradiction that, on the contrary,
\[h:=\max\left(\frac{\mu}{\nu}\sup_{\mathbb{R}}(u_{1}-u_{2})\,,\,\sup_{\mathbb{ R}\times\mathbb{R}^{+}}(v_{1}-v_{2})\right)>0.\]
Suppose first that \(\sup_{\mathbb{R}\times\mathbb{R}^{+}}(v_{1}-v_{2})=h\), and let \(((x_{n},y_{n}))_{n\in\mathbb{N}}\) be a maximising sequence. If \((y_{n})_{n\in\mathbb{N}}\) is bounded from below away from \(0\), then the functions \(v_{j}(x+x_{n},y+y_{n})\) converge locally uniformly (up to subsequences) towards two solutions \(\widetilde{v}_{j}\) of the equation \(-d\Delta\widetilde{v}_{j}=f(\widetilde{v}_{j})\) in a neighbourhood of the origin. Moreover, \((\widetilde{v}_{1}-\widetilde{v}_{2})(0,0)=\max(\widetilde{v}_{1}-\widetilde {v}_{2})=h\), and thus
\[0\leqslant-d\Delta(\widetilde{v}_{1}-\widetilde{v}_{2})(0,0)=f(\widetilde{v}_{ 1}(0))-f(\widetilde{v}_{2}(0))=f(\widetilde{v}_{2}(0)+h)-f(\widetilde{v}_{2}( 0)).\]
This is impossible because \(f\) is decreasing on \(\mathbb{R}^{+}\). If, instead, \(y_{n}\to 0\) (up to subsequences), then the pairs \((u_{j}(x+x_{n}),v_{j}(x+x_{n},y))\) converge locally uniformly (up to subsequences) towards two solutions \((\widetilde{u}_{j},\widetilde{v}_{j})\) of the same system, which is of the form (2.3) with \(I_{0}\) either translated by some vector \((\xi,0)\), or replaced by \(0\). Moreover the difference \(\widetilde{v}_{1}-\widetilde{v}_{2}\) attains its maximal value \(h>0\) at \((0,0)\) and satisfies
\(-d\Delta(\widetilde{v}_{1}-\widetilde{v}_{2})=f(\widetilde{v}_{1})-f(\widetilde{v}_{2})\). As before, such a maximum cannot also be attained at an interior point, hence by Hopf's lemma
\[0>d\partial_{y}(\widetilde{v}_{1}-\widetilde{v}_{2})(0,0)=\nu(\widetilde{v}_{1} -\widetilde{v}_{2})(0,0)-\mu(\widetilde{u}_{1}-\widetilde{u}_{2})(0)=h\nu-\mu( \widetilde{u}_{1}-\widetilde{u}_{2})(0).\]
Since \(\sup(\widetilde{u}_{1}-\widetilde{u}_{2})\leqslant\sup(u_{1}-u_{2})\), we get a contradiction with the definition of \(h\).
Suppose now that
\[h=\frac{\mu}{\nu}\sup_{\mathbb{R}}(u_{1}-u_{2})>\sup_{\mathbb{R}\times \mathbb{R}^{+}}(v_{1}-v_{2}). \tag{5.5}\]
Consider a maximising sequence \((x_{n})_{n\in\mathbb{N}}\) for \(u_{1}-u_{2}\), and the limits (up to subsequences) \((\widetilde{u}_{j},\widetilde{v}_{j})\) of the translations \((u_{j}(x+x_{n}),v_{j}(x+x_{n},y))\), which, once again, satisfy a system analogous to (2.3). The difference \(\widetilde{u}_{1}-\widetilde{u}_{2}\) attains its maximum \(\frac{\nu}{\mu}h\) at the origin, whence by (3.1)
\[0\leqslant-D\mathcal{J}(\widetilde{u}_{1}-\widetilde{u}_{2})(0)=\nu( \widetilde{v}_{1}-\widetilde{v}_{2})(0,0)-\mu(\widetilde{u}_{1}-\widetilde{u} _{2})(0)=\nu(\widetilde{v}_{1}-\widetilde{v}_{2})(0,0)-\nu h.\]
This contradicts (5.5). The proof of the Liouville result is concluded.
Let us pass to the limits at infinity. The limit \(v_{\infty}^{r}(x,y)\to 0\) as \(y\to+\infty\), uniformly with respect to \(x\), readily follows from the negativity of \(f\) on \((0,+\infty)\). Consider now a diverging sequence \((x_{n})_{n\in\mathbb{N}}\) in \(\mathbb{R}\). The sequence of translations \((u_{\infty}^{r}(x+x_{n}),v_{\infty}^{r}(x+x_{n},y))\) converges locally uniformly (up to subsequences) towards a bounded, stationary solution \((\widetilde{u},\widetilde{v})\) to (2.3) with \(I_{0}\equiv 0\). We apply the Liouville-type result we have just proved and infer that \((\widetilde{u},\widetilde{v})\equiv(0,0)\). This concludes the proof of the theorem.
We conclude the proof of Theorem 2.4 using the plane wave solutions of Section 4.3.
Proof of Theorem 2.4 - exponential decay.: Let us start with the case \(R_{0}<1\). In the first place, we linearise system (2.3) around \(v=0\) and we get (4.9), where \(f^{\prime}(0)=\alpha(R_{0}-1)<0\). We look for steady waves of the form (4.10) with \(c=0\). With analogous computation as in Section 4.3, we end up with the system
\[\begin{cases}b=\frac{\nu}{d}\left(\frac{\mu}{\mu-D\varphi_{L}(a)}-1\right)\\ a^{2}+b^{2}=-\frac{1}{d}f^{\prime}(0)\\ \gamma=\frac{\mu}{\nu+db}\;.\end{cases} \tag{5.6}\]
Since we need \(\gamma>0\), the last equation yields \(b>-\nu/d\), which in turn implies, owing to the first equation, \(|a|<a^{\infty}(D,L)\), where \(a^{\infty}(D,L)\) is the only positive root of \(D\varphi_{L}(a)=\mu\). In the \((a,b)\) plane, the first equation is the graph of a convex, even function \(b(a)\) vanishing at \(0\) and with asymptotes \(a=\pm a^{\infty}(D,L)\), while the second one is a circle centred at the origin with radius \(\sqrt{-f^{\prime}(0)/d}=\sqrt{\frac{\alpha}{d}(1-R_{0})}\). Hence there are two solutions \((\pm a_{*},b_{*},\gamma_{*})\), with
\[0<a_{*}<\min\Big{\{}a^{\infty}(D,L),\sqrt{\frac{\alpha}{d}(1-R_{0})}\Big{\}}, \qquad b_{*},\gamma_{*}>0. \tag{5.7}\]
Let us call \((u^{\pm},v^{\pm})\) the corresponding steady waves. Take then \(\overline{k}>0\) large enough so that \(\overline{k}v^{\pm}>v^{r}_{\infty}\) in the support of \(I_{0}\). We finally define
\[\overline{u}:=\min(u^{r}_{\infty},\overline{k}u^{-},\overline{k}u^{+}),\qquad \overline{v}:=\min(v^{r}_{\infty},\overline{k}v^{-},\overline{k}v^{+}).\]
Since \(\overline{v}=v^{r}_{\infty}\) whenever \(I_{0}>0\), the pair \((\overline{u},\overline{v})\) is a generalised super-solution to (2.3). By comparison, the solution \((u,v)\) to (2.3)-(2.6) stays below \((\overline{u},\overline{v})\) for all times, and we then deduce from the stability result of Theorem 2.4 that \((u^{r}_{\infty},v^{r}_{\infty})\leqslant(\overline{u},\overline{v})\), i.e.
\[(u^{r}_{\infty}(x),v^{r}_{\infty}(x,y))\leqslant\overline{k}e^{-a_{*}|x|}(1, \gamma_{*}e^{-b_{*}y}).\]
Let us turn to the lower bound. Steady waves with \(b=b_{*}\), \(\gamma=\gamma_{*}\) and any \(a>a_{*}\), are sub-solutions to the linearised problem (4.9). However, in order to get suitable sub-solutions for the nonlinear problem, we penalise \(f^{\prime}(0)\) by a small \(\delta>0\) and we also truncate the domain at a large value \(Y\) of \(y\), that is, we consider (4.22). As shown in the Appendix, this translates into a slight perturbation of the original system (4.9), namely, for given \(a>a_{*}\), the truncated wave
\[(\underline{u}(x),\underline{v}(x,y))=e^{-ax}\big{(}1,\gamma_{*}(e^{-b_{*}y}- e^{b_{*}y-2b_{*}Y})\big{)},\]
is a sub-solution to the penalised system provided \(\delta\) is sufficiently small and \(Y\) is large. We take \(\underline{k}>0\) small enough so that \(\underline{k}(\underline{u},\underline{v})\) is a sub-solution to (2.3) in the half-strip \(x>0\), \(0\leqslant y\leqslant Y\) and, in addition, \(\underline{k}(\underline{u}(0),\underline{v}(0,y))<(u^{r}_{\infty}(0),v^{r}_{ \infty}(0,y))\) for \(0\leqslant y\leqslant Y\). Next, take \(M\) large enough so that \(M(\nu/\mu,1)\) is a super-solution to (2.3) and moreover \(M(\nu/\mu,1)>\underline{k}(\underline{u},\underline{v})\) for \(x\geqslant 0\), \(0\leqslant y\leqslant Y\). The solution \((\widetilde{u},\widetilde{v})\) with initial datum \(M(\nu/\mu,1)\) is non-increasing in \(t\) and converges to a non-negative steady state, which is necessarily \((u^{r}_{\infty},v^{r}_{\infty})\) owing to the Liouville result. In particular \((\widetilde{u}(t,0),\widetilde{v}(t,0,y))>\underline{k}(\underline{u}(0), \underline{v}(0,y))\) for \(t>0\), \(0\leqslant y\leqslant Y\). We can then apply the comparison principle in the half-strip \(x>0\), \(0\leqslant y\leqslant Y\), and infer that \((\widetilde{u},\widetilde{v})>\underline{k}(\underline{u},\underline{v})\) there, for all \(t>0\). Passing to the limit \(t\to+\infty\) yields \((u^{r}_{\infty},v^{r}_{\infty})\geqslant\underline{k}(\underline{u}, \underline{v})\) for \(x\geqslant 0\). The specular estimate for \(x<0\) holds true by the symmetry of \((u^{r}_{\infty},v^{r}_{\infty})\). The proof of the case \(R_{0}<1\) is thereby concluded owing to the arbitrariness of \(a>a_{*}\).
In the case \(R_{0}>1\) the argument is analogous. One linearises the system around \((\nu/\mu,1)v_{*}\) and gets (5.6) with \(f^{\prime}(0)\) replaced by \(f^{\prime}(v_{*})\), which is also negative. One then finds the super-solutions to (2.3) outside the support of \(I_{0}\) in the form \((\nu/\mu,1)v_{*}+(u^{\pm},v^{\pm})\), the sub-solution in the form \((\nu/\mu,1)v_{*}+\underline{k}(\underline{u},\underline{v})\), and concludes as before.
Let us study the exponential rate \(a_{*}\) in Theorem 2.4 for extreme values of \(L\), that is, when the range of contaminations on the road is large or small. Let us focus, as in the above proof, on the case \(R_{0}<1\). Recall that \(a_{*}<a^{\infty}(D,L)\) by (5.7), where \(a^{\infty}(D,L)\) is defined by \(D\varphi_{L}(a^{\infty}(D,L))=\mu\). Since \(\varphi_{L}(a)=\varphi_{1}(aL)\), one derives \(a^{\infty}(D,L)\to 0\) as \(L\to+\infty\), hence the same is true for \(a_{*}\), and then, from the equation of the circle in (5.6),
\[b_{*}\to\sqrt{\frac{\alpha}{d}(1-R_{0})}=:\rho\quad\text{ as }\ L\to+\infty.\]
Thus, from the first equation in (5.6) one gets
\[D\varphi_{L}(a_{*})=D\varphi_{1}(a_{*}L)\to\frac{d\mu\rho}{\nu+d\rho}\quad \text{ as }\ L\to+\infty.\]
If, on the contrary, \(L\) is small - that is, we are close to the classical local diffusion - we have
\[D\varphi_{L}(a)=D\varphi_{1}(aL)\sim Da^{2}L^{2}\langle x^{2}K\rangle\to 0\quad \text{ as }\;L\to 0,\]
locally uniformly in \(a\). This yields, thanks to the first equation in (5.6), that \(b_{*}\to 0\) as \(L\to 0\), whence, by the second equation, that \(a_{*}\to\rho=\sqrt{\frac{\alpha}{d}(1-R_{0})}\) as \(L\to 0\). This limit indeed coincides with the asymptotic exponential rate that holds for the local model, see [10, Theorem 4.2]. In any case, since by (4.4) we have that \(\varphi_{L}(a)=\varphi_{1}(aL)\geqslant\frac{1}{2}\langle x^{2}K\rangle a^{2}L^{2}\), it follows that
\[a_{*}<a^{\infty}(D,L)\leqslant\sqrt{\frac{2\mu}{\langle x^{2}K\rangle DL^{2}}}. \tag{5.8}\]
This shows that \(a_{*}\) can be small, i.e. the solution has a thick tail, even if \(D\) is small, provided that \(L\) is large, or, on the contrary, if \(L\) is small but \(D\) is sufficiently large.
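The dependence of \(a_{*}\) on \(L\) can be illustrated numerically by solving system (5.6) directly, as the intersection of the curve \(b(a)\) with the circle. In the sketch below, the kernel \(K(x)=\frac{3}{4}(1-x^{2})\) on \([-1,1]\), the sample parameter values and the bracketing-plus-bisection procedure are assumptions made for the example; the printed values merely exhibit the thick-tail effect as \(L\) grows.

```python
import numpy as np

def phi_L(a, L, n=4000):
    # illustrative kernel K(x) = 3/4 (1 - x^2) on [-1, 1], K_L(x) = K(x/L)/L
    x = (np.arange(n) + 0.5) * (L / n)
    K_L = 0.75 * (1.0 - (x / L) ** 2) / L
    return 2.0 * np.sum(K_L * (np.cosh(a * x) - 1.0)) * (L / n)

def a_star(D, L, d, mu, nu, alpha, R0):
    # positive solution of (5.6): a^2 + b(a)^2 = (alpha/d)(1 - R0), with b(a) from the first equation
    rho2 = (alpha / d) * (1.0 - R0)
    def F(a):
        b = (nu / d) * (mu / (mu - D * phi_L(a, L)) - 1.0)
        return a * a + b * b - rho2
    lo, hi = 0.0, 1e-6
    while mu - D * phi_L(hi, L) > 0.0 and F(hi) < 0.0:
        lo, hi = hi, 2.0 * hi                     # bracket the root below a^infty(D, L)
    for _ in range(60):                           # bisection
        mid = 0.5 * (lo + hi)
        if mu - D * phi_L(mid, L) <= 0.0 or F(mid) >= 0.0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

for L in (0.5, 1.0, 4.0, 16.0):
    print(L, a_star(D=1.0, L=L, d=1.0, mu=1.0, nu=1.0, alpha=1.0, R0=0.5))
```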
We conclude with the proof of Theorem 2.5, which just consists in showing that the compactly supported function \(I_{0}\) does not affect the behaviour of the solution far from the origin.
Proof of Theorem 2.5.: Since \(R_{0}>1\), the function \(f\) is of the Fisher-KPP type. We call \(c_{{}_{\text{\tiny{SIR}}}}^{{}^{\!\!T}}\) the speed \(c_{*}\) provided by Theorem 2.3 with such an \(f\). Hence the comparison between \(c_{{}_{\text{\tiny{SIR}}}}^{{}^{\!\!T}}\) and the standard speed \(c_{{}_{\text{\tiny{SIR}}}}\) stated in Theorem 2.5 holds, with \(D_{*}\) given by (4.11). It remains to show that \(c_{{}_{\text{\tiny{SIR}}}}^{{}^{\!\!T}}\) is actually the spreading speed for (2.3)-(2.5).
The pair \((u,v)\) is a super-solution to (1.2), hence by the comparison principle and Theorem 2.3 we infer
\[\forall\,0<\varepsilon<c_{{}_{\text{\tiny{SIR}}}}^{{}^{\!\!T}},\quad\liminf_{t\to+\infty}\inf_{|x|\leqslant(c_{{}_{\text{\tiny{SIR}}}}^{{}^{\!\!T}}-\varepsilon)t}\big{(}u(t,x),v(t,x,y)\big{)}\geqslant\Big{(}\frac{\nu}{\mu},1\Big{)}v_{*},\]
locally uniformly with respect to \(y\geqslant 0\) (in formula (2.7) one has \(v_{*}=1\) as the positive zero of \(f\)). Recall from Theorem 2.4 that \((\nu/\mu,1)v_{*}\) is the limit as \(x\to\pm\infty\) of the steady state \((u_{\infty}^{r},v_{\infty}^{r})\). We further know from Theorem 2.4 that \((u,v)\to(u_{\infty}^{r},v_{\infty}^{r})\) as \(t\to+\infty\) locally uniformly in space, and, by comparison, that \((u,v)\leqslant(u_{\infty}^{r},v_{\infty}^{r})\). All these facts together imply the validity of the first limit of Theorem 2.5.
The second limit directly follows by comparison with the plane waves provided by the case 2 of Lemma 4.1 (recall that \(c_{*}\) in Theorem 2.3 is precisely given by \(c_{*}(D,L)\) of Lemma 4.1). Indeed, being super-solutions to the linearised system (4.9), it is clear that they are super-solutions to (2.3) as well, up to multiplication by a large constant.
### The case of pure transport on the road
Another way of analysing a nonlocal effect of a road on the spreading of an epidemic, is by considering a pure transport equation on the line. Namely, we introduce the system
\[\left\{\begin{array}{rcl}\partial_{t}I-d\Delta I+\alpha I&=&\beta SI&(t>0,\;( x,y)\in\mathbb{R}_{+}^{2})\\ \partial_{t}S&=&-\beta SI&(t>0,\;(x,y)\in\mathbb{R}_{+}^{2})\\ -d\partial_{y}I&=&\mu T-\nu I&(t>0,\;x\in\mathbb{R},\;y=0)\\ \partial_{t}T+q\partial_{x}T&=&\nu I-\mu T&(t>0,\;x\in\mathbb{R},\;y=0).\end{array}\right. \tag{5.9}\]
where \(q\in\mathbb{R}\) is a given constant. The system for the cumulative densities \(u,v\) reads
\[\left\{\begin{array}{rcl}\partial_{t}v-d\Delta v&=&f(v)+I_{0}(x,y)&(t>0,\ (x,y) \in\mathbb{R}_{+}^{2})\\ -d\partial_{y}v&=&\mu u-\nu v&(t>0,\ x\in\mathbb{R},\ y=0)\\ \partial_{t}u+q\partial_{x}u&=&\nu v-\mu u&(t>0,\ x\in\mathbb{R},\ y=0),\end{array}\right. \tag{5.10}\]
with \(f\) given by (2.4). As in Section 4.3, the spreading speed \(c_{*}\) for this new system will be given by the minimal \(c\) such that the linearised system admits plane wave super-solutions of the type (4.10). As a matter of fact, because the system is no longer symmetric in the \(x\) variable, there will be two distinct spreading speeds, \(c_{*}^{\pm}\), one leftward and one rightward.
**Theorem 5.1**.: _Assume that \(R_{0}>1\). Let \((u,v)\) be the solution to (5.9), (2.4)-(2.6). Then, there exist \(c_{*}^{\pm}>0\) such that, for all \(\varepsilon>0\), it holds_
\[\liminf_{t\to+\infty}\inf_{-(c_{*}^{-}-\varepsilon)t\leqslant x\leqslant(c_{ *}^{+}-\varepsilon)t}\big{|}(u(t,x),v(t,x,y))\big{|}>0,\]
\[\lim_{t\to+\infty}\sup_{x\leqslant-(c_{*}^{-}+\varepsilon)t}\big{|}(u(t,x),v (t,x,y))\big{|}=0,\qquad\lim_{t\to+\infty}\sup_{x\geqslant(c_{*}^{+}+ \varepsilon)t}\big{|}(u(t,x),v(t,x,y))\big{|}=0,\]
_locally uniformly with respect to \(y\geqslant 0\)._
_In addition, the spreading speeds \(c_{*}^{\pm}\) satisfy_
\[c_{*}^{\pm}\,\begin{cases}=c_{\mbox{\tiny\it SIR}}&\mbox{if}\ \pm q\leqslant c_{ \mbox{\tiny\it SIR}}\\ \in(c_{\mbox{\tiny\it SIR}},|q|)&\mbox{if}\ \pm q>c_{\mbox{\tiny\it SIR}} \end{cases}\qquad\mbox{with}\ \ c_{\mbox{\tiny\it SIR}}:=2\sqrt{d\alpha(R_{0}-1)}.\]
_Finally, \(c_{*}^{\pm}/|q|\) converge to a positive constant \(\kappa_{*}\in(0,1)\) as \(q\to\pm\infty\)._
Recalling that \(S=S_{0}e^{-\beta v}\), the above result shows that the epidemic wave moves at the leftward and rightward asymptotic speeds \(c_{*}^{\pm}\) respectively. These speeds are always no less than \(c_{\mbox{\tiny\it SIR}}\), which means that the transport term \(q\) on the line does not slow down the spreading speed in the opposite direction, no matter how strong it is. On the contrary, as soon as the intensity \(q\) is larger than the classical speed \(c_{\mbox{\tiny\it SIR}}\), the spreading speed in the direction of the transport is enhanced, but, however, it never reaches the value of \(q\) itself.
Proof of Theorem 5.1.: The problem of the existence of plane wave solutions for the linearised system reduces to the algebraic system
\[\left\{\begin{array}{rcl}(c-q)a&=&-\frac{d\mu b}{\nu+db}\\ -(a^{2}+b^{2})+\frac{ca}{d}&=&\frac{c_{{}_{K}}^{2}}{4d^{2}}\,.\end{array}\right. \tag{5.11}\]
Consider the case \(q\leqslant c_{{}_{K}}\). Recall that the second equation in (5.11) admits solutions, describing a circle \(\Gamma_{2}\), if and only if \(c\geqslant c_{{}_{K}}\). For \(c=c_{{}_{K}}\) the circle reduces to the point \((a,b)=(c_{{}_{K}}/(2d),0)\), which satisfies the inequality "\(\geqslant\)" in the first equation of (5.11). It follows that in such a case, (5.10) admits a super-solution of the type (4.10) if and only if \(c\geqslant c_{*}(q):=c_{{}_{K}}\).
Next, consider the case \(q>c_{{}_{\!K}}\). For \(c\geqslant q\), any point \((a,b)\in\Gamma_{2}\) with \(a,b>0\) satisfies the inequality "\(\geqslant\)" in the first equation of (5.11), that is, (5.10) admits supersolutions of the form (4.10). Conversely, for \(c=c_{{}_{\!K}}\), the "circle" \((a,b)=(c_{{}_{\!K}}/(2d),0)\) satisfies "\(<\)" in the first equation of (5.11), i.e., the corresponding plane wave is a sub-solution to the linearised system. This means that there is a first value \(c=c_{*}(q)\in(c_{{}_{\!K}},q)\) at which the two curves in (5.11) are tangent, which provides us with a plane wave super-solution to (5.10).
Let us investigate the behaviour of \(c_{*}\) as \(q\to+\infty\). We write \(c=\kappa q\) with \(\kappa>0\) and \(\alpha=aq\). The system rewrites as
\[\left\{\begin{array}{rcl}(1-\kappa)\alpha&=&\frac{d\mu b}{\nu+ db} \\ -\Big{(}\frac{\alpha^{2}}{q^{2}}+b^{2}\Big{)}+\frac{\kappa\alpha}{d}&=&\frac{ c_{{}_{\!K}}^{2}}{4d^{2}}\,.\end{array}\right. \tag{5.12}\]
Dropping the term \(\alpha^{2}/q^{2}\) (that will be justified at the end of the computation) we get
\[\Big{(}\frac{c_{{}_{\!K}}^{2}}{4d^{2}}+b^{2}\Big{)}\frac{1-\kappa}{\kappa}=\frac{\mu b}{\nu+db}.\]
Consider \(\kappa\in(0,1)\). For \(\kappa\sim 0\) this equation does not admit a solution \(b\geqslant 0\), whereas for \(\kappa\sim 1\) it does. There exists then a minimal \(\kappa=\kappa_{*}\in(0,1)\) for which a solution exists. We then recover the same existence result for system (5.12) when \(q\to+\infty\), with a minimal value \(\kappa_{q}\to\kappa_{*}\). This shows that \(c_{*}/q\to\kappa_{*}\) as \(q\to+\infty\).
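The constant \(\kappa_{*}\) can be approximated numerically from the limiting equation above, by finding the least \(\kappa\) for which it admits a root \(b\geqslant 0\). The sketch below is only an illustration: the sample values \(d=\mu=\nu=1\) and \(f'(0)=\alpha(R_{0}-1)=1\), the grid in \(b\) and the bisection in \(\kappa\) are assumptions made for the example.

```python
import numpy as np

def kappa_star(d, mu, nu, fprime0, n=20000, b_max=100.0):
    # least kappa in (0,1) such that (c_K^2/(4d^2) + b^2)(1-kappa)/kappa = mu b/(nu + d b) has a root b >= 0
    cK2_over_4d2 = fprime0 / d
    b = np.linspace(0.0, b_max, n)
    def solvable(k):
        g = (cK2_over_4d2 + b ** 2) * (1.0 - k) / k - mu * b / (nu + d * b)
        return np.any(g <= 0.0)
    lo, hi = 1e-6, 1.0 - 1e-6
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        lo, hi = (lo, mid) if solvable(mid) else (mid, hi)
    return hi

print(kappa_star(d=1.0, mu=1.0, nu=1.0, fprime0=1.0))   # a value strictly between 0 and 1
```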
## 6 Discussion
We have proposed and analysed a model that quantifies the effect of a line of fast, nonlocal diffusion, both on the propagation of fronts in models of reaction-diffusion for biological invasions, and of the \(SIR\) type for epidemics. Such a modelling for the diffusion processes on the line is relevant, as it makes possible long range displacements along lines of communication in addition to local ones. This type of effect is widely recognised to exist. Our aim here is to provide a rigorous formulation of this feature and to analyse it mathematically. While we had previously analysed the effect of a line having a diffusion of its own on reaction-diffusion propagation [8], [10], we had there represented the diffusion by the standard Laplacian. The nonlocal dispersal that we are proposing here encompasses and confirms the results that we have already obtained, and introduces further elements of understanding.
The nonlocal diffusion on the line is characterised by two parameters: its intensity \(D\), that essentially measures the importance of the traffic, and the parameter \(L\), that represents the characteristic length of individual travel. This additional parameter amplifies the effect of the road on all the aspects of the overall dynamics.
Regardless of the size of all other parameters, we have shown that the spreading velocity on the line behaves like \(\sqrt{DL^{2}}\). This really shows that the dynamics on the line is that of an effective reaction-diffusion process of the Fisher-KPP type, with the kernel \(DK_{L}(x)\), and a reaction term of the form \(\gamma u(1-u)\), with \(\gamma\) tailored so as to obtain the spreading speed. This broadens the picture that we had already obtained
when the diffusion on the line is given by a standard diffusion operator \(-D\partial_{xx}\). In [8], we showed that the spreading velocity grows like \(\sqrt{D}\). Such an effect was also observed for long range diffusion operators of the form \((-\partial_{xx})^{\alpha}\) \((0<\alpha<1)\) in [5]. In this last case we proved that the propagation is exponential in time. The first noticeable effect of the nonlocal diffusion that we are considering is that the spreading velocity can be quite large if the range \(L\) of the dispersal on the road is large, even if the intensity of the traffic is modest.
Let us concentrate on the model for the propagation of epidemics. The pandemic threshold \(R_{0}\) is the same through all the models we have studied so far, from the classical \(SIR\) model to the nonlocal model we are dealing with in this paper. However, the presence of the nonlocal operator on the line has an important quantitative impact on the system. Let us first focus on the case \(R_{0}<1\). The epidemic does not spread; nonetheless, in the final state reached by the population, many more infected individuals may be found at a very large distance from the origin of the outbreak in the presence of the road than without it. This is reflected in the asymptotics of the limit, as \(t\to+\infty\), of the cumulative density \(I_{tot}\) of infected, that is \(S_{0}(1-e^{-\beta v_{\infty}^{r}})\), which decays exponentially at infinity with decay rate \(a_{*}\) (compare Theorem 2.4). The bound (5.8) indeed shows that, even if \(D\) is small - which would, in principle, make the decay exponent \(a_{*}\) quite large, thus granting a fast exponential decay - the parameter \(L\) can be made large enough to make \(a_{*}\) arbitrarily small. If, on the contrary, \(L\) is small, thus once again potentially allowing \(a_{*}\) to be quite large, this effect can be compensated by a very large diffusion coefficient \(D\). According to (5.8), what really matters is indeed the size of \(DL^{2}\), which, if large, yields a small decay rate \(a_{*}\).
The most important effects can be observed on the propagation velocity. Let us concentrate on the case when \(R_{0}\) is only slightly larger than \(1\), that is, a range where the progression of the epidemic would be expected to be slow and thus would not appear to pose a major public health concern. In such a case, however, the propagation speed is accelerated by a factor of order \(\sqrt{DL^{2}}\), see (5.4). Thus, the size of \(L\) gives an additional important boost to the enhancement. In particular, a few individuals moving very far are sufficient to produce an important increase of the spreading speed of the epidemic, all other parameters being small. We retrieve, in a way that is even stronger than in [10], the fast propagation effect even in the case of a seemingly mild epidemic wave.
We also observe that the spreading speed is asymptotic to \(\sqrt{DL^{2}}\) times a function that is _decreasing_ with respect to \(d\), given by (4.21). This monotonicity is rather counterintuitive, and is yet another manifestation of the complexity of the interaction between the dynamics in the field and that on the road. One possible interpretation is the following: the flux of individuals entering the road, \(dv_{y}\), is proportional to \(d\). It is also equal to \(\nu v-\mu u\), a negative quantity for the linear waves. As a consequence it is all the more negative as \(d\) is large, thus weakening the effect of the road and resulting in a slowdown effect when \(d\) becomes larger. One could show that this type of monotonicity (that we observe here for the first time) also holds for local diffusion models.
On the mathematical side, one observes an unexpected preservation of smoothness due to the interaction between the line and the upper half plane, that is not present in the classical nonlocal Fisher-KPP models.
We have finally discussed the effect of a pure unidirectional transport on the line,
and we have found another surprising result. The transport on the line does enhance the overall propagation, but with an important subtlety. If \(q\) is the size of the transport, as \(q\to+\infty\), the spreading speed in that direction tends to \(+\infty\) as \(\kappa_{*}q\), with \(\kappa_{*}\) positive but _strictly smaller_ than \(1\).
The fact that the spreading speed is strictly smaller than \(q\) (and that \(\kappa_{*}<1\)) can be interpreted as follows: infected individuals \(T\) are transported by the road with a speed \(q\), but if the latter is larger than the speed \(c_{\mbox{\tiny\it SIR}}\) in the field, the incoming/outgoing contribution of infected at their location, i.e. \(\nu I-\mu T\), is negative, and this slows down the speed of propagation.
This would not have been the case if contamination took place on the line too. Indeed, we showed in [9] that, for the model of biological invasion with local diffusion on the road, the spreading speed behaves like \(q\) (with factor \(1\)) as \(q\to+\infty\) provided that a reaction term is also present on the road, and it is sufficiently large compared with the exchange rate \(\mu\), see [9, Theorem 1.3]. As a corroboration of the above interpretation, one can indeed check in our proof that, in the present case, the spreading speed tends to \(q\) as \(\mu\to 0\), and the limit factor \(\kappa_{*}\) tends to \(1\).
## Appendix
In this appendix, we construct a solution of the form \((\widetilde{u}_{c}(x-ct),\widetilde{v}_{c}(x-ct,y))\), with \(c<c_{*}(D,L)\) close enough to \(c_{*}(D,L)\), to the penalised problem (4.22), with \(\delta>0\) sufficiently small and \(Y>0\) sufficiently large. In order to fulfil the Dirichlet condition at \(y=Y\), we consider a variant of the plane waves (4.10), namely
\[(u(t,x),v(t,x,y))=e^{-a(x-ct)}\big{(}1,\gamma(e^{-by}-e^{by-2bY})\big{)}.\]
Plugging it into (4.22) yields
\[\gamma=\frac{\mu}{\nu(1-e^{-2bY})+db(1+e^{-2bY})},\qquad b=g_{Y}^{-1}\circ G_{ 1}^{c}(a)=G_{2,\delta}^{c}(a),\]
where \(G_{1}^{c}\) is defined by (4.14) and
\[g_{Y}(b):=b\coth(Yb),\qquad G_{2,\delta}^{c}(a):=\frac{1}{2d}\sqrt{c^{2}-c_{k} ^{2}+4d\delta-(2da-1)^{2}}.\]
One sees that \(g_{Y}(b)\) is increasing for \(b\geqslant 0\) and converges locally to the identity, together with its derivatives, as \(Y\to+\infty\). When \(\delta\to 0\) and \(Y\to+\infty\) we end up with the previous system (4.13), which we recall admits a solution if and only if \(c\geqslant c_{*}(D,L)\); for \(c=c_{*}(D,L)\) the unique solution is \(a_{*},b_{*},\gamma_{*}>0\), with
\[b_{*}=G_{1}^{c_{*}(D,L)}(a_{*})=G_{2,0}^{c_{*}(D,L)}(a_{*}).\]
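As a quick numerical illustration of the monotonicity and convergence of \(g_{Y}\) noted above (a minimal sketch, assuming numpy; only the convergence of the values, not of the derivatives, is checked):

```python
import numpy as np

def g(Y, b):
    # g_Y(b) = b * coth(Y b)
    return b / np.tanh(Y * b)

b = np.linspace(0.1, 3.0, 6)
for Y in (1.0, 10.0, 100.0):
    # maximum deviation from the identity on this grid shrinks as Y grows
    print(Y, np.max(np.abs(g(Y, b) - b)))
```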
One also directly checks that \((G_{1}^{c})^{\prime\prime}(a)>0\), at least for the values \(a>0\) where \(G_{1}^{c}(a)\geqslant 0\), and that \((G_{2,0}^{c_{*}(D,L)})^{\prime\prime}<0\) in its domain. Let us call \(h^{c}(a):=G_{1}^{c}(a)-G_{2,0}^{c}(a)\). This is an analytic function that satisfies \(h^{c}(a_{*})\searrow 0\) as \(c\nearrow c_{*}(D,L)\) (with strict monotonicity) and moreover \((h^{c_{*}(D,L)})^{\prime\prime}(a_{*})>0\). These properties allow us to apply Rouché's theorem and find, for \(c<c_{*}(D,L)\) close enough to \(c_{*}(D,L)\), two complex solutions
of \(h^{c}(a)=0\) with nonzero imaginary part of order \(\sqrt{h^{c}(a_{*})}\) (see [8, Appendix B] for the detailed argument). The same properties are fulfilled also by the perturbation \(h^{c}_{\delta,Y}:=g_{Y}^{-1}\circ G_{1}^{c}(a)-G_{2,\delta}^{c}(a)\) of \(h^{c}(a)\) (with \(c_{*}(D,L)\), \(a_{*}\) replaced by some slightly different values) provided \(\delta\) is sufficiently small and \(Y\) is sufficiently large. As a consequence, for \(c<c_{*}(D,L)\) close enough to \(c_{*}(D,L)\), we can find \(\delta\) small and \(Y\) large so that the equation \(h^{c}_{\delta,Y}(a)=0\) admits two complex solutions with nonzero imaginary part of order \(\sqrt{h^{c}(a_{*})}\). Pick one of them, together with the associated \(b=G_{2,\delta}^{c}(a)\) and \(\gamma\). This provides us with a complex plane-wave solution \((u(x-ct),v(x-ct,y))\) to (4.22). Finally, its real part
\[\big{(}\widetilde{u}_{c}(x-ct),\widetilde{v}_{c}(x-ct,y)\big{)}:=\big{(} \Re(u)(x-ct),\Re(v)(x-ct,y)\big{)},\]
is a real solution to (4.22). The positivity sets of \(\widetilde{u}_{c}\), \(\widetilde{v}_{c}\) fulfil (4.23), (4.24) with \(\omega_{c}=1/\Im(a)\to+\infty\) as \(c\nearrow c_{*}(D,L)\).
|
2309.11911 | InstructERC: Reforming Emotion Recognition in Conversation with
Multi-task Retrieval-Augmented Large Language Models | The field of emotion recognition of conversation (ERC) has been focusing on
separating sentence feature encoding and context modeling, lacking exploration
in generative paradigms based on unified designs. In this study, we propose a
novel approach, InstructERC, to reformulate the ERC task from a discriminative
framework to a generative framework based on Large Language Models (LLMs).
InstructERC makes three significant contributions: (1) it introduces a simple
yet effective retrieval template module, which helps the model explicitly
integrate multi-granularity dialogue supervision information. (2) We introduce
two additional emotion alignment tasks, namely speaker identification and
emotion prediction tasks, to implicitly model the dialogue role relationships
and future emotional tendencies in conversations. (3) Pioneeringly, we unify
emotion labels across benchmarks through the feeling wheel to fit real
application scenarios. InstructERC still performs impressively on this unified
dataset. Our LLM-based plugin framework significantly outperforms all previous
models and achieves comprehensive SOTA on three commonly used ERC datasets.
Extensive analysis of parameter-efficient and data-scaling experiments provides
empirical guidance for applying it in practical scenarios. | Shanglin Lei, Guanting Dong, Xiaoping Wang, Keheng Wang, Runqi Qiao, Sirui Wang | 2023-09-21T09:22:07Z | http://arxiv.org/abs/2309.11911v6 | InstructERC: Reforming Emotion Recognition in Conversation with a Retrieval Multi-task LLMs Framework
###### Abstract
The development of emotion recognition in conversation (ERC) has been consistently hindered by the complexity of pipeline designs, leading to ERC models that often overfit to specific datasets and dialogue patterns. In this study, we propose a novel approach, namely InstructERC, to reformulate the ERC task from a discriminative framework to a generative framework based on Large Language Models (LLMs). InstructERC has four significant contributions: Firstly, InstructERC introduces a simple yet effective retrieval template module, which helps the model explicitly integrate multi-granularity dialogue supervision information by concatenating the historical dialog content, label statement, and emotional domain demonstrations with high semantic similarity. Furthermore, we introduce two additional emotion alignment tasks, namely speaker identification and emotion prediction tasks, to implicitly model the dialogue role relationships and future emotional tendencies in conversations. Our LLM-based plug-and-play plugin framework significantly outperforms all previous models and achieves comprehensive SOTA on three commonly used ERC datasets. Additionally, we have undertaken the task of unifying label mapping and modeling across three ERC datasets for the first time, showcasing the LLM's robust generalization capabilities. Extensive analysis of parameter-efficient, data-scaling and data mixing experiments provides empirical guidance for applying InstructERC in practical scenarios. Our code has been released on GitHub.
## I Introduction
"The question is not whether intelligent machines can have emotions, but whether machines without emotions can achieve intelligence", as pointed out by the pioneer of artificial intelligence, Minsky, in his book "Society of Mind" [1]. Empowering machines with the ability to understand emotions in various scenarios has always been the unwavering direction of researchers. In recent years, the task of dialogue emotion recognition has become a hot research topic in the field of natural language processing (NLP) due to its enormous potential application scenarios in human-computer interaction [2] and dialogue systems [3].
In contrast to conventional binary sentiment analysis tasks [4], which only rely on text with explicit attitude tendencies, the emotion recognition in conversation (ERC) task aims to identify more fine-grained emotional tendencies in each sentence of a conversation. Specifically, for a given complete dialogue sequence input and a set of emotional labels, the model is required to accurately assign an emotional label to each sentence. Intuitively, the recognition of emotional tendencies in the target sentence is heavily influenced by its historical utterances [5], and there is significant variation in how different speakers perceive and express emotions [6]. Consequently, it becomes crucial to intricately model both the speakers and the context of the dialogue.
Currently, the ERC field is hindered by four significant development challenges:
* **Highly ERC-specific Approaches leading to overfitting**: A series of previous works in Emotion Recognition in Conversation (ERC) have focused on different ERC-specific design paradigms, such as transformer-based, GNN-based, and recurrent-based models, as shown in Figure 1. These different ERC model architectures have each achieved state-of-the-art (SOTA) results on various ERC datasets, as indicated in Table II, with each SOTA belonging to a different architecture. This indicates a potential overfitting issue to specific datasets, underscoring the imperative to investigate more generalized modeling methods.
Fig. 1: The illustration of different paradigms for ERC
* **Lack of contextual information leads to insufficient emotional state modeling**: only a minority of models adopt an end-to-end approach, such as [7] or [8], leveraging corresponding small-scale pre-trained language models for ERC emotion recognition directly. However, the limitation lies in their maximum input token capacity, which is restricted to no more than 512 tokens. This limitation frequently makes these models unable to handle dialogues exceeding four sentences, greatly hindering their effectiveness in ERC tasks, where detailed encoding of prior utterances is essential for understanding the emotional context at the sentence level.
* **Unified Generative Modeling Need**: Lastly, [9] has pioneered the use of a generative method to unify the modeling of both MSA and ERC tasks. In real application scenarios, unified modeling across datasets can bring more powerful multi-scenario adaptation capabilities to the model. However, conducting unified label mapping and modeling for multiple ERC datasets remains a blank field that is worth exploring.
Fortunately, the recent successful application and emergent capabilities of pre-trained large language models (LLMs) have demonstrated remarkable performance in natural language reasoning tasks. By using a generative architecture and a larger input token window, LLMs unify the input and output of different tasks and have shown significant performance improvements across NLP tasks. Despite their powerful capabilities, enabling these abilities for specific sub-tasks requires high-quality prompts [10, 11] and designs to fill the reasoning gap. Therefore, how to use an LLM-based framework to reconstruct ERC while considering context modeling, speaker modeling, and capturing conversation relationships poses a significant challenge in pushing this framework towards a real dialogue system.
In this work, we reformulate the ERC task using LLMs. Specifically, we design a simple but efficient retrieval template module, which consists of instruction, historical utterances, label statement, and emotional domain retrieval, to explicitly integrate multi-granularity dialogue supervision information during reasoning. In addition, we separately design two auxiliary tasks for the ERC task: a speaker identification task and an emotion prediction task. The speaker identification task assists LLMs in modeling dialogue role relationships by predicting the speaker of each sentence, while the emotion prediction task models future emotional tendencies in conversations. Finally, and most importantly, for the first time in the ERC field, we align the labels of three conversational emotion recognition datasets and conduct unified dataset modeling. We further explore data mixing strategies and data scaling.
In conclusion, our work can be outlined as follows:
* To the best of our knowledge, we are the first to reformulate the ERC task as a unified Seq2Seq paradigm based on LLMs and present an effective instruction template which can adapt to different dialog scenarios.
* We propose two novel emotional auxiliary tasks to implicitly model the dialogue role relationships and future emotional tendencies in conversations.
* Our InstructERC significantly outperforms all previous models and achieves comprehensive SOTA on three commonly used ERC datasets. Further analysis provides empirical guidance on choosing between LoRA and all-parameter fine-tuning.
* We conducted, for the first time, a unified label mapping for three common ERC datasets, and unified data scaling as well as exploration of different data mixing strategies. We discovered a phenomenon of low-resource gains and high-resource containment, providing empirical guidance for industrial practical applications.
## II Related Work
### _Large Language Models_
The emergence of large-scale language models (LLMs) has brought a revolutionary transformation to the field of natural language processing (NLP) [12]. LLMs, such as GPT-3 [13], LLaMA [14] and GPT-4 [15], have demonstrated impressive abilities on various tasks, which can be further strengthened by external techniques such as reinforcement learning from human feedback (RLHF) [16]. LLMs based on a generative framework even reformulate the multimodal perspective [17, 18]. More recently, the NLP community has been exploring various application directions for LLMs. For instance, chain-of-thought prompting and RFT [19, 20] enable LLMs to generate problem-solving processes step-by-step, significantly enhancing the model's reasoning ability. Researchers have utilized the interactive capabilities of LLMs to generate commands that invoke external tools for handling downstream tasks [12]. Other researchers have proposed parameter-efficient fine-tuning (PEFT) to address the issue of excessive computational resources without sacrificing performance [21].
### _Emotion Recognition in Conversation_
After more than a decade of development, the field of Emotion Recognition in Conversation (ERC) has seen many outstanding works. These can be broadly classified into four categories: transformer-based, GNN-based, recurrent-based, and PLM-based.
Specifically, transformer-based works [22, 23, 24, 5, 25] attempt to establish long-range emotional correlations in conversational scenarios by directly adopting or modifying the original transformer block. These efforts have made significant contributions in this direction.
GNN-based works [26, 27, 28, 6] extensively use graphs and edges to model interactions between people in conversational scenarios and the influences between different modalities. They employ various forms of multi-layer graph neural networks to fit potential conversational relations, effectively exploring this direction.
Recurrent-based works [29, 30, 31, 32, 33] utilize various forms of RNNs, like LSTM and GRU, to model individual emotional states and global emotional impacts separately. They incorporate attention mechanisms or direct vector concatenation to represent personal and global emotional states collectively, marking effective exploration in this area.
Differing from the above three kinds of approaches, which use a two-stage training process, PLM-based works [7, 8, 9] rely on pre-trained models for end-to-end ERC modeling. However, they are limited by the maximum number of input tokens and typically cannot use more than five sentences of conversational context. Compared to the aforementioned discriminative methods, PLM-based approaches are more concise.
In all four methodologies, some models [34, 35, 36, 37, 38] have incorporated common sense knowledge, injecting this into the models to enhance their performance in ERC tasks.
### _Data scaling exploration_
The remarkable capabilities of Large Language Models stem from substantially scaling up model sizes, data volumes, and computational resources. Investigating their effectiveness across different scales is crucial. There has been significant research in the LLM field focusing on scaling laws in areas such as pre-training [39], transfer learning [40], and preference modeling [41]. However, given the smaller size of ERC datasets and the relatively unexplored domain of using LLMs for ERC, this work pioneers the exploration of the relationship between data scaling and model performance specifically for ERC datasets. This provides fresh perspectives on how data scale impacts the efficiency of language models.
## III Methodology
In this section, we present a comprehensive overview of the proposed InstructERC framework, shown in Figure 3. Firstly, we provide a brief introduction to the task definition of ERC. Next, we discuss the framework of InstructERC, which consists of two major parts: a retrieval template module and emotional alignment tasks. Finally, we introduce the training and inference process of our framework.
### _Problem Definition_
Assuming a dialogue text \(U=[u_{1},u_{2},...u_{n}]\) of length \(n\) is given, which includes \(M\) speakers/parties \(p_{1},p_{2},...,p_{M}\) (\(M\geq 2\)) in the dialogue, each utterance \(u_{i}\) is spoken by the corresponding speaker \(p_{K(u_{i})}\). The function \(K\) is employed to establish a mapping between each utterance and its corresponding speaker.
In the discriminative framework, researchers first fine-tune a Pretrained Language Model with the context-free utterance and extract the feature vector at the CLS position as the input for the downstream ERC model. The task of ERC in this case is to map the feature vector of the given utterance to a scalar between 1 and \(o\), where \(o\) represents the number of emotional labels in the dataset.
In the generative framework based on LLMs, for a given utterance, we process it into formatted text according to the pre-designed template and input it into LLMs. The objective of ERC in this case is to enable LLMs to generate the most reasonable textual emotional label, which must belong to the predefined text emotional label set \(\mathcal{E}=\{e_{1},e_{2},...,e_{o}\}\), where \(o\) is the number of emotional categories.
### _Retrieval Template Module_
To better transfer and utilize the inference ability of pre-trained large language models, we reconstruct the ERC task in the seq2seq form and solve it through fine-tuning LLMs. Therefore, we construct an efficient retrieval template module to bridge the gap when applying LLMs to specific NLP subtasks. As shown in Figure 2, for the ERC task, each input consists of four parts: instructions, historical content, label statement, and demonstration retrieval.
**Instruction.** The instructions serve to provide the model with a well-defined role, precise details of the ERC task, and a standardized format for the input dialogue text. For the primary ERC task, our instruction \(u_{i,I}\) is shown in Figure 2.
**Historical Content.** The ERC task is heavily reliant on contextual information, yet in daily conversations, the affective state of a speaker in the present moment is impervious to the emotional influence of future utterances. Therefore, the historical content that is included in the model's input is limited to those utterances that precede the current recognized utterance. We employ a hyperparameter, the historical window (denoted as \(w\)), to indicate the specific rounds of historical dialogue along with the corresponding speaker information. For the emotion recognition of the target utterance \(u_{n}\), its historical content \(u_{i,H}\) is shown in Figure 2.
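For concreteness, a minimal sketch of how such a history window could be assembled (assuming the dialogue is stored as a list of (speaker, utterance) pairs and \(i\) indexes the utterance currently being recognized; the exact textual formatting used by InstructERC is the one shown in Figure 2):

```python
def history_window(dialogue, i, w):
    """Keep only the w utterances (with speakers) preceding utterance i."""
    context = dialogue[max(0, i - w): i]
    return "\n".join(f"{speaker}: {utterance}" for speaker, utterance in context)
```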
**Label Statement.** To confine the model's output within a finite range of labels, facilitate statistical analysis of the model's output, and enable the model to focus on the current utterance being recognized, our label statement \(u_{i,L}\) is shown in Figure 2.
**Demonstration Retrieval.** In order to further integrate emotional information to assist reasoning, we have developed a domain demonstration recall module based on semantic similarity. In detail, we construct a domain base \(\mathcal{D}_{domain}\) from the training dataset that removes speaker identity information and balances the number of emotion labels, which ensures that the demonstrations are not influenced by the distribution of speakers or emotion labels in the dataset. For a given utterance \(u_{i}\) to be identified, we retrieve the most relevant ERC example from \(\mathcal{D}_{domain}\) as the demonstration. To perform the retrieval, we use a bidirectional encoder SBERT [42] to find the most semantically similar ERC example \(d_{rvl}\). SBERT generates independent CLS embeddings for the target utterance \(u_{i}\) and each element \(d_{j}\) in \(\mathcal{D}_{domain}\). After sorting all target-demonstration pairs by cosine similarity, we select the pair with the highest score as the most relevant element
\(d_{rel}\). An abstract mathematical description of this process is as follows:
\[d_{rvl_{i}}=\operatorname*{argmax}_{d_{j}\in\mathcal{D}_{domain}}\operatorname*{SBERT }(u_{i},d_{j}) \tag{1}\]
The textual input \(u_{i,D}\) for the demonstration retrieval part is shown in Figure 2. In summary, after constructing the Retrieval template, the simplified input \(x_{i}\) for the main task is as follows:
\[x_{i}=[u_{i,I};u_{i,H};u_{i,L};u_{i,D}] \tag{2}\]
where [;] means the textual concatenation, \(u_{i,I}\), \(u_{i,H}\), \(u_{i,L}\), and \(u_{i,D}\) indicate Instructions, Historical content, Label statement, demonstration retrieval for a given utterance \(u_{i}\).
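A minimal sketch of the retrieval step of Eq. (1) and the concatenation of Eq. (2), assuming the sentence-transformers library with a generic encoder checkpoint ("all-MiniLM-L6-v2" is an assumption; the paper only specifies SBERT) and a domain base stored as plain utterance strings:

```python
from sentence_transformers import SentenceTransformer, util

encoder = SentenceTransformer("all-MiniLM-L6-v2")

def retrieve_demonstration(target_utterance, domain_base):
    """Return the element of D_domain most similar to the target utterance (Eq. (1))."""
    target_emb = encoder.encode(target_utterance, convert_to_tensor=True)
    base_embs = encoder.encode(domain_base, convert_to_tensor=True)
    scores = util.cos_sim(target_emb, base_embs)[0]   # cosine similarities
    return domain_base[int(scores.argmax())]

def build_input(instruction, history, label_statement, demonstration):
    """Concatenate the four parts of the retrieval template (Eq. (2))."""
    return "\n".join([instruction, history, label_statement, demonstration])
```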
### _Emotional alignment tasks_
To better capture the dialogue role relationships and future emotional tendencies in conversations, we have incorporated two auxiliary tasks, namely speaker identification and emotion impact prediction, which constitute the fine-grained subtasks of the InstructERC framework. The model is jointly trained with these auxiliary tasks to improve its overall performance, as illustrated in Figure 3.
**Speaker Identification task.** Emotions are expressed differently by different speakers. Previous models have used techniques such as speaker-based masked attention modules or multiple GRUs to capture the emotional expression features of different characters. This modeling of emotional expression can also be transformed into a generative task using our InstructERC. While MTL [22] once introduced a similar task, they acknowledged their model's limitation in not recognizing the speakers of utterances, which can be crucial for accurately matching the speaking characteristics of different individuals in real-world applications. To enable the LLM to capture the speaking styles of different individuals, the model is trained to actually identify the relevant speaker for a given utterance, without considering the historical context. For a given dataset, a predefined set of speaker labels is provided. Consistent with the main task, the instruction text input \(x_{i}^{p}\) for this task is constructed as follows:
_"Now you are an expert of sentiment and emotional analysis. Please select the Speaker label of the utterance \(<\)Speaker:\(u_{i}\)\(>\) from \(<\)\(p_{1}\),...,\(p_{M}\)\(>\)"_
The loss function for the Speaker Identification is as follows:
\[\mathcal{L}_{p}=\sum_{i}^{N}-\log P(\mu_{i}|x_{i}^{p},\theta) \tag{3}\]
Here, \(\mu_{i}\) represents the token of the corresponding speaker label for the given speaker identification task input sample \(x_{i}^{p}\). Unless otherwise specified, \(N\) stands for the total number of utterances in the dataset, while \(\theta\) represents the parameters of the LLM.
**Emotion Impact Prediction task.** In daily conversations, the intricate relationships between individuals can have a significant impact on the emotional states of the subsequent dialogue. Prior research has attempted to address this issue by constructing a dialogue relationship graph and utilizing a complex graph neural network to model the emotional impacts of these relationships. However, these methods are often associated with a highly intricate data preprocessing pipeline and are susceptible to overfitting on certain datasets. To address these issues, we propose a generative framework for the emotion impact prediction task, which implicitly captures the interplay between dialogues and emotional impacts.
To be specific, we maintain the instruction part \(u_{i,I}\) of the input \(x_{i}\) of the main task and modify the historical content \(u_{i,H}^{e}\) from \(u_{i,H}\). For the Emotion Impact Prediction task, \(u_{i,H}^{e}\) will not include the target statement \(u_{i}\). The corresponding label statement \(u_{i,L}^{e}\) is modified as follows:
_"Based on the above historical utterances, the next utterance is spoken by \(<\)\(P_{K(u_{i})}\)\(>\), please predict the emotion states of \(<\)\(P_{K(u_{i})}\)\(>\)from \(<\)\(e_{1}\), \(e_{2}\),..., \(e_{o}\)\(>\):"_
Hence, the overall input for emotion impact prediction is:
\[x_{i}^{e}=[u_{i,I};u_{i,H}^{e},u_{i,L}^{e}] \tag{4}\]
The loss calculation for the emotion impact prediction task is as follows:
\[\mathcal{L}_{e}=\sum_{i}^{N}-\log P(\epsilon_{i}|x_{i}^{e},\theta) \tag{5}\]
Here, \(\epsilon_{i}\) represents the emotional label token of the text label \(e_{i}\) corresponding to the formatted input utterance \(x_{i}\).
Fig. 2: The Schematic of Retrieval Template Module.
### _Overview of InstructERC_
To sum up the instruction-based generative framework for ERC: given an input utterance \(x_{i}\), obtained after concatenating the retrieval template \(d_{rel}\), and an LLM, the model returns the logits \(g_{i}\) and the generated text \(y_{i}\) for the entire sentence, including both input and output tokens. This is represented by the following equation:
\[y_{i},\mathbf{g_{i}}=\mathrm{LLM}(x_{i},\theta) \tag{6}\]
Here, \(\theta\) is the same as mentioned above. The LLM predicts the conditional probability \(p(\gamma_{i}|x_{i},\theta)\) of generating each token \(\gamma_{i}\) of the generated text \(y_{i}\) until the end symbol \(<\)eos\(>\) is output. As for the logits \(\mathbf{g_{i}}\in\mathbf{R}^{L\times V}\), \(L\) and \(V\) denote the length of the entire sentence and the size of the vocabulary used by the LLM, respectively.
In accordance with the original training method of LLMs, we adopt the next token prediction loss to measure the model's output error. Therefore, the loss calculation of the main task, denoted as \(\mathcal{L}_{main}\), is defined as follows:
\[\mathcal{L}_{main}=\sum_{i}^{N}-\log P(\epsilon_{i}|x_{i},\theta) \tag{7}\]
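The losses of Eqs. (3), (5) and (7) all share this next-token form. A hedged sketch of how such a label-only loss can be computed with a Hugging Face causal LM (prompt tokens are masked with -100 so that only the label tokens contribute; batching and padding are omitted):

```python
import torch

def label_only_loss(model, tokenizer, prompt, label_text):
    prompt_ids = tokenizer(prompt, return_tensors="pt").input_ids
    label_ids = tokenizer(label_text, add_special_tokens=False,
                          return_tensors="pt").input_ids
    input_ids = torch.cat([prompt_ids, label_ids], dim=1)
    labels = input_ids.clone()
    labels[:, : prompt_ids.shape[1]] = -100      # ignore the prompt positions
    out = model(input_ids=input_ids, labels=labels)
    return out.loss                              # -log P(label tokens | prompt)
```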
**Training and Inference.** During training and inference, our retrieval process, emotional alignment tasks and main tasks in InstructERC can be divided into two stages:
In the first stage of joint training, the characteristics of the speaker intuitively form the basis of emotional expression. Therefore, we use the speaker identification task to pre-train the LLM on speaker characteristics, which aims to warm up the parameters for the subsequent ERC tasks.
In the second stage, we fine-tune the LLM using both the ERC main task and the emotion influence prediction task to improve overall performance. The training loss at this stage is \(\mathcal{L}_{main}+\alpha*\mathcal{L}_{e}\), where \(\alpha\) is a hyperparameter used to adjust the weight of the emotion influence prediction task loss in the overall joint training loss of the second stage.
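Schematically, the two-stage procedure can be sketched as follows (reusing the `label_only_loss` helper above; the batching, LoRA setup and optimizer schedule are omitted, and pairing stage-two batches with `zip` is only one possible arrangement):

```python
ALPHA = 0.1   # weight of the emotion impact prediction loss, as set in the paper

def train_two_stages(model, tokenizer, optimizer,
                     speaker_batches, erc_batches, emotion_pred_batches):
    # Stage 1: warm up on speaker identification (Eq. (3)).
    for prompt, speaker in speaker_batches:
        loss = label_only_loss(model, tokenizer, prompt, speaker)
        loss.backward(); optimizer.step(); optimizer.zero_grad()
    # Stage 2: joint ERC main task (Eq. (7)) and emotion impact prediction (Eq. (5)).
    for (erc_prompt, emotion), (pred_prompt, future_emotion) in zip(erc_batches,
                                                                    emotion_pred_batches):
        loss = (label_only_loss(model, tokenizer, erc_prompt, emotion)
                + ALPHA * label_only_loss(model, tokenizer, pred_prompt, future_emotion))
        loss.backward(); optimizer.step(); optimizer.zero_grad()
```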
The difference in demonstration retrieval between the training and inference stages is shown in Figure 2. During training, we limit the retrieved examples to those with the same emotion label as the currently recognized utterance, namely same-label pairing, in order to provide more diverse emotional understanding while avoiding excessive noise. During inference, there are no restrictions on the retrieved demonstrations since the labels are unknown, namely all-label pairing. The retrieval results, simply referred to as \(d_{rel}\), are specialized as \(d_{rel}^{t}\) and \(d_{rel}^{i}\) in the training and inference stages, respectively.
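At inference time, the emotion label is read off from the generated text. A minimal sketch with greedy decoding (the `max_new_tokens` budget and the fallback label are assumptions, not details from the paper):

```python
def predict_emotion(model, tokenizer, prompt, label_set):
    ids = tokenizer(prompt, return_tensors="pt").input_ids
    out = model.generate(ids, max_new_tokens=8, do_sample=False)   # greedy search
    text = tokenizer.decode(out[0, ids.shape[1]:], skip_special_tokens=True)
    for label in label_set:
        if label.lower() in text.lower():
            return label
    return "neutral"   # assumed fallback when no label string is matched
```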
## IV Experiments and Results
### _Dataset_
We evaluate the efficacy of InstructERC on three standard benchmark datasets: IEMOCAP, MELD, and EmoryNLP.
**IEMOCAP**[43] is a dataset recorded as dyadic conversational video clips, with eight speakers participating in the training set and two speakers in the test set. The emotional tags in IEMOCAP are _happy, sad, neutral, angry, excited,_ and _frustrated._
**MELD**[44] is a multimodal dataset that has been expanded from the EmotionLines dataset. MELD is obtained from the popular TV show _Friends_ and comprises over 1400 dialogues and 13000 utterances, each of which is labeled with emotion and sentiment classes. The emotion classes are _happy/joy, anger, fear, disgust, sadness, surprise,_ and _neutral_, while the sentiment classes consist of _positive, negative, or neutral_.
Fig. 3: The overview of InstructERC framework
**EmoryNLP**[45] is a dataset also collected from the TV series _Friends_. The dataset comprises utterances that are categorized into seven distinct emotional classes, namely _neutral, joyful, peaceful, powerful, scared, mad_, and _sad_, while the sentiment classes consist of _positive, negative, or neutral_.
**Evaluation metrics.** To maintain consistency with previous methods, we use accuracy (Acc) and weighted-F1 (W-avg F1) as the evaluation metrics. Due to the severe class imbalance in the EmoryNLP and MELD datasets, as illustrated in Figure 4, the weighted-F1 metric, as opposed to accuracy (Acc), is more reflective of the true performance of the model. For each method, we conduct tests with five random seeds and report the average results from the test sets. Specifically, for the ablation experiments, we conduct significance tests comparing the results of each ablated version with those of the complete model.
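A minimal sketch of the metric computation described above, assuming scikit-learn; `gold` and `pred` are lists of emotion label strings, one per test utterance:

```python
from sklearn.metrics import accuracy_score, f1_score

def evaluate(gold, pred):
    return {
        "accuracy": accuracy_score(gold, pred),
        "weighted_f1": f1_score(gold, pred, average="weighted"),
    }
```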
This study exclusively focuses on the emotional classes and the text modality in these datasets. Moreover, we ensure consistency with COSMIC regarding the train/val/test splits. The specifics of the datasets are outlined in Table I.
### _Baselines_
For discriminative ERC models, we select **SOTA** baselines from each category of methods.
**Transformer-based**:
* **KET**[36] introduces Knowledge-Enriched Transformers that incorporate commonsense knowledge (CSK) using a hierarchical self-attention mechanism and a context-aware graph attention process.
* **TODKAT**[37] is a language model enhanced with a specialized layer for topic detection. This layer, combined with commonsense statements from a knowledge base tailored to the dialogue context, augments the model's conversational capabilities by providing deeper contextual understanding.
* **MTL**[22] leverages Speaker Identification (SI) as an auxiliary task (but does not actually identify each speaker) to improve the representation of utterances within conversations.
* **CoG-BART**[46] employs the pretrained encoder-decoder model BART as its foundational architecture. An auxiliary task of response generation is added to augment the model's proficiency in understanding context information.
* **M2FNet**[25] proposes Multi-modal Fusion Network, leveraging a novel feature extraction and multi-head attention-based fusion mechanism to capture emotion-relevant features from visual, audio, and text data.
* **SPCL+CL**[23] integrates a prompt-based BERT framework with supervised prototypical contrastive learning, complemented by curriculum learning concepts.
* **Hidialog**[24] uses special tokens and turn-level attention to create hierarchical turn embeddings, and then applies a heterogeneous graph module to enhance embeddings for more accurate dialogue interpretation.
**Recurrent-based**:
* **SACL-LSTM**[29] extracts structured representations using contrast-aware adversarial training and joint class-spread contrastive learning; an additional adversarial strategy is added to enhance context robustness.
* **HCAN**[30] integrates a hybrid recurrent and attention-based module for global emotion tracking and introduces Emotional Attribution Encoding for detailed emotion analysis in conversations.
* **EmotionIC**[5] features three key components: Identity Masked Multi-Head Attention (IMMHA) for global context, Dialogue-based Gated Recurrent Unit (DiaGRU) for local context, and Skip-chain Conditional Random Field (SkipCRF) for detailed emotion flow detection, combining attention and recurrence methods for a comprehensive approach.
* **CauAIN**[47] involves extracting causal indicators rooted in commonsense knowledge, which assists in the trace-back of causal utterances. Importantly, this process encompasses both the retrieval and traceback stages, taking into account the dynamics of interactions within and between speakers concurrently.
* **COIN**[48] introduces a conversational model that leverages state interactions and a hierarchical global interaction module for enhanced emotion detection, also utilizing adversarial examples to improve robustness and generalization in multimodal contexts.
* **ICON**[32] hierarchically processes emotional influences at both individual and inter-speaker levels, integrating these insights into global memories, facilitating the generation of detailed contextual summaries.
Fig. 4: Statistical Distribution of Classes Across Datasets
* **DialogueRNN**[31] employs a recurrent neural network based approach that meticulously tracks the states of individual participants throughout a conversation.
* **DialogueCRN**[49] is designed with multi-turn reasoning modules that extract and integrate emotional clues. These modules perform an iterative process of intuitive retrieval and conscious reasoning.
* **COSMIC**[35] is a conversational model that integrates commonsense knowledge to enhance its performance, which injects commonsense knowledge into GRUs to capture features related to the internal state, external state, and intent state.
**GNN-based**:
* **DialogueGCN**[26] creates a graph modeling speakers' interactions to mimic the structure of a dialogue. It employs a Graph Convolutional Network for information propagation.
* **RGAT**[27] incorporates position encodings into the RGAT framework to account for speaker and sequential dependencies in its analysis.
* **GraphCFC**[28] is a module that efficiently models contextual and interactive information for ERC task. It uses multiple extractors and PairCC strategy to address the heterogeneity gap in multimodal fusion.
* **DAG-ERC**[6] views the internal structure of dialogue as a directed acyclic graph, encoding utterances to intuitively model the flow of conversation context.
* **SKAIG**[38] uses a connected graph to enhance the targeted utterance with information from the past and future context, and utilizes CommonSense Knowledge (CSK) to enrich edges with knowledge representations.
**PLM-based**:
* **DialogXL**[7] adapts the recurrence mechanism of XLNet to accommodate longer historical contexts. It also integrates dialogue-aware self-attention to effectively handle the complexities of multi-party structures in conversations.
* **EmoBERTa**[8] leverages RoBERTa for ERC by prepending speaker names and inserting separation tokens between utterances, which models emotional influence based on both intra- and inter-speaker context in an end-to-end fashion.
* **UniMSE**[50] is a framework that unifies multimodal sentiment analysis and emotion recognition in conversation tasks. This framework performs modality fusion at both the syntactic and semantic levels.
* **KI-NET**[34] uses commonsense knowledge and sentiment lexicons. It features a self-matching module and an auxiliary task for Sentiment Polarity Intensity Prediction, aiding in ERC.
**LLM-based**:
* **ChatGLM-6B & ChatGLM2-6B**: ChatGLM-6B is an open-source conversational language model [51] for Chinese and English. It has 6.2 billion parameters and is optimized for Chinese QA. It has been trained on 1 trillion Chinese and English tokens and further improved through various techniques. ChatGLM2-6B is the second generation of the model, pre-trained on 1.4 trillion Chinese and English tokens with human preference alignment training. It extends the context window to 32K and speeds up inference with Multi-Query Attention.
* **Llama-7B & Llama2-7B**: Llama-7B is the 7B-parameter version of a collection of foundation language models [14] ranging from 7B to 65B parameters, which are trained on trillions of tokens. Llama2-7B pretrained models are trained on 2 trillion tokens and have double the context length of Llama 1. Its fine-tuned models have been trained on over 1 million human annotations.
### _Implementation Details_
We use ChatGLM and Llama as our backbone models. Considering the efficiency and effectiveness of Parameter-Efficient Fine-Tuning (PEFT), we adopt LoRA [21] and insert low-rank adapters after the self-attention layers. We set the dimension of the adapters to 16 and the learning rate to 2e-4. The learning rate is set to 2e-5 for all-parameter fine-tuning. The historical window is set to 1, 5, 12, 20 for IEMOCAP, MELD and EmoryNLP respectively for all experiments. The retrieval parameter "TopK" is set to Top1 empirically. The hyperparameter \(\alpha\) is set to 0.1 during training. Greedy search is used during inference unless otherwise specified. Moreover, our experiments are conducted by taking the average of three runs with no hyperparameter search. We train with FP16 precision on 4 \(\times\) 80G Nvidia A100 GPUs.
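A hedged sketch of this setup with the Hugging Face PEFT library; the adapter rank and learning rates follow the values above, while the target module names, `lora_alpha` and dropout are assumptions that depend on the backbone:

```python
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-hf")
lora_config = LoraConfig(
    r=16,                                   # adapter dimension used in the paper
    lora_alpha=32,                          # assumed scaling factor
    target_modules=["q_proj", "v_proj"],    # adapters on self-attention (assumed names)
    lora_dropout=0.05,                      # assumed
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()          # only the low-rank adapters are trainable
# LoRA fine-tuning uses lr 2e-4; full-parameter fine-tuning uses lr 2e-5 (see above).
```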
### _Main Results_
Table II illustrates the results of comparing our InstructERC model with other models and backbones from different perspectives. Based on this, we make the following observations: (1) Our method achieves significant improvements over the SOTA of discriminative models on all benchmarks. Specifically, we outperform UniMSE, SACL-LSTM, and EmotionIC by 0.73%, 2.70%, and 1.36% on IEMOCAP, MELD, and EmoryNLP, respectively. Notably, we completely outperform multimodal models on two benchmarks using only single-text modality data, demonstrating how effectively our method exploits the textual data.
(2) To gain insight into LLMs under different supervision scenarios for the ERC task, we conduct experiments in the Zero-shot + InstructERC and LoRA + InstructERC settings. It can be observed that even with carefully designed primary task instructions, LLMs still struggle in zero-shot scenarios, which further confirms the existence of a significant reasoning gap in their application to the ERC sub-task. Furthermore, with LoRA + InstructERC, the performance of the four LLMs improves significantly, especially on the IEMOCAP dataset. This fully demonstrates the effectiveness and generalization ability of our InstructERC framework, which greatly enhances the emotion recognition capability of LLMs on long texts.
\begin{tabular}{c|c|c|c|c|c|c|c|c|c|c} \hline \hline \multirow{2}{*}{Datasets} & \multicolumn{3}{c|}{Conversations} & \multicolumn{3}{c|}{Utterances} & \multirow{2}{*}{classes} & \multirow{2}{*}{type} & \multirow{2}{*}{avg\_utt} & \multirow{2}{*}{Evaluation} \\ & Train & Val & Test & Train & Val & Test & & & & \\ \hline IEMOCAP & 108 & 12 & 31 & 5163 & 647 & 1623 & 6 & two-person & 47 & Weighted-F1 / Acc \\ MELD & 1038 & 114 & 280 & 9989 & 1109 & 2610 & 7 & multi-party & 9 & Weighted-F1 \\ EmoryNLP & 713 & 99 & 85 & 9934 & 1344 & 1328 & 7 & multi-party & 11 & Weighted-F1 \\ \hline \hline \end{tabular}
TABLE II: The main results on three benchmarks.
\begin{tabular}{c|c|c|c|c|c|c|c|c|c} \hline \hline \multirow{2}{*}{Dataset Models} & \multicolumn{2}{c|}{Parameters} & \multicolumn{2}{c|}{IEMOCAP} & \multicolumn{2}{c|}{MELD} & \multicolumn{2}{c|}{EmoryNLP} & \multicolumn{2}{c|}{Average} & \multicolumn{1}{c|}{Extra} & \multicolumn{1}{c}{Model type} \\ & & W-avg F1 & ACC & W-avg F1 & W-avg F1 & W-avg F1 & \multicolumn{1}{c|}{Knowledge} & \\ \hline \hline \multicolumn{10}{c}{Small-scale Discriminant ERC-specific Model} \\ \hline KET\({}^{*}\) & 2.6M & 59.56 & 61.33 & 58.18 & 34.39 & 50.17 & ConceptNet & transformer-based \\ TODKAT\({}^{\dagger}\) & 330M & 61.33 & 62.35 & 65.47 & 38.69 & 55.16 & COMET & transformer-based \\ MTL\({}^{*}\) & 1.2M & — & 62.45 & 61.90 & 35.92 & — & ✗ & transformer-based \\ CoG-BART\({}^{*}\) & 415.1M & 64.87 & 64.95 & 63.82 & 37.33 & 55.34 & ✗ & transformer-based \\ M2FNet\({}^{*}\) & — & 69.86 & 69.69 & 66.71 & — & — & ✗ & transformer-based \\ SPCL\({}^{\dagger}\) & 356.7M & 68.42 & 69.21 & 66.13 & _40.25_ & 58.26 & ✗ & transformer-based \\ Hidalog\({}^{*}\) & — & — & — & 66.96 & — & — & ✗ & transformer-based \\ SACL-LSTM\({}^{*}\) & 2.6M & 69.22 & 69.08 & 66.45 & 39.65 & 58.44 & ✗ & recurrent-based \\ HCAN\({}^{\dagger}\) & 3.5M & 69.21 & 69.13 & 66.24 & 39.67 & 58.37 & ✗ & recurrent-based \\ ICON\({}^{*}\) & 0.5M & 63.50 & 64.00 & — & — & — & ✗ & recurrent-based \\ DialogueRNN\({}^{\dagger}\) & 9.9M & 64.65 & 64.85 & 65.30 & 37.54 & 55.83 & ✗ & recurrent-based \\ DialogueCRN\({}^{\dagger}\) & 3.3M & 67.53 & 67.39 & 65.77 & 38.79 & 57.36 & ✗ & recurrent-based \\ EmotionIC\({}^{*}\) & — & 69.50 & 69.44 & 66.40 & 40.01 & _58.63_ & ✗ & recurrent-based \\ CauLIN\({}^{*}\) & 6.1M & 65.01 & 65.08 & 64.89 & 37.87 & 55.92 & ATOMIC & recurrent-based \\ COIN\({}^{*}\) & 0.5M & 65.37 & 66.05 & — & — & — & ✗ & recurrent-based \\ COSMIC\({}^{\dagger}\) & 11.9M & 65.03 & 63.43 & 63.43 & 38.49 & 55.65 & COMET & recurrent-based \\ DialogueGCN\({}^{\dagger}\) & 2.1M & 62.11 & 62.49 & 62.68 & 36.43 & 53.14 & ✗ & GNN-based \\ RGAT\({}^{*}\) & 13M & 65.22 & — & 60.91 & 34.42 & 53.52 & ✗ & GNN-based \\ SKAIG\({}^{*}\) & — & 66.96 & — & 65.18 & 38.88 & 57.01 & COMET & GNN-based \\ DAG-ERC\({}^{\dagger}\) & 9.5M & 66.54 & 66.53 & 63.36 & 38.29 & 56.06 & ✗ & GNN-based \\ GraphCFC\({}^{*}\) & — & 68.91 & 69.13 & 58.86 & — & — & ✗ & GNN-based \\ \hline \hline \multicolumn{10}{c}{Small-scale Pretrained Language Model} \\ \hline KI-NET\({}^{*}\) & 500M & 67.00 & — & 63.24 & — & — & ConceptNet & Encoder-Decoder \\ DialogueXL\({}^{*}\) & 510M & 65.94 & — & 62.41 & 34.73 & 54.36 & ✗ & Encoder-Decoder \\ EmoBERTa\({}^{*}\) & 355M & 68.57 & — & 65.61 & — & — & ✗ & Encoder \\ UniMSE\({}^{*}\) & 220M & _70.66_ & — & 65.51 & — & — & ✗ & Encoder-Decoder \\ \hline \hline \multicolumn{10}{c}{Zero-shot + InstructERC} \\ \hline ChatGLM \({}^{\dagger}\) & 6B & _38.6_ & 39.72 & _38.8_ & 19.6 & _32.33_ & ✗ & LLM-based \\ ChatGLM2 \({}^{\dagger}\) & 6B & 21.1 & 23.7 & 21.8 & _24.4_ & 22.43 & ✗ & LLM-based \\ Llama \({}^{\dagger}\) & 7B & 0.753 & 1.32 & 9.12 & 5.31 & 5.06 & ✗ & LLM-based \\ Llama2 \({}^{\dagger}\) & 7B & 2.774 & 3.54 & 16.28 & 8.36 & 9.46 & ✗ & LLM-based \\ \hline \hline \multicolumn{10}{c}{LoRA + Backbone} \\ \hline ChatGLM \({}^{\dagger}\) & 6B & 18.94 & 17.98 & 40.54 & 25.71 & 28.07 & ✗ & LLM-based \\ ChatGLM2\({}^{\dagger}\) & 6B & 52.88 & 54.13 & 64.85 & 37.69 & 51.80 & ✗ & LLM-based \\ Llama\({}^{\dagger}\) & 7B & 55.81 & 57.27 & _66.15_ & 37.98 & 53.21 & ✗ & LLM-based \\ Llama2\({}^{\dagger}\) & 7B & _55.96_ & 55.53 & 65.84 & 
_38.21_ & _53.33_ & ✗ & LLM-based \\ \hline \hline \multicolumn{10}{c}{LoRA + InstructERC} \\ \hline ChatGLM\({}^{\dagger}\) & 6B & 36.04 & 38.36 & 46.41 & 30.86 & 37.77 & ✗ & LLM-based \\ ChatGLM2\({}^{\dagger}\) & 6B & 67.54 & 66.91 & 65.58 & 39.09 & 57.40 & ✗ & LLM-based \\ Llama\({}^{\dagger}\) & 7B & 64.17 & 64.72 & 67.62 & 39.34 & 57.04 & ✗ & LLM-based \\ Llama2\({}^{\dagger}\) & 7B & **71.39** & **71.68** & **69.15** & **41.37** & **60.64** & ✗ & LLM-based \\ \hline \hline \end{tabular} TABLE III: The best-performing results of other models are highlighted in gold font, while SOTA results across all models are emphasized in red font. Models annotated with an * indicate results sourced from the model’s paper, and a (\(\dagger\)) denotes results from reproductions conducted by the authors.
(3) InstructERC is a plug-and-play method that can be adapted to multiple generative frameworks, such as prefix decoders or causal decoders. Our unified alignment tasks and demonstration construction strategy are not tailored to any specific dataset design, highlighting the strong transferability and generalization capability of our approach.
### _Ablation study_
We conduct an ablation study to investigate the characteristics of the main components in InstructERC. Table III shows the ablation results, where "w/o" denotes the model performance without a specific module. We have the following observations: 1) The performance of InstructERC drops when removing any one component, which suggests that every part of the design is necessary. 2) Removing either emotional alignment task results in a great performance degradation. This is consistent with our conjecture, since speaker identification and emotion impact prediction provide relatively orthogonal semantic information from two perspectives; missing either part makes the semantic space more chaotic and worsens emotion recognition. 3) Taking away the domain retrieval module results in a steady decline on all three datasets, demonstrating the important role of domain information in dialogue modeling. 4) Removing both alignment tasks causes an obvious performance degradation compared with removing only one of them, which indicates that the joint pre-training objectives have a mutually reinforcing effect. 5) Replacing LoRA with full-parameter fine-tuning results in a significant drop in performance, which indicates that the parameter-efficient approach is effective in preventing overfitting of LLMs on the ERC task. For a detailed analysis, please refer to the "All Parameters vs Parameter Efficiency" section.
In the historical window exploration study, we examine how different sizes of historical windows affect emotion recognition. Due to token limitations, we set the upper limit for conversational turns to 20. This is an upgrade from earlier, smaller Pretrained Language Models (PLMs), which only support up to 5 turns. We find that a window of 12 turns is optimal for capturing the necessary historical context. In general, expanding the count of historical turns aids in enhancing the accuracy of emotion detection, a trend that is readily observable in the IEMOCAP dataset, which features long conversations. However, there is a point where adding more historical turns does not lead to better results and might even harm performance, especially for datasets like MELD and EmoryNLP, which have an average length of 6 to 7 turns. These insights are beyond the reach of smaller PLMs that top out at 5 turns.
### _All Parameters vs Parameter Efficiency_
In order to investigate the effect of different parameter fine-tuning methods on the ERC task, we conducted comparative experiments in Table V. We have the following observations:
(1) Full-parameter fine-tuning performs worse than LoRA fine-tuning on all backbones in terms of average performance (especially ChatGLM, where LoRA brings a 9.32% improvement). It is worth noting that the best performance of the full-parameter method is often achieved in the first 1-3 epochs in the experiments. These findings demonstrate that parameter-efficient methods are more suitable for LLMs in ERC tasks. (2) From the perspective of model structure, the average performance of full-parameter ChatGLM even decreases compared to the zero-shot results in Table II (from 32.33% to 28.38%), while replacing it with LoRA brings a significant improvement (from 32.33% to 37.77%). Other decoder-only backbones do not show such drastic performance fluctuations, which further indicates that the prefix-decoder paradigm is unstable in ERC tasks compared to the causal decoder, and parameter-efficient frameworks can effectively alleviate this problem.
(3) From the perspective of datasets, compared to full parameter fine-tuning, the performance gain of the LoRA method in MELD and EmoryNLP is significantly greater than that in
IEMOCAP. We believe that this is related to the characteristics of these datasets: IEMOCAP has long dialogue texts and multiple conversation rounds, and these strong supervision signals lead to good performance in both settings. However, MELD and EmoryNLP have fewer dialogue rounds, diverse speakers, and imbalanced categories. Parameter-efficient methods can effectively prevent LLMs from overfitting to certain semantic patterns of the dialogue format and speakers' habits, thereby enhancing the generalization ability of emotion recognition in conversation.
### _Scaling Analysis in Low-source Scenario_
In this section, we gain an insight into the scaling relationship of data and performance for different parameter fine-tuning settings (LoRA & All Parameter), as shown in Figure 5.
**Parameter-efficient Scaling Analysis**: On the IEMOCAP dataset, our scaling curve initially increases (from 1/16 to 1/4) and then stabilizes. This may be because the dataset has long dialogue texts and multiple dialogue rounds, leading to increased diversity with the addition of early data. However, as the supervision signal strengthens, the performance gain gradually weakens. For datasets with fewer dialogue rounds and imbalanced categories, such as MELD and EmoryNLP, our method only yields a small gain in extremely low-resource scenarios (from 1/16 to 1/4) and achieves a relatively stable performance improvement with the increase of data (from 1/2 to 1). This finding supports the idea that when each unit of data only provides a weak supervision signal, the data size needs to exceed a certain threshold (1/4 - 1/2) to achieve significant improvement.
**Full-Parameter Scaling Analysis**: The scaling curves of the full-parameter setting on the IEMOCAP and EmoryNLP datasets show significant fluctuations and performance degradation in two intervals (from 1/16 to 1/8 and from 1/4 to 1/2) compared to LoRA. Fine-tuning large models with all parameters may cause redundant parameters to overfit the patterns in the current dialogues, which hinders the model's ability to generalize to new supervised signals as the data volume increases. The MELD dataset also exhibits performance degradation as the data scale increases (from 1/4 to 1). These findings demonstrate the stability and robustness of parameter-efficient fine-tuning in the ERC task, providing empirical guidance for deploying large models in industrial ERC applications with varying data characteristics.
## V Unified Dataset Experiments
To further substantiate the efficacy and robustness of our framework, we conduct a compelling experiment involving a unified dataset. In this experiment, all emotional labels across the datasets are standardized, and all speaker labels are also consolidated. Subsequently, we conduct data scaling experiments on the processed unified dataset. The evaluation of the experimental results, using the weighted F1 score, follows the evaluation method delineated in the Experiments section.
### _Unified dataset labeling_
We continue to use the previous datasets IEMOCAP, MELD, and EmoryNLP. According to The Feeling Wheel [52] proposed in 1982, shown in the corresponding subfigure of Figure 6, we align all emotional labels from the three datasets with this standard; the details are shown in Table VII. After completion of the label mapping, there are a total of 9 emotional labels, which are _joyful, sad, neutral, mad, excited, powerful, fear, peaceful_ and _disgust_. Furthermore, due to the uniqueness of character labels in each dataset, we have renumbered them using a one-hot encoding approach, as demonstrated in the "One-hot Speaker Label Mapping" subfigure of Figure 6.
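A structural sketch of this preprocessing (the actual emotion correspondence is the one of Table VII and is not reproduced here; `EMOTION_MAP` is left to be filled in, and the record field names are assumptions):

```python
UNIFIED_LABELS = ["joyful", "sad", "neutral", "mad", "excited",
                  "powerful", "fear", "peaceful", "disgust"]
EMOTION_MAP = {}   # {(dataset, original_label): unified_label}, to be filled from Table VII

def unify(records):
    """Map emotions to the unified label set and re-index speakers one-hot style."""
    speakers = sorted({(r["dataset"], r["speaker"]) for r in records})
    speaker_id = {s: i for i, s in enumerate(speakers)}
    for r in records:
        r["emotion"] = EMOTION_MAP.get((r["dataset"], r["emotion"]), r["emotion"])
        r["speaker"] = speaker_id[(r["dataset"], r["speaker"])]
    return records
```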
### _Unified dataset Experiment_
We still utilize the LoRA method in PEFT to train InstructERC on the unified dataset, and the training results are evaluated on the three datasets respectively. As mentioned above, these datasets have significant variations in sample size and class imbalance within each dataset. To explore the impact of different sampling methods on the final performance, two data scaling approaches were experimented with: total mix and ratio mix.
In the total mix approach, all datasets are combined for uniform sampling. Conversely, in the ratio mix approach, datasets are sampled separately and then combined. Both approaches maintain the same quantity of training data, but due to the larger absolute number of training samples in MELD and EmoryNLP, the total mix approach results in a higher proportion of samples from these two datasets when varying data scaling is applied. On this basis, we further explore the impact of the data sampling ratio on the model's performance. The details of the results are shown in Table VI, and a more intuitive presentation is given in Figure 7.
Fig. 5: The scaling relationship of data and performance for different parameter fine-tuning settings (LoRA & All Parameters)
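A literal reading of the two mixing strategies described above, as a minimal sketch (assuming `datasets` maps dataset names to lists of training samples and `frac` is the data scale, e.g. 1, 1/2, ..., 1/64):

```python
import random

def total_mix(datasets, frac):
    """Combine all datasets and sample uniformly from the union."""
    pool = [s for samples in datasets.values() for s in samples]
    return random.sample(pool, int(frac * len(pool)))

def ratio_mix(datasets, frac):
    """Sample each dataset separately with the same fraction, then combine."""
    mixed = []
    for samples in datasets.values():
        mixed += random.sample(samples, int(frac * len(samples)))
    return mixed
```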
#### Iv-A1 The robustness of InstructERC
As depicted in the first row of Table VI, upon fine-tuning InstructERC using the unified dataset, there is a slight decrease in the performance of the three benchmarks compared to the SOTA under single dataset training. However, a relatively high Weighted F1 score (W-F1) can still be maintained simultaneously on these three benchmarks, particularly the performance of MELD, which continues to surpass the SOTA level of all small models. Consequently, it is evident that our approach to dataset processing is simple, yet efficient. Furthermore, InstructERC, grounded on the Llama2-7B large language model base, exhibits exceptional robustness, capable of concurrently acquiring emotional paradigms from a multitude of distinct distributions, a feat previously unattainable by small models.
#### Iv-A2 The data scaling exploration
Large language models possess formidable learning capabilities, so validating the data scaling relationship is a crucial part of our framework. We conduct data scaling experiments on the unified dataset from 1 to 1/64. As the scale of training data decreases exponentially from 1 to 1/32 within this range, the performance of the model on the three benchmarks exhibits a slightly fluctuating, roughly linear decline. This is consistent with the findings of some existing explorations of large models [53].
#### Iv-A3 The Low resource mutual gain
We were also surprised to discover that during the final stage of training data reduction, from 1/32 to 1/64, the Total Mix and Ratio Mix strategies continue to exhibit a linear performance decline, whereas the performance of the model trained in the single-dataset setting drops drastically, as depicted in Figure 7. We posit that data from different scenarios endows the model with the capability to comprehend emotions from diverse perspectives. This, in turn, allows the model to achieve robust enhancements under various data conditions. Such mutual gain is particularly pronounced in low-resource scenarios (1/64).
#### Iv-A4 The exploring of different mixing strategies
We have further investigated the impact of different mixing strategies on data scaling. The two strategies use the same amount of training data. Across the sampling strategies, IEMOCAP, with the smallest sample proportion, performs worse under total mix sampling than under ratio mix sampling, while MELD, having the largest sample proportion, shows the reverse trend. This can be explained by two key factors:
Data Representativeness: In total mix sampling, where each dataset's samples are equally likely to be selected, the unique traits of smaller datasets like IEMOCAP may be obscured by larger ones like MELD. In contrast, ratio mix sampling, which represents each dataset proportionally to its original sample size, may better highlight the characteristics and influence of smaller datasets.
Effect of Class Imbalance: In smaller datasets with internal class imbalances, total mix sampling could exacerbate these imbalances. For instance, if IEMOCAP has a relatively smaller number of samples in a certain category, total mix sampling might further intensify this imbalance during model training.
Fig. 6: Unified Label Mapping Across three Open-source Benchmarks. The Feeling Wheel is proposed by [52]
Ratio mix sampling, however, better preserves the original class proportions of the datasets, potentially mitigating class imbalance impacts to a degree.
These insights indicate the importance of considering dataset size and class distribution when selecting sampling strategies to ensure optimal model performance and generalizability.
## VI Conclusion
In conclusion, our study introduces InstructERC, a transformative approach that redefines the ERC task within a generative framework utilizing Large Language Models (LLMs). InstructERC incorporates a unique retrieval template with an emotional domain retrieval module, enabling adaptation to varying conversation lengths and offering highly relevant emotion recognition demonstrations. The historical window exploration experiment identifies the optimal number of conversational turns for context modeling, which was unreachable for previous works due to token limitations. Additionally, it integrates two novel tasks: speaker identification and emotion prediction, which effectively model complex conversational dynamics and speaker relationships. This approach allows for more nuanced integration of ERC information. Significantly, our LLM-based plug-in framework surpasses all prior models, setting new benchmarks on three ERC datasets. We also pioneer the unification of label mapping and modeling across these datasets, demonstrating the LLM's robust generalization capabilities. Furthermore, the low-resource mutual gain phenomenon is discovered in the data scaling exploration experiments. Our extensive analysis provides practical insights for implementing InstructERC in real-world scenarios, highlighting its efficiency and effectiveness in emotion recognition within dialogues.
## VII Acknowledgments
This research received funding from the National Natural Science Foundation of China under grants 62236005, 61936004, and U1913602. Additionally, sincere gratitude is extended to Meituan Inc. for providing support in the form of an Nvidia A100 computing cluster for experimental purposes.
Fig. 7: The data scaling law demonstrated on three benchmarks using different data mixing strategies |
2309.09186 | Spline-Based Minimum-Curvature Trajectory Optimization for Autonomous
Racing | We propose a novel B-spline trajectory optimization method for autonomous
racing. We consider the unavailability of sophisticated race car and race track
dynamics in early-stage autonomous motorsports development and derive methods
that work with limited dynamics data and additional conservative constraints.
We formulate a minimum-curvature optimization problem with only the spline
control points as optimization variables. We then compare the current
state-of-the-art method with our optimization result, which achieves a similar
level of optimality with a 90% reduction on the decision variable dimension,
and in addition offers mathematical smoothness guarantee and flexible
manipulation options. We concurrently reduce the problem computation time from
seconds to milliseconds for a long race track, enabling future online
adaptation of the previously offline technique. | Haoru Xue, Tianwei Yue, John M. Dolan | 2023-09-17T07:20:12Z | http://arxiv.org/abs/2309.09186v1 | # Spline-Based Minimum-Curvature Trajectory Optimization for Autonomous Racing
###### Abstract
We propose a novel B-spline trajectory optimization method for autonomous racing. We consider the unavailability of sophisticated race car and race track dynamics in early-stage autonomous motorsports development and derive methods that work with limited dynamics data and additional conservative constraints. We formulate a minimum-curvature optimization problem with only the spline control points as optimization variables. We then compare the current state-of-the-art method with our optimization result, which achieves a similar level of optimality with a 90% reduction on the decision variable dimension, and in addition offers mathematical smoothness guarantee and flexible manipulation options. We concurrently reduce the problem computation time from seconds to milliseconds for a long race track, enabling future online adaptation of the previously offline technique.
## I Introduction
### _Offline Trajectory Optimization in Autonomous Racing_
Offline trajectory optimization (OTO) is widely used in modern autonomous racing. By leveraging sophisticated prior knowledge of the race track (geometries, friction conditions, etc.) and knowledge of the race car (tire model, power train, etc.), an optimization program can be run in a reasonable time frame to achieve best lap time. The result can then significantly reduce the online computation load in sample-based planning [14] and model predictive control [12].
Recent advancements in high-speed autonomous racing present new challenges and opportunities for evaluating OTO algorithms. Lack of prior race car and race track data is a significant challenge for university-level research and racing development. For example, although autonomous race cars in the Indy Autonomous Challenge (IAC) have reached a top speed of over 320 km/h, the teams still have limited access to critical data such as tire model parameters and load transfer characteristics, especially in the earlier stages of development, when estimation of these parameters is not viable with the limited data gathered. Therefore, a simple trajectory optimization algorithm should account for this data-scarce use case and support early development efforts.
OTO is often used to generate a reference safety set in the development phase of an autonomous race car, which is often desired when the handling limit of the vehicle is yet to be determined. Instead of directly applying an experimental tire model as the optimization limit, it is often desired to work with more conservative handling constraints. The traction circle (ellipse, or diamond) is an intuitive and effective alternative to the raw tire parameters. By controlling the maximum acceptable lateral and longitudinal acceleration in each direction as hyper-parameters, an OTO approach can generate different trajectories subjected to additional dynamics constraints. In addition, the track geometry constraints can be altered to impose extra track limits, and generate trajectories that pass through the non-optimal part of the race track, which provides a planning and control reference when the vehicle is forced into these regions.
### _Related Work_
Prior to autonomous racing, the generation of an optimal velocity profile given a fixed trajectory was first studied in the motorsports domain. Quasi-steady state (QSS) approaches have been developed since the 1980s [2][3][4][13]. The full trajectory is broken down into small segments, through which the race car is assumed to have steady-state behavior. The algorithm starts with segments corresponding to peak curvature on the trajectory, which are considered the "bottlenecks". The algorithm then proceeds to generate a full velocity profile entering and exiting these bottlenecks, considering only the neighboring vehicle states subject to the dynamic limits, until the velocity profiles of these bottlenecks meet each other [6]. This method is known for its robustness and fast run time, but it fails to capture transient effects such as load transfer characteristics and damper dynamics [6]. We adopt a similar method in our work, whose algorithm will be formally proposed in a later section.
The optimization of the trajectory geometry in autonomous racing has been studied in Braghin et al. and Kapania et al. with the "minimum curvature" heuristic, which states that an optimal racing trajectory minimizes the sum of curvatures around the track to minimize lap time [1][11]. Heilmeier et al. extend the idea to a quadratic programming (QP) formulation with improvements to the curvature calculation. They also apply spline interpolation to the noisy raw data and the final output to obtain a smooth trajectory [9]. Their work was extensively used by the TUM team in recent high-speed autonomous racing events such as Roborace and IAC. However, to guarantee that the continuity of the trajectory during and after optimization is at least \(C^{2}\) (continuous position, velocity, and acceleration), it is insufficient to optimize with respect to discrete samples along the trajectory (3.0 m interval in [9]), although spline interpolation could be applied in postprocessing.
### _Goal and Scope of This Work_
In our work, we propose a new optimization formulation for minimum-curvature OTO based on B-splines that ensures the continuity of the trajectory throughout the optimization iterations. We also consider the unavailability of sophisticated race car and race track dynamics in early-stage autonomous racing development, and derive methods that work with limited dynamics data and additional conservative constraints.
We introduce the math related to the B-spline and our cost function in Section II. We then formulate the optimization problem in Section III, and discuss the QSS algorithm to calculate the velocity profile, which is used to evaluate the generated trajectory. Finally, in Section IV, we compare the optimization results with previous works.
## II Background
A single B-spline of order \(n\) is a parametrized curve, denoted as \(B_{i,n}(x)\). It can be uniquely constructed from a series of nondecreasing knot points \(t_{0},t_{1},\ldots,t_{N}\), subject to
\[\sum_{i=1}^{N-n}B_{i,n}(x)=1 \tag{1}\]
A B-spline has non-zero values only in the range of knot vectors \(t_{i}<x\leq t_{i+n}\). Higher-order B-splines can be recursively defined.
\[\begin{split} B_{i,n+1}(x)&=w_{i,n}(x)B_{i,n}(x)+(1-w_{i+1,n}(x))B_{i+1,n}(x)\\ \text{where}\ w_{i,k}(x)&=\begin{cases}\frac{x-t_{i}}{t_{i+k}-t_{i}},&t_{i+k}\neq t_{i}\\ 0,&\text{otherwise}\end{cases}\end{split} \tag{2}\]
The resulting basis functions are \(C^{n-2}\) continuous and overlap throughout the knot sequence, which is visualized in fig. 1.
These basis functions allow us to define a spline on \(t_{0},t_{N}\) that is a linear combination of the basis functions:
\[T_{n}(x)=\sum_{i=1}^{N-n}\alpha_{i}B_{i,n}(x) \tag{3}\]
The weights \(\alpha_{1},\ldots,\alpha_{N-n}\) are also known as control points, which can be visualized in fig. 2. We use \(S=N-n\) to denote the number of control points. The shape of the curve can be manipulated by moving the control points while keeping the basis functions constant. The movement of the control points is the main subject of interest in this work, and we aim to derive an optimization formulation that optimizes their placement to form a curvature-optimal trajectory for autonomous racing.
To extend the 1D B-spline to handle a trajectory in the 2D plane, we take two sets of control points to parameterize the \(x\) and \(y\) coordinates separately on the same basis functions. That is, given a trajectory \(T(t):\mathbb{R}\rightarrow\mathbb{R}^{2}\) and a sequence of control points \(\mathbf{z}=[\alpha_{1},\ldots,\alpha_{S},\beta_{1},\ldots,\beta_{S}]^{T}\)
\[T(t,\mathbf{z})=(\sum_{i=1}^{S}\alpha_{i}B_{i,n}(t),\sum_{i=1}^{S}\beta_{i}B_ {i,n}(t)) \tag{4}\]
where \(\alpha_{i},\beta_{i}\) respectively denote the \(x\) and \(y\) coordinates of the control point. Conventionally, we use \(t\in[0.0,1.0]\) to parameterize the trajectory and denote progress along the track. We can also denote the two resulting 1D B-splines as \(T_{x}(t,\mathbf{z}),T_{y}(t,\mathbf{z})\).
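As a concrete illustration of (2) and (4), a minimal evaluation routine is sketched below. It uses the zero-based degree convention rather than the order-based indexing of the text, and the function names are ours; this is not the implementation used in the paper.

```python
import numpy as np

def bspline_basis(i, p, knots, x):
    """Cox-de Boor recursion for the degree-p basis function, cf. eq. (2)."""
    if p == 0:
        return 1.0 if knots[i] <= x < knots[i + 1] else 0.0
    left = right = 0.0
    if knots[i + p] != knots[i]:
        left = (x - knots[i]) / (knots[i + p] - knots[i]) \
               * bspline_basis(i, p - 1, knots, x)
    if knots[i + p + 1] != knots[i + 1]:
        right = (knots[i + p + 1] - x) / (knots[i + p + 1] - knots[i + 1]) \
                * bspline_basis(i + 1, p - 1, knots, x)
    return left + right

def trajectory(t, knots, alphas, betas, degree=3):
    """Planar trajectory T(t, z) of eq. (4) built from two sets of control points."""
    assert len(alphas) == len(betas) == len(knots) - degree - 1
    basis = np.array([bspline_basis(i, degree, knots, t) for i in range(len(alphas))])
    return float(basis @ np.asarray(alphas)), float(basis @ np.asarray(betas))
```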
Since we will be optimizing with respect to the control points, it is useful to take the derivative of a spline with respect to the control points, which is simply the corresponding basis function.
\[\frac{\partial T(t,\mathbf{z})}{\partial\alpha_{i}}=\frac{\partial T(t, \mathbf{z})}{\partial\beta_{i}}=B_{i,n}(t) \tag{5}\]
The curvature of a B-spline trajectory, as of any parametric curve equation, can be calculated as [9]
\[k(t)=\frac{T_{x}^{\prime}(t)T_{y}^{\prime\prime}(t)-T_{y}^{\prime}(t)T_{x}^{ \prime\prime}(t)}{(T_{x}^{\prime}(t)^{2}+T_{y}^{\prime}(t)^{2})^{\frac{3}{2}}} \tag{6}\]
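If the two coordinate splines are available as SciPy `BSpline` objects (an assumption on our part, not the authors' toolchain), the curvature of eq. (6) can be sampled directly from their derivatives:

```python
from scipy.interpolate import BSpline

def curvature(tx: BSpline, ty: BSpline, t):
    """Curvature k(t) of the planar spline (T_x(t), T_y(t)), cf. eq. (6)."""
    dx, dy = tx.derivative(1)(t), ty.derivative(1)(t)
    ddx, ddy = tx.derivative(2)(t), ty.derivative(2)(t)
    return (dx * ddy - dy * ddx) / (dx * dx + dy * dy) ** 1.5
```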
B-Splines have certain advantageous properties for trajectory optimization problems with respect to the control point movements. First, each control point can manipulate a piece of segment holistically. Intuitively, given the same knots \(t_{0}\ldots t_{n}\), a control point has a longer influence range along the curve as the degree of the B-spline increases. Visually in Fig. 1, the basis function corresponding to that control point spans more intervals of knots. Second, since the basis function is evaluated to zero outside of its intervals according to (2), we can perform partial optimizations to a specific section of the trajectory without affecting the others. Lastly, the resulting trajectory from the control point movements is guaranteed to have the same order of continuity as the original trajectory since it is still a B-spline of the same order.
These features of B-splines make them suitable for autonomous racing applications. For a vehicle to have continuous velocity and acceleration profiles, the trajectory should be at least \(C^{2}\) continuous, subject to additional curvature constraints since vehicle kinematics is not omnidirectional. To optimize the vehicle's trajectory through a specific segment of the race track, we can control the scope of the optimization by controlling the number of control points to be included in the optimization problem, and obtain results that are perhaps
Fig. 1: Visualization of basis functions \(B_{i,n}(x)\) of order 2 to 5 [10]
more locally optimized for a particular turn, or more globally optimized through a combination of turns.
## III Method
### _Generating and Evaluating a Spline Trajectory_
As an example, in this work we will use the Monza Circuit, which is a 5.8 km (3.6 mile) long race track used in the Indy Autonomous Challenge. It has a combination of long straights, high-speed turns, and chicanes. We will also show experiments on another race track in a later section.
We obtain the race track geometries from satellite images and geographical surveys. We represent the track using a reference center line, with left and right offsets to denote the distances to the track boundaries at every waypoint. We then draw a cubic B-spline interpolation of the reference center line using the least square periodic spline interpolation algorithm discussed in [7], which forms a closed-loop spline on \(t\in[0.0,1.0]\).
We then discretize the trajectory by taking waypoints at a constant 3-meter interval. This is done by performing a numerical integration to calculate the length of the trajectory:
\[L(t_{\text{min}},t_{\text{max}})=\int_{t_{\text{min}}}^{t_{\text{max}}}\sqrt{T_{x}^{\prime}(t)^{2}+T_{y}^{\prime}(t)^{2}}\,dt \tag{7}\]
and solving a subsequent root-finding problem to find \(t_{i}\) such that the trajectory advances by 3 m.
\[t_{i}=\underset{t}{\text{argmin}}\quad\left|L(t_{i-1},t)-3\right| \tag{8}\]
We note that the optimization is done on the continuous spline, and these discretization points are simply used for sampling the curvature throughout the trajectory. The discrete trajectory is also useful in the evaluation process to be described below.
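A sketch of this discretization step is given below, reusing SciPy coordinate splines as in the curvature snippet above; `quad` performs the numerical integration of eq. (7) and Brent's method solves the root-finding problem of eq. (8). The function names are illustrative and not taken from the authors' code.

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

def arc_length(tx, ty, t0, t1):
    """Numerical arc length of eq. (7) between spline parameters t0 and t1."""
    dtx, dty = tx.derivative(1), ty.derivative(1)
    return quad(lambda t: float(np.hypot(dtx(t), dty(t))), t0, t1)[0]

def discretize(tx, ty, step=3.0):
    """Place waypoints every `step` metres along the spline parameterized on [0, 1]."""
    ts, t_prev = [0.0], 0.0
    while arc_length(tx, ty, t_prev, 1.0) > step:
        # Root of L(t_prev, t) - step = 0, cf. eq. (8).
        t_next = brentq(lambda t: arc_length(tx, ty, t_prev, t) - step, t_prev, 1.0)
        ts.append(t_next)
        t_prev = t_next
    return np.array(ts)
```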
### _Configuring the Vehicle Parameters_
As the main constraint on vehicle dynamics, we use a traction ellipse, which considers the longitudinal and lateral accelerations of the vehicle when accelerating, braking and cornering, or a combination of them. To characterize the shape of the ellipse, four parameters are used, which correspond to the maximum longitudinal acceleration, longitudinal deceleration, and left and right lateral accelerations. For the purpose of autonomous racing development, these parameters are easily adjustable to impose additional safety constraints within tire limits. For race cars with asymmetrical setups, such as those racing on an oval racing circuit, the left and right lateral accelerations can be adjusted separately. In our example, shown in Fig. 3, we impose a maximum acceleration of 10 m\(\,\)s\({}^{-2}\), deceleration of -20 m\(\,\)s\({}^{-2}\), and symmetric maximum cornering load of \(\pm\)15 m\(\,\)s\({}^{-2}\).
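One simple way to encode this four-parameter constraint is sketched below; the default numbers are the Monza values quoted above, while the quarter-ellipse form, sign conventions, and function name are our own choices rather than a formula prescribed by the paper.

```python
def within_traction_ellipse(a_long, a_lat,
                            ax_max=10.0, ax_min=-20.0,
                            ay_left=15.0, ay_right=-15.0):
    """True if the (longitudinal, lateral) acceleration pair lies inside the
    piecewise traction ellipse defined by the four directional limits."""
    ax_lim = ax_max if a_long >= 0.0 else -ax_min      # accelerating vs. braking
    ay_lim = ay_left if a_lat >= 0.0 else -ay_right    # left vs. right cornering
    return (a_long / ax_lim) ** 2 + (a_lat / ay_lim) ** 2 <= 1.0
```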
### _Simulating a Spline Trajectory_
We perform a simulation for the best lap time on the spline trajectory to obtain the velocity profile, which will be used in the evaluation. The QSS algorithm starts by examining the curvature profile of the discretized trajectory and using the peak-curvature (bottleneck) points as constraints for this simulation. The relationship between the vehicle's tangential velocity \(v\) and curvature \(k\) satisfies
\[v=\sqrt{a_{lat}/k} \tag{9}\]
where \(a_{lat}\) is the lateral acceleration. Therefore, to maximize vehicle velocity through the bottleneck, the vehicle should have zero longitudinal acceleration to maximize lateral acceleration according to the traction ellipse.
After obtaining this initial condition, the algorithm proceeds to generate an entry and exit velocity profile around the bottleneck points, until the profiles of adjacent bottlenecks meet. The velocity profile is then further adjusted to ensure a smooth acceleration and velocity transition at the meeting points. Fig. 4 shows a baseline simulation done on the reference center line trajectory, with a heatmap indicating the velocity levels that the vehicle can achieve in various sections of the track.
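The forward-backward sweep can be sketched as follows for a closed track. This is a simplified stand-in that treats the longitudinal limits as constants instead of coupling them to the traction ellipse, so it only illustrates the structure of the QSS pass described above.

```python
import numpy as np

def qss_velocity_profile(curvature, ds=3.0, a_lat=15.0, a_acc=10.0, a_dec=20.0):
    """Cap the speed at the cornering limit sqrt(a_lat/|k|) of eq. (9), then sweep
    forward under the acceleration limit and backward under the braking limit.
    Two sweeps let the profile propagate across the start/finish seam."""
    k = np.abs(np.asarray(curvature, dtype=float)) + 1e-9   # avoid division by zero
    v = np.sqrt(a_lat / k)                                   # cornering-limited speed
    n = len(v)
    for _ in range(2):
        for i in range(n):                       # forward pass: acceleration limit
            j = (i + 1) % n
            v[j] = min(v[j], np.sqrt(v[i] ** 2 + 2.0 * a_acc * ds))
        for i in reversed(range(n)):             # backward pass: braking limit
            j = (i + 1) % n
            v[i] = min(v[i], np.sqrt(v[j] ** 2 + 2.0 * a_dec * ds))
    return v
```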
Fig. 3: Geometry of traction ellipse used as constraint (blue), and actual tire constraints (red, simulated)
Fig. 2: Turn 1 and 2 of Monza Circuit after interpolation
### _Spline-Based Minimum Curvature Problem_
Building on [1] and [9], we consider the minimum-curvature optimization problem with respect to all control point coordinates \(\mathbf{z}\).
\[\min_{\mathbf{z}}\quad\sum_{j=1}^{M}k_{j}^{2} \tag{10}\]
where \(k_{1}\ldots k_{M}\) are the curvatures of the discretization points within the span of the corresponding basis function of \(\mathbf{z}\). The difference from the previous work in the formulation is the optimization variable, which is the lateral movements of individual discretization points in the previous work, but is replaced with the control point placements in our work. This reduces the dimension of the decision variable from \(M\), the number of discretization points, to \(2S\), twice the number of control points. For Monza Circuit with 3-meter interval discretization, the dimension reduces from 1932 to 204.
We then substitute (6) into (10) and omit the constant terms in the problem. Discarding the \((t,\mathbf{z})\) notation in \(T(t,\mathbf{z})\) for simplicity, we arrive at a similar QP formulation to that in Heilmeier et al. [9]:
\[\min_{\mathbf{z}}\quad T_{x}^{\prime\prime T}P_{xx}T_{x}^{\prime\prime}+T_{y}^{\prime\prime T}P_{xy}T_{x}^{\prime\prime}+T_{y}^{\prime\prime T}P_{yy}T_{y}^{\prime\prime} \tag{11}\]
where
\[P_{xx}=\begin{bmatrix}\frac{(T_{y_{1}})^{\prime 2}v_{1}}{((T_{x_{1}})^{\prime 2}+(T_{y_{1}})^{\prime 2})^{3}}&0&\cdots&0\\ 0&\ddots&\cdots&0\\ \vdots&\vdots&\ddots&\vdots\\ 0&0&\cdots&\frac{(T_{y_{M}})^{\prime 2}v_{M}}{((T_{x_{M}})^{\prime 2}+(T_{y_{M}})^{\prime 2})^{3}}\end{bmatrix}\]

\[P_{xy}=\begin{bmatrix}\frac{-2(T_{x_{1}})^{\prime}(T_{y_{1}})^{\prime}v_{1}}{((T_{x_{1}})^{\prime 2}+(T_{y_{1}})^{\prime 2})^{3}}&0&\cdots&0\\ 0&\ddots&\cdots&0\\ \vdots&\vdots&\ddots&\vdots\\ 0&0&\cdots&\frac{-2(T_{x_{M}})^{\prime}(T_{y_{M}})^{\prime}v_{M}}{((T_{x_{M}})^{\prime 2}+(T_{y_{M}})^{\prime 2})^{3}}\end{bmatrix}\]

\[P_{yy}=\begin{bmatrix}\frac{(T_{x_{1}})^{\prime 2}v_{1}}{((T_{x_{1}})^{\prime 2}+(T_{y_{1}})^{\prime 2})^{3}}&0&\cdots&0\\ 0&\ddots&\cdots&0\\ \vdots&\vdots&\ddots&\vdots\\ 0&0&\cdots&\frac{(T_{x_{M}})^{\prime 2}v_{M}}{((T_{x_{M}})^{\prime 2}+(T_{y_{M}})^{\prime 2})^{3}}\end{bmatrix}\]
We now need to correlate \(T_{x}^{\prime\prime}\) and \(T_{y}^{\prime\prime}\) with the location of the \(j\)-th control point \(z_{j}=\left[\alpha_{j}\quad\beta_{j}\right]^{T}\) through the second-order derivative of the underlying B-spline basis. These lower-order bases also have a recursive closed-form solution and can be treated as constants [5].
\[T_{x}^{\prime\prime} =\sum_{i=1}^{j-1}\frac{d^{2}B_{i,n}(t)}{dt^{2}}x_{i}+\frac{d^{2}B_ {j,n}(t)}{dt^{2}}x_{j}+\sum_{i=j+1}^{S}\frac{d^{2}B_{i,n}(t)}{dt^{2}}x_{i}\] \[:=F_{x}+\frac{d^{2}B_{j,n}(t)}{dt^{2}}x_{j} \tag{12a}\] \[T_{y}^{\prime\prime} =\sum_{i=1}^{j-1}\frac{d^{2}B_{i,n}(t)}{dt^{2}}y_{i}+\frac{d^{2}B_ {j,n}(t)}{dt^{2}}y_{j}+\sum_{i=j+1}^{S}\frac{d^{2}B_{i,n}(t)}{dt^{2}}y_{i}\] \[:=F_{y}+\frac{d^{2}B_{j,n}(t)}{dt^{2}}y_{j} \tag{12b}\]
This shows that the relation between \(T_{x}^{\prime\prime}\) and \(x_{j}\), or \(T_{y}^{\prime\prime}\) and \(y_{j}\), is affine. If optimization is done with respect to all the control points at once, then \(F_{x}\) and \(F_{y}\) reduce to zero, since (12a) and (12b) now become
\[T_{x}^{\prime\prime} =\sum_{i=1}^{S}\frac{d^{2}B_{i,n}(t)}{dt^{2}}x_{i} \tag{13a}\] \[T_{y}^{\prime\prime} =\sum_{i=1}^{S}\frac{d^{2}B_{i,n}(t)}{dt^{2}}y_{i} \tag{13b}\]
in which \(\{x_{1},\ldots,x_{S}\}\) and \(\{y_{1},\ldots,y_{S}\}\) are all decision variables.
Substituting (13a) and (13b) into (11), we can formulate this problem as a standard QP.
\[\min_{\mathbf{z}}\quad\frac{1}{2}\mathbf{z}^{T}H\mathbf{z}+g^{T}\mathbf{z}\] (14a) where \[H =B_{x}^{T}P_{xx}B_{x}+B_{y}^{T}P_{xy}B_{x}+B_{y}^{T}P_{yy}B_{y} \tag{14b}\] \[g =(F_{x}^{T}P_{xx}B_{x}+F_{y}^{T}P_{xy}B_{x}+F_{y}^{T}P_{yy}B_{y})+\] \[(B_{x}^{T}P_{xx}F_{x}+B_{y}^{T}P_{xy}F_{x}+B_{y}^{T}P_{yy}F_{y})^{T} \tag{14c}\]
Fig. 4: Example simulation on the center line trajectory. The heatmap shows the velocity levels (m/s) at individual track sections.
and where
\[B_{x} =\begin{bmatrix}\mathbf{B}_{2}&\mathbf{0}^{M\times S}\end{bmatrix}^ {T} \tag{14d}\] \[B_{y} =\begin{bmatrix}\mathbf{0}^{M\times S}&\mathbf{B}_{2}\end{bmatrix}^ {T}\] (14e) \[\mathbf{B}_{2} =\begin{bmatrix}\frac{d^{2}B_{1,n}}{dt^{2}}(t_{1})&\cdots&\frac{d^ {2}B_{S,n}}{dt^{2}}(t_{1})\\ \vdots&\ddots&\vdots\\ \frac{d^{2}B_{1,n}}{dt^{2}}(t_{M})&\cdots&\frac{d^{2}B_{S,n}}{dt^{2}}(t_{M}) \end{bmatrix} \tag{14f}\]
Note that (14c) is evaluated to zero when considering all the control points since \(F_{x}=F_{y}=\mathbf{0}\).
Finally, we obtain the constraints for the optimization, which are the distances to the left and right boundaries. For the \(i\)-th discretization point, we use \(\mathbf{l}=\begin{bmatrix}l_{0}&\cdots&l_{M}\end{bmatrix}^{T}\) and \(\mathbf{r}=\begin{bmatrix}r_{0}&\cdots&r_{M}\end{bmatrix}^{T}\) to denote the center line's distances to the left and right boundaries. Then we use a local Frenet frame approximation to find the distance between the optimized discretization point and the original center line. This is done by applying a rigid body transformation to the boundary point from the fixed map frame into the center line's local frame.
\[A^{2M\times 2S}=\begin{bmatrix}\mathbf{B}&\mathbf{0}^{M\times S}\\ \mathbf{0}^{M\times S}&\mathbf{B}\end{bmatrix}\]
\[R^{M\times 2M}=\begin{bmatrix}\cos\theta_{1}&0&0&\sin\theta_{1}&0&0\\ 0&\ddots&0&0&\ddots&0\\ 0&0&\cos\theta_{M}&0&0&\sin\theta_{M}\end{bmatrix}\]
where
\[\mathbf{B}=\begin{bmatrix}B_{1,n}(t_{1})&\cdots&B_{S,n}(t_{1})\\ \vdots&\ddots&\vdots\\ B_{1,n}(t_{M})&\cdots&B_{S,n}(t_{M})\end{bmatrix}\]
and where \(\theta_{i}\quad\forall i\in\{1,\ldots,M\}\) denotes the orientation of the center line at discretization point \(p_{i}\). The linear constraint is therefore
\[\mathbf{r}\leq RA\mathbf{z}\leq\mathbf{l}\]
Breaking down this equation, \(A\mathbf{z}\) computes the discretization coordinates from the control points, which is rotated by \(R\) to compute its projection on the left or right side of the center line.
Now we have all the components for our constrained QP problem, which we can solve with qpOASES [8].
\[\min_{\mathbf{z}} \frac{1}{2}\mathbf{z}^{T}H\mathbf{z}+g^{T}\mathbf{z}\] s.t. \[\mathbf{r}\leq RA\mathbf{z}\leq\mathbf{l} \tag{15}\]
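A sketch of how these pieces might be assembled is given below. Here `B2` is the M-by-S matrix of second-derivative basis values of (14f), `Pxx`, `Pxy`, `Pyy` are the diagonal matrices defined after (11), and `R`, `A`, `left`, `right` are the constraint quantities above; it covers the all-control-point case where \(F_{x}=F_{y}=\mathbf{0}\) and hence \(g=\mathbf{0}\). SciPy's SLSQP is only a generic stand-in for qpOASES, and the helper name is ours.

```python
import numpy as np
from scipy.optimize import minimize

def solve_min_curvature(B2, Pxx, Pxy, Pyy, R, A, left, right, z0):
    # Treat B_x and B_y as the M-by-2S selectors of the x and y control points.
    Bx = np.hstack([B2, np.zeros_like(B2)])
    By = np.hstack([np.zeros_like(B2), B2])
    H = Bx.T @ Pxx @ Bx + By.T @ Pxy @ Bx + By.T @ Pyy @ By    # eq. (14b)
    H = 0.5 * (H + H.T)                                        # symmetrize for the solver
    cons = [{"type": "ineq", "fun": lambda z: left - R @ A @ z},    # R A z <= l
            {"type": "ineq", "fun": lambda z: R @ A @ z - right}]   # R A z >= r
    res = minimize(lambda z: 0.5 * z @ H @ z, z0,
                   jac=lambda z: H @ z, constraints=cons, method="SLSQP")
    return res.x
```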
## IV Results
Before presenting the results, it is beneficial to discuss the evaluation metrics for OTO methods. Since this is an open-loop offline method, it is insufficient to show the end trajectory achieved by a vehicle driving on the track, since there are other processes down the pipeline such as online dynamic planning and optimal control calculations. These modules also have a significant impact on the outcome of the experiment. Therefore, we will directly compare the OTO result with the current state-of-the-art method in minimum-curvature optimization in [9] to see if our method has achieved our objectives, which is to offer an alternative minimum-curvature optimization formulation for autonomous racing that uses significantly fewer decision variables, guarantees continuity and still offers a trajectory comparable to previous work.
To this end, we will perform minimum-curvature optimization using our formulation and Heilmeier et al. formulation [9]. The discretization intervals are the same in both experiments. We then simulate lap times with the QSS simulation method described in III.A to show that we have reached comparable results with the previous work.
We first compare the overall lap time performance of our method with the reference center line and Heilmeier et al. [9] in Table I. The optimization algorithms produce 7.65% (ours)
Fig. 5: Optimization visualization at Monza Turn 8-10. The figure on the left visualizes the movements of the control points which shift the center line spline into a minimum-curvature spline. The figure on the right compares the optimization results of ours and [9].
and 8.94% (Heilmeier et al.) lap time reduction, respectively, offering comparable maximum velocity and acceleration profiles. The percentage difference of the metrics is shown in the right column, which suggests that we achieved a level of optimality similar to [9]. However, our result produces a \(C^{2}\) continuous B-spline curve that can be easily re-discretized and even artificially manipulated afterwards by dragging the control points, which is a very useful feature in the developmental stage of autonomous racing, whereas [9] only produces a discretized trajectory with no guarantee of continuity level due to its discrete decision variables. We also see a significant reduction in the dimension of the decision variable. Our QP problem has 204 decision variables (102 control points), down from 1932 using [9] with a 3 m discretization interval. The resulting QP computation time is 3.8 ms, down from 8.225 s using [9]. This shows that our method not only gives a more compact formulation that captures the essence of the minimum-curvature problem, but also enables the possibility of adopting such method in online planning on a very long track.
Looking turn by turn, Fig. 5 zooms in on the optimization at turns 8-10. The control points of the center-line spline are shifted to form a minimum curvature trajectory. For turn 7 in the zoomed-in section, we are able to optimize the full trajectory with only four control points.
We then move on to a different race track and apply the same OTO method to see how our method transfers to a new track. Fig. 6 visualizes the optimization results done on Putnam Road Course, Indiana, USA, which is a test track of the Indy Autonomous Challenge. The zoomed-in portion of the graph visualizes the movement of control points which leads to a curvature-optimal trajectory. Table II shows metrics similar to I, in which comparable lap time reduction of 13.9% and 15.67% are respectively achieved by [9] and our work. The difference in the simulation metrics between ours and [9] remains close, suggesting that the outcome of the optimization is very comparable.
## V Conclusions
We present a B-spline OTO method for autonomous racing which solves a minimum-curvature optimization problem. Compared to previous works which only output a discretized trajectory, this work outputs a fully parameterized trajectory of \(C^{2}\) continuity, which ensures a smooth control profile for high-speed vehicle handling. The algorithm also considers the data scarcity of early-stage autonomous motorsports development and requires minimal vehicle dynamics data. Compared to previous work [9], the dimension of the problem is significantly reduced from thousands of discretization points down to a few dozen control points. The problem computation time is also drastically reduced as a result. This enables future work to explore the online application of minimum-curvature OTO for autonomous racing.
Fig. 6: Optimization visualization at Putnam Road Course, Indiana, USA. The left figure shows the movement of the control points and the optimal trajectory shape as a result. The right figure compares ours and Heilmeier et al.’s trajectory through the chicane. |
2305.00511 | Monotonic Extensions of Lipschitz Maps | We study the problem of extending an order-preserving real-valued Lipschitz
map defined on a subset of a partially ordered metric space without increasing
its Lipschitz constant and preserving its monotonicity. We show that a certain
type of relation between the metric and order of the space, which we call
radiality, is necessary and sufficient for such an extension to exist.
Radiality is automatically satisfied by the equality relation, so the classical
McShane-Whitney extension theorem is a special case of our main
characterization result. As applications, we obtain a similar generalization of
McShane's uniformly continuous extension theorem, along with some functional
representation results for radial partial orders. | Efe A. Ok | 2023-04-30T15:41:40Z | http://arxiv.org/abs/2305.00511v1 | # Monotonic extensions of Lipschitz maps
###### Abstract.
We study the problem of extending an order-preserving real-valued Lipschitz map defined on a subset of a partially ordered metric space without increasing its Lipschitz constant and preserving its monotonicity. We show that a certain type of relation between the metric and order of the space, which we call _radiality_, is necessary and sufficient for such an extension to exist. Radiality is automatically satisfied by the equality relation, so the classical McShane-Whitney extension theorem is a special case of our main characterization result. As applications, we obtain a similar generalization of McShane's uniformly continuous extension theorem, along with some functional representation results for radial partial orders.
Key words and phrases:partially ordered metric spaces, order-preserving Lipschitz maps, radial convexity, McShane-Whitney extension theorem, extension of uniformly continuous functions 2020 Mathematics Subject Classification: Primary 54C20, 26A26, 54C30; Secondary 06F30
## 1. Introduction
The most important continuous extension theorem for order-preserving functions is _Nachbin's extension theorem_. This theorem considers a partially ordered topological space, and gives conditions under which a continuous and increasing real-valued function defined on a compact subset of such a space can be extended to the entire space in such a way to remain continuous and increasing. It has found profound applications, especially in the field of decision theory. (The references for the present discussion are provided in the body of the paper.)
Another extension theorem of great importance is the famous _McShane-Whitney extension theorem_. This theorem shows that any Lipschitz map defined on a subset of a metric space can be extended to the entire space without increasing the Lipschitz constant of the original map. This theorem paved the way toward various types of Lipschitz extension theorems for Banach space-valued Lipschitz maps, presently a topic of active research in geometric functional analysis. In addition, it has recently been pivotally used in the literature on machine learning and metric data analysis.
The primary objective of this note is to understand to what extent a Nachbin type generalization of the McShane-Whitney theorem is possible. To state our query precisely, consider a \(1\)-Lipschitz real-valued function \(f\) on a subset \(S\) of a metric space \(X.\) Now suppose \(X\) is endowed with a partial order \(\succcurlyeq\), and that \(f\) is order-preserving (in the sense that \(f(x)\geq f(y)\) for every \(x,y\in S\) with \(x\succcurlyeq y\)). The problem is to determine under what conditions (that do not depend on \(S\) and \(f\)), one can extend \(f\) to an order-preserving \(1\)-Lipschitz map on \(X.\) Our main result (Theorem 2, below) says that this is possible if, and only if, \(\succcurlyeq\) satisfies a rather demanding condition, which we call _radiality_. (We actually prove a slightly more
general result that covers \(\ell_{\infty}(T)\) valued maps as well, for any nonempty set \(T\).) Radiality of \(\succcurlyeq\) demands that if \(x\succcurlyeq y\) while \(z\succcurlyeq y\) does not hold, then the distance between \(x\) and \(z\) is larger than that between \(y\) and \(z\) (and similarly for the case where not \(y\succcurlyeq x\) but \(y\succcurlyeq z\)). While it is obviously strong, this condition is necessary for the sought monotonic Lipschitz extension theorem. Moreover, when \(\succcurlyeq\) is total, it reduces to _radial convexity_, which is commonly used in the field of topological order theory. Finally, the equality ordering is radial, so our extension theorem generalizes the McShane-Whitney extension theorem (just like Nachbin's theorem generalizes the Tietze extension theorem).
We also present some applications of our monotonic Lipschitz extension theorem. First, we show that every radial partial order on a (compact) metric space can be represented by means of a (compact) collection of order-preserving Lipschitz functions. An immediate corollary of this is that every radial partial order is closed. Second, we prove that on any radial partially ordered \(\sigma\)-compact metric space \(X\), there is a strictly increasing Lipschitz map \(F\) (in the sense that \(F(x)>F(y)\) for every distinct \(x,y\in X\) with \(x\succcurlyeq y\)). Finally, we combine our extension theorem with the recent remetrization approach introduced in Beer [4] to show that if \(\succcurlyeq\) is a radial partial order on a metric space \(X,\) then any bounded (or more generally, Lipschitz for large distances), order-preserving and uniformly continuous map on a subset of \(X\) can be extended to a function on the entire space in such a way that it remains order-preserving and uniformly continuous. Radiality can actually be relaxed substantially in this result, but characterizing those metric posets on which such an extension is possible is presently an open problem.
## 2. Preliminaries
### Posets
Let \(X\) be a nonempty set. A _preorder_ on \(X\) is a reflexive and transitive binary relation on \(X,\) while a _partial order_ on \(X\) is an antisymmetric preorder on \(X.\) We refer to the ordered pair \((X,\succcurlyeq)\) as a _poset_ if \(\succcurlyeq\) is a partial order on \(X.\) (In this context, \(X\) is called the _carrier_ of the poset.) A preorder on \(X\) is _total_ if any two elements \(x\) and \(y\) of \(X\) are \(\succcurlyeq\)_-comparable_, that is, either \(x\succcurlyeq y\) or \(y\succcurlyeq x\) holds. A total partial order on \(X\) is said to be a _linear order_ on \(X;\) in this case, we refer to \((X,\succcurlyeq)\) as a _loset_.
Let \((X,\succcurlyeq)\) be a poset. For any \(x\in X,\) we define \(x^{\downarrow}:=\{z\in X:x\succcurlyeq z\}\) and \(x^{\uparrow}:=\{z\in X:z\succcurlyeq x\}\). (A set of the former type is said to be a _principal ideal_ in \((X,\succcurlyeq)\), and one of the latter type is called a _principal filter_ in \((X,\succcurlyeq)\).) In turn, for any \(S\subseteq X,\) we define the \(\succcurlyeq\)_-decreasing closure_ of \(S\) as \(S^{\downarrow}:=\bigcup_{x\in S}x^{\downarrow},\) and define the \(\succcurlyeq\)_-increasing closure_ \(S^{\uparrow}\) of \(S\) dually. In turn, \(S\) is said to be \(\succcurlyeq\)_-decreasing_ if \(S=S^{\downarrow},\) and \(\succcurlyeq\)_-increasing_ if \(S=S^{\uparrow}.\)
Given any poset \((X,\succcurlyeq),\) we denote the asymmetric part of \(\succcurlyeq\) by \(\succ\), that is, \(x\succ y\) means \(y\neq x\succcurlyeq y.\) We also define the binary relation \(\succcurlyeq^{\bullet}\) on \(X\) as
\[x\succcurlyeq^{\bullet}y\quad\text{ iff }\quad\text{ not }y\succcurlyeq x.\]
Thus \(x\succcurlyeq^{\bullet}y\) means that either \(x\succ y,\) or \(x\) and \(y\) are not \(\succcurlyeq\)-comparable. It is plain that \(\succcurlyeq^{\bullet}\) is an irreflexive relation. In general, this relation is neither symmetric nor asymmetric, nor it is transitive. When \(\succcurlyeq\) is total, however, \(\succcurlyeq^{\bullet}\) equals \(\succ.\)
A function \(f:X\to Y\) from a poset \(X=(X,\succcurlyeq)\) to a poset \(Y=(Y,\trianglerighteq)\) is said to be _order-preserving_ if for every \(x,y\in X,\)

\[x\succcurlyeq y\quad\text{ implies }\quad f(x)\trianglerighteq f(y).\]
If \(Y\) is \((\mathbb{R},\geq)\), where \(\geq\) is the usual order, we refer to \(f\) simply as \(\succcurlyeq\)_-increasing_. Note that the indicator function of any \(\succcurlyeq\)-increasing subset of \(X\) is an \(\succcurlyeq\)-increasing map.
### Normally Ordered Topological Posets
A _topological poset_ is an ordered pair \((X,\succcurlyeq)\) where \(X\) is a topological space and \(\succcurlyeq\) is a partial order on \(X\) such that \(\succcurlyeq\) is closed in \(X\times X\) (relative to the product topology). On the other hand, a _normally ordered topological space_ is an ordered pair \((X,\succcurlyeq)\) where \(X\) is a topological space and \(\succcurlyeq\) is a partial order on \(X\) such that for every disjoint closed subsets \(A\) and \(B\) such that \(A\) is \(\succcurlyeq\)-decreasing and \(B\) is \(\succcurlyeq\)-increasing, there exist disjoint open subsets \(O\) and \(U\) of \(X\) such that \(O\) is \(\succcurlyeq\)-decreasing and contains \(A\), and \(U\) is \(\succcurlyeq\)-increasing and contains \(B\). If, in addition, \(\succcurlyeq\) is closed in \(X\times X\), we refer to \((X,\succcurlyeq)\) as a _normally ordered topological poset_.
In his seminal work, Nachbin [27] has studied such spaces and obtained the following generalization of the classical Tietze extension theorem:
**The Nachbin Extension Theorem**.: _Let \((X,\succcurlyeq)\) be a normally ordered topological poset. Then for every compact subset \(S\) of \(X\), and every \(\succcurlyeq\)-increasing and continuous \(f:S\to\mathbb{R}\), there is an \(\succcurlyeq\)-increasing and continuous \(F:X\to\mathbb{R}\) with \(F|_{S}=f.\)_
This is a truly powerful extension theorem which holds true also when \(\succcurlyeq\) is not antisymmetric (cf. [25]). It is used extensively in decision theory; see, for instance, [8], [14], [15], and references cited therein.
### Partially Ordered Metric Spaces
A _partially_ (resp., _linearly_) _ordered metric space_ is an ordered triplet \((X,d,\succcurlyeq)\) such that \((X,d)\) is a metric space and \((X,\succcurlyeq)\) is a poset (resp., loset). If, in addition, \(\succcurlyeq\) is a closed subset of \(X\times X\), we refer to \((X,d,\succcurlyeq)\) as a _metric poset_ (resp., _metric loset_).
A partially ordered metric space \((X,d,\succcurlyeq)\) is said to be _radially convex_ (or that the partial order \(\succcurlyeq\) on \((X,d)\) is _radially convex_) if
\[x\succ y\succ z\quad\text{ implies }\quad d(x,z)\geq\max\{d(x,y),d(y,z)\}\]
for every \(x,y,z\in X.\) This concept builds an appealing connection between the order and metric structures to be imposed on a given set. Indeed, such partially ordered metric spaces have received some attention in topological order theory (cf. [5], [10] and [30], among many others), and are often used in the topological analysis of smooth dendroids; (cf. [17]).
In what follows we will need to work with a strengthening of radial convexity. We say that a partially ordered metric space \((X,d,\succcurlyeq)\) is _radial_ (or that the partial order \(\succcurlyeq\) on \((X,d)\) is _radial_) if
\[x\succcurlyeq^{\bullet}y\succ z\quad\text{ implies }\quad d(x,z)\geq d(x,y) \tag{1}\]
and
\[x\succ y\succcurlyeq^{\bullet}z\quad\text{ implies }\quad d(x,z)\geq d(y,z). \tag{2}\]
While radiality is more demanding than radial convexity, these concepts coincide when the partial order at hand is total.
**Lemma 1**.: _A linearly ordered metric space is radial if and only if it is radially convex._
### Examples of Radial Metric Posets
If we order and metrize any nonempty subset of \(\mathbb{R}\) in the usual way, we obtain a radial metric loset. Besides, it is plain that every partially ordered discrete metric space is radial, and the equality relation on any metric space is radial. But easy examples show that ordering \(\mathbb{R}^{2}\) coordinatewise and endowing it with the Euclidean metric yields a radially convex metric poset which is not radial.
Before proceeding further, we present a few more examples.
_Example 1_.: Consider the poset \((X,\succcurlyeq)\) where \(X:=\{x_{1},x_{2},x_{3},x_{4}\}\), \(x_{1}\succ x_{2}\succ x_{4}\), \(x_{1}\succ x_{3}\succ x_{4}\), and \(x_{2}\) and \(x_{3}\) are not \(\succcurlyeq\)-comparable. (This poset is isomorphic to \((2^{S},\supseteq)\) for any doubleton \(S\).) For any \(a,b\in(0,1)\), define \(d_{a,b}:X\times X\rightarrow[0,1]\) by the matrix
\[\left[\begin{array}{cccc}0&a&a&1\\ a&0&b&1-a\\ a&b&0&1-a\\ 1&1-a&1-a&0\end{array}\right]\]
whose \(ij\)th term is \(d_{ab}(x_{i},x_{j})\), \(i,j=1,...,4.\) Then, \(d_{a,b}\) is a metric on \(X\) iff \(\min\{a,1-a\}\geq\frac{1}{2}b\). In fact, under this parametric restriction, \((X,d_{a,b},\succcurlyeq)\) is a radially convex metric poset. In addition, if \(1-a<b<a\), this metric poset satisfies the condition (2), but not (1), while if \(a<b<1-a\), then the opposite situation ensues. (In particular, this shows that there is no redundancy in our definition of radiality.) Consequently, \((X,d_{a,b},\succcurlyeq)\) is a radial metric poset, provided that \(\min\{a,1-a\}\geq b\). \(\Box\)
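Claims of this sort are easy to check mechanically on a finite carrier. The sketch below encodes conditions (1) and (2) directly and instantiates the poset and metric of Example 1 for one admissible choice of the parameters; it is only a verification aid, not part of the paper.

```python
import itertools

def is_radial(d, geq):
    """Check conditions (1) and (2) on a finite carrier {0, ..., n-1}, where
    d[i][j] is the metric and geq[i][j] is True iff x_i >= x_j."""
    n = len(d)
    succ = lambda i, j: geq[i][j] and i != j        # strict part of the order
    bullet = lambda i, j: not geq[j][i]             # x_i >=* x_j iff not x_j >= x_i
    for x, y, z in itertools.product(range(n), repeat=3):
        if bullet(x, y) and succ(y, z) and d[x][z] < d[x][y]:
            return False                            # violates (1)
        if succ(x, y) and bullet(y, z) and d[x][z] < d[y][z]:
            return False                            # violates (2)
    return True

# Example 1 with x_1 > x_2 > x_4, x_1 > x_3 > x_4, x_2 and x_3 incomparable,
# and the metric d_{a,b} for a choice satisfying min{a, 1-a} >= b.
a, b = 0.6, 0.3
geq = [[True, True, True, True],
       [False, True, False, True],
       [False, False, True, True],
       [False, False, False, True]]
d = [[0, a, a, 1],
     [a, 0, b, 1 - a],
     [a, b, 0, 1 - a],
     [1, 1 - a, 1 - a, 0]]
print(is_radial(d, geq))   # expected True for this parameter choice
```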
_Example 2_.: Let \(T\) be a tree with a finite set \(X\) of vertices and root \(x_{0}\in X.\) The _path-metric_ on \(X\) (induced by \(T\)) is defined as
\[\rho_{T}(x,y):=\mbox{the length of the path between $x$ and $y$ in $T$}.\]
(Since \(T\) is a tree, there is a unique path between any of its two vertices.) We define \(d_{T}:X\times X\rightarrow\{0,1,2\}\) by setting \(d_{T}(x,y):=\min\{\rho_{T}(x,y),2\}\) if \(x\) and \(y\) are on the same path whose one endpoint is \(x_{0}\), and \(d_{T}(x,y):=1\) otherwise. It is readily checked that \(d_{T}\) is a metric on \(X.\) Finally, we define the partial order \(\succcurlyeq\) on \(X\) by
\[x\succcurlyeq y\quad\mbox{ iff }\quad y\mbox{ is on the path between $x_{0}$ and $x$}.\]
Then, \((X,\rho_{T},\succcurlyeq)\) is a radially convex metric poset (which need not be radial), while \((X,d_{T},\succcurlyeq)\) is a radial metric poset. \(\Box\)
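The construction of \(d_{T}\) and \(\succcurlyeq\) in Example 2 is also easy to mechanize. The sketch below builds both from a parent map describing the rooted tree; after indexing the vertices, the result can be fed to a radiality check like the one sketched after Example 1. The encoding is ours and is only meant to illustrate the definitions.

```python
def tree_order_and_metric(parent, x0):
    """Path metric rho_T, truncated metric d_T, and the order 'x >= y iff y lies
    on the path from the root x0 to x', for a tree given as a child -> parent map."""
    def path_to_root(v):
        path = [v]
        while path[-1] != x0:
            path.append(parent[path[-1]])
        return path
    nodes = list(parent) + [x0]
    d, geq = {}, {}
    for x in nodes:
        px = path_to_root(x)
        for y in nodes:
            py = path_to_root(y)
            geq[x, y] = y in px
            shared = len(set(px) & set(py))                 # ancestors of the LCA, plus the LCA
            rho = (len(px) - shared) + (len(py) - shared)   # path length rho_T(x, y)
            same_root_path = y in px or x in py
            d[x, y] = min(rho, 2) if same_root_path else 1
    return d, geq
```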
_Example 3_.: Let \(A\) and \(B\) be two disjoint bounded subsets of a metric space \((Y,d).\) Let \(\succcurlyeq_{A}\) and \(\succcurlyeq_{B}\) be radially convex linear orders on \((A,d)\) and \((B,d)\), respectively. Let \(\succcurlyeq\) be the disjoint sum of \(\succcurlyeq_{A}\) and \(\succcurlyeq_{B}\), that is, \(\succcurlyeq\) is the partial order on \(X:=A\sqcup B\) with \(x\succcurlyeq y\) iff either \(x\succcurlyeq_{A}y\) or \(x\succcurlyeq_{B}y\). Now pick any number \(\theta\geq\max\{\mbox{diam}(A),\mbox{diam}(B)\},\) and consider the function \(D:X\times X\rightarrow[0,\infty)\) with
\[D(x,y):=\left\{\begin{array}{ll}d(x,y),&\mbox{if $(x,y)\in A^{2}$ or $(x,y)\in B^{2}$}\\ \frac{1}{2}\theta,&\mbox{otherwise.}\end{array}\right.\]
It is easily checked that \(D\) is a metric on \(X.\) In fact, \((X,D,\succcurlyeq)\) is a radial partially ordered metric space. \(\Box\)
_Example 4_.: Let \(I\) stand for the unit interval \([0,1]\), and take any set \(J\) that does not intersect \(I.\) Define the partial order on \(X:=I\sqcup J\) with \(x\succcurlyeq y\) iff either \((x,y)\in J\times I\)
or \(\{x,y\}\subseteq I\) and \(x\geq y.\) (In other words, \(\succcurlyeq\) agrees with the usual order on \(I,\) and puts anything in \(J\) above all numbers in \(I.\) No two distinct elements of \(J\) are \(\succcurlyeq\)-comparable.) Define \(d:X\times X\rightarrow[0,\infty)\) as follows: (i) \(d|_{I\times I}\) is the absolute value metric on \(I;\) (ii) \(d|_{J\times J}\) is the discrete metric on \(J;\) (iii) \(d(x,y):=1+y\) if \((x,y)\in J\times I;\) and (iv) \(d(x,y):=1+x\) if \((x,y)\in I\times J.\) Then, \((X,d,\succcurlyeq)\) is a radial partially ordered metric space. \(\square\)
In passing, we note that it may be a mistake to think of the radiality property as prohibitively strong. In the context of metric data analysis and machine learning (see [9], [13], and [16]), one often works with finite metric spaces or metric graphs (relative to which the Lipschitz extension problems are by no means trivial). As witnessed by Examples 1 and 2 above, the radiality property may turn out to be considerably less demanding in those sorts of environments.
### Lipschitz Functions
For any real number \(K>0,\) a function \(f:X\to Y\) from a partially ordered metric space \(X=(X,d_{X},\succcurlyeq_{X})\) to a partially ordered metric space \(Y=(Y,d_{Y},\succcurlyeq_{Y})\) is said to be \(K\)_-Lipschitz_ if for every \(x,y\in X,\)
\[d_{Y}(f(x),f(y))\leq Kd_{X}(x,y). \tag{3}\]
We say that \(f\) is _Lipschitz_ if it is \(K\)-Lipschitz for some \(K\geq 0.\) The smallest \(K\geq 0\) such that (3) holds for every \(x,y\in X,\) is called the _Lipschitz constant_ of \(f.\) For excellent treatments of the general theory of Lipschitz functions, see [11] and [31].
We denote the set of all \(K\)-Lipschitz maps from \(X\) to \(Y\) as \(\operatorname{Lip}_{K}(X,Y),\) but write \(\operatorname{Lip}_{K}(X)\) for \(\operatorname{Lip}_{K}(X,\mathbb{R}).\) In turn, the sets of all order-preserving members of \(\operatorname{Lip}_{K}(X,Y)\) and \(\operatorname{Lip}_{K}(X)\) are denoted as \(\operatorname{Lip}_{K,\uparrow}(X,Y)\) and \(\operatorname{Lip}_{K,\uparrow}(X),\) respectively. Throughout this note, we consider these as metric spaces relative to the uniform metric. This makes these spaces complete, but in general not separable.
### The Monotone Lipschitz Extension Property
We say that a partially ordered metric space \((X,d,\succcurlyeq)\) has the _monotone Lipschitz extension property_ if for every nonempty \(S\subseteq X\), every \(K>0\) and \(f\in\operatorname{Lip}_{K,\uparrow}(S),\) there exists an \(F\in\operatorname{Lip}_{K,\uparrow}(X)\) with \(F|_{S}=f.\) In this terminology, the classical _McShane-Whitney extension theorem_ can be viewed as saying that \((X,d,=)\) has the monotone Lipschitz extension property. Our primary objective in this note is to see exactly to what extent we can replace \(=\) with a partial order on \(X\) in this statement.
_Remark 1_. When \((X,d,\succcurlyeq)\) has the monotone Lipschitz extension property, we can always ensure that the extension obtained has the same range as the function being extended. To see this, take any \(F\in\operatorname{Lip}_{K,\uparrow}(X)\) and \(S\subseteq X.\) Where \(m:=\inf_{x\in S}F(x)\) and \(M:=\sup_{x\in S}F(x),\) the map \(G:X\rightarrow[m,M]\) defined by
\[G(x):=\max\{\min\{F(x),M\},m\},\]
is an \(\succcurlyeq\)-increasing \(K\)-Lipschitz map with \(G|_{S}=F|_{S}\). \(\square\)
## 3. Monotone Lipschitz Extensions
Unless a partially ordered metric space is totally ordered, or it is finite, its radiality seems like a fairly demanding condition. However, our main finding in this note shows that this condition is necessary and sufficient for any such space to possess the monotone Lipschitz extension property.
**Theorem 2**.: _A partially ordered metric space \((X,d,\succcurlyeq)\) has the monotone Lipschitz extension property if and only if it is radial._
_Proof._ Suppose \((X,d,\succcurlyeq)\) is not radial. Then, there exist three points \(x,y,z\) in \(X\) such that either
\[x\succcurlyeq^{\bullet}y\succ z\quad\text{ and }\quad d(x,z)<d(x,y), \tag{4}\]
or
\[x\succ y\succcurlyeq^{\bullet}z\quad\text{ and }\quad d(x,z)<d(y,z). \tag{5}\]
Assume first the case (4), set \(S:=\{x,y\}\), and define \(f:S\to\mathbb{R}\) by \(f(x):=d(x,y)\) and \(f(y):=0.\) Then, \(f\in\text{Lip}_{1,\uparrow}(S)\), but for any \(1\)-Lipschitz extension \(F:X\to\mathbb{R}\) of \(f\), we have
\[F(z)\geq F(x)-d(x,z)>f(x)-d(x,y)=0=F(y)\]
which means \(F\) is not \(\succcurlyeq\)-increasing. If, on the other hand, (5) holds, we set \(S:=\{y,z\}\), and define \(f:S\to\mathbb{R}\) by \(f(y):=d(y,z)\) and \(f(z):=0.\) Then, \(f\in\text{Lip}_{1,\uparrow}(S)\), but for any \(1\)-Lipschitz extension \(F:X\to\mathbb{R}\) of \(f\), we have
\[F(x)\leq F(z)+d(x,z)<d(y,z)=F(y)\]
which means \(F\) is not \(\succcurlyeq\)-increasing. This proves the necessity part of the assertion. The sufficiency part is a special case of a more general result that we will establish below.
There does not seem to be an easy way of getting around the radiality requirement for the monotonic Lipschitz extension problem. For a partially ordered metric space \((X,d,\succcurlyeq)\) that is not radial, the argument above shows that it may not be possible to extend an \(\succcurlyeq\)-increasing \(1\)-Lipschitz map on a compact and \(\succcurlyeq\)-increasing (or \(\succcurlyeq\)-decreasing) set \(S\subseteq X\) to an \(\succcurlyeq\)-increasing \(1\)-Lipschitz map on \(X\).
Setting \(\succcurlyeq\) as the equality relation in Theorem 2 yields the classical McShane-Whitney extension theorem. The following is another straightforward corollary.
**Corollary 3**.: _A linearly ordered metric space \((X,d,\succcurlyeq)\) has the monotone Lipschitz extension property if and only if it is radially convex._
_Remark 2_. It was shown by Mehta [25] that every topological loset \((X,\succcurlyeq)\) is a normally ordered topological space. Therefore, specializing the Nachbin extension theorem to the context of metric spaces, we find: _Given any metric loset \((X,d,\succcurlyeq)\), and any compact \(S\subseteq X,\) every \(\succcurlyeq\)-increasing \(f\in C(S)\) extends to an \(\succcurlyeq\)-increasing \(F\in C(X).\)_ Corollary 3 can be thought of as the reflection of this result in the context of Lipschitz functions. In return for adding the hypothesis of radial convexity to the picture, it achieves an order-preserving Lipschitz extension of any order-preserving Lipschitz function defined on any (possibly non-compact) subset of \(X.\)\(\square\)
The Lipschitz extension problem for Banach space-valued maps on a metric space is a rather deep one, and is the subject of ongoing research in metric space theory and geometric functional analysis. However, there is one special case of the problem which is settled by the McShane-Whitney theorem in a routine manner. This is when the Lipschitz maps to be extended take values in the Banach space \(\ell_{\infty}(T)\)
of all bounded real functions on some nonempty set \(T.\) (This generalization is of interest, because every metric space can be isometrically embedded in \(\ell_{\infty}(T)\) for some \(T\).) Precisely the same holds for the monotone Lipschitz extension problem as well where we consider \(\ell_{\infty}(T)\) as partially ordered coordinatewise. (For any \(u,v\in\ell_{\infty}(T),\) we write \(u\geq v\) whenever \(u(t)\geq v(t)\) for every \(t\in T\)). We now prove the sufficiency part of Theorem 2 in this more general context.
**Theorem 4**.: _Let \((X,d,\succcurlyeq)\) be a radial partially ordered metric space. For any \(K\geq 0,\) let \(S\) be a nonempty subset of \(X\) and \(f:S\to\ell_{\infty}(T)\) an order-preserving \(K\)-Lipschitz map. Then, there exists an order-preserving \(K\)-Lipschitz map \(F:X\to\ell_{\infty}(T)\) with \(F|_{S}=f\)._
_Proof._ We assume \(S\neq X,\) for otherwise there is nothing to prove. Similarly, the claim is trivially true when \(K=0,\) so we may assume \(K>0.\) Moreover, it is enough to prove the assertion for \(K=1,\) for then the general case obtains by applying what is established to the map \(\frac{1}{K}f.\)
The following proof is patterned after the typical way one proves the Hahn-Banach Theorem. In the initial stage of the argument, we take an arbitrary \(x\in X\backslash S\) and extend \(f\) to an order-preserving \(1\)-Lipschitz function on \(S\cup\{x\}\). To this end, consider the functions \(a_{x}:T\to[-\infty,\infty]\) and \(b_{x}:T\to[-\infty,\infty]\) defined as
\[a_{x}(t):=\sup\left\{f(z)(t):z\in S\cap x^{\downarrow}\right\}\]
and
\[b_{x}(t):=\inf\left\{f(y)(t):y\in S\cap x^{\uparrow}\right\}.\]
If \(S\cap x^{\downarrow}=\varnothing,\) then \(a_{x}(t)=-\infty\) for every \(t\in T,\) while \(S\cap x^{\uparrow}=\varnothing\) implies \(b_{x}(t)=\infty\) for every \(t\in T\). On the other hand, if both \(S\cap x^{\downarrow}\) and \(S\cap x^{\uparrow}\) are nonempty, monotonicity of \(f\) yields \(-\infty<a_{x}(t)\leq b_{x}(t)<\infty\) for all \(t\in T.\) In all contingencies, then, \([a_{x}(t),b_{x}(t)]\) is a nonempty interval in the set of all extended reals.
We next define the functions \(\alpha_{x}:T\to[-\infty,\infty]\) and \(\beta_{x}:T\to[-\infty,\infty]\) by
\[\alpha_{x}(t):=\sup\left\{f(z)(t)-d(x,z):z\in S\right\}\]
and
\[\beta_{x}(t):=\inf\left\{f(y)(t)+d(x,y):y\in S\right\}.\]
(These are the McShane and Whitney extensions of \(f,\) respectively.) In this case, both \(\alpha_{x}(t)\) and \(\beta_{x}(t)\) are real numbers for every \(t\in T.\) In fact, as \(f\) is \(1\)-Lipschitz, for every \(y,z\in S\) we have
\[f(z)(t)-f(y)(t)\leq\left\|f(z)-f(y)\right\|_{\infty}\leq d(z,y)\leq d(x,y)+d(x,z),\]
whence \(f(z)(t)-d(x,z)\leq f(y)(t)+d(x,y),\) for all \(t\in T.\) Conclusion: \(-\infty<\alpha_{x}(t)\leq\beta_{x}(t)<\infty\) for all \(t\in T.\)
We claim that
\[\alpha_{x}(t)\leq b_{x}(t)\quad\text{ and }\quad a_{x}(t)\leq\beta_{x}(t) \tag{6}\]
for every \(t\in T.\) To see this, suppose \(\alpha_{x}(t)>b_{x}(t)\) for some \(t\in T.\) Then, there exist \(y\in S\cap x^{\uparrow}\) and \(z\in S\) such that \(f(y)(t)<f(z)(t)-d(x,z).\) It follows that \(f(y)(t)<f(z)(t),\) so \(y\succcurlyeq z\) does not hold (because \(f\) is \(\succcurlyeq\)-increasing). Thus: \(z\succcurlyeq^{\bullet}y\succ x.\) Since \((X,d,\succcurlyeq)\) is radial, therefore, \(d(x,z)\geq d(y,z).\) This entails
\[f(y)(t)<f(z)(t)-d(x,z)\leq f(z)(t)-d(y,z),\]
and hence, \(\left\|f(z)-f(y)\right\|_{\infty}\geq f(z)(t)-f(y)(t)>d(z,y),\) contradicting \(f\) being \(1\)-Lipschitz. We conclude that \(\alpha_{x}(t)\leq b_{x}(t)\) for all \(t\in T,\) as claimed. The second inequality in (6) is established analogously.
In view of these observations, we conclude that the intervals \([a_{x}(t),b_{x}(t)]\) and \([\alpha_{x}(t),\beta_{x}(t)]\) overlap for every \(t\in T.\) We define \(F:S\cup\{x\}\rightarrow\ell_{\infty}(T)\) as
\[F(w)(t):=\left\{\begin{array}{ll}f(w)(t),&\mbox{if }w\in S\\ \theta(t),&\mbox{if }w=x,\end{array}\right.\]
where \(\theta(t)\) is an arbitrarily picked real number in \([a_{x}(t),b_{x}(t)]\cap[\alpha_{x}(t),\beta_{x}(t)]\) for any \(t\in T.\) Then, \(F\) is \(1\)-Lipschitz, because for any \(y\in S,\) we have
\[f(y)(t)-d(x,y)\leq\alpha_{x}(t)\leq F(x)(t)\leq\beta_{x}(t)\leq f(y)(t)+d(x,y),\]
and hence \(\left|F(x)(t)-F(y)(t)\right|\leq d(x,y),\) for all \(t\in T,\) that is, \(\left\|F(y)-F(x)\right\|_{\infty}\leq d(x,y).\) On the other hand, for every \(y\in S\) with \(y\succcurlyeq x,\) we have \(f(y)(t)\geq b_{x}(t)\geq F(x)(t),\) and similarly, for every \(z\in S\) with \(x\succcurlyeq z,\) we have \(F(x)(t)\geq a_{x}(t)\geq F(z)(t),\) for all \(t\in T.\) Thus, \(F\) is order-preserving as well.
The proof is completed by a standard transfinite induction argument. Let \(\mathcal{F}\) stand for the set of all \((A,F)\) such that \(S\subseteq A\subseteq X\) and \(F\in\mbox{Lip}_{1,\uparrow}(A,\ell_{\infty}(T))\) with \(F|_{S}=f.\) Since it includes \((S,f),\) this collection is not empty. In addition, it is easily verified that \((\mathcal{F},\trianglerighteq)\) is an inductive poset where \((A,F)\trianglerighteq(B,G)\) iff \(A\supseteq B\) and \(F|_{B}=G.\) So, by Zorn's Lemma, there is a \(\trianglerighteq\)-maximal element \((A,F)\) in \(\mathcal{F}\). In view of the first part of the proof, we must have \(A=X.\)\(\blacksquare\)
_Remark 3_.: By setting \(\theta(t):=\max\{a_{x}(t),\alpha_{x}(t)\}\) for all \(t\in T\) in the proof above, and modifying the transfinite induction part of the proof in the obvious way, we find that there is a smallest order-preserving \(K\)-Lipschitz map \(F:X\rightarrow\ell_{\infty}(T)\) with \(F|_{S}=f\) in the context of Theorem 4. That there is also a largest such \(F\) is established analogously. \(\square\)
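On a finite carrier, the construction above (with the smallest-extension choice \(\theta=\max\{a_{x},\alpha_{x}\}\) of this remark, specialized to real-valued maps) can be run point by point. The sketch below is ours and is meant only to illustrate the formulas; it presupposes that the input space is radial, since otherwise monotonicity of the output may fail.

```python
def monotone_lipschitz_extension(d, geq, f, K=1.0):
    """Extend an order-preserving K-Lipschitz f (a dict on a subset S of the
    index set) to the whole finite carrier {0, ..., n-1}, adding one point at a
    time and choosing theta = max{a_x, alpha_x} as in Remark 3.  d[i][j] is the
    metric and geq[i][j] is True iff x_i >= x_j; radiality of (d, geq) is assumed."""
    F = dict(f)
    for x in range(len(d)):
        if x in F:
            continue
        dom = list(F)
        a_x = max((F[z] for z in dom if geq[x][z]), default=-float("inf"))
        alpha_x = max(F[z] - K * d[x][z] for z in dom)      # McShane-type envelope
        F[x] = max(a_x, alpha_x)
    return F
```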
_Remark 4_.: There are various generalizations of the Lipschitz property, and the construction above adapts to some of these. To wit, Miculescu [24] considers \((K,g)\)-Lipschitz functions which are functions \(f\) from a metric space \((X,d_{X})\) to another metric space \((Y,d_{Y})\) such that \(d_{Y}(f(x),f(y))\leq Kd_{Y}(g(x),g(y))\) for every \(x,y\in X.\) Theorems 2 and 4 modify in the obvious way to account for such functions as well. \(\square\)
_Remark 5_.: Given Theorem 2, it is natural to inquire if the monotonic Lipschitz extensions of real functions can be carried out locally. To state the problem, we recall that a real map \(f\) on a metric space \(X=(X,d)\) is called _pointwise Lipschitz_ if for every \(y\in X,\) there exist \(K_{y}\geq 0\) and \(\delta_{y}>0\) such that \(\left|f(x)-f(y)\right|\leq K_{y}d(x,y)\) for all \(x\in X\) with \(d(x,y)<\delta_{y}.\) The question is: If \((X,d,\succcurlyeq)\) is a radial partially ordered metric space, \(S\) a nonempty closed subset of \(X\) and \(f:S\rightarrow\mathbb{R}\) is an \(\succcurlyeq\)-increasing pointwise Lipschitz map, does there exist an \(\succcurlyeq\)-increasing pointwise Lipschitz map \(F:X\rightarrow\mathbb{R}\) with \(F|_{S}=f?\) If \(\succcurlyeq\) is the equality relation, the answer is known to be yes; see, for instance, [12] and [19]. The first part of the proof above also adapts to show that the answer is yes so long as we add only finitely many points in the extension. That is, minor modifications of that part of the proof yield the following fact:
_Let \((X,d,\succcurlyeq)\) be a radial partially ordered metric space, and \(S\) a nonempty closed subset of \(X\) with \(|X\backslash S|<\infty.\) Then, every \(\succcurlyeq\)-increasing pointwise Lipschitz map on \(S\) can be extended to an \(\succcurlyeq\)-increasing pointwise Lipschitz map on \(X\)._
Unfortunately, the transfinite inductive step of the proof above fails to deliver this result without the requirement \(|X\backslash S|<\infty.\)\(\square\)
## 4. Functional Representations of Radial Orders
For any nonempty \(X,\)\(\mathcal{F}\subseteq\mathbb{R}^{X},\) and \(x,y\in X,\) we write \(\mathcal{F}(x)\geq\mathcal{F}(y)\) to mean \(f(x)\geq f(y)\) for every \(f\in\mathcal{F}\). For any such collection \(\mathcal{F},\) the binary relation \(\succsim\) on \(X\) defined by \(x\succsim y\) iff \(\mathcal{F}(x)\geq\mathcal{F}(y),\) is a preorder on \(X.\) Conversely, for every preorder \(\succsim\) on \(X,\) there is a family \(\mathcal{F}\) with \(x\succsim y\) iff \(\mathcal{F}(x)\geq\mathcal{F}(y)\) for every \(x,y\in X.\)1 In this case, we say that \(\mathcal{F}\)_represents_\(\succsim\). In several applied mathematical fields, such as decision theory and the theory of optimal transportation, it is important to determine the structure of the families of real functions that may represent a given preorder in this sense.
Footnote 1: This is readily proved by taking \(\mathcal{F}\) as the set of all indicator functions of \(\{z\in X:z\succsim x\}\) as \(x\) varies over \(X.\)
As an easy consequence of Theorem 2, we find that any radial partial order on any metric space can be represented by a family of order-preserving \(1\)-Lipschitz real-valued functions.
**Proposition 5**.: _Let \((X,d,\succcurlyeq)\) be a radial partially ordered metric space. Then, there exists an \(\mathcal{F}\subseteq\operatorname{Lip}_{1,\uparrow}(X)\) that represents \(\succcurlyeq\). If \((X,d)\) is compact, we can choose \(\mathcal{F}\) in such a way that it is compact and \(\sup_{F\in\mathcal{F}}\left\|F\right\|_{\infty}\leq\operatorname{diam}(X).\)_
_Proof._ Assume \(|X|>1,\) which implies \(\succcurlyeq^{\bullet}\neq\varnothing,\) for otherwise there is nothing to prove. For any \(x,y\in X\) with \(x\succcurlyeq^{\bullet}y,\) define \(f_{x,y}\in\mathbb{R}^{\{x,y\}}\) by \(f_{x,y}(x):=d(x,y)\) and \(f_{x,y}(y):=0,\) and note that \(f\in\operatorname{Lip}_{1,\uparrow}(\{x,y\}).\) We apply Theorem 2 to extend \(f_{x,y}\) to an \(\succcurlyeq\)-increasing \(1\)-Lipschitz real-valued map \(F_{x,y}\) on \(X.\) Next, define \(\mathcal{F}:=\{F_{x,y}:x\succcurlyeq^{\bullet}y\}.\) Then, \(x\succcurlyeq y\) implies \(F(x)\geq F(y)\) for all \(F\in\mathcal{F}\) simply because every member of \(\mathcal{F}\) is \(\succcurlyeq\)-increasing. Conversely, if \(x\succcurlyeq y\) does not hold, we have \(F(x)<F(y)\) for some \(F\in\mathcal{F},\) namely, \(F=F_{y,x}.\)
Now suppose \((X,d)\) is compact. Put \(K:=\operatorname{diam}(X),\) and note that \(K\in(0,\infty).\) Next, for any fixed \(e\in X,\) define
\[\mathcal{G}:=\{\tfrac{1}{K}(F-F(e)):F\in\mathcal{F}\}.\]
Then, \(|G(x)|=|G(x)-G(e)|\leq K^{-1}d(x,e)\leq 1\) for every \(x\in X,\) so \(\left\|G\right\|_{\infty}\leq 1,\) for every \(G\in\mathcal{G}\). Moreover, \(\mathcal{G}\subseteq\operatorname{Lip}_{1/K,\uparrow}(X)\) and \(\mathcal{G}\) represents \(\succcurlyeq\). Then, \(\mathcal{H}:=K\mathrm{cl}(\mathcal{G})\) is a closed and bounded set of \(1\)-Lipschitz bounded functions which represents \(\succcurlyeq\). Since any subset of \(\operatorname{Lip}_{1}(X)\) is equicontinuous, applying the Arzela-Ascoli Theorem yields the second claim of the proposition. \(\square\)
As an immediate consequence of Proposition 5 we obtain the somewhat surprising fact that every radial partially ordered metric space is, per force, a metric poset. That is:
**Corollary 6**.: _Every radial partial order on a metric space \(X\) is a closed subset of \(X\times X\)._
The concepts of "radial partially ordered metric space" and "radial metric poset" are thus identical. We will adopt the latter terminology in the remainder of the paper.
We next apply our main extension theorem to show that a radially convex linear order on a \(\sigma\)-compact metric space can be represented by a Lipschitz function. The main ingredient of the argument is contained in the following observation.
**Lemma 7**.: _Let \((X,d,\succcurlyeq)\) be a radial metric poset. Then, for any compact subset \(S\) of \(X,\) there is a \(G\in\operatorname{Lip}_{1,\uparrow}(X)\) such that \(\left\|G\right\|_{\infty}\leq\operatorname{diam}(S)\)_ and
\[G(x)>G(y)\quad\text{ for every }\,x,y\in S\text{ with }x\succ y.\]
_Proof._ Take any compact \(S\subseteq X,\) and use Proposition 5 to find a compact, and hence separable, \(\mathcal{F}\subseteq\operatorname{Lip}_{1,\uparrow}(S)\) such that (i) \(\sup_{F\in\mathcal{F}}\left\|F\right\|_{\infty}\leq\operatorname{diam}(S);\) and (ii) \(x\succcurlyeq y\) iff \(\mathcal{F}(x)\geq\mathcal{F}(y)\) for every \(x,y\in S.\) Let \((F_{m})\) be a sequence in \(\mathcal{F}\) such that \(\{F_{1},F_{2},...\}\) is dense in \(\mathcal{F}\). We define \(G:=\sum_{n\geq 1}2^{-n}F_{n}.\) It is readily checked that \(G\in\operatorname{Lip}_{1,\uparrow}(S).\) Besides, if \(x,y\in S\) satisfy \(x\succ y,\) then \(F(x)>F(y)\) for some \(F\in\mathcal{F}\) (because \(\mathcal{F}\) represents \(\succcurlyeq\)). Consequently, since \(\{F_{1},F_{2},...\}\) is dense in \(\mathcal{F}\) relative to the uniform metric, there exists an \(n\in\mathbb{N}\) with \(F_{n}(x)>F_{n}(y),\) which implies \(G(x)>G(y).\) To complete the proof, we extend \(G\) to \(X\) by using Theorem 2, and recall Remark 1. \(\square\)
**Theorem 8**.: _Let \((X,d,\succcurlyeq)\) be a radial metric poset such that \((X,d)\) is \(\sigma\)-compact. Then, there is a Lipschitz function \(F:X\to\mathbb{R}\) with_
\[F(x)>F(y)\quad\text{ for every }\,x,y\in X\text{ with }x\succ y. \tag{7}\]
_Proof._ By hypothesis, there exists a sequence \((S_{m})\) of compact subsets of \(X\) such that \(S_{1}\subseteq S_{2}\subseteq\cdots\) and \(S_{1}\cup S_{2}\cup\cdots=X.\) We may assume \(\left|S_{1}\right|>1.\) Put \(K_{n}:=\operatorname{diam}(S_{n}),\) and note that \(K_{n}\in(0,\infty)\) for each \(n.\) By Lemma 7, for every \(n\in\mathbb{N},\) there is a \(G_{n}\in\operatorname{Lip}_{1,\uparrow}(X)\) such that \(\left\|G_{n}\right\|_{\infty}\leq K_{n}\) and \(G_{n}(x)>G_{n}(y)\) for every \(x,y\in S_{n}\) with \(x\succ y.\) We define \(F\in\mathbb{R}^{X}\) by \(F(x):=\sum_{n\geq 1}2^{-n}K_{n}^{-1}G_{n}.\) It is plain that \(F(x)>F(y)\) for every \(x,y\in X\) with \(x\succ y.\) Moreover, \(F\) is \(K_{1}^{-1}\)-Lipschitz. Indeed for any \(x,y\in X,\)
\[\left|F(x)-F(y)\right|\leq\sum_{n\geq 1}\tfrac{1}{2^{n}K_{n}}\left|G_{n}(x)-G_{ n}(y)\right|\leq\sum_{n\geq 1}\tfrac{1}{2^{n}K_{1}}d(x,y)\leq\tfrac{1}{K_{1}}d(x,y)\]
since \(K_{1}\leq K_{n}\) for each \(n.\)\(\square\)
**Corollary 9**.: _Let \((X,d,\succcurlyeq)\) be a radially convex metric poset such that \((X,d)\) is \(\sigma\)-compact. Then, there is a Lipschitz function \(F:X\to\mathbb{R}\) with_
\[x\succcurlyeq y\quad\text{ if and only if }\quad F(x)\geq F(y)\]
_for every \(x,y\in X.\)_
This result has the flavor of continuous utility representation theorems of decision theory. Indeed, it provides a rather easy proof of the following well-known result of that literature.
**Corollary 10**.: _Let \(\succsim\) be a continuous total preorder on a compact metric space \(X=(X,d).\) Then, there is a continuous function \(u:X\rightarrow\mathbb{R}\) such that_
\[x\succsim y\quad\text{ if and only if }\quad u(x)\geq u(y)\]
_for every \(x,y\in X.\)_
_Proof._ Define \(\mathbf{x}:=\{y\in X:x\succsim y\succsim x\}\) for any \(x\in X,\) and note that \(\mathcal{X}:=\{\mathbf{x}:x\in X\}\) is a partition of \(X.\) Then, the binary relation \(\succcurlyeq\) on \(\mathcal{X}\) defined by \(\mathbf{x}\succcurlyeq\mathbf{y}\) iff \(x\succsim y,\) is a partial order on \(\mathcal{X}.\) Let \(H_{d}\) stand for the Hausdorff metric on \(\mathcal{X}\). Then, \((\mathcal{X},H_{d},\succcurlyeq)\) is a compact metric loset. By the Carruth metrization theorem (of [10]), there exists a metric \(D\) on \(\mathcal{X}\) such that \(H_{d}\) and \(D\) are equivalent, and \(D(\mathbf{x},\mathbf{z})=D(\mathbf{x},\mathbf{y})+D(\mathbf{y},\mathbf{z})\) for every \(x,y,z\in X\) with \(x\succ y\succ z.\) We may thus apply Corollary 9 to obtain an \(\succcurlyeq\)-increasing and \(1\)-Lipschitz map \(F\) on \((\mathcal{X},D,\succcurlyeq)\) such that \(\mathbf{x}\succcurlyeq\mathbf{y}\) iff \(F(\mathbf{x})\geq F(\mathbf{y})\) for every \(x,y\in X.\) The map \(u:X\to\mathbb{R}\) with \(u(x):=F(\mathbf{x})\) fulfills the requirements of the assertion. \(\square\)
## 5. Monotone Uniformly Continuous Extensions
### A Monotone Version of McShane's Uniformly Continuous Extension Theorem

As another application of Theorem 2, we prove a uniformly continuous extension theorem in the context of radial metric posets. A special case of this theorem will correspond to the monotonic version of McShane's famous uniformly continuous extension theorem for bounded functions.
For any metric spaces \(X=(X,d_{X})\) and \(Y=(Y,d_{Y}),\) a function \(f:X\to Y\) is said to be _Lipschitz for large distances_ if for every \(\delta>0\) there is a \(K_{\delta}>0\) such that \(d_{Y}(f(x),f(y))\leq K_{\delta}d_{X}(x,y)\) whenever \(d_{X}(x,y)\geq\delta.\) This concept often arises with extension and approximation problems concerning uniformly continuous functions; see, for instance, [21], [18] and [7]. In fact, a basic result of this literature says that every uniformly continuous map on a Menger-convex metric space is, per force, Lipschitz for large distances (cf. [7, Proposition 1.11]).
We need to make two observations about real-valued functions that are Lipschitz for large distances. The first one is basic, and was noted explicitly in [18].
**Lemma 11**.: _Every bounded real-valued function on a metric space is Lipschitz for large distances._
_Proof._ For any bounded real-valued function \(f\) on a metric space \(X=(X,d),\) and \(\delta>0,\) we have
\[|f(x)-f(y)|\leq\left(\tfrac{2\|f\|_{\infty}}{\delta}\right)d(x,y)\]
for all \(x,y\in X\) with \(d(x,y)\geq\delta.\)\(\blacksquare\)
Our second observation provides a characterization of uniformly continuous real-valued maps that are Lipschitz for large distances. This characterization seems new, but we should note that Beer and Rice [6] work out several related results. In the statement of the result, and henceforth, \(\omega_{f}\) stands for the _modulus of continuity_ of any given real-valued function \(f\) on \(X=(X,d),\) that is, \(\omega_{f}:[0,\infty)\to[0,\infty]\) is the function defined by
\[\omega_{f}(t):=\sup\{|f(x)-f(y)|:x,y\in X\text{ and }d(x,y)\leq t\}.\]
**Lemma 12**.: _Let \(X=(X,d)\) be a metric space and \(f\in UC(X).\) Then, \(f\) is Lipschitz for large distances if and only if there exist nonnegative real numbers \(a\) and \(b\) such that \(\omega_{f}(t)\leq at+b\) for every \(t\geq 0.\)2_
Footnote 2: [6] refers to an \(f:X\to\mathbb{R}\) with the latter property as a function _having an affine majorant_, and investigates it in detail. In fact, this concept already plays a prominent role in McShane’s original work [22].
_Proof._ For any \(a,b\in\mathbb{R},\) let \(h_{a,b}\) denote the map \(t\mapsto at+b\) on \([0,\infty)\). Suppose first that \(\omega_{f}\leq h_{a,b}\) for some \(a,b\geq 0.\) Then, for any \(\delta>0,\) setting \(K_{\delta}:=a+b/\delta\) yields
\[|f(x)-f(y)|\leq\omega_{f}(d(x,y))\leq ad(x,y)+b\leq ad(x,y)+b\left(\tfrac{d(x,y )}{\delta}\right)=K_{\delta}d(x,y)\]
for every \(x,y\in X\) with \(d(x,y)\geq\delta.\) Conversely, suppose \(f\) is Lipschitz for large distances. Note first that uniform continuity of \(f\) entails that there is a \(\delta>0\) with \(\omega_{f}(\delta)\leq 1.\) In turn, by the Lipschitz property of \(f,\) there exists a \(K:=K_{\delta}>0\) such that
\[|f(x)-f(y)|\leq Kd(x,y)\quad\text{ for all }x,y\in X\text{ with }d(x,y)\geq\delta.\]
We wish to show that \(\omega_{f}\leq h_{K,1}.\) To this end, fix an arbitrary \(t\geq 0,\) and take any \(x,y\in X\) with \(d(x,y)\leq t.\) If \(d(x,y)<\delta,\) then \(|f(x)-f(y)|\leq\omega_{f}(\delta)\leq 1\leq h_{K,1}(t).\) Otherwise, \(|f(x)-f(y)|\leq Kd(x,y)\leq Kt\leq h_{K,1}(t).\) Conclusion: \(|f(x)-f(y)|\leq h_{K,1}(t)\) for any \(x,y\in X\) with \(d(x,y)\leq t.\) Taking the sup over all such \(x\) and \(y\) yields \(\omega_{f}(t)\leq h_{K,1}(t).\)\(\blacksquare\)
We now proceed to show that the uniformly continuous extension theorem of McShane [22] also generalizes to the context of radial metric posets. This is proved most easily by adopting the remetrization technique of Beer [4, pp. 23-25] which derives the said extension from the McShane-Whitney theorem. For the sake of completeness, we provide the details of Beer's technique within the proof.
**Theorem 13**.: _Let \((X,d,\succcurlyeq)\) be a radial metric poset and \(S\) a subset of \(X.\) Then, for every \(\succcurlyeq\)-increasing \(f\in UC(S)\) which is Lipschitz for large distances, there exists an \(\succcurlyeq\)-increasing \(F\in UC(X)\) with \(F|_{S}=f.\)_
_Proof._ Let \(\mathcal{H}\) stand for the set of all increasing affine self-maps \(h\) on \([0,\infty)\) with \(\omega_{f}\leq h.\) By Lemma 12, \(\mathcal{H}\neq\varnothing\). We may thus define the map \(\varphi:[0,\infty)\to\mathbb{R}\) by \(\varphi(t):=\inf_{h\in\mathcal{H}}h(t).\) Clearly, \(\varphi\) is an increasing and concave (hence subadditive) self-map on \([0,\infty).\) Since \(f\) is not constant, \(\varphi(t)>0\) for some \(t>0,\) so concavity of \(\varphi\) entails \(\varphi(t)>0\) for all \(t>0.\) We claim that \(\varphi\) is continuous at \(0\) (whence \(\varphi\in C([0,\infty))\)) and \(\varphi(0)=0\). To prove this, take any \(\varepsilon>0.\) Since \(f\) is uniformly continuous, there exists a \(\delta>0\) with \(\omega_{f}(t)\leq\varepsilon\) for every \(t\in[0,\delta).\) In turn, as \(f\) is Lipschitz for large distances, there exists a \(K>0\) such that \(|f(x)-f(y)|\leq Kd(x,y)\) whenever \(d(x,y)\geq\delta.\) Now consider the self-map \(h\) on \([0,\infty)\) with \(h(t):=Kt+\varepsilon.\) Clearly, \(\omega_{f}(t)\leq h(0)\leq h(t)\) for all \(t\in[0,\delta),\) while \(\omega_{f}(t)\leq\max\{\varepsilon,Kt\}\leq h(t)\) for all \(t\geq\delta.\) It follows that \(h\in\mathcal{H}\). But then \(\varphi(t)\leq Kt+\varepsilon\) for all \(t\geq 0,\) which implies \(\inf_{t>0}\varphi(t)\leq\varepsilon.\) In view of the arbitrary choice of \(\varepsilon,\) we conclude that \(\inf_{t>0}\varphi(t)=0=\varphi(0).\)
With these preparations in place, we now turn to the task at hand. Define \(D:X\times X\to\mathbb{R}\) by \(D(x,y):=\varphi(d(x,y)).\) Since \(\varphi(t)>0\) for all \(t>0,\) it is obvious that \(D(x,y)>0\) for every distinct \(x,y\in X,\) while \(\varphi(0)=0\) implies \(D(x,x)=0\) for all \(x\in X\). Moreover, \(D\) is clearly symmetric and it satisfies the triangle inequality (because \(\varphi\) is increasing and subadditive). Thus: \((X,D,\succcurlyeq)\) is a partially ordered metric space. As \(\varphi\) is increasing, this space is radial. Besides, \(|f(x)-f(y)|\leq h(d(x,y))\) for every \(x,y\in S\) and \(h\in\mathcal{H}\), and it follows that \(|f(x)-f(y)|\leq D(x,y)\) for every \(x,y\in S,\) that is, \(f\) is \(1\)-Lipschitz on the metric space \((S,D|_{S\times S}).\) By
Theorem 2, therefore, there exists an \(\succcurlyeq\)-increasing \(F:X\rightarrow\mathbb{R}\) which is \(1\)-Lipschitz on \((X,D)\) with \(F|_{S}=f.\) But then for every \(\varepsilon>0,\) continuity of \(\varphi\) at \(0\) ensures that there is a \(\delta>0\) small enough that \(\varphi(t)<\varepsilon\) for all \(t\in(0,\delta),\) which means \(|F(x)-F(y)|<\varepsilon\) for all \(x,y\in X\) with \(d(x,y)\leq\delta.\) It follows that \(F\) is uniformly continuous on the metric space \((X,d).\)\(\blacksquare\)
Since every bounded map on a metric space is Lipschitz for large distances (Lemma 11), the following is a special case of Theorem 13. When \(\succcurlyeq\) is taken as the equality relation in its statement, this result reduces to McShane's uniformly continuous extension theorem for bounded real-valued functions.
**Corollary 14**.: _Let \((X,d,\succcurlyeq)\) be a radial metric poset. Every \(\succcurlyeq\)-increasing, bounded and uniformly continuous map on a subset \(S\) of \(X\) can be extended to an \(\succcurlyeq\)-increasing and uniformly continuous map on \(X\)._
As every continuous map on a compact metric space is uniformly continuous, an immediate consequence of Corollary 14 is the following observation which provides a companion to Nachbin's extension theorem.
**Corollary 15**.: _Let \((X,d,\succcurlyeq)\) be a radial metric poset, and \(S\) a nonempty compact subset of \(X\). Then, for every \(\succcurlyeq\)-increasing \(f\in C(S),\) there is an \(\succcurlyeq\)-increasing \(F\in UC(X)\) with \(F|_{S}=f\)._
At the cost of imposing the radiality property, this result drops the topological requirement of being normally ordered in Nachbin's extension theorem, and in addition, it guarantees the uniform continuity of the extension as opposed to its mere continuity.
### The Monotone Uniform Extension Property
It should be noted that the similarity of the statements of Theorem 4 and Corollary 14 is misleading. To clarify this point, let us say that a partially ordered metric space \((X,d,\succcurlyeq)\) has the _monotone uniform extension property_ if for every closed \(S\subseteq X\) and \(\succcurlyeq\)-increasing and bounded \(f\in UC(S),\) there is an \(\succcurlyeq\)-increasing \(F\in UC(X)\) with \(F|_{S}=f.\) The point we wish to make is that this property is categorically different from the monotone Lipschitz extension property. After all, the proof of Theorem 2 shows that a finite metric poset has the monotone Lipschitz extension property iff that metric poset is radial. In other words, finiteness of the carrier does not allow us to improve Theorem 2. By contrast, one can inductively prove that every finite metric poset has the monotone uniform extension property. More generally, an immediate application of Nachbin's extension theorem yields the following fact:
_Let \((X,d,\succcurlyeq)\) be a metric poset such that \((X,d)\) is an UC-space.3 Then, \((X,d,\succcurlyeq)\) has the monotone uniform extension property, provided that it is normally ordered._
The family of all partially ordered metric spaces with the monotone uniform extension property is thus much larger than that of radial metric posets. Characterization of this family remains as an interesting open problem.
Acknowledgement. We thank professors Jerry Beer, Hiroki Nishimura, Gil Riella, and Nik Weaver for their insightful comments at the development stage of this work.
|
2308.16600 | The Synchronization Power of Auditable Registers | Auditability allows to track all the read operations performed on a register.
It abstracts the need of data owners to control access to their data, tracking
who read which information. This work considers possible formalizations of
auditing and their ramification for the possibility of providing it.
The natural definition is to require a linearization of all write, read and
audit operations together (atomic auditing). The paper shows that atomic
auditing is a powerful tool, as it can be used to solve consensus. The number
of processes that can solve consensus using atomic audit depends on the number
of processes that can read or audit the register. If there is a single reader
or a single auditor (the writer), then consensus can be solved among two
processes. If multiple readers and auditors are possible, then consensus can be
solved among the same number of processes. This means that strong
synchronization primitives are needed to support atomic auditing.
We give implementations of atomic audit when there are either multiple
readers or multiple auditors (but not both) using primitives with consensus
number 2 (swap and fetch&add). When there are multiple readers and multiple
auditors, the implementation uses compare&swap.
These findings motivate a weaker definition, in which audit operations are
not linearized together with the write and read operations (regular auditing).
We prove that regular auditing can be implemented from ordinary reads and
writes on atomic registers. | Hagit Attiya, Antonella Del Pozzo, Alessia Milani, Ulysse Pavloff, Alexandre Rapetti | 2023-08-31T09:58:21Z | http://arxiv.org/abs/2308.16600v1 | # The Synchronization Power of Auditable Registers
###### Abstract
Auditability allows to track all the read operations performed on a register. It abstracts the need of data owners to control access to their data, tracking who read which information. This work considers possible formalizations of auditing and their ramification for the possibility of providing it.
The natural definition is to require a linearization of all write, read and audit operations together (_atomic_ auditing). The paper shows that atomic auditing is a powerful tool, _as it can be used to solve consensus_. The number of processes that can solve consensus using atomic audit depends on the number of processes that can read or audit the register. If there is a single reader or a single auditor (the writer), then consensus can be solved among two processes. If multiple readers and auditors are possible, then consensus can be solved among the same number of processes. This means that strong synchronization primitives are needed to support atomic auditing.
We give implementations of atomic audit when there are either multiple readers or multiple auditors (but not both) using primitives with consensus number 2 (swap and fetch&add). When there are multiple readers _and_ multiple auditors, the implementation uses compare&swap.
These findings motivate a weaker definition, in which audit operations are not linearized together with the write and read operations (_regular_ auditing). We prove that regular auditing can be implemented from ordinary reads and writes on atomic registers.
Auditability, atomic register, fault tolerance, consensus number
We abstract auditability by augmenting a read/write register with an _audit_ operation. An audit operation indicates who performed read operations on it, and which values they have read. Auditability is defined in terms of two properties: _completeness_ ensures that readers that access data are detected, while _accuracy_ ensures that readers who do not access data are not incriminated.
In this work, we formalize the correctness of auditable registers in terms of their high-level operations (read, write and audit). We first formalize a natural extension of an atomic register, called _atomic register with atomic audit_ where all the operations (including audit) appear to happen in a sequential order that respects their real-time order.
We show that an _atomic register with atomic audit_ is a powerful abstraction, _because it has a greater consensus number than an ordinary atomic register_. Recall that the _consensus number_ [12] of object type \(X\) is \(m\) if \(m\) is the largest integer such that there exists an asynchronous consensus algorithm for \(m\) processes, of which up to \(m-1\) may crash, using only shared objects of type \(X\) and read/write registers.
We present a wait-free algorithm that solves consensus among _two processes_, using an atomic register with atomic audit where only one process (the writer) can perform audit operations. This stands in contrast to the well-known result [12] that atomic read/write registers cannot be used to solve wait-free consensus among two processes. We then show that when \(m\) processes can read and audit the register, it is possible to solve consensus among \(m\) processes.
These results indicate that base objects stronger than read/write registers are needed to implement atomic audit, motivating our implementations of an atomic register with atomic audit. When there is either a single auditor or a single reader, we use base objects with consensus number 2 (_swap_ and _fetch\(\&\)add_).
Specifically, we first present a simple algorithm for a single-reader atomic auditable register with atomic audit, where the writer is the only process that can execute the audit operations. The writer needs to atomically retrieve who read the previously written value while writing a new value. With a single reader, this can be easily ensured by using _swap_ primitives: to read a value, the reader atomically swaps it with a special value. If the writer retrieves this special value when writing a new value, then it is aware of the value read.
Extending this idea to multi-readers is challenging, since readers might be swapping each other's values. We propose a solution that uses a single shared object accessed with _swap_ and _fetch\(\&\)add_ primitives. The \(n\) low-order bits of the value stored in this object, where \(n\) is the number of readers, are used to indicate which readers have accessed the value stored in the high-order bits. Each reader is assigned a unique bit, which is set to 1 when the reader accesses the value. Then, when the writer writes a new value it can learn who read its previous value by checking the values of the low-order bits it retrieved from the value atomically read while writing the new value. A similar algorithm allows to support multiple auditors, but only a single reader, also using _swap_ and _fetch\(\&\)add_.
When there are multiple readers and auditors, we use _compare\(\&\)swap_, which has consensus number \(\infty\), in addition to _fetch\(\&\)add_.
Taken together, our results mean that atomic audit cannot be implemented using only reads and writes, and stronger primitives (with consensus number \(>1\)) should be used.
We then investigate the possibility to extend an atomic register with a useful audit operation without relying on strong synchronization primitives. This weaker abstraction is called a _regular_ audit, and roughly speaking, differs from the previous one in not having the audit operations linearized together with the write and read operations. In particular, a regular audit operation _aop_ may not detect a read operation that is concurrent with it, even though the read has to be linearized before a write that completes before the invocation of
_aop_. Our final result is a single-writer multi-reader atomic register with multi-auditor _regular_ audit, using only atomic read and write operations, whose consensus number is 1.
Organization of the paper. After discussing related work in the next section, Section 3 describes our model of computation, and the definitions of _atomic_ and _regular_ auditing. Sections 4 and 5 contain the results for an atomic register with atomic audit. Section 6 elaborates on those results to formalize the synchronization power of atomic audit in terms of consensus number. Section 7 contains the implementation of an atomic register with regular audit. We conclude in Section 8.
## 2 Related Work
To the best of our knowledge, only two papers [5, 6] study auditability of read operations. Cogo and Bessani [5] were the first to formalize the notion of auditable register. Their definition is tailored for auditable register implementations on top of a shared memory model where some base objects can be faulty, i.e., they can omit to record readers or they can record nonexistent read operations. They present an algorithm to implement an auditable register, where read and write operations provide regular semantics, using \(n\geq 4f+1\) atomic read/write shared objects, \(f\) of which may be faulty. Because of their failure model, their high-level register implementation relies on information dispersal schemes, where the input of a high-level write is split into several pieces, each written in a different low-level shared object. This implies that a process can read a written value only if it collects enough pieces of information, making its read operation detectable. Their definition of completeness and accuracy for the auditable register relies on the notion of _effectively read_, which they formalize to capture the fact that the process executing the high-level read operation could have collected enough pieces of information and be able to retrieve the value, even if the read operation does not return.
In asynchronous message-passing systems where \(f\) processes can be Byzantine, Del Pozzo et al. [6] study the possibility of implementing an atomic auditable register with the accuracy and completeness properties, as defined by Cogo and Bessani, with fewer than \(4f+1\) processes. They prove that without communication between servers, auditability requires at least \(4f+1\) processes, \(f\) of which may be Byzantine. They also show that allowing processes to communicate with each other admits an auditable atomic register with only \(3f+1\) processes, providing optimal resilience. Their implementation also uses information dispersal scheme to deal with Byzantine processes.
In contrast, we consider a classical shared memory model where processes can fail by crashing. Also, our definition of auditable register is not tailored for a given class of implementations, since it is stated in terms of high-level operations.
Most of the other works on auditing protocols for distributed storage focus on data integrity [7, 8, 15, 21, 22, 23], see a survey in [13]. In contrast, our work focuses on auditing _who_ has read _which_ data.
Frey, Gestin and Raynal [9] investigate the synchronization power of _AllowList_ and _DenyList_: append-only lists where AllowList contains resources that processes can access, while DenyList includes resources that processes cannot access. They prove the consensus number of AllowList is 1, while the consensus number of DenyList is equal to the number of processes that can access resources not listed in the DenyList. These objects are related to an auditable register, because they also record which processes accessed a resource and how many times. However, the precise nature of this relationship is unclear since some variants
of an auditable register with atomic audit (with a single reader or a single auditor) have consensus number 2, a phenomenon that does not happen with AllowList or DenyList.
Finally, we discuss the relation of auditability to _accountability_ [3, 4, 11, 20]. When faulty processes are malicious, accountability aims to produce proofs of misbehavior in instances where processes deviate, in an observable way, from the prescribed protocol. This allows the identification and removal of malicious processes from the system as a way to clean the system after a safety violation. In contrast, auditability logs the processes' actions and lets the auditor derive conclusions about their behavior.
## 3 Preliminaries
### Model
We consider a standard shared-memory model where crash-prone asynchronous processes communicate through registers, using a given set of primitive operations. The primitive operations (sometimes called just primitives) include ordinary _read_ and _write_, as well as _swap_, _fetch\(\&\)add_ and _compare\(\&\)swap_.
A _swap\((v)\)_ primitive atomically writes \(v\) to the register and returns its previous value. A _fetch\(\&\)add\((a)\)_ primitive atomically writes the sum of \(a\) and the current value of the register into the register and returns its previous value. A _compare\(\&\)swap\((\)old\(,\)new\()\)_ primitive is an atomic conditional write: the write of \(new\) is executed if and only if the value of the register is \(old\); a boolean value is returned that indicates if the write was successful or not.
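As an illustration of these primitives (not part of the model itself), the following Java sketch shows how they map onto `java.util.concurrent.atomic.AtomicLong`; the wrapper class and its method names are ours.

```java
import java.util.concurrent.atomic.AtomicLong;

// Illustration only: the primitives of the model expressed with AtomicLong.
final class PrimitiveRegister {
    private final AtomicLong r;

    PrimitiveRegister(long initialValue) { this.r = new AtomicLong(initialValue); }

    long read()        { return r.get(); }
    void write(long v) { r.set(v); }

    // swap(v): atomically writes v and returns the previous value.
    long swap(long v) { return r.getAndSet(v); }

    // fetch&add(a): atomically adds a to the value and returns the previous value.
    long fetchAndAdd(long a) { return r.getAndAdd(a); }

    // compare&swap: writes newValue only if the current value equals expected;
    // returns whether the write took place.
    boolean compareAndSwap(long expected, long newValue) {
        return r.compareAndSet(expected, newValue);
    }
}
```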
The auditable atomic register is an extension of an ordinary atomic read/write register [14]. It is formally defined in the next subsection. We only consider _single-writer_ registers, where each register can be written by a single process.
An auditable register implementation specifies the state representation of the register and the algorithms processes follow when they perform the read, write and audit operations. Each operation has an invocation and a response event.
An execution is a sequence of steps performed by processes as they follow their algorithms, in each of which a process applies at most a single primitive to the shared memory (possibly in addition to some local computation).
A _history_\(H\) is a sequence of invocation and response events; no two events occur at the same time. An operation is _complete_ in history \(H\), if \(H\) contains both the invocation and the matching response for this operation. If the matching response is missing, the operation is _pending_. An operation \(op\)_precedes_ another operation \(op^{\prime}\) in \(H\) if the response of \(op\) appears before the invocation of \(op^{\prime}\) in \(H\); we also say that \(op^{\prime}\)_follows_\(op\).
A history is _sequential_ if each operation invocation is immediately followed by the matching response, by the same process on the same object. For a given history \(H\), \(\textsf{complete}(H)\) is the set of histories obtained from \(H\) by appending zero or more responses to some pending invocations and discarding the remaining pending invocations.
We consider _wait-free_ implementations, which ensure that a non-faulty process completes an operation within a finite number of its own steps.
### Definitions of Auditable Register
An auditable register supports three operations: \(R.write(v)\) which assigns value \(v\) to the register \(R\), \(R.read()\) which returns the value of the register, and \(R.audit()\) which reports the set of all values read in the register and by whom. Specifically, an audit operation returns a set of pairs \((p,v)\), each corresponding to a read invoked by process \(p\) that returned \(v\). In the
following, we define two specifications for _audit_ operations, exploring different semantics of their interaction with concurrent read and write operations.
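Purely as an illustration of this interface (the types, the names, and the use of `null` for the initial value \(\bot\) are our own conventions, reused in the sketches that follow), the three operations can be rendered in Java as:

```java
import java.util.Set;

// Sketch of the high-level interface of an auditable register, with values of
// type V and process identifiers of type P.
interface AuditableRegister<P, V> {

    // A pair (p, v): process p performed a read that returned v.
    record ReadRecord<Q, U>(Q reader, U value) { }

    V read();                       // R.read(): returns the value of the register
    void write(V v);                // R.write(v): assigns v to the register (single writer)
    Set<ReadRecord<P, V>> audit();  // R.audit(): which values were read, and by whom
}
```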
Atomic Audit. Intuitively, atomicity provides the illusion that all the read, write, and audit operations appear as if they have been executed sequentially.
**Definition 1**.: A history \(H\) is _atomic with atomic audit_ if there is a history \(H^{\prime}\) in \(\mathsf{complete}(H)\) and a sequential history \(\pi\) that contains all operations in \(H^{\prime}\) such that:
1. If operation \(\mathit{op}_{1}\) precedes operation \(\mathit{op}_{2}\) in \(H\), then \(\mathit{op}_{1}\) appears before \(\mathit{op}_{2}\) in \(\pi\). Informally, \(\pi\) respects the real-time order of non-overlapping operations.
2. Every read in \(\pi\) returns the value of the most recent preceding write, if there is one, and the initial value, otherwise. Informally, the history \(\pi\) respects the semantics of an atomic read / write register.
3. Every audit \(\mathit{op}\) in \(\pi\) returns a set of pairs \(\mathcal{P}\) such that (Completeness): For each read operation \(\mathit{op}^{\prime}\) by process \(p\) that precedes \(\mathit{op}\) in \(\pi\), \((p,v)\in\mathcal{P}\), where \(v\) is the value returned by \(\mathit{op}^{\prime}\). (Strong Accuracy): For any pair \((p,v)\in\mathcal{P}\), there is a read operation \(\mathit{op}^{\prime}\) by process \(p\) that returned \(v\), and \(\mathit{op}^{\prime}\) precedes \(\mathit{op}\) in \(\pi\).
Roughly speaking, _completeness_ formalizes that any read of a value from the register must be detected by the audit operation, while _strong accuracy_ ensures that a read is reported by an audit operation only if it has occurred. Note that taken together, completeness and strong accuracy say that a pair \((p,v)\) is returned by the audit operation _if and only if_ a read operation by process \(p\), returning \(v\), is linearized in \(\pi\) before the audit. That is, an _atomic audit_ operation detects all the read operations linearized before it and does not detect any read operation linearized after it.
Regular Audit. A _regular audit_ operation detects all read operations that complete before the audit starts and does not detect any read operation that starts after it completes. An audit operation may detect some subset of the read operations that overlap it.
**Definition 2**.: A history \(H\) is _atomic with regular audit_ if there is a history \(H^{\prime}\) in \(\mathsf{complete}(H)\), and a sequential history \(\pi\) that contains all read and write operations in \(H^{\prime}\) and satisfies the first two conditions of Definition 1, and in addition:
1. Every audit \(\mathit{op}\in H^{\prime}\) returns a set of pairs \(\mathcal{P}\) such that (Completeness): For each read operation \(\mathit{op}^{\prime}\) in \(H^{\prime}\) by process \(p\), that completes in \(H^{\prime}\) before the invocation of \(\mathit{op}\) in \(H^{\prime}\), \((p,v)\in\mathcal{P}\), where \(v\) is the value returned by \(\mathit{op}^{\prime}\). (Accuracy): For any pair \((p,v)\in\mathcal{P}\), there is a read operation \(\mathit{op}^{\prime}\in H^{\prime}\) by process \(p\) that returned \(v\), and the invocation of \(\mathit{op}^{\prime}\) in \(H^{\prime}\) precedes the response of \(\mathit{op}\) in \(H^{\prime}\).
Note that while the condition on atomic audit operations (Definition 1) is stated relative to the linearization (sequential execution) \(\pi\), the condition on regular audit is stated relative to the completion \(H^{\prime}\) of the original history \(H\). As we shall see, this seemingly-minor change leads to an important difference in the synchronization power of audit operations.
Figure 1 depicts the difference between the responses of atomic audit and regular audit.
In the rest of the paper, we consider that only one process can invoke write operations on the register, called the _writer_, which is also allowed to invoke audit operations. Thus, the _writer_ is also an _auditor_ of the register.
When several processes other than the writer are allowed to read the register, we call it _multi reader_; otherwise, it is _single reader_. Similarly, if several processes other than the writer can audit the register, we call it _multi auditor_; otherwise, it is _single auditor_.
## 4 Using atomic audit to solve consensus
In this section, we investigate how atomic audit allows to solve consensus. An algorithm solving consensus satisfies the following properties:
**Termination:**: A process decides within a finite number of its own steps.
**Agreement:**: All processes decide on the same value.
**Validity:**: The decision value has been proposed by some process.
### Single-reader register with single-auditor atomic audit solves two-process consensus
Algorithm 1 solves consensus between two processes using two single-writer single-reader (swsr) atomic registers with a single-auditor atomic audit: \(R_{i}\), for each \(i\in\{0,1\}\), is a swsr register written and audited by process \(p_{i}\) and read by \(p_{1-i}\).
Each process first writes the value it proposes in its own register. Then it reads the other process's register and audits its own register. Finally, it returns its own value or the other process's value, according to the values returned from the read and audit operations. In particular, \(p_{i}\) returns its own value (Line 6) if it read the initial value from \(R_{1-i}\) (Line 5). In that case, \(p_{1-i}\) reads \(v_{i}\) from \(R_{i}\) (Line 3). The condition in Line 5 would not hold, and since the audit operation on \(R_{1-i}\) detects that \(p_{i}\) read \(\bot\) from \(R_{1-i}\) (Line 4), \(p_{1-i}\) returns the value of _val_ (Line 8), which is \(v_{i}\). Finally, if \(p_{i}\) and \(p_{1-i}\) both read the input of the other process and they know this fact thanks to the result of the audit operation, they apply a deterministic rule to break the tie and choose the same value.
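The following Java fragment is only a sketch of this behavior as just described (the paper's listing, with its line numbers, is not reproduced here); it assumes the `AuditableRegister` interface sketched earlier, uses `null` for \(\bot\), integer proposals, and the maximum as the deterministic tie-breaking rule.

```java
import java.util.List;
import java.util.Set;

// Sketch of the two-process consensus algorithm described above.
// regs.get(i) is written and audited by process i and read by process 1-i;
// null stands for the initial value ⊥.
final class TwoProcessConsensus {
    private final List<AuditableRegister<Integer, Integer>> regs;

    TwoProcessConsensus(List<AuditableRegister<Integer, Integer>> regs) { this.regs = regs; }

    int propose(int i, int v) {
        regs.get(i).write(v);                           // publish the proposal in R_i
        Integer val = regs.get(1 - i).read();           // read the other register
        Set<AuditableRegister.ReadRecord<Integer, Integer>> audited = regs.get(i).audit();
        if (val == null) {
            return v;                                   // the other write is linearized later: decide own value
        }
        if (audited.contains(new AuditableRegister.ReadRecord<Integer, Integer>(1 - i, null))) {
            return val;                                 // the other process missed v and decides its own value
        }
        return Math.max(v, val);                        // both values are known to both: deterministic tie-break
    }
}
```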
**Lemma 3**.: _Algorithm 1 satisfies validity._
Proof.: If \(p_{i}\) returns \(v_{i}\) (Line 6 or Line 9), then validity holds since \(v_{i}\) is the value \(p_{i}\) proposed itself.
Otherwise, \(p_{i}\) returns _val_ (Line 8 and Line 9). Then, _val_ is the result of a read from \(R_{1-i}\) (Line 3). We argue that this is the value proposed by \(p_{1-i}\). Process \(p_{i}\) does not satisfy the condition in Line 5, implying that _val_ is not the initial value of \(R_{1-i}\). Since the only write operation in \(R_{1-i}\) is the input value of \(p_{1-i}\) (Line 2), Definition 1(2) implies validity.
**Lemma 4**.: _Algorithm 1 satisfies agreement._
Proof.: We consider all possible ways process \(p_{i}\) can return a value, and show that \(p_{1-i}\) must return the same value.
_Case 1:_\(p_{i}\) returns its own value \(v_{i}\) in Line 6. Then, by Line 5, \(R_{1-i}.read()\) returned \(\bot\) to \(p_{i}\) (Line 3). Thus, by Definition 1(1), \(R_{1-i}.write(v_{1-i})\) by \(p_{1-i}\) is linearized after \(R_{1-i}.read()\) by \(p_{i}\). This implies that \(R_{i}.write(v_{i})\) is linearized before \(R_{i}.read()\), which returns \(v_{i}\) to \(p_{1-i}\), by Definition 1(2). Finally, by Definition 1(3), \(R_{1-i}.audit()\) returns \(\{(p_{i},\bot)\}\) to \(p_{1-i}\). Therefore, the condition in Line 7 holds for \(p_{1-i}\) and it returns _val_ that contains \(v_{i}\) (Line 8).
Figure 1: A scenario where a regular audit can return either \(\emptyset\) or \((p_{1},1)\), while an atomic audit must return \((p_{1},1)\).
_Case 2:_\(p_{i}\) returns _val_ at Line 8, which holds \(v_{1-i}\), the value proposed by \(p_{1-i}\). In this case, the condition at Line 7 holds for \(p_{i}\). By Definition 1(3), \(R_{i}.read()\) by \(p_{1-i}\) returns \(\bot\) (Line 3), implying that \(p_{1-i}\) returns \(v_{1-i}\) (by Line 5 and Line 6).
_Case 3:_\(p_{i}\) returns at Line 9. This means that \(p_{i}\) has read \(v_{1-i}\) from \(R_{1-i}\) (Line 3) and its audit response is not \(\{(p_{1-i},\bot)\}\) (Line 4). Hence, the audit response is either \(\emptyset\), meaning that \(p_{1-i}\) has not yet read \(R_{i}\), or \(\{(p_{1-i},v_{i})\}\). In the first case, by Definition 1(3), \(R_{i}.read()\) by \(p_{1-i}\) is linearized after \(R_{i}.audit()\) by \(p_{i}\), and thus \(R_{i}.write(v_{i})\) by \(p_{i}\) is linearized before \(R_{i}.read()\) by \(p_{1-i}\); in the second case, \(p_{1-i}\) has already read \(v_{i}\). In both cases, \(R_{i}.read()\) by \(p_{1-i}\) returns \(v_{i}\) (Line 3). Since \(R_{1-i}.read()\) by \(p_{i}\) did not return \(\bot\), Definition 1(3) implies that \(\{(p_{i},\bot)\}\) is not the response of \(R_{1-i}.audit()\) by \(p_{1-i}\). It follows that \(p_{i}\) and \(p_{1-i}\) return the same value, which is the maximum between \(v_{i}\) and \(v_{1-i}\).
The algorithm satisfies validity (Lemma 3) and agreement (Lemma 4). Furthermore, **propose** performs a constant number of primitive operations, so Algorithm 1 is clearly wait-free, therefore respects the termination property. This implies:
**Theorem 5**.: _Algorithm 1 solves consensus for two processes._
### Multi-reader register with multi-auditor atomic audit solves \(n\)-process consensus
We now generalize Algorithm 1 to solve consensus among \(n\) processes using single-writer multi-reader (swmr) atomic registers with _multi-auditor_ atomic audit. Like the algorithm for two-process consensus, processes leverage the audit to reconstruct at which point of the execution the other processes are, and base their decision on it.
Algorithm 2 uses \(n\) swmr atomic registers with multi-auditor atomic audit \(R_{0}\),..., \(R_{n-1}\), all initially \(\bot\). Process \(p_{i}\) is the single writer of \(R_{i}\), and all processes can read and audit \(R_{i}\).
Each process \(p_{i}\) proposes its input, by writing it in \(R_{i}\). Then, \(p_{i}\) reads and audits all the other registers. A simple situation is when one process, say \(p_{i}\), writes and reads before all other processes, the audit detects that \(p_{i}\) read \(v_{i}\) in \(R_{i}\) and \(\bot\) in all other registers. This implies that all later processes will read \(p_{i}\)'s value in \(R_{i}\) and, thanks to the audit, detect that
\(p_{i}\) is not aware of the other processes' propositions. In this case, \(v_{i}\) is the only value known to all processes, so it is safe to decide on \(v_{i}\).
In general, we can consider the set \(P\) of processes that write before any process reads. No process reads \(\bot\) from the registers of processes in \(P\), and this can be detected by auditing these registers. This means that all processes consider the input values of processes in \(P\) as safe to decide upon, and agreement can be reached by deterministically picking one of these values, e.g., the maximal one.
Each process keeps the following local data structures: _values_[ ] is an array of length \(n\) to hold the values read from \(R_{0}\),..., \(R_{n-1}\), initially \(\bot\); _safe_values_ is a set that stores the proposed values that no process missed, initially \(\emptyset\); and _audit_response_ holds the results of audits on \(R_{0}\),..., \(R_{n-1}\), initially \(\emptyset\).
When proposing a value \(v_{i}\), each process \(p_{i}\) first writes \(v_{i}\) in \(R_{i}\) (Line 2). Next, \(p_{i}\) reads \(R_{0}\),..., \(R_{n-1}\) and stores the responses in _values_[ ] (Line 4). Finally, \(p_{i}\) audits \(R_{0}\),..., \(R_{n-1}\) and stores the returned pairs in a set _audit_response_ (Line 6). For each \(R_{j}\), when a value is added to _audit_response_, \(p_{i}\) checks if there is a process that read \(\bot\) from \(R_{j}\) (Line 7). If this is not the case, then the value in \(R_{j}\) is considered safe, and is added to _safe_values_ (Line 8). Finally, it returns the maximum value in _safe_values_ (Line 9). (We assume that the input values are from a totally-ordered set.)
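Again only as a sketch of the steps just described (Algorithm 2's listing is not reproduced here), reusing the `AuditableRegister` interface sketched earlier, with `null` for \(\bot\) and integer proposals:

```java
import java.util.List;
import java.util.Set;
import java.util.TreeSet;

// Sketch of the n-process consensus algorithm described above.
// regs.get(j) is written by process j and read and audited by every process.
final class NProcessConsensus {
    private final List<AuditableRegister<Integer, Integer>> regs;

    NProcessConsensus(List<AuditableRegister<Integer, Integer>> regs) { this.regs = regs; }

    int propose(int i, int v) {
        int n = regs.size();
        regs.get(i).write(v);                                      // publish the proposal in R_i
        Integer[] values = new Integer[n];
        for (int j = 0; j < n; j++) {
            values[j] = regs.get(j).read();                        // read all registers
        }
        TreeSet<Integer> safeValues = new TreeSet<>();
        for (int j = 0; j < n; j++) {
            Set<AuditableRegister.ReadRecord<Integer, Integer>> audited = regs.get(j).audit();
            boolean someoneMissedIt =
                audited.stream().anyMatch(rec -> rec.value() == null);  // some process read ⊥ from R_j
            if (!someoneMissedIt && values[j] != null) {           // null check is redundant, see Lemma 6
                safeValues.add(values[j]);                         // nobody missed the value written in R_j
            }
        }
        return safeValues.last();                                  // deterministic choice: the maximum safe value
    }
}
```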
**Lemma 6**.: A process \(p_{i}\) adds a value \(v\) to _safe_values_ only if \(v\) was proposed by some process.
Proof.: Process \(p_{i}\) adds values it reads from \(R_{0}\),..., \(R_{n-1}\), the registers of other processes, to _safe_values_ in Line 8. The value read from a register \(R_{j}\), in Line 4, is either \(\bot\) (the initial value of the register) or the value proposed by \(p_{j}\), written to \(R_{j}\) in Line 2.
We next argue that \(p_{i}\) does not add \(\bot\) to _safe_values_. If \(p_{i}\) read \(\bot\) from some \(R_{j}\), then since its audit on \(R_{j}\) follows its read from \(R_{j}\), the Completeness of its audit operation (Definition 1(3)) implies that \((p_{i},\bot)\) is contained in the response of \(R_{j}.audit()\). By the condition of Line 7, \(\bot\) is not added to _safe_values_.
**Lemma 7**.: _Algorithm 2 satisfies validity._
Proof.: A process decides on a value in _safe_values_ (Line 9), and by Lemma 6, this set contains only values proposed by some process. We complete the proof by showing that _safe_values_ is not empty.
Let \(p_{k}\) be the first process to complete its write of \(v_{k}\) to \(R_{k}\). Since all processes read the registers of the other processes after finishing the write, it follows that all the processes read \(v_{k}\neq\bot\) from \(R_{k}\). By Strong Accuracy (Definition 1(3)), the audit of \(R_{k}\) does not contain a pair \((p_{j},\bot)\), for any \(p_{j}\). Therefore, the condition in Line 7 holds and \(p_{i}\) adds \(v_{k}\) to _safe_values_ in Line 8, as needed.
**Lemma 8**.: _Algorithm 2 satisfies agreement._
Proof.: We prove that all processes have the same set _safe_values_ when deciding, which immediately implies agreement. Suppose that process \(p_{i}\) is the first to add a value \(v_{k}\) to its _safe_values_ set. This means that \(p_{i}\) reads \(v_{k}\) from register \(R_{k}\) (Line 4).
Let \(aop_{i}\) be the audit by \(p_{i}\) on \(R_{k}\) (Line 7). Since \(p_{i}\) adds \(v_{k}\) to _safe_values_, it follows that no pair \((p_{j},\bot)\), for some process \(p_{j}\), is included in the response to \(aop_{i}\). This implies that all read operations from \(R_{k}\) that are linearized before \(aop_{i}\) do not return \(\bot\).
Consider a read operation by process \(p_{j}\) from \(R_{k}\) that is linearized after \(aop_{i}\). This follows the read of process \(p_{i}\) from \(R_{k}\), which returns \(v_{k}\neq\bot\), and hence, this read will also read \(v_{k}\neq\bot\). (Since only \(p_{k}\) writes to \(R_{k}\), once, changing its value from \(\bot\) to \(v_{k}\).)
Thus, no read from \(R_{k}\) returns \(\bot\). This means that any process \(p^{\prime}_{i}\) considers \(v_{k}\neq\bot\) in Line 7. Moreover, by the Strong Accuracy property of the audit operation (Definition 1(3)), it follows that no pair \((p_{j},\bot)\) is contained in the result of the audit operation by \(p^{\prime}_{i}\) on \(R_{k}\). This implies that \(v_{k}\) is in _safe_values_ of \(p^{\prime}_{i}\).
This implies that the _safe_values_ sets of all processes are identical, and they all decide on the same value.
Therefore, the algorithm satisfies validity (Lemma 7) and agreement (Lemma 8). Furthermore, all the loops in **propose** are iterated at most \(n\) times. Since the operations invoked in **propose** are wait-free, we get that Algorithm 2 is wait-free. This implies:
**Theorem 9**.: _Algorithm 2 solves consensus for \(n\) processes._
## 5 Atomic audit implementations
We now turn to present several implementations of an atomic single-writer register with atomic audit. The results of the previous section indicate which synchronization primitives must be used in the implementations. Since two-process consensus can be solved with a single auditor and single reader, we cannot avoid synchronization primitives with consensus number at least two; we use swap and fetch&add (Sections 5.1, 5.2 and 5.3). When there are multiple auditors and multiple readers, we use a universal synchronization primitive, compare&swap (Section 5.4), in addition to fetch&add.
### Implementing single-reader atomic register with single-auditor atomic audit using swap
We implement a swsr atomic register with single-auditor atomic audit using a swap primitive. We use a shared register \(R\) that holds the last written value, if the last operation was a write, or a special value (\(\bot\)) if the last operation was a read; the audit operations do not affect the
value of \(R\). In a write(\(v\)) operation, the (single) writer applies _swap_ to \(R\), atomically writing \(v\) into \(R\) and retrieving the overwritten value to check if the reader read the previously written value. In the latter case, the swap returns a special value \(\bot\).
The pseudo-code appears in Algorithm 3. The reader keeps the following local data structures: _val_ that holds the value read from \(R\), initially \(\bot\); and _read_result_ that holds the value returned by the last read operation, initially \(\bot\). The writer (who is also the auditor) keeps the following local data structures: _curr_val_ that holds the last value written in \(R\), initially \(v_{0}\); _prev_val_ that holds the previous value written in \(R\), initially \(\bot\); and _audit_result_, a set that stores the pairs (process, value) of the detected read operations, initially \(\emptyset\).
In a read, the reader atomically reads the last value written into \(R\) and swaps it with \(\bot\) to notify the writer that it read the last value written. If the response is not \(\bot\), then this is the response of the read, which the reader stores in _read_result_ for future read operations, before returning. Otherwise, no write has occurred since its previous read, so the read returns the value in _read_result_ (without changing its value).
In a write, the (single) writer stores in _prev_val_ the value of the previous write, from _curr_val_ (Line 3), and stores in _curr_val_ the value \(v\) it is going to write (Line 3). In this way, if the next write operation detects that the reader has read the previous value written, the writer knows what this value is. Then, the writer swaps _curr_val_ into \(R\): atomically writing it into \(R\) and retrieving the overwritten value. If the writer gets \(\bot\) from the swap, then the reader has read the last value it wrote (stored in _prev_val_), and it adds the pair (reader, _prev_val_) to _audit_result_ (Line 3).
In an audit, the auditor (who is also the writer) returns all the (process,value) pairs collected during the previous write operations. By reading \(R\), the auditor checks whether the reader read the value of the last write operation, in which case \(R\) is \(\bot\). In this case, it adds the pair (reader, _curr_val_) to _audit_result_. Finally, the audit returns _audit_result_.
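The Java fragment below is a sketch of this construction (not the paper's Algorithm 3 itself), using `AtomicReference.getAndSet` as the swap primitive and `null` as the special value \(\bot\); with a single reader it records only the values read, and all names are ours.

```java
import java.util.HashSet;
import java.util.Set;
import java.util.concurrent.atomic.AtomicReference;

// Sketch of the swap-based single-reader auditable register described above.
final class SingleReaderAuditableRegister<V> {
    private final AtomicReference<V> r;   // last written value, or null once the reader has taken it

    // Reader-local state (used only by the single reader).
    private V readResult;                 // last value returned by a read

    // Writer/auditor-local state (used only by the writer, who also audits).
    private V currVal;                    // last value written
    private V prevVal;                    // previously written value
    private final Set<V> auditResult = new HashSet<>();

    SingleReaderAuditableRegister(V initialValue) {
        this.r = new AtomicReference<>(initialValue);
        this.currVal = initialValue;
    }

    // Executed by the single reader.
    V read() {
        V val = r.getAndSet(null);        // swap: take the value and leave ⊥ behind
        if (val != null) {
            readResult = val;             // a new value was written since the previous read
        }
        return readResult;
    }

    // Executed by the single writer.
    void write(V v) {
        prevVal = currVal;
        currVal = v;
        if (r.getAndSet(v) == null) {     // swap in v; ⊥ means the reader read prevVal
            auditResult.add(prevVal);
        }
    }

    // Executed by the writer acting as the auditor.
    Set<V> audit() {
        if (r.get() == null) {            // the reader already read the last written value
            auditResult.add(currVal);
        }
        return new HashSet<>(auditResult);
    }
}
```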
The algorithm uses a shared variable \(R\) that holds either a value or a special character.
#### Proof of Correctness
We assume that the values written to the register are unique.
Fix a history \(H\). It has at most two pending operations: one, either an audit or a write, by process \(p_{w}\), and another by process \(p_{r}\), which must be read. We never complete a pending audit. We complete a pending read in \(H\) if and only if some audit contains \((p_{r},v)\) in its response and no preceding read in \(H\) (which must be complete) returns \(v\). We complete a pending write in \(H\) if and only if some read (including those completed) returns the corresponding value.
A pending operation that is completed has accessed \(R\) with a swap: A read is completed if it is the only read that returns a value detected by an audit, thus, the read has executed the swap in Line 3. A write is completed if some read has read its value, namely, the write has executed the swap in Line 3.
We totally order all the completed operations by the order in which they apply their single primitive on \(R\): the swap, for reads and writes, and the read of \(R\), for audits. Call this total order \(\pi\) and note that it respects the real-time order of the high-level operations on the register, since these primitives are applied inside the operations' intervals.
**Lemma 10**.: _Every read in \(\pi\) returns the value of the most recent preceding write, if there is one, and the initial value, otherwise._
Proof.: Consider a read \(\mathit{op}_{r}\) that returns a value \(v\), and let \(\mathit{op}_{r}{}^{\prime}\) be the first read that returns this value. Since _read_result_ is updated only if the value returned by the swap in Line 3 is not \(\bot\), then the swap of \(\mathit{op}_{r}{}^{\prime}\) returns \(v\). Thus, there is a preceding swap that sets \(R\) to \(v\), and it must be by some write \(\mathit{op}_{w}\) of value \(v\). Since reads and writes are linearized by the order of their swaps, \(\mathit{op}_{w}\) precedes \(\mathit{op}_{r}{}^{\prime}\), and therefore, also \(\mathit{op}_{r}\) in \(\pi\).
We next argue that no other write is linearized between \(\mathit{op}_{w}\) and \(\mathit{op}_{r}\) in \(\pi\). Assume otherwise, and let \(\mathit{op}_{w}{}^{\prime}\) be the last write that is linearized before \(\mathit{op}_{r}\) in \(\pi\).
If the swap of \(\mathit{op}_{r}\) in Line 2 returns a value different from \(\bot\), then this value was written by \(\mathit{op}_{w}{}^{\prime}\) because this is the last preceding swap that writes a non-\(\bot\) value before the swap by \(\mathit{op}_{r}\). This contradicts the assumption that \(\mathit{op}_{r}\) returns the value written by \(\mathit{op}_{w}\).
If the swap of \(\mathit{op}_{r}\) in Line 2 returns \(\bot\), this means that an earlier read that executed Line 2 after \(\mathit{op}_{w}{}^{\prime}\) executed its swap. The first such read swaps from \(R\) the value written by \(\mathit{op}_{w}{}^{\prime}\) with \(\bot\). By Line 4, the value of _read_result_ is not \(v\) when \(\mathit{op}_{r}\) returns in Line 5, which is a contradiction.
**Lemma 11**.: _The set of pairs \(\mathcal{P}\) returned by an audit in \(\pi\) satisfies the Completeness property._
Proof.: Consider an audit \(\mathit{op}_{a}\) that returns a set \(\mathcal{P}\), and let \(\mathit{op}_{r}\) be a read returning \(v\) that precedes \(\mathit{op}_{a}\) in \(\pi\). By Lemma 10, every read returns the value of the most recent preceding write in \(\pi\). Let \(\mathit{op}_{r}{}^{\prime}\) be the first read that returns \(v\). Then \(\mathit{op}_{r}{}^{\prime}\) sets \(R\) to \(\bot\), and the value in
\(\mathit{curr\_val}\) is \(v\).
If there is no write between \(\mathit{op}_{r}\) and \(\mathit{op}_{a}\), the audit reads \(\bot\) from \(R\) (Line 13), while \(\mathit{curr\_val}\) is still \(v\) in Line 14, implying that the audit adds \((p_{r},v)\) to \(\mathcal{P}\). Otherwise, there is a write between \(\mathit{op}_{r}\) and \(\mathit{op}_{a}\). Let \(\mathit{op}_{w}\) be the first such write, and notice that \(\mathit{op}_{w}\) completes, since there is a following audit (by the same process). Moreover, since it is the first write after \(\mathit{op}_{r}\), the value of \(R\) is \(\bot\) when \(p_{w}\) executes Line 9 and \(\mathit{curr\_val}\) is \(v\) immediately before it executes Line 7. Thus, the pair \((p_{r},v)\) is added to \(\mathit{audit\_result}\).
**Lemma 12**.: _The set of pairs \(\mathcal{P}\) returned by an audit in \(\pi\) satisfies the Strong Accuracy property._
Proof.: Consider an audit \(\mathit{op}_{a}\) that returns a set \(\mathcal{P}\), and let \((p_{r},v)\) be a pair in \(\mathcal{P}\). The first operation \(\mathit{op}\) that adds \((p_{r},v)\) to \(\mathcal{P}\) is either \(\mathit{op}_{a}\) itself or a write or audit that precedes \(\mathit{op}_{a}\) in \(\pi\). This is because the variable \(\mathit{audit\_result}\) holding the set \(\mathcal{P}\) is updated immediately after the swap by \(p_{w}\) in the corresponding operation.
If \((p_{r},v)\) is added to \(\mathcal{P}\) by an audit \(\mathit{op}\), then \(\mathit{curr\_val}\) is \(v\) when this happens. Since the condition in Line 13 holds, there is a read that swaps \(\bot\) into \(R\) after \(v\) was written to \(R\). This read is between the write of \(v\) and \(\mathit{op}\), and by Lemma 10, it returns \(v\), proving the lemma.
If \((p_{r},v)\) is added to \(\mathcal{P}\) by a write \(\mathit{op}\), then by Line 7 and Line 10, \(v\) is the value written by the write that immediately precedes \(\mathit{op}\) in \(\pi\). Then \(v\) is the value returned by the read that swaps \(v\) with \(\bot\), which allows the condition in Line 9 to hold. This read precedes \(\mathit{op}\) and therefore, it also precedes \(\mathit{op}_{a}\).
Lemma 10, Lemma 11 and Lemma 12 imply:
Algorithm 3 implements a single-writer single-reader atomic register with single-auditor atomic audit.
### Implementing multi-reader atomic register with single-auditor atomic audit using swap and fetch&add
The algorithm for a _multi-reader_ atomic register with single-auditor atomic audit follows a similar idea as the algorithm for a _single_ reader in the previous section, by having each reader leave a trace of each of its reads. However, there is an additional difficulty of allowing the writer to atomically retrieve the traces of all readers when writing a new value or doing an audit.
We address this difficulty by using fetch&add, in addition to swap. A \(\mathit{fetch\&add}\) makes it possible to change the value of a shared variable \(R\) so that its binary representation captures multiple pieces of information: the high-order bits hold the value written by the writer, while the \(n\) low-order bits indicate whether the readers have read that value. Specifically, the bit in position \(i\), denoted \(\mathit{bit}_{i}\), is associated with reader \(p_{i}\), \(0\leq i<n\), and holds either \(0\) or \(1\). \(\mathit{bit}_{i}\) is set to \(1\) by \(p_{i}\) to indicate that it has read the value stored in the high-order bits of \(R\); it is \(0\), otherwise. We use two functions to extract information from \(R\). If \(R\) holds \(temp\), then \(\mathsf{GetValue}(temp)\) retrieves the value stored in the high-order bits of \(temp\) and \(\mathsf{GetBits}(temp)\) retrieves the array of \(n\) low-order bits of \(temp\).
In more detail (see Algorithm 4), when a reader \(p_{i}\) reads a value written in \(R\), it sets \(\mathit{bit}_{i}\) to \(1\) by adding \(2^{i}\) to the value stored in \(R\). Since a reader can read the same value several times, \(p_{i}\) checks that \(\mathit{bit}_{i}\) is not already set to \(1\), before adding \(2^{i}\) to \(R\) (Line 4). This ensures that \(p_{i}\) changes only its bit.
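To make the encoding concrete, the following Python sketch shows the packing used in \(R\): the register word is the written value shifted left by \(n\) positions, with one low-order bit per reader. The constant N and the helper functions are illustrative stand-ins for \(\mathsf{GetValue}\) and \(\mathsf{GetBits}\); they are not part of the pseudo-code below.

```
N = 4  # number of readers (an assumption made only for this example)

def get_value(word):
    # high-order bits: the value written by the writer
    return word >> N

def get_bits(word):
    # the n low-order bits, as a list indexed by reader id
    return [(word >> i) & 1 for i in range(N)]

def pack(value, bits=0):
    # inverse operation: build the register word from a value and the n bits
    return (value << N) | bits

# A reader p_i that sees bit i clear marks it by (atomically) adding 2**i;
# only the arithmetic is shown here, not the fetch&add primitive itself.
word = pack(42)      # the writer stored value 42, all reader bits cleared
word += 2 ** 1       # reader p_1 sets its bit
assert get_value(word) == 42 and get_bits(word) == [0, 1, 0, 0]
```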
```
Shared Variables:
    R: accessed with read, swap and fetch&add primitives, initially v_0 * 2^n

Local Variables:                      ▷ Pseudo code for reader p_i, i ∈ [0, n-1]
    val, initially ⊥                  ▷ content of the register
    read_result, initially ⊥          ▷ last value read

 1: Read()
 2:     val ← R.read()
 3:     if (GetBits(val)[i] = 0)
 4:         read_result ← GetValue(R.fetch&add(2^i))
 5:     return read_result

Local Variables:                      ▷ Pseudo code for writer and auditor p_w
    audit_result, initially ∅         ▷ set of pairs (p, v), with p the reader and v a value
    curr_val, initially v_0           ▷ last value written
    prev_val, initially ⊥             ▷ previous value written
    val, initially ⊥                  ▷ content of the register

 6: Write(v)
 7:     prev_val ← curr_val
 8:     curr_val ← v
 9:     val ← R.swap(v * 2^n)         ▷ write v in the high-order bits, clear the n low-order bits
10:     for 0 ≤ j < n
11:         if (GetBits(val)[j] = 1)  ▷ check if p_j read the previous value
12:             audit_result.add(p_j, prev_val)
13:     return

14: Audit()
15:     val ← R.read()
16:     for 0 ≤ j < n
17:         if (GetBits(val)[j] = 1)  ▷ check if p_j read the last value
18:             audit_result.add(p_j, curr_val)
19:     return audit_result
```
**Algorithm 4** Implementation of multi-reader atomic register with single-auditor atomic audit using _fetch\(\&add\)_ and _swap_, for \(n\) readers
When writing a new value \(v\), the writer swaps into \(R\) the value \(v\) with the \(n\) low-order bits reset to \(0\), and obtains the previous value of \(R\) in a local variable _val_. Then, for each reader \(p_{i}\), the writer retrieves \(bit_{i}\) from _val_ (Lines 10 and 11). If \(bit_{i}\) is equal to \(1\), the writer knows that reader \(p_{i}\) has read the previous value, and the pair \((p_{i},prev\_val)\) is added to the set to be returned by an audit, called \(audit\_result\). \(audit\_result\) is a local variable, which can be accessed both by the writer and the auditor because they are the same process.
In a similar manner, an audit operation also reads \(R\) to detect high-level Read operations that may have read the last value written but have not been detected by the last write operation. Since \(audit\_result\) is a set, a pair is not added again if it is already in the set. (An efficient implementation of a sequential set can be used for this local variable.)
Note that the algorithm uses a shared variable \(R\) that holds a value and \(n\) bits.
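The following Python sketch puts the pieces together for a single writer/auditor and \(n\) readers. A lock emulates the atomicity of the swap and fetch&add primitives, and the class and method names are introduced only for this example; the pseudo-code above remains the reference.

```
import threading

class AuditableRegisterSketch:
    """Illustrative sketch of Algorithm 4: one writer (who is also the auditor)
    and n readers. A lock emulates the atomicity of the swap and fetch&add
    primitives on the single word `word`."""

    def __init__(self, n, v0=0):
        self.n = n
        self.word = v0 << n            # value in the high-order bits, n reader bits below
        self.lock = threading.Lock()   # stands in for the atomicity of the primitives
        self.curr_val, self.prev_val = v0, None
        self.audit_result = set()      # local to the writer/auditor in the algorithm

    def _swap(self, new):              # emulated swap primitive
        with self.lock:
            old, self.word = self.word, new
            return old

    def _fetch_add(self, delta):       # emulated fetch&add primitive
        with self.lock:
            old = self.word
            self.word += delta
            return old

    def read(self, i, last=None):      # reader p_i; `last` plays the role of read_result
        if (self.word >> i) & 1 == 0:  # bit i still clear: first read of this value
            return self._fetch_add(2 ** i) >> self.n
        return last                    # value unchanged since p_i's previous read

    def write(self, v):                # writer: swap in v with all reader bits cleared
        self.prev_val, self.curr_val = self.curr_val, v
        old = self._swap(v << self.n)
        for j in range(self.n):
            if (old >> j) & 1:         # p_j had read the previous value
                self.audit_result.add((j, self.prev_val))

    def audit(self):                   # also collect readers of the current value
        old = self.word
        for j in range(self.n):
            if (old >> j) & 1:
                self.audit_result.add((j, self.curr_val))
        return set(self.audit_result)

# usage
reg = AuditableRegisterSketch(n=2, v0=0)
reg.write(7)
assert reg.read(0) == 7
reg.write(8)
assert reg.audit() == {(0, 7)}
```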
Proof of Correctness:We assume that the values written to the register are unique.
Fix a history \(H\). Note that there are at most \(n+1\) pending operations in \(H\): one (either an audit or a write) by the writer, and possibly one read operation for each reader. We never complete a pending audit. We complete a pending read invoked by process \(p_{i}\) in \(H\) if and only if some audit contains \((p_{i},v)\) in its response and no earlier read in \(H\) (which must be complete) returns \(v\) to \(p_{i}\). We complete a pending write in \(H\) if and only if some read in \(H\) returns the corresponding value. Note that if a pending operation is completed, then it applied a primitive to \(R\): a write is completed if some read has read its value, namely, the write has executed the swap in Line 9; a read is completed if it is the only read that returns a value detected by an audit, thus, the read has executed the _fetch\(\&\)add_ in Line 4.
We totally order all the completed operations according to the order they apply their last primitive (_swap_, _read_ or _fetch\(\&\)add_) to \(R\). A write or an audit applies only one primitive. For a read, the last primitive is the _fetch\(\&\)add_, if this is the first time that the process reads a given value, and otherwise, it is the read. Let \(\pi\) denote this total order, and note that it respects the real-time order of the high-level operations on the auditable register because each such step is in the execution interval of the corresponding operation.
For each \(i\in\{0,\ldots n-1\}\), \(\mathit{bit}_{i}\) is set to \(0\) each time a write operation executes Line 9 and it is set to \(1\) if and only if a read operation by \(p_{i}\) executes Line 4.
Proof.: When a write operation executes Line 9, it writes a value \(v*2^{n}\) into \(R\) for some integer \(v\). Thus, the \(n\) least significant bits of the binary representation of the value stored in \(R\) are set to \(0\). For each \(i\in\{0,\ldots n-1\}\), process \(p_{i}\) adds the amount \(2^{i}\) to the value stored in \(R\) if and only if the \(i\)-th bit of its binary representation is equal to \(0\). Thus, \(p_{i}\) sets the \(i\)-th bit to \(1\) and does not change any other bit of \(R\).
Every read in \(\pi\) returns the value of the most recent preceding write in \(\pi\), if there is one, and the initial value, otherwise.
Proof.: Consider a read \(\mathit{op}_{r}\) by a process \(p\) that returns a value \(v\).
If the condition in Line 3 holds, then the result of \(\mathit{op}_{r}\) is the value extracted from the value read by applying the fetch\(\&\)add primitive on \(R\) (the whole content of \(R\) without the bits dedicated for auditing) in Line 4. Thus, there is a preceding swap primitive that set the high-order bits of \(R\) to \(v\), and it must be by some write operation \(\mathit{op}_{w}\). Since reads and writes are linearized according to the order they apply their last primitive to \(R\), \(\mathit{op}_{w}\) precedes \(\mathit{op}_{r}\) in \(\pi\). For the same reason, \(\mathit{op}_{w}\) should be the most recent preceding write in \(\pi\). Otherwise, \(\mathit{op}_{r}\) would have returned a different value.
If the condition at Line 3 does not hold, then the value returned by \(\mathit{op}_{r}\) was computed by the most recent read operation that precedes \(\mathit{op}_{r}\) in the program order of \(p\) for which the condition evaluated to \(true\), denoted \(\mathit{op}_{r^{\prime}}\). Also, by Lemma 3 and since \(\mathit{bit}_{i}\neq 0\), no write changed the value of \(R\) after \(\mathit{op}_{r^{\prime}}\) and before \(\mathit{op}_{r}\). Thus, the claim follows from the previous case.
The set of pairs \(\mathcal{P}\) returned by an audit in \(\pi\) satisfies the Completeness property.
Proof.: Consider an audit \(\mathit{op}_{a}\) that returns a set \(\mathcal{P}\), and let \(\mathit{op}_{r}\) be a read operation by process \(p_{i}\) that returns a value \(v\) and that precedes \(\mathit{op}_{a}\) in \(\pi\). In the following we prove that \((p_{i},v)\in\mathcal{P}\). By Lemma 3, every read returns the value of the most recent preceding write
in \(\pi\), denoted \(op_{w}\). Let \({\it op_{r}}^{\prime}\) be the first read that returns \(v\) to \(p_{i}\). \({\it op_{r}}^{\prime}\) is either \({\it op_{r}}\) or it is between \({\it op_{w}}\) and \({\it op_{r}}\) in \(\pi\). According to the way we linearize the operations, the swap primitive applied during \({\it op_{w}}\) precedes the read primitive applied by \(p_{i}\) in the execution of \({\it op_{r}}^{\prime}\). In particular, \({\it op_{w}}\) sets \({\it bit_{i}}\) to \(0\) (Line 9), and \({\it op_{r}}^{\prime}\) read this bit equal to \(0\) and set its value to \(1\) (Line 4). This value does not change until the next write operation (if any) applies the swap to write a new value. Moreover, the value in \({\it curr\_val}\) is \(v\) (Line 8).
If there is no write operation between \({\it op_{r}}\) and \({\it op_{a}}\), then the value read from \(R\) when executing the audit operation has \({\it bit_{i}}\) equals to \(1\) (Line 17). Also \({\it curr\_val}\) is still \(v\) when the writer executes Line 18. Thus, \((p_{i},v)\) is added to \(\mathcal{P}\). Otherwise, there is a write between \({\it op_{r}}\) and \({\it op_{a}}\) in \(\pi\). Let \({\it op_{w}}^{\prime}\) be the first such write, and notice that \({\it op_{w}}^{\prime}\) completes, since there is a following audit (by the same process). Moreover, since it is the first write after \({\it op_{r}}\), \({\it bit_{i}}\) is equal to \(1\) when the writer executes Line 9 and \({\it curr\_val}\) is equal to \(v\) immediately before it executes Line 7. Thus, the pair \((p_{i},v)\) is added to \({\it audit\_result}\) at Line 18.
The set of pairs \(\mathcal{P}\) returned by an audit in \(\pi\) satisfies the Strong Accuracy property.
Proof.: Consider an audit operation \({\it op_{a}}\) that returns a set \(\mathcal{P}\), and let \((p_{i},v)\) be some pair in \(\mathcal{P}\). The first operation \({\it op}\) that adds \((p_{i},v)\) to \(\mathcal{P}\) is either \({\it op_{a}}\) itself or a write/audit that precedes \({\it op_{a}}\) in \(\pi\). This is because the variable \({\it audit\_result}\) holding the set \(\mathcal{P}\) is updated after the swap (or read) primitive applied by the writer in the execution of a write (or audit) operation.
If \((p_{i},v)\) is added to \(\mathcal{P}\) by an audit \({\it op_{a}}^{\prime}\), then \({\it curr\_val}\) is \(v\) when this pair is added. Since the condition in Line 17 holds, \({\it bit_{i}}=1\) when the writer executes Line 18 (since the writer is also the auditor, no write operation can be concurrent with the audit). By Lemma 14, only \(p_{i}\) can set \({\it bit_{i}}\) to \(1\); hence \(p_{i}\) read \(v\) and set its bit to \(1\) (at Line 4) before this step of the audit, which proves the lemma.
If \((p_{i},v)\) is added to \(\mathcal{P}\) in the execution of a write \({\it op_{w}}\), then by Lines 7 and 12, \(v\) is the value written by the write operation that immediately precedes \({\it op_{w}}\) in \(\pi\). Then \(v\) is the value returned by the read that set \({\it bit_{i}}=1\), which allows the condition in Line 11 to hold. This read precedes \({\it op_{w}}\) and therefore, it also precedes \({\it op_{a}}\).
Algorithm 4 implements a single-writer multi-reader atomic register with single-auditor atomic audit.
### Implementing single-reader atomic register with multi-auditor atomic audit using swap and fetch&add
The algorithm for a single-reader atomic register with _multi-auditor_ atomic audit follows a similar idea as the algorithm for a _multi_ reader in the previous section, using a shared register accessed with the \(read\), \(swap\) and \(fetch\&add\) primitives to support the detection of read operations by the writer and the auditors.
Since an audit operation can overlap read, write and other audit operations, we need an additional mechanism to ensure that the return value of the audit is linearizable. The reader and the auditors share information in an unbounded array of read/write registers called \(pairs\), where \(pairs[k]\) indicates whether the reader read the \(k\)-th value written by the writer (if there was such a write). If \(pairs[k]\) contains the initial value \(\bot\) then the reader has not read the \(k\)-th value written; otherwise \(pairs[k]\) contains that value. Each value written has a unique sequence number that is incremented when the writer performs a new write. When performing a write of a value \(v\), the writer applies a swap to \(R\) to atomically write \(v\) together with its sequence number and set the lowest-order bit of the register to \(0\), indicating a new write (not yet read).
Algorithm 5 presents the pseudo-code. As in Algorithm 4, in a read operation, the reader reads the value of \(R\) and sets the low-order bit to 1 if it was equal to 0 (indicating that this is the first time \(p_{r}\) read this value). Additionally, it writes the value read in the corresponding entry of \(pairs\). When a process performs an audit operation, it retrieves from \(R\) the sequence number \(sn\) of the last write operation, and also checks whether the reader has read the last value written \(v\). In the latter case, it writes \(v\) into \(pairs[sn]\). Then, it reads all the entries of \(pairs\), from index \(sn\) down to the first, to obtain its return set.
Because the value \(v\) and the sequence number are unbounded, we interleave them bit by bit in \(R\), as done in [18]. Three functions are used to extract information from \(R\). If \(R\) holds \(temp=(v,sn,bit)\), then \(\mathsf{GetBit}(temp)\) retrieves its lowest-order bit, \(\mathsf{GetValue}(temp)\) retrieves the value \(v\), and \(\mathsf{GetSn}(temp)\) retrieves \(sn\).
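As an illustration, here is one possible bit-interleaving in Python, with the lowest-order bit reserved for the read flag. The encode function and its exact layout are assumptions made for this example (the encoding of [18] may differ); the decoding functions only mirror the roles of \(\mathsf{GetBit}\), \(\mathsf{GetValue}\) and \(\mathsf{GetSn}\).

```
# Interleave the bits of the (unbounded) value v and sequence number sn;
# bit 0 of the resulting word is the "already read" flag.
def encode(v, sn, bit):
    word, pos, i = bit, 1, 0
    while v >> i or sn >> i:           # interleave bit i of v and of sn
        word |= ((v >> i) & 1) << pos
        word |= ((sn >> i) & 1) << (pos + 1)
        pos += 2
        i += 1
    return word

def get_bit(word):                     # role of GetBit
    return word & 1

def get_value(word):                   # role of GetValue: bits at even interleaved positions
    v, i = 0, 0
    word >>= 1
    while word:
        v |= (word & 1) << i
        word >>= 2
        i += 1
    return v

def get_sn(word):                      # role of GetSn: bits at odd interleaved positions
    sn, i = 0, 0
    word >>= 2
    while word:
        sn |= (word & 1) << i
        word >>= 2
        i += 1
    return sn

w = encode(v=13, sn=5, bit=0)
assert (get_value(w), get_sn(w), get_bit(w)) == (13, 5, 0)
```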
```
Shared Variables:
    R: accessed with read, fetch&add and swap primitives, initially (v_0, 0, 0)
    pairs: an unbounded array of read/write registers, shared by all processes;
           initially every entry contains the special value ⊥

Local Variables:                      ▷ Pseudo code for reader p_r
    sn, initially 0                   ▷ the sequence number of the value stored in the register
    read_result, initially ⊥          ▷ value read from the register

 1: Read()
 2:     temp ← R.read()
 3:     if (GetBit(temp) = 0)
 4:         temp ← R.fetch&add(1)
 5:         read_result ← GetValue(temp)
 6:         pairs[GetSn(temp)].write(read_result)
 7:     return read_result

Local Variables:                      ▷ Pseudo code for writer p_w
    prev_val, initially ⊥             ▷ previous value written
    sn, initially 0                   ▷ the sequence number of the write

 8: Write(v)
 9:     sn ← sn + 1
10:     temp ← R.swap((v, sn, 0))
11:     if (GetBit(temp) = 1)
12:         pairs[GetSn(temp)].write(GetValue(temp))   ▷ detect the read of the previous write
13:     return

Local Variables:                      ▷ Pseudo code for auditor p_i
    audit_result, initially ∅         ▷ set of pairs (process, value)
    audit_index, initially 0          ▷ index of the last updated value in pairs[]
    val, initially ⊥                  ▷ the value stored in the register

14: Audit()
15:     val ← R.read()
16:     audit_index ← GetSn(val)
17:     if (GetBit(val) = 1)
18:         pairs[audit_index].write(GetValue(val))
19:     for j from audit_index down to 0
20:         if (pairs[j].read() ≠ ⊥)
21:             audit_result.add(p_r, pairs[j].read())
22:     return audit_result
```
**Algorithm 5** Implementation of a single-reader atomic register with multi-auditor atomic audit using _swap_ and _fetch&add_
Proof of Correctness:We assume that the values written to the register are unique.
Fix a history \(H\). Note that there are at most \(n\) pending operations in \(H\): one (either an audit or a write) by the writer, one (either an audit or a read) by the reader, and possibly one (an audit) for each of the other processes. We construct a history \(H^{\prime}\) by completing some operations in \(H\). We never complete a pending audit. We complete a pending read in \(H\) if and only if some audit contains \((p_{r},v)\) in its response and no preceding read in \(H\) (which must be complete) returns \(v\). After completing the reads, we complete a pending write if and only if some (completed) read returns the corresponding value. We remove from \(H^{\prime}\) all other pending operations in \(H\). Note that if a pending operation is completed, then it applied a primitive to \(R\): a write is completed if some read has read its value, namely, the write has executed the swap in Line 10; a read is completed if it is the only read that returns a value detected by an audit, thus, the read has executed the _fetch\(\&\)add_ in Line 4. Thus, all operations in \(H^{\prime}\) applied a primitive to \(R\), and we can associate a sequence number \(\mathit{sn}\) to each operation, which corresponds to the sequence number they read (for a read or audit operation) or write (for a write operation) from the shared register \(R\) during this primitive.
We construct a total order \(\pi\) of the operations in \(H^{\prime}\). First, we put in \(\pi\) all the write operations according to the order they occur in \(H^{\prime}\); because write operations are executed sequentially by the unique writer, this sequence is well-defined and the order is consistent with the sequence number associated with the values written.
Next, we add the read operations in \(\pi\). Since there is a unique reader the read operations are executed sequentially. The sub-sequence of read operations that returns a value with sequence number \(\mathit{sn}\) is placed immediately after the write operation that generates the sequence number \(\mathit{sn}\), while preserving their order in \(H^{\prime}\).
The construction of \(\pi\) immediately implies that a read operation returns the value written by the write preceding it in \(\pi\).
Every read operation in \(\pi\) returns the value of the most recent preceding write in \(\pi\), if there is one, and the initial value otherwise.
Finally, we consider the audit operations one by one, in reverse order of their response in \(H\). Consider an audit operation \(\mathit{op}_{a}\) and let \(\mathit{sn}\) be the sequence number it read at Line 15. There are three cases.
* Case 1: If \(\mathit{op}_{a}\) reads a value \(v\) in \(pairs[\mathit{sn}]\), we place \(\mathit{op}_{a}\) in \(\pi\) immediately after the last read with sequence number \(\mathit{sn}\) that starts before \(\mathit{op}_{a}\) terminates.
* Case 2: If \(\mathit{op}_{a}\) read a value \(v\) in \(pairs[\mathit{sn}-1]\) and the initial value \(\bot\) in \(pairs[\mathit{sn}]\), we place \(\mathit{op}_{a}\) in \(\pi\) immediately after the write operation with sequence number \(\mathit{sn}\) (at the start if no such write exists).
* Case 3: If \(\mathit{op}_{a}\) read the initial value \(\bot\) both in \(pairs[\mathit{sn}]\) and in \(pairs[\mathit{sn}-1]\), then we place \(\mathit{op}_{a}\) in \(\pi\) immediately after the write operation with sequence number \(\mathit{sn}\) if it terminates before \(\mathit{op}_{a}\) is invoked in \(H^{\prime}\). Otherwise, \(\mathit{op}_{a}\) is placed immediately after the write operation with sequence number \(\mathit{sn}-1\) (at the start if there is no such operation).
Case 3 handles the situation where an audit operation \(\mathit{op}_{a}\) reads a sequence number \(\mathit{sn}\) but misses a read operation \(\mathit{op}_{r}\) that returns the value with sequence number \(\mathit{sn}-1\). This happens only if \(\mathit{op}_{a}\) is concurrent with \(\mathit{op}_{r}\) and with the write \(\mathit{op}_{w}\) that generates the sequence number \(\mathit{sn}\); in particular, \(\mathit{op}_{r}\) and \(\mathit{op}_{w}\) write into \(pairs[\mathit{sn}-1]\) after \(\mathit{op}_{a}\) read it.
We next prove that \(\pi\) preserves the real-time order of \(H\) and that the results of the audit operations guarantee the completeness and strong accuracy properties.
Let \(\mathit{op}_{1}\) and \(\mathit{op}_{2}\) be two high-level operations (read, write, and audit) in \(H\) such that \(\mathit{op}_{1}\) completes before \(\mathit{op}_{2}\) begins. Then, \(\mathit{op}_{1}\) precedes \(\mathit{op}_{2}\) in \(\pi\).
Proof.: The construction preserves the order of write-write and read-read operations. Consider a write operation \(op_{w}\) and a read operation \(op_{r}\); let \(sn\) be the sequence number generated by \(op_{w}\) and \(sn^{\prime}\) be the sequence number returned by \(op_{r}\). If \(op_{r}\) precedes \(op_{w}\) in \(H^{\prime}\) then \(sn^{\prime}<sn\), and by construction \(op_{r}\) is placed after the write that generates \(sn^{\prime}\) and before the next write (which is possibly \(op_{w}\)). If \(op_{r}\) follows \(op_{w}\) in \(H\), then \(sn^{\prime}\geq sn\), and by construction, \(op_{r}\) is placed after \(op_{w}\) in \(\pi\).
Consider now an audit operation \(op_{a}\), and assume the sequence number read by \(op_{a}\) (Line 15) is \(sn\).
Consider a write operation \(op_{w}\) with sequence number \(sn^{\prime}\). _(A)_ If \(op_{w}\) follows \(op_{a}\) in \(H^{\prime}\), then \(sn^{\prime}\geq sn+1\). Since the latest position at which \(op_{a}\) is placed in \(\pi\) is after the write with sequence number \(sn\), it follows that \(op_{a}\) is ordered before \(op_{w}\) in \(\pi\). _(B)_ If \(op_{w}\) precedes \(op_{a}\) in \(H^{\prime}\), then \(sn^{\prime}\leq sn\). Otherwise, \(op_{a}\) would read a value greater than \(sn\), because the sequence numbers are monotonically increasing. If \(op_{a}\) is ordered after the write operation with sequence number \(sn\), then it is clearly after \(op_{w}\) in \(\pi\). The only case in which \(op_{a}\) is ordered before the write operation \(op_{w}^{\prime}\) that generates \(sn\) is when \(op_{w}^{\prime}\) does not precede \(op_{a}\) in \(H^{\prime}\), so it cannot be \(op_{w}\). Thus, \(op_{a}\) is placed after \(op_{w}\) in \(\pi\).
Consider a read operation \(op_{r}\) that returns \(v\), and let \(sn^{\prime}\) be its sequence number. _(A)_ If \(op_{r}\) follows \(op_{a}\) in \(H^{\prime}\), then \(sn^{\prime}\geq sn\). If \(sn^{\prime}>sn\), then \(op_{a}\) is placed before \(op_{r}\) in \(\pi\), since \(op_{a}\) is placed in \(\pi\) no later than immediately after the write with sequence number \(sn\), and \(op_{r}\) is placed in \(\pi\) after the write with sequence number \(sn^{\prime}\). Otherwise, \(sn^{\prime}=sn\). If \(op_{a}\) reads \(v\) from \(pairs[sn]\), then Case 1 applies, and \(op_{a}\) precedes \(op_{r}\) in \(\pi\). If \(op_{a}\) reads \(\bot\) from \(pairs[sn]\), then it is placed immediately after the write that generates \(sn\) (at the latest) and thus it precedes \(op_{r}\) in \(\pi\). _(B)_ If \(op_{r}\) precedes \(op_{a}\) in \(H\), then \(sn^{\prime}\leq sn\). If \(sn^{\prime}=sn\), then \(op_{r}\) writes \(v\) in \(pairs[sn^{\prime}]\) before \(op_{a}\), and by Case 1, \(op_{a}\) is ordered in \(\pi\) immediately after the last read with sequence number \(sn^{\prime}\) that starts before \(op_{a}\) terminates. Then \(op_{a}\) is ordered after \(op_{r}\) in \(\pi\). If \(sn^{\prime}=sn-1\), then since \(pairs[sn^{\prime}]\neq\bot\) when \(op_{a}\) reads it, \(op_{a}\) is placed in \(\pi\) after the write with sequence number \(sn\), and hence after \(op_{r}\). Finally, if \(sn^{\prime}<sn-1\), then \(op_{r}\) is placed before the write with sequence number \(sn^{\prime}+1\), while \(op_{a}\) is placed after this write.
Finally, consider another audit operation \(op_{a^{\prime}}\), with sequence number \(sn^{\prime}\). Without loss of generality, assume \(op_{a}\) precedes \(op_{a^{\prime}}\) in \(H\); therefore, \(sn\leq sn^{\prime}\). Furthermore, if \(op_{a^{\prime}}\) read \(\bot\) from an entry of \(pairs\), then \(op_{a}\) also read \(\bot\) in the same entry. Thus, \(op_{a^{\prime}}\) is placed after a write with a sequence number greater than or equal to that of the last write preceding \(op_{a}\) in \(\pi\). Also, if a read operation starts before \(op_{a}\) terminates, then it starts before \(op_{a^{\prime}}\) terminates, implying that \(op_{a^{\prime}}\) cannot be linearized before any read operation that precedes \(op_{a}\) in \(\pi\). The claim follows because we add audit operations to \(\pi\) in reverse order of their responses in \(H^{\prime}\). Thus, if they are placed immediately after the same operation (either a read or a write), then \(op_{a}\) is placed before \(op_{a^{\prime}}\) in \(\pi\).
The set of pairs \(\mathcal{P}\) returned by an audit in \(\pi\) satisfies the Completeness property.
Proof.: Consider an audit operation \(op_{a}\) that returns a set \(\mathcal{P}\), and let \(op_{r}\) be a read operation by process \(p_{r}\) that returns a value \(v\) and precedes \(op_{a}\) in \(\pi\). We prove that \((p_{r},v)\in\mathcal{P}\).
Let \(sn\) be the sequence number read by \(\mathit{op}_{a}\), and let \(sn^{\prime}\) be the sequence number of the value read by \(\mathit{op}_{r}\). Since \(\mathit{op}_{a}\) follows \(\mathit{op}_{r}\) in \(\pi\), according to our linearization rules, \(sn\geq sn^{\prime}\). Thus, \(audit\_index\geq sn^{\prime}\) and \(\mathit{op}_{a}\) reads \(pairs[sn^{\prime}]\) (Line 19). If the read returns \(v\), then \((p_{r},v)\) is added to \(\mathcal{P}\) (Line 21), and the lemma follows. It remains to prove, by way of contradiction, that \(\mathit{op}_{a}\) does not read \(\bot\) from \(pairs[sn^{\prime}]\). We consider the possible cases:
1. \(sn=sn^{\prime}\): since \(\mathit{op}_{a}\) read \(\bot\) from \(pairs[sn^{\prime}]\), Case 2 or Case 3 applies. Thus, \(\mathit{op}_{a}\) is placed in \(\pi\) before the write that generates sequence number \(sn^{\prime}\) (at the latest). Since \(\mathit{op}_{r}\) follows this write, \(\mathit{op}_{a}\) is placed before \(\mathit{op}_{r}\) in \(\pi\), a contradiction.
2. \(sn=sn^{\prime}+1\): Case 2 does not hold because \(\mathit{op}_{a}\) read \(\bot\) from \(pairs[sn^{\prime}]\) with \(sn^{\prime}=sn-1\). Since \(\mathit{op}_{r}\) precedes \(\mathit{op}_{a}\) in \(\pi\), by Case 3, \(\mathit{op}_{a}\) follows the write with sequence number \(sn\) in \(\pi\). There are two sub-cases:
   a. Case 1 applies, i.e., \(\mathit{op}_{a}\) read a value \(\neq\bot\) from \(pairs[sn]\). We reach a contradiction by showing that \(\mathit{op}_{a}\) cannot read \(\bot\) from \(pairs[sn-1]\). Since \(pairs[sn]=v^{\prime}\), there is a read operation \(\mathit{op}_{r^{\prime}}\) that reads \(v^{\prime}\), and since there is a single reader, \(\mathit{op}_{r^{\prime}}\) follows \(\mathit{op}_{r}\) in \(H\). Thus, the value of \(pairs[sn-1]\) is set to \(v\) before \(pairs[sn]\) is set to \(v^{\prime}\). By Line 19, \(\mathit{op}_{a}\) first reads \(pairs[sn]\) and then reads \(pairs[sn-1]\), and therefore, it does not read \(\bot\) from \(pairs[sn-1]\).
   b. Case 3 applies, i.e., \(\mathit{op}_{a}\) read \(\bot\) from \(pairs[sn]\), but the write \(\mathit{op}_{w}\) that generates \(sn\) precedes \(\mathit{op}_{a}\). Since \(\mathit{op}_{r}\) returns \(v\), there is a read operation (possibly \(\mathit{op}_{r}\)) that set the corresponding low-order bit to \(1\) (Line 4). Then, when \(\mathit{op}_{w}\) writes the new value with sequence number \(sn\), it reads \(1\) from this bit and sets \(pairs[sn-1]=v\), so \(\mathit{op}_{a}\) does not read \(\bot\) from \(pairs[sn-1]\).
3. \(sn\geq sn^{\prime}+2\): then the write operation with sequence number \(sn^{\prime}+1\) completes before \(\mathit{op}_{a}\) reads \(R\), and we can apply the same reasoning as in sub-case 2.b.
The set of pairs \(\mathcal{P}\) returned by an audit in \(\pi\) satisfies the Strong Accuracy property.
Proof.: Consider an audit operation \(\mathit{op}_{a}\) that returns a set \(\mathcal{P}\), and let \((p_{r},v)\) be a pair in \(\mathcal{P}\). We prove that a read operation \(\mathit{op}_{r}\) by process \(p_{r}\) that returns \(v\) is placed before \(\mathit{op}_{a}\) in \(\pi\).
Since \((p_{r},v)\) is in \(\mathcal{P}\), \(\mathit{op}_{a}\) adds \((p_{r},v)\) to \(audit\_result\) because it read \(pairs[sn]=v\) for some \(sn\). If \(v\) was written by the reader (Line 6), then the reader returned the value \(v\) associated with \(sn\), in Line 4. Otherwise, \(v\) was written to \(pairs[sn]\) either by the writer (Line 12) or by the auditor (Line 18). This means that the writer or the auditor read the bit set to \(1\) when, respectively, checking the condition in Line 11 or in Line 17. This bit is set to \(1\) only by the reader when reading the corresponding value in Line 4.
Thus, there is a read operation \(\mathit{op}_{r}\) that read the value \(v\) with sequence number \(sn\) that does not follow \(\mathit{op}_{a}\) in \(H\). We now show that it precedes \(\mathit{op}_{a}\) also in \(\pi\).
Let \(sn^{\prime}\) be the sequence number of \(\mathit{op}_{a}\). Since \(\mathit{op}_{a}\) reads \(pairs[sn]\), \(sn^{\prime}\geq sn\). If \(sn^{\prime}=sn\), then since \(\mathit{op}_{a}\) read \(v\) in \(pairs[sn]\), \(\mathit{op}_{a}\) is placed after \(\mathit{op}_{r}\) in \(\pi\) and the claim holds. If \(sn^{\prime}=sn+1\), then since \(\mathit{op}_{a}\) reads \(v\) from \(pairs[sn]\), by Cases 1 and 2, it is placed after the write that generates sequence number \(sn+1\), and therefore, after \(\mathit{op}_{r}\). Finally, if \(sn^{\prime}>sn+1\), then \(\mathit{op}_{a}\) is placed in \(\pi\) after a write with a sequence number greater than \(sn\), and hence, after \(\mathit{op}_{r}\).
Algorithm 5 implements a single-writer single-reader atomic register with multi-auditor atomic audit.
### Implementing multi-reader atomic register with multi-auditor atomic audit using compare&swap and fetch&add
To deal with multiple readers, as in Algorithm 4, each reader sets a dedicated bit in the \(n\) lower-order bits of a shared register \(R\) and the writer writes the value together with a
sequence number in the higher-order bits of \(R\). To deal with multiple auditors, we use an array _pairs_, as in Algorithm 5. To accommodate multiple readers, the array is bi-dimensional, with an unbounded number of columns (corresponding to each written value) and \(n\) rows, one for each reader. Specifically, \(\mathit{pairs}[i][sn]=\perp\) indicates that process \(p_{i}\) has not read the value \(v\) written by the \(sn\)-th write operation (if any); otherwise, \(\mathit{pairs}[i][sn]=v\).
We do not need readers to write into \(pairs\) because the writer applies a _compare\(\&\)swap_ to write the new value in \(R\), using the previously-read state of \(R\). So, if, in the meantime, some reader read the current value stored in \(R\) and set its bit to \(1\), the _compare\(\&\)swap_ fails. Thus, the writer detects, and writes into \(pairs\), all the read operations of the last value written before its next write succeeds. Hence, either the auditor can detect the read operations by reading \(R\), because the bits were not yet reset by the new write, or this information is already in \(pairs\).
Because the sequence numbers and the corresponding values are unbounded, we cannot a priori divide the high-order bits between them. Instead, we interleave them bit-by-bit, as done in Algorithm 5 (following [18]).
Each process has a local variable \(\mathit{temp}\) to store the value read from \(R\), from which it selects the information it needs. For example, a read operation by a process \(p_{i}\) has to retrieve the value of the low-order bit associated with \(p_{i}\) to check whether it is equal to \(1\) or \(0\), meaning that it has already read the current value or not, respectively. Given a value \(\mathit{temp}\) read from \(R\), we use the following functions: \(\mathsf{GetValue}(\mathit{temp})\) retrieves the value stored in the high-order bits, \(\mathsf{GetSn}(\mathit{temp})\) retrieves the sequence number stored in the high-order bits, and \(\mathsf{GetBits}(\mathit{temp})\) retrieves the \(n\) low-order bits.
A reader has a local variable called \(\mathit{read\_result}\) to store the value that has to be returned, initially \(\perp\). This is used to ensure that if a new value was not written, consecutive read operations by the same process can return the correct value without setting the corresponding bit to \(1\) more than once.
An auditor has a local variable called \(\mathit{audit\_result}\) that holds a set of pairs (\(process,value\)), one for each detected read operation; it is initially \(\emptyset\). The local variable \(\mathit{audit\_index}\) holds the sequence number read from \(R\), indicating the last column of the matrix _pairs_ written by the writer that the auditor has to read; it is initially \(0\).
The writer holds the last value written in a local variable \(\mathit{val}\).
The pseudocode appears in Algorithm 6.
In a read, the reader first reads \(R\) to check whether it has already read its last value. If this is the case, it simply returns the value. Otherwise, it applies a \(\mathit{fetch}\&\mathit{add}\) to set its bit to \(1\) (indicating that it read the value) and returns the value represented by the high-order bits of the value returned by the \(\mathit{fetch}\&\mathit{add}\).
In an audit, the auditor first reads \(R\) to get the sequence of bits indicating the read operations performed since the last write operation and, atomically, the sequence number of the last write operation. It then reads all the pairs stored in the entries of _pairs_ up to this index, and adds them to the set that the audit will return; this set is persistent. Finally, it adds to the set additional pairs corresponding to the low-order bits of \(R\) that are set. For simplicity, the audit reads all entries of \(pairs\) starting from \(0\) up to the sequence number it read. It is simple to ensure that the same auditor reads each column of \(pairs\) at most once, by using a persistent local variable to store the last column read.
To write a new value \(v\), the writer increments the sequence number \(sn\), and applies _compare\(\&\)swap_ to \(R\) to store \(sn\) together with \(v\) and reset the \(n\) low-order bits to \(0\). If this is successful, the operation completes; otherwise, the writer reads \(R\), and for each of the \(n\) low-order bits that is set to \(1\), it writes the previously written value into the corresponding entry of \(\mathit{pairs}[\,][sn-1]\) to announce the read operation that set the bit. The writer then retries the _compare\(\&\)swap_.
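The writer's retry logic can be sketched in Python as follows. The CASRegister class and the tuple encoding (sn, value, bits) are simplifications introduced for the example (the algorithm packs these fields into a single word); the names pairs, val, bits and sn mirror the pseudo-code below.

```
import threading

class CASRegister:
    """Toy register supporting read and compare&swap (lock-based emulation)."""
    def __init__(self, init):
        self.val, self._lock = init, threading.Lock()
    def read(self):
        return self.val
    def cas(self, expected, new):
        with self._lock:
            if self.val == expected:
                self.val = new
                return True
            return False

def write(R, pairs, state, v, n):
    """Sketch of the Write operation: retry the compare&swap, announcing in
    `pairs` every reader bit observed on a failure."""
    state["sn"] += 1
    sn = state["sn"]
    bits = state["bits"]
    while not R.cas((sn - 1, state["val"], bits), (sn, v, 0)):
        _, _, bits = R.read()              # some reader set its bit: re-read R
        for j in range(n):
            if (bits >> j) & 1:            # p_j read the previous value
                pairs[j][sn - 1] = state["val"]
    state["bits"] = 0                      # the successful CAS cleared the reader bits
    state["val"] = v

# usage: one writer, two readers, no concurrent reads in this toy run
R = CASRegister((0, "v0", 0))
pairs = [dict(), dict()]
state = {"sn": 0, "val": "v0", "bits": 0}
write(R, pairs, state, "v1", n=2)
assert R.read() == (1, "v1", 0)
```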
```
Shared Variables:
    R: a register shared by all processes, accessed with read, compare&swap and fetch&add
       primitives; it contains a sequence number, the corresponding value, and n bits;
       initially (0, v_0, 0^n)
    pairs[n, ...]: a matrix of read/write registers, where pairs[j][k] indicates whether
       process p_j has read the k-th written value; initially ⊥

Local Variables:                      ▷ Pseudo code for readers and auditors p_i (i ∈ [0, n-1])
    temp, initially ⊥                 ▷ the content of the register R
    read_result, initially ⊥          ▷ the last value read
    audit_result, initially ∅         ▷ set of (process, value) pairs
    audit_index, initially 0          ▷ index of the last updated value in pairs[]

 1: Read()
 2:     temp ← R.read()
 3:     if (GetBits(temp)[i] = 0)
 4:         read_result ← GetValue(R.fetch&add(2^i))
 5:     return read_result

 6: Audit()
 7:     temp ← R.read()
 8:     audit_index ← GetSn(temp)
 9:     for 0 ≤ j < n
10:         for 0 ≤ k < audit_index
11:             if (pairs[j][k].read() ≠ ⊥)
12:                 audit_result.add(p_j, pairs[j][k].read())
13:     for 0 ≤ j < n
14:         if (GetBits(temp)[j] = 1)     ▷ check if p_j read the last value written in R
15:             audit_result.add(p_j, GetValue(temp))
16:     return audit_result

Local Variables:                      ▷ Pseudo code for writer p_0
    temp, initially ⊥                 ▷ the value read from R
    sn, initially 0                   ▷ sequence number of the high-level writes
    val, initially v_0                ▷ input value of the last high-level write
    bits[], initially 0^n             ▷ n lowest-order bits of R, used to detect high-level reads

17: Write(v)
18:     sn ← sn + 1
19:     while (R.compare&swap((sn-1, val, bits), (sn, v, 0^n)) ≠ True)
20:         temp ← R.read()
21:         bits ← GetBits(temp)
22:         for 0 ≤ j < n
23:             if (bits[j] = 1)          ▷ check if p_j read the last value
24:                 pairs[j][sn-1].write(val)
25:     bits ← 0^n
26:     val ← v
27:     return
```
**Algorithm 6** Implementation of a multi-reader atomic register with multi-auditor atomic audit using \(compare\&swap\) and \(fetch\&add\) for \(n\) processes.
Proof of Correctness:We assume that the values written to the register are unique. We first show that all operations (by a correct process) complete within a finite number of steps. This is immediate for read operations. For an audit operation, the number of iterations of the first _for_ loop is bounded by the value \(audit\_index\) read from \(R\), while the second _for_ loop has \(n\) iterations. We next bound the number of iterations in a write operation.
For every \(i\in\{0,\ldots,n-1\}\), \(\mathit{bit}_{i}\) is set to \(0\) each time a write operation successfully executes the \(\mathit{compare}\&\mathit{swap}\) at Line 19, and it is set to \(1\) if and only if a read operation by \(p_{i}\) executes Line 4.
Proof.: When a write operation executes Line 19, it resets the \(n\) least significant bits of \(R\) to \(0\), together with writing the new value. For each \(i\in\{0,\ldots n-1\}\), process \(p_{i}\) adds \(2^{i}\) to the value stored in \(R\) if and only if the \(i\)-th bit of its binary representation is \(0\) (Line 3). Thus, \(p_{i}\) sets the \(i\)-th bit to \(1\) and does not change any other bit of \(R\).
The \(\mathit{compare}\&\mathit{swap}\) (Line 19) of a write operation fails at most \(n\) times.
Proof.: The \(\mathit{compare}\&\mathit{swap}\) fails if the state of \(R\) changes between the \(read\) (Line 20) and the following \(\mathit{compare}\&\mathit{swap}\) (Line 19). Only a reader can change the state of \(R\) between the application of these two primitives by the writer. By Lemma 2, \(\mathit{bit}_{i}\) can be set to \(1\) only by process \(p_{i}\), in Line 4, and it is not reset to \(0\) unless a new high-level write is performed. By Line 3, each reader sets its bit at most once for each given value, which completes the proof.
To prove the linearizability of the algorithm, fix a history \(H\). Note that there are at most \(n\) pending operations in \(H\), one for each process. We construct a history \(H^{\prime}\) by completing some pending read and write operations in \(H\); we never complete a pending audit. We first complete a pending read invoked by process \(p_{i}\) in \(H\) if and only if some audit contains \((p_{i},v)\) in its response and no read in \(H\) (which must be complete) returns \(v\) to \(p_{i}\). After completing the reads, we complete a pending write if and only if some (completed) read returns the corresponding value. We remove from \(H^{\prime}\) all other pending operations in \(H\).
Note that if a pending operation is completed, then it applied a primitive to \(R\): A read is completed if it is the only read that returns a value detected by an audit, thus, the read has executed \(\mathit{fetch}\&\mathit{add}\) in Line 1. A write is completed if some read has read its value, namely, the write has executed the \(\mathit{compare}\&\mathit{swap}\) in Line 1.
We construct a sequential history \(\pi\) that contains all the operations in \(H^{\prime}\), while preserving their real-time order. We totally order all the read, audit and write operations in \(H^{\prime}\) according to the order in which they apply their last primitive on \(R\): this is either a _read_ or a \(\mathit{fetch}\&\mathit{add}\) for a read operation, a _read_ for an audit, and the successful \(\mathit{compare}\&\mathit{swap}\) for a write operation. Note that these are atomic primitives and their order is well-defined. Clearly, operations are linearized inside their execution intervals, implying that \(\pi\) preserves the real-time order of all the operations in \(H\). Furthermore, we have:
A read in \(\pi\) returns the value of the most recent preceding write in \(\pi\), if there is one, and the initial value, otherwise.
Proof.: Consider a read \(\mathit{op}_{r}\) by a process \(p\) that returns a value \(v\).
If the condition in Line 3 holds, then the result of \(\mathit{op}_{r}\) is the value extracted from the value read by applying \(\mathit{fetch}\&\mathit{add}\) on \(R\) in Line 4. Thus, there is a preceding successful \(\mathit{compare}\&\mathit{swap}\) primitive that writes \(v\) into \(R\), in the execution of a write operation \(\mathit{op}_{w}\). According to our linearization rules, \(\mathit{op}_{w}\) precedes \(\mathit{op}_{r}\) in \(\pi\). This also shows that \(\mathit{op}_{w}\) is the most recent preceding write in \(\pi\), since otherwise, \(\mathit{op}_{r}\) would have returned a different value.
If the condition in Line 3 does not hold, then the value returned by \(\mathit{op}_{r}\) has been obtained by a previous read operation by \(p\), for which the condition of Line 3 held, denoted \(\mathit{op}_{r^{\prime}}\). By Lemma 24 and since \(\mathit{bit}_{i}\neq 0\), no write changed the values of \(R\) after \(\mathit{op}_{r^{\prime}}\) and before \(\mathit{op}_{r}\). Thus, the claim follows from the previous case.
The set of pairs \(\mathcal{P}\) returned by an audit in \(\pi\) satisfies the Strong Accuracy property.
Proof.: Consider an audit operation \(\mathit{op}_{a}\) by a process \(p\) that returns a set \(\mathcal{P}\), and let \((p_{i},v)\) be a pair in \(\mathcal{P}\). We prove that there is a read operation by process \(p_{i}\) that returns the value \(v\) and it is linearized before \(\mathit{op}_{a}\). Assume that \(\mathit{op}_{a}\) is the first audit operation by \(p\) that adds \((p_{i},v)\) to _audit_result_. If not, we can consider the first such audit operation by \(p\) and since it has to be linearized before \(\mathit{op}_{a}\), the claim follows.
First, consider that \(\mathit{op}_{a}\) adds \((p_{i},v)\) to _audit_result_ at Line 15. Then, the condition at Line 14 is verified and the value read from \(R\) (Line 7) has \(\mathit{val}=v\) and \(\mathit{bit}_{i}=1\). Then, by Lemma 24, there is a read operation \(\mathit{op}_{r}\) that set \(\mathit{bit}_{i}\) to \(1\), and that read the value \(v\) by applying the _fetch\(\&\)add_ to \(R\) in Line 4. Furthermore, the _fetch\(\&\)add_ is applied before \(\mathit{op}_{a}\) read \(R\) in Line 7. Thus, \(\mathit{op}_{r}\) is linearized before \(\mathit{op}_{a}\).
Otherwise, \((p_{i},v)\) is added to _audit_result_ because \(p\) read \(v\) from _pairs_\([i][k]\), \(k<audit\_index\) (Line 12). Then, the writer has written \(v\) into _pairs_\([i][k]\) (Line 24) in the execution of the write operation \(\mathit{op}_{w}\) with sequence number \(k+1\). This means that the condition of Line 23 holds, and process \(p_{i}\) previously applied Line 4 in a read operation \(\mathit{op}_{r}\) that returns \(v\); furthermore, this was before \(\mathit{op}_{w}\) applied its successful _compare\(\&\)swap_. Then, \(\mathit{op}_{r}\) precedes \(\mathit{op}_{w}\) in \(\pi\). Since there is a single writer, \(\mathit{op}_{w}\) is either the write that generates the sequence number in \(audit\_index\), or it is linearized before it. This completes the proof since \(\mathit{op}_{a}\) is linearized after \(\mathit{op}_{w}\).
The set of pairs \(\mathcal{P}\) returned by an audit in \(\pi\) satisfies the Completeness property.
Proof.: Consider an audit operation \(\mathit{op}_{a}\) that returns a set \(\mathcal{P}\), and let \(\mathit{op}_{r}\) be a read operation by process \(p_{i}\) that returns a value \(v\). We prove that if \(\mathit{op}_{r}\) is linearized before \(\mathit{op}_{a}\) in \(\pi\), then \((p_{i},v)\in\mathcal{P}\).
By Lemma 26, a read returns the value of the last write that precedes it in \(\pi\), denoted \(\mathit{op}_{w}\). Let \(\mathit{op}_{r^{\prime}}\) be the first read that returns \(v\) to \(p_{i}\); it is either \(\mathit{op}_{r}\) itself or it is linearized between \(\mathit{op}_{w}\) and \(\mathit{op}_{r}\) in \(\pi\). By Lemma 24, \(\mathit{op}_{w}\) sets \(\mathit{bit}_{i}\) to \(0\) (Line 19), and \(\mathit{op}_{r^{\prime}}\) read this bit equal to \(0\) and sets it to \(1\) (Line 4). This bit and the value \(v\) do not change until the next _write_ (if any) applies _compare\(\&\)swap_ to \(R\).
First, assume there is no write between \(\mathit{op}_{r}\) and \(\mathit{op}_{a}\). Then, by the linearization of read and audit operations, the value read from \(R\) (in Line 7) contains \(\mathit{bit}_{i}=1\) and the value \(v\). Thus, the condition in Line 14 is satisfied and \((p_{i},v)\) is added to \(\mathcal{P}\) (in Line 15).
Otherwise, there is a write between \(\mathit{op}_{r}\) and \(\mathit{op}_{a}\) in \(\pi\). Let \(\mathit{op}_{w^{\prime}}\) be the first such write. Since \(\mathit{op}_{w^{\prime}}\) is linearized between \(\mathit{op}_{r}\) and \(\mathit{op}_{a}\) in \(\pi\), \(\mathit{op}_{w^{\prime}}\) successfully executes the _compare\(\&\)swap_ before \(\mathit{op}_{a}\) reads \(R\) (in Line 7). The sequence number of \(\mathit{op}_{w^{\prime}}\) is \(sn+1\) and \(\mathit{op}_{a}\) read a sequence number greater than or equal to \(sn+1\). Thus, \(\mathit{op}_{a}\) reads \(\mathit{pairs}[i][sn]\) (Line 11), and we argue that this read returns \(v\), implying that \(\mathit{op}_{a}\) adds \((p_{i},v)\) to \(\mathcal{P}\) (Line 12). By Lemma 24, the condition of Line 23 holds for \(bits[i]\) and the writer writes the value written by the last completed write, \(v\), into \(\mathit{pairs}[i][sn]\).
Algorithm 6 implements a single-writer multi-reader atomic register with multi-auditor atomic audit.
Note that all (process,value) pairs must be stored somewhere in order to allow the audit to return all the read values. When there is a single auditor, as in Section 5.1 and Section 5.2, the pairs are stored locally at the auditor. This space can be reduced if the completeness property is weakened to require that the audit operation returns only the last \(k\geq 1\) values read by each reader. This weaker form of completeness does not affect the consensus number.
## 6 The consensus number of atomic audit
The _consensus number_[12] of a concurrent object type \(X\) is the largest positive integer \(m\) such that consensus can be wait-free implemented from any number of read/write registers, and any number of objects of type \(X\), in an asynchronous system with \(m\) processes. If there is no largest \(m\), the consensus number is infinite.
By Theorem 5, it is possible to solve consensus among two processes, using only swsr atomic registers with single-auditor atomic audit. This implies that the consensus number of a swsr atomic register with single-auditor atomic audit is at least 2. Clearly, the same holds if the register is multi-reader or multi-auditor.
To prove that the consensus number of this object type is 2, we provide algorithms that implement it with a single auditor (with single or multiple readers), using a single register that supports a combination of read, swap and \(fetch\&add\). In particular, Algorithm 3 implements a swsr atomic register with a single-auditor atomic audit by applying swap and read primitive operations on a single register. Algorithm 4 implements a multi-reader atomic register with a single-auditor atomic audit by applying swap, fetch&add, and read primitives on a single register.
Our result holds since Herlihy [12] proves that one cannot construct a wait-free solution of three process consensus using registers that support any combination of read, write, swap and \(fetch\&add\). It follows that a swsr atomic register with a single-auditor atomic audit cannot be used to solve consensus among three or more processes, which implies:
A single-writer multi-reader atomic register with a single-auditor atomic audit has consensus number two.
Similarly, Algorithm 5 implements a single-reader atomic register with multi-auditor atomic audit. It only uses a register accessed by read, swap and fetch&add primitives and an unbounded number of read/write registers. Thus, Herlihy's impossibility result above, together with our Theorem 5, implies:
A single-writer single-reader atomic register with multi-auditor atomic audit has consensus number two.
We also show that a multi-reader atomic register with multi-auditor atomic audit has consensus number larger than 2 if each reader is also an auditor of the register. In particular, according to Algorithm 2, we can solve consensus among \(n\) processes using \(n\)-reader atomic registers with \(n\)-auditors atomic audit. Thus, the consensus number of this object type is at least \(n\).
We finally provide an implementation of the swmr atomic register with multi-auditor atomic audit using read/write registers and a register accessed via read, fetch&add and compare&swap primitives. Even though an object type that supports these three primitives is not traditionally used, current architectures support this combination of primitives. Since compare&swap has an infinite consensus number, it is an open question whether the consensus number of an \(n\)-reader \(n\)-auditor atomic register with atomic audit is \(n\) or more. Specifically,
the question is whether it can be implemented from objects with consensus number \(n\), e.g., similarly to the Proof object of [9].
Table 1 summarizes the results.
**Table 1** Consensus number of atomic register with atomic audit

| Audit consistency | Number of writers | Number of readers | Number of auditors | Consensus number |
| --- | --- | --- | --- | --- |
| Atomic | 1 | 1 | 1 | 2 |
| Atomic | 1 | \(n\) | 1 | 2 |
| Atomic | 1 | 1 | \(n\) | 2 |
| Atomic | 1 | \(n\) | \(n\) | \(\geq n\) |
## 7 Regular Audit
In this section, we show how to implement a multi-reader atomic register with multi-auditor regular audit using only single-writer multi-reader atomic registers. The implementation follows a straightforward approach: during a read operation, each reader leaves in a register a trace of all the values it has read.
The pseudo-code appears in Algorithm 7, and it uses several atomic registers. A swmr atomic register \(R_{v}\) is shared between the writer and the readers. This register is used by the writer to write a new value and by the readers to access it. In addition to \(R_{v}\), each reader \(p_{i}\) shares a swmr atomic register \(R_{a}[i]\) with the auditors. This register is used by \(p_{i}\) to communicate to the auditor all the values it read from the register.
In a read, a reader \(p_{i}\) reads a value from \(R_{v}\) and stores it in _read_result_. Then, it adds this value together with its identifier to _read_log_, and writes _read_log_ to the register \(R_{a}[i]\) it shares with the auditors. Finally, it returns _read_result_. In a write, the writer simply writes \(v\) in \(R_{v}\). In an audit, the auditor simply reads the swmr register \(R_{a}[i]\) of each reader, combines their contents into _audit_result_, and returns it.
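The following Python sketch illustrates this end to end: each reader publishes its whole log in its own register, and an audit is simply the union of the published logs. Plain attribute assignments stand in for the atomic read/write registers, and the class is an illustration rather than the paper's Algorithm 7.

```
class RegularAuditSketch:
    def __init__(self, n, v0=0):
        self.R_v = v0                          # swmr register: writer -> readers
        self.R_a = [set() for _ in range(n)]   # R_a[i]: reader p_i -> auditors
        self._logs = [set() for _ in range(n)] # each reader's local read_log

    def write(self, v):                        # writer
        self.R_v = v

    def read(self, i):                         # reader p_i
        v = self.R_v
        self._logs[i].add((i, v))              # append (p_i, v) to read_log
        self.R_a[i] = set(self._logs[i])       # publish the whole log
        return v

    def audit(self):                           # any auditor
        result = set()
        for log in self.R_a:                   # one read per reader register
            result |= log
        return result

r = RegularAuditSketch(n=2)
r.write("a"); r.read(0); r.write("b"); r.read(1)
assert r.audit() == {(0, "a"), (1, "b")}
```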
Proof of Correctness:We assume that the values written to the register are unique.
A history \(H\) has at most \(n\) pending operations, one for each process. We never complete a pending audit. We complete a pending read invoked by process \(p_{i}\) in \(H\) if and only if some audit contains \((p_{i},v)\) in its response and no read in \(H\) (which must be complete) returns \(v\) to \(p_{i}\). Any pending operation that is completed has accessed \(R_{v}\) (with a write or a read): a read is completed if it is the only read that returns a value detected by an audit, thus, the read has executed the read at Line 2; a write is completed only if some read has read its value, namely, the write has executed the _write_ in Line 7.
We totally order all the completed operations by the order they access \(R_{v}\) (in a _read_ or a _write_). Since each operation accesses \(R_{v}\) exactly once, and since this is a primitive atomic operation, the order is well-defined. Call this total order \(\pi\) and note that it respects the real-time order of the high-level read and write operations on the register, since the steps used to create it are in the execution interval of the corresponding operation.
The algorithm uses a shared variable \(R_{v}\) that holds a value along with an array of \(n\) registers of unbounded size. Instead of using an array of \(n\) registers of unbounded size, it is quite simple to adapt the solution to use, similarly to Algorithm 6, an array of \(m\) registers, where \(m\) is the number of write operations, each register containing a pair (process, value). Furthermore, as discussed for Algorithm 6, the space complexity can be reduced if
the completeness property is weakened to require that the audit operation returns only the last \(k\geq 1\) values read by each reader. This reduces the space complexity to a single register that holds a value along with an array of \(n\) bounded registers.
Note that all the loops in the write or audit operations are iterated at most \(n\) times, and hence Algorithm 7 is wait-free.
Since the _read_ primitives (Line 2 and Line 5) and the _writes_ primitives (Line 7) are applied on an atomic register \(R_{v}\), we have:
Every read in \(\pi\) returns the value of the most recent preceding write, if there is one, and the initial value, otherwise.
The set of pairs \(\mathcal{P}\) returned by an audit satisfies the Completeness Property.
Proof.: A pending read \(\mathit{op}_{r}\) is completed only if some audit operation detects it, and then the lemma follows from the construction of the completed history. If \(\mathit{op}_{r}\) returns \(v\) (Line 5), then it adds \((p,v)\) to _read_log_ before returning (Line 3). By the code, \((p,v)\) is never removed from _read_log_. Therefore, \(R_{a}\) contains \((p,v)\) after Line 4.
Otherwise, if a read completes in \(H\), note that an audit returns _audit_result_ (Line 11), which contains the contents of all registers (\(R_{a}\) Line 10). Therefore, \((p,v)\) is in the response of every audit that starts after \(\mathit{op}_{r}\) completes.
The set of pairs \(\mathcal{P}\) returned by an audit satisfies the Accuracy property.
Proof.: If \((p_{i},v)\) is in the set \(\mathcal{P}\) returned by an audit, then \((p_{i},v)\) is in _audit_result_ (Line 11). Note that _audit_result_ contains values read from registers \(R_{a}\) (Line 10), and a reader
writes the set of all values it read to \(R_{a}[i]\). Furthermore, if there is a pending read invoked by process \(p_{i}\) such that \((p_{i},v)\in\mathcal{P}\) and no read returned \(v\) to process \(p_{i}\) in \(H^{\prime}\), we complete the pending read by providing the response \(v\) to \(p_{i}\). Thus, there is a read \(\mathit{op}_{r}\) in \(H^{\prime}\) that returns \(v\) to \(p_{i}\).
Algorithm 7 implements a single-writer multi-reader atomic register with multi-auditor regular audit.
We remark that this algorithm can be specialized to get _single_-writer _single_-reader registers, and extended to get _multi_-writer multi-reader atomic registers. In both cases, the algorithm can provide regular audit for one or many auditors.
## 8 Conclusion
This paper studies the synchronization power of auditing an atomic read/write register in a shared memory system. We consider two alternative definitions of the audit operation, one which is atomic relative to the read and write operations, and another that is regular. The first definition is shown to have strong synchronization power, making it possible to solve consensus; the number of processes that can solve consensus corresponds to the number of processes that can audit the register. We also implement an atomic audit operation, using swap and fetch&add for a single auditor, and compare&swap for multiple auditors. On the other hand, the weaker, regular audit can be implemented from ordinary reads and writes.
We studied single-writer registers and leave the interesting question of registers with multiple writers to future work.
In most practical systems supporting auditing, there is a single auditor (e.g. a trusted third party or the data owner), and it seems that a regular audit would suffice. Stronger forms of auditing are needed when users do not trust a third party auditor, so that auditing is performed in a distributed manner. Determining the precise requirements in practical systems is outside the scope of this paper, but our results indicate that if a too-strong notion is enforced, it would incur high synchronization cost.
|
2309.11246 | Grassroots Operator Search for Model Edge Adaptation | Hardware-aware Neural Architecture Search (HW-NAS) is increasingly being used
to design efficient deep learning architectures. An efficient and flexible
search space is crucial to the success of HW-NAS. Current approaches focus on
designing a macro-architecture and searching for the architecture's
hyperparameters based on a set of possible values. This approach is biased by
the expertise of deep learning (DL) engineers and standard modeling approaches.
In this paper, we present a Grassroots Operator Search (GOS) methodology. Our
HW-NAS adapts a given model for edge devices by searching for efficient
operator replacement. We express each operator as a set of mathematical
instructions that capture its behavior. The mathematical instructions are then
used as the basis for searching and selecting efficient replacement operators
that maintain the accuracy of the original model while reducing computational
complexity. Our approach is grassroots since it relies on the mathematical
foundations to construct new and efficient operators for DL architectures. We
demonstrate on various DL models, that our method consistently outperforms the
original models on two edge devices, namely Redmi Note 7S and Raspberry Pi3,
with a minimum of 2.2x speedup while maintaining high accuracy. Additionally,
we showcase a use case of our GOS approach in pulse rate estimation on
wristband devices, where we achieve state-of-the-art performance, while
maintaining reduced computational complexity, demonstrating the effectiveness
of our approach in practical applications. | Hadjer Benmeziane, Kaoutar El Maghraoui, Hamza Ouarnoughi, Smail Niar | 2023-09-20T12:15:58Z | http://arxiv.org/abs/2309.11246v1 | # Grassroots Operator Search for Model Edge Adaptation
###### Abstract
Hardware-aware Neural Architecture Search (HW-NAS) is increasingly being used to design efficient deep learning architectures. An efficient and flexible search space is crucial to the success of HW-NAS. Current approaches focus on designing a macro-architecture and searching for the architecture's hyperparameters based on a set of possible values. This approach is biased by the expertise of deep learning (DL) engineers and standard modeling approaches. In this paper, we present a Grassroots Operator Search (GOS) methodology. Our HW-NAS adapts a given model for edge devices by searching for efficient operator replacement. We express each operator as a set of mathematical instructions that capture its behavior. The mathematical instructions are then used as the basis for searching and selecting efficient replacement operators that maintain the accuracy of the original model while reducing computational complexity. Our approach is grassroots since it relies on the mathematical foundations to construct new and efficient operators for DL architectures. We demonstrate on various DL models, that our method consistently outperforms the original models on two edge devices, namely Redmi Note 7S and Raspberry Pi3, with a minimum of 2.2x speedup while maintaining high accuracy. Additionally, we showcase a use case of our GOS approach in pulse rate estimation on wristband devices, where we achieve state-of-the-art performance, while maintaining reduced computational complexity, demonstrating the effectiveness of our approach in practical applications.
keywords: Neural Architecture Search, Edge AI, optimization, Deep Learning +
Footnote †: journal: Future Generation Computer Systems
## 1 Introduction
Hardware-aware Neural Architecture Search (HW-NAS) is a technique to design efficient Deep Learning (DL) architectures for different tasks such as image classification [1] and object detection [2] in computer vision.
HW-NAS follows three steps. First, a _search space_ is defined with a set of possible DL architectures. Second, a multi-objective _search strategy_ is implemented to explore the search space to find the best architecture. The search strategy uses an _evaluation methodology_ to evaluate each sampled architecture against different objectives such as accuracy, latency, and energy consumption. Finally, the architecture that presents the best objectives' trade-off is defined as the _"best"_ architecture. In this paper, the term _architecture_ refers to the DL architecture and the term _architecture performance_ refers to combining task-performance metrics, such as the accuracy or average precision, and the hardware efficiency computed using latency, energy consumption, and memory occupation of a sampled architecture.
The definition of the search space is a critical step in NAS. It determines the range of possible architectures and can significantly impact the final performance. The size of the search space matters. A large search space hinders the exploration but diversifies the results. In contrast, a small search space restricts architectural diversity. Currently, there are three primary approaches to define the search space in HW-NAS [3]:
1. _Cell-based search space_, which involves searching for a repeated cell, also called _block_, within a pre-defined macro-architecture. The cell is defined by a list of operators, such as convolution and batch normalization and an adjacency matrix that defines the connections between the operators. NAS-Bench-101 [4] is a common NAS benchmark designed using a cell-based search space.
2. _Hierarchical search space_[5] extends the cell-based approach by selecting the operators composing the cell, defining the cell-level connections, and merging multiple cells.
3. _Supernetwork search space_[6], in which each architecture is represented as a subgraph within a larger and more complex network called the _supernetwork_. The weights of the supernetwork are shared among all subgraphs, allowing the subnetworks to share computation and enabling efficient exploration of the search space. The supernetwork is called an over-parameterized network. The subgraphs can differ in terms of their connectivity, layer types, layer sizes, and other architectural hyperparameters.
A prevalent limitation of such definitions is the bias introduced by the dependence on human-designed architectures, which restricts the search algorithms from exploring novel and innovative operations and architectures. This bias towards previously handcrafted architectures hinders the discovery of more efficient and effective models for specific tasks. Consequently, there is a need to develop novel methodologies that can help discover more optimized architectures and operations that can perform well on various devices and scenarios without relying on pre-existing models. Such methodologies would be the holy grail of NAS, as they would enable the creation of truly novel
architectures that can push the limits of deep learning performance even further.
One solution defines a giant search space where the architecture and operations are generated from scratch and then evaluated based on their performance. However, given the vast search space, such an approach requires a massive amount of computational resources and is often infeasible for practical use. AutoML-Zero [7], for example, presents a strategy capable of defining the architecture and the training procedure from standard mathematical operations using reinforcement learning. This approach breaks the innovation barrier for NAS but at a significant time complexity price. Due to the highly complex search, AutoML-Zero only achieves linear regression on the MNIST dataset, which is impractical for complex and real-world datasets.
Selecting the right set of operators for a specific task is crucial, however, the actual implementation of the operator can also greatly impact the hardware efficiency of the DL model. To overcome this challenge, recent works have focused on using DL compilers [8; 9] that can automatically select the most efficient implementation and optimization for a given hardware. These compilers use techniques such as code generation and optimization, which automatically translate the high-level DL operators to hardware-specific low-level code to improve the efficiency of DL models on different hardware devices such as edge devices. The use of deep learning compilers highlights the importance of not only selecting the right operator but also optimizing its implementation to achieve the best possible hardware performance. MCUNet [10] combines the use of NAS and _TinyEngine_, a deep learning compiler for microcontrollers, to efficiently look for the best architecture and its hardware efficient implementation in an iterative manner. However, their search space is limited to a set of standard DL operators, whose implementations are not optimized for edge or resource-constrained devices.
This paper presents a search algorithm that adapts the architecture to edge devices without previous human experience. To overcome the time complexity of AutoML-Zero, we apply our search algorithm on a specific layer at each iteration. In the first step, we analyze each layer's latency and memory occupancy distributions in a given model. We consider a model as a set of layers such as convolution. Each layer corresponds to a sequence of operators implemented by a graph of mathematical instructions. Table 1 gives the list of mathematical instructions considered in this work. In the second step, the most inefficient layer is optimized. Costly operators in this layer are replaced by efficient operators. An operator is a set of mathematical instructions that capture its behavior. For example, standardization is defined by subtracting the mean of the input over a mini-batch and dividing it by the standard deviation of that input. The mathematical instructions are then used as a basis for searching and selecting efficient replacement operators that maintain the accuracy of the original model while reducing computational complexity.
We repeat these two steps until we find an architecture suited for the targeted edge device without dropping accuracy. Our technique aims to break the time-consuming barrier of non-restrictive search spaces while searching for new and innovative architectural designs.
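As an illustration of expressing an operator through the mathematical instructions of Table 1, the sketch below (Python/NumPy, with illustrative names) decomposes the standardization example mentioned above into elementary mean, subtraction, square-root, and division steps.

```
import numpy as np

# Standardization written as a chain of elementary instructions:
# mean over the mini-batch, subtraction, mean of squares, square root, division.
def standardize(x, eps=1e-5):
    mu = np.mean(x, axis=0)             # mean instruction
    centered = x - mu                   # subtraction instruction
    var = np.mean(centered ** 2, axis=0)
    sigma = np.sqrt(var + eps)          # square-root instruction (eps avoids division by zero)
    return centered / sigma             # division instruction
```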
We summarize the contributions described in this paper as follows.
* We present a new adaptation methodology via operator replacement. We replace the most hardware-inefficient layer iteratively by building new operators from scratch with minimal human bias.
* We develop an optimized multi-objective evolutionary search algorithm that effectively selects the appropriate operators for deploying an efficient architecture on the targeted device. This enables the deployment of deep learning models on edge devices with improved efficiency and without sacrificing accuracy.
Our methodology has been validated with different types of architectures: Convolutional neural networks (ConvNets) and Vision transformers (ViT). In particular, we identified a novel convolution implementation suitable for Raspberry Pi, which is a significant contribution to the field of edge computing. Additionally, we applied our methodology to Pulse Rate estimation with Photoplethysmography (PPG) sensors and achieved state-of-the-art results. Overall, our approach consistently improves the model's hardware efficiency with an average 2x speedup without any loss in the model's accuracy. These results demonstrate the effectiveness and versatility of our methodology for optimizing DL models for different hardware platforms and applications.
## 2 Related Works & Background
In this section, we review related works on two different aspects of our methodology: defining fine-grained search spaces and using deep learning compilers for optimizing DL operators in HW-NAS.
### Fine-grained Search Space for NAS
The term "_fine-grained search spaces_" refers to search spaces that consist of a set of mathematical and low-level functions. Such search spaces for NAS. The reason is due to the large number of possible operators created from this search space, hindering the exploration.
AutoML Zero [7] is the only AutoML tool that defines a search space from basic operators. Their goal is to search for the end-to-end learning pipeline, i.e., from architecture building blocks to optimizing the loss function. This work is a seminal step towards the holy grail of AutoML: automatically designing a network and training pipeline for any given dataset. However, their methodology took a tremendous amount of time to come up with an already human-designed logistic regression. Recently, BANAT [11] proposes an algebraic representation of the architecture to enable a more general search space definition. This is a promising direction for efficiently and effectively searching over such huge search spaces.
Other works [11; 12; 13] consider modifying a single operator, namely batch normalization. EvoNorms [12] evolves the normalization operator from basic mathematical functions.
They discovered novel implementations and functions for the normalization and activation fusion which improved the overall average precision of multiple standard models.
Due to their recent application and high time complexity, low-level search spaces are only considered in NAS with a task-specific objective. In other words, our work is the first to use a low-level search space to adapt a model to resource-constrained devices.
### DL Compilers & Hardware-aware Neural Architecture Search
In addition to search algorithms that operate on high-level architectures and operations, several works have explored the optimization of DL models at the hardware and software levels. One approach to this problem is through the use of DL compilers [8; 9], which automatically optimize the code of a given model for a target hardware platform. These compilers employ a range of techniques, such as graph rewriting, operator fusion, and kernel selection, to reduce memory usage, improve compute performance, and exploit hardware-specific features. For instance, MCUNet [10] developed a dedicated compiler that enhances the convolutions with loop tiling. Its search alternates between searching for the architecture and searching for the best optimization, from which the hardware performance is extracted. While this methodology has proven efficient, the search space consists of standard operations, which hinders innovation and adaptation to multiple hardware platforms in HW-NAS. Other DL compilers such as TVM [8] and Tiramisu [9] predict the adequate optimization to apply for a given operator.
Our primary goal is to tailor the DL architecture to a specific hardware platform by systematically replacing the least-efficient operator in an iterative manner, employing a fine-grained search space. This approach streamlines the resource-intensive process of exploring an extensive search space by concentrating on the adaptation of individual operators step-by-step.
### Pulse Rate Estimation
Pulse rate estimation [14] has been the subject of extensive research in the field of physiological monitoring. Various approaches utilize photoplethysmography (PPG) signals captured from wearable devices, such as wrist-worn sensors or fingertip sensors, to estimate the pulse rate. Among the state-of-the-art pulse rate estimation models, we compare our results against DeepHeart [15], CNN-LSTM [16], and NAS-PPG [17].
DeepHeart uses an ensemble of denoising convolutional neural networks (DCNNs) to denoise contaminated PPG signals that are then passed through spectrum-analysis-based calibration to estimate the final pulse rate.
CNN-LSTM uses a hybrid convolutional and LSTM neural network. The proposed model is comprised of two convolutional layers, two LSTM layers, one concatenation layer, and three fully connected layers including a softmax.
NAS-PPG is the first NAS applied to pulse rate estimation. Their search space is defined with a convolutional macro-architecture comprising time-distributed convolutions and two final LSTM layers. Thanks to their automatic search, they provide the best performance on Troika dataset [18].
This task serves as an excellent validation use case for our methodology, specifically tailored to edge devices, where limited computational resources and power constraints present unique challenges. By effectively optimizing pulse rate estimation models for edge devices, we showcase the practicality and robustness of our approach in overcoming these constraints and meeting the specific requirements of edge environments.
## 3 GOS: Grassroots Operator Search
Figure 1 shows the overall structure of our methodology. Given a model, denoted as \(m\), our goal is to adapt it to a targeted edge platform. We define an _operator_ as a collection of operations applied within a layer. These operations can encompass a single layer, as commonly found in deep learning frameworks (e.g., convolution), or a fused layer, such as the combination of ReLU and Batch Normalization (ReLU-BN) [19]. This distinction allows us to work with both individual and composite layer types in our adaptation process.
The process goes through two stages:
1. _Operator Complexity Analysis:_ First, our process identifies the least efficient operator within the given model (\(m\)) by conducting \(N_{i}\) inference runs on the target edge device. The efficiency metric is computed with different objectives, such as latency and the number of parameters. The number of parameters reflects the size of the operator. Additional criteria such as energy consumption may be added. Among the list of operators in \(m\), the least efficient operator is selected based on algorithm 1. If the model is not deployable on the target platform, i.e., the size of the network exceeds the memory capacity, we select the operator with the highest number of parameters denoted as \(num\_param\) in algorithm 1. Otherwise, we rank the architectures with latency and number of parameters in descending order and select the first operator. Our strategy of ranking is as follows: if the architecture is deployable on the target device, the number of parameters is a less important objective, we rank the operators based on the latency and if two operators are of close latencies then we consider the number of parameters. This behavior is checked at each iteration. If more criteria are considered, then the ranking should be multi-objective [20]. This operator corresponds to the slowest operator that has the highest number of parameters possible. If an operator is selected, it cannot be selected for another optimization iteration. In ConvNets, it is common knowledge that the least-efficient operator is the convolution. However, according to its input and output shape, the convolution may be optimized differently. To efficiently select the operator to be replaced, we define \(N_{o}\) as the maximum number of similar operators and select the top operators each time. For example, if the \(N_{o}\) least efficient operators are all convolutions, we will replace them all with the same generated optimized operator. By selecting the top \(N_{o}\) least efficient operators during each iteration, we strike a balance that ensures both effective optimization and a manageable adaptation time complexity.
2. _Operator Adaptation_: Then, we adapt the selected operator by searching for a variation that can keep the same input and output shapes but optimizes the computations. This phase is done with an evolutionary search on a set of mathematical operations. Section 3.1 and section 3.2 describe the search space and methodology respectively. During the search, only the parameters of the adapted operator are fine-tuned.
The two steps are repeated until satisfactory hardware efficiency is reached or a maximum number of layers have been replaced.
### Operator Search Space
Unlike previous HW-NAS search spaces that are based on pre-defined operator sets, our search space is defined with a set of mathematical operations. The operator is represented with a computation graph. The computation graph is a directed acyclic graph (DAG) with \(N\) nodes and \(E\) edges. The edges describe the inputs and outputs of each node. Figure 1 (step 2) shows an example of such a graph.
Each node in the context can be classified into one of the following three types:
* Instruction: This node corresponds to any mathematical instruction in table 1.
* Input: This node corresponds to the input feature maps or weights that are given as operands to the instruction node.
* Constant: This introduces hyperparameters fixed in the mathematical instruction equation. These constants can be tuned and mutated during the search.
```
Input: Model \(m\), Number of inferences \(N_{i}\)
\(is\_deployable\gets deploy(m)\)
if not \(is\_deployable\) then
    for each \(o\) in \(m\) do  # for each operator \(o\), get its number of parameters
        \(num\_param[o]\gets number\_of\_params(o)\)
    endfor
    return argmax(\(num\_param\), \(N_{o}\))  # the operator with the highest value in \(num\_param\) and its number of occurrences \(N_{o}\)
endif
for each \(o\) in \(m\) do
    \(latency[o]\gets average\_latency(o,N_{i})\)  # mean latency of operator \(o\) over \(N_{i}\) inferences
    \(num\_param[o]\gets number\_of\_params(o)\)
endfor
return Top \(N_{o}\) similar operators  # the least-efficient operators, ranked by latency with ties broken by \(num\_param\), and their number of occurrences \(N_{o}\)
```
**Algorithm 1** Least-efficient Operator Selection
Figure 1: Overview of the Grassroots Operator Search (GOS) framework.
We constrain the generated computation graphs with \(1<N\leq 20\) and \(1<E\leq 25\). These values have been fixed by analyzing the operators of standard models. During the generation, the input node is fixed, and its shape is defined by the output of the previous operation in \(m\). The output node's shape is also known, as it is constrained by the input shape of the next operator in \(m\). To ensure a valid network, we optionally add a reshape operation at the end of the computation graph to keep the same output shape as expected by the next operator in the given model. Nodes that cannot be reached from the input, or that do not have a path to the output, are considered unused and are therefore pruned from the computation graph.
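A possible data structure for such a computation graph, with the reachability-based pruning just described, is sketched below in Python; the class layout and field names are assumptions made for illustration, not the framework's actual implementation.

```
# Illustrative computation-graph skeleton: instruction, input, and constant nodes
# form a DAG; nodes with no path from the input or to the output are pruned.
class Node:
    def __init__(self, kind, op=None, value=None):
        self.kind = kind      # "instruction", "input", or "constant"
        self.op = op          # instruction name from Table 1 (for instruction nodes)
        self.value = value    # constant / hyperparameter value (for constant nodes)
        self.succ = []        # outgoing edges

def reachable(start_nodes):
    seen, stack = set(), list(start_nodes)
    while stack:
        node = stack.pop()
        if node not in seen:
            seen.add(node)
            stack.extend(node.succ)
    return seen

def prune(input_node, output_node):
    fwd = reachable([input_node])                              # nodes reachable from the input
    keep = {n for n in fwd if output_node in reachable([n])}   # ...that also reach the output
    for n in keep:
        n.succ = [s for s in n.succ if s in keep]
    return keep
```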
Table 1 shows the basic operations in the search space, including arithmetic, linear algebra, probability, and aggregation operations. The aggregation operations enable to merge between the output of multiple nodes. We include code optimizations such as loop tiling and unrolling as special aggregation functions that are called to optimize the generated operator's code. Note that this is a general application of these optimizations that can be hardware-specifically defined by a compiler.
Note that each operator has a list of hyperparameters dedicated to it. These hyperparameters are illustrated in the equations as constants in table 1.
Example of Operator Computation Graph. In this paragraph, we explain how the convolution 2D is turned into a computation graph. In its simplest form, the convolution 2D can be formulated as in equation 1, where \(N\) is the batch size, C denotes the number of channels, H is the height of input planes in pixels, and W is the width in pixels. \(in\) and \(out\) refer to the input and output respectively. \(*\) in the equation denotes the cross-correlation operation [21].
\[conv2D(N,C_{out})=bias(C_{out})+\sum_{k=0}^{C_{in}-1}weight(C_{out},k)*input(N,k) \tag{1}\]
The convolution first splits the input into weight-shaped chunks. We compute the multiply-accumulate of each of these chunks with the weights (i.e., kernels), using the cross-correlation instruction. We then sum up all the multiplied values over the input channels \(C_{in}\). Finally, we add the bias to each output channel \(C_{out}\).
To create the computation graph, we divide the equation into instructions found in table 1. Figure 3 shows the complete convolution 2D graph with a 2-dimensional input and 2 kernels. To have a compact and simple graph, we include the constant nodes inside the instruction node as a list of hyperparameters. In the rest of the paper and for the sake of clarity, we use high-level operator names such as Linear for the matrix multiplication between weight and input matrices.
In this search space, we perform small-scale experiments with random sampling to understand its behaviors. The purpose is to measure the sparsity of the search space and to determine the number of valid and accurate operations generated during the exploration. In this experiment, we replace all similar operators at once. For example, we replace all convolutions in the model with a generated replacement. Figure 2 (a) shows the results of 1000 randomly generated operator replacements for three operators: Conv2D, max-pooling, and batch normalization, in resnet-18 [22]. Random generation, inspired by EvoNorm [12], starts from the input node and sequentially selects an operation from the search space. In all the cases, the ImageNet accuracy drops significantly for most of the replacements, which reflects the high sparsity of our search space. In figure 2 (b), rather than randomly generating the operator replacement, we start with the original operations but adapt one operation in the computation graph. The adaptation is performed while being aware to keep the same arity and type of arguments for each operation. With adaptation, the results are much closer to the original accuracy of the model but the complexity is modified.
### Search Algorithm
Given an operator computation graph, the search algorithm aims at finding a variant that preserves the accuracy of the model with reducing complexity. We rely on an evolutionary algorithm for this purpose. The evolutionary algorithm allows us to handle the sparse search space by exploring a population of
Figure 2: CIFAR-10 accuracy histograms of 1k architectures randomly generated (a) and adapted from the original operator (b).
\begin{table}
\begin{tabular}{|c|l|c|} \hline \hline
Category & Instruction & Equation \\ \hline \hline
Linear Algebra & Matrix multiplication & \(\mathbf{C}=\mathbf{A}\mathbf{B}\) \\
 & Matrix addition and subtraction & \(\mathbf{C}=\mathbf{A}+\mathbf{B}\) or \(\mathbf{C}=\mathbf{A}-\mathbf{B}\) \\
 & Vector multiplication & \(\mathbf{c}=\mathbf{A}\mathbf{b}\) \\
 & Matrix inversion & \(\mathbf{A}^{-1}\) \\
 & Dot product & \(\mathbf{a}^{\top}\mathbf{b}\) \\
 & Determinant & \(\det(\mathbf{A})\) \\
 & Trace & \(\mathrm{tr}(\mathbf{A})\) \\
 & Eigenvalues and eigenvectors & \(\mathbf{A}\mathbf{v}=\lambda\mathbf{v}\) \\
 & Singular value decomposition (SVD) & \(\mathbf{A}=\mathbf{U}\mathbf{\Sigma}\mathbf{V}^{\top}\) \\
 & QR decomposition & \(\mathbf{A}=\mathbf{Q}\mathbf{R}\) \\
 & Cholesky decomposition & \(\mathbf{A}=\mathbf{L}\mathbf{L}^{\top}\) \\
 & Matrix pseudoinverse & \(\mathbf{A}^{\dagger}\) \\
 & Matrix rank & \(\mathrm{rank}(\mathbf{A})\) \\
 & Hadamard product & \(\mathbf{C}=\mathbf{A}\odot\mathbf{B}\) \\
 & Kronecker product & \(\mathbf{C}=\mathbf{A}\otimes\mathbf{B}\) \\
 & Outer product & \(\mathbf{C}=\mathbf{a}\mathbf{b}^{\top}\) \\
 & Vector norm & \(\|\mathbf{x}\|\) \\
 & Matrix norm & \(\|\mathbf{A}\|\) \\
 & Frobenius norm & \(\|\mathbf{A}\|_{F}\) \\
 & Identity matrix & \(\mathbf{I}\) \\
 & Zero matrix & \(\mathbf{0}\) \\ \hline \hline
Calculus & Gradients & \(\nabla_{\theta}L(\theta)\) \\
 & Partial derivatives & \(\frac{\partial f}{\partial x}\) \\
 & Chain rule & \(\frac{\partial f}{\partial x}=\frac{\partial f}{\partial g}\frac{\partial g}{\partial x}\) \\ \hline \hline
Activation Functions & Sigmoid & \(\sigma(x)=\frac{1}{1+e^{-x}}\) \\
 & ReLU & \(\operatorname{ReLU}(x)=\max(0,x)\) \\
 & Tanh & \(\tanh(x)=\frac{e^{x}-e^{-x}}{e^{x}+e^{-x}}\) \\
 & Softmax & \(\operatorname{softmax}(x_{i})=\frac{e^{x_{i}}}{\sum_{j=1}^{n}e^{x_{j}}}\) \\ \hline \hline
Convolution & Cross-correlation & \((f*g)(x,y)=\sum_{i=-k}^{k}\sum_{j=-k}^{k}f(x-i,y-j)g(i,j)\) \\ \hline \hline
Pooling & Max pooling & \(\operatorname{maxpool}(x_{i:i+s,\,j:j+s})=\max_{m=1}^{s}\max_{n=1}^{s}x_{i+m,\,j+n}\) \\
 & Average pooling & \(\operatorname{avgpool}(x_{i:i+s,\,j:j+s})=\frac{1}{s^{2}}\sum_{m=1}^{s}\sum_{n=1}^{s}x_{i+m,\,j+n}\) \\ \hline \hline
Probability and Statistics & Probability distributions & \(p(x)\) \\
 & Bayesian inference & \(p(\theta|x)=\frac{p(x|\theta)p(\theta)}{p(x)}\) \\ \hline \hline
Aggregation Function & Summation & \(\sum_{i=1}^{n}x_{i}\) \\
 & Mean & \(\frac{1}{n}\sum_{i=1}^{n}x_{i}\) \\
 & Maximum & \(\max(x_{1},x_{2},...,x_{n})\) \\
 & Minimum & \(\min(x_{1},x_{2},...,x_{n})\) \\
 & Square Root & \(\sqrt{x}\) \\
 & Concatenation & \(\begin{bmatrix}A&B\end{bmatrix}\) \\
 & Weighted Mean & \(\frac{\sum_{i=1}^{n}w_{i}x_{i}}{\sum_{i=1}^{n}w_{i}}\) \\ \hline \hline
\end{tabular}
\end{table}
Table 1: List of mathematical instructions defining the search space
valid computation graphs. The computation graph is considered valid if it maintains the shapes of the input and output data and if there exists a path from every intermediate node, including the input node, to the output node. Besides, mutation and crossover provide an efficient way to generate complex adaptations. We use tournament selection which ensures that the best individuals have a higher chance of being selected, while still allowing for some diversity in the population. This helps to prevent premature convergence and promotes the discovery of novel solutions in our large search space.
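For reference, a standard tournament selection step as used here can be sketched in a few lines of Python (illustrative; the fitness callable is assumed to return a value to be minimized).

```
import random

# Tournament selection: sample k individuals and keep the fittest (lower is better).
def tournament_select(population, fitness, k=3, rng=random):
    contestants = rng.sample(population, k)
    return min(contestants, key=fitness)
```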
Mutations. The mutation operations involve modifying the computation graph. Figure 3 summarizes the possible mutations applied on the conv2D computation graph. Each instruction node in the computation graph is typed with the corresponding type in table 1. The most important mutation is modifying any intermediate node with a possible operation. For each operation, we associate a list of possible replacements. The replacement satisfies two constraints: (1) having the same argument's type and arity, (2) the output shape is equal or can be converted to the original output shape by adding a reshape operation. The replacement operation from the list is selected uniformly at random. We also allow for a modification of the aggregation function, and an addition or deletion of a node. When adding or removing a node, we make sure that a path from the input to the output is still possible and that no unused node appears in the graph.
The mutations also include modifying the hyperparameter of the operator. The hyperparameters are properties associated with a vertex in the computation graph. For each instruction, a list of possible hyperparameters; i.e., constants, is available. For each hyperparameter, we constrain the ranges with specified values obtained from the literature. For example, the output channel size of a convolution may change. This mutation may reduce the accuracy of the model. If this is the case, the operator is invalidated and is not considered in the novel population.
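A minimal sketch of the node-replacement mutation is given below (Python). Helper names such as replacements, arity, arg_types, output_shape, and append_reshape are assumptions introduced for illustration; a replacement must keep the argument arity and types, and the output shape is restored with a reshape when needed.

```
import random

def mutate_node(graph, replacements, rng=random):
    # pick an instruction node and replace it with a compatible operation
    node = rng.choice([n for n in graph.nodes if n.kind == "instruction"])
    candidates = [op for op in replacements[node.op]
                  if op.arity == node.arity and op.arg_types == node.arg_types]
    if not candidates:
        return graph                       # no compatible replacement: leave unchanged
    old_shape = node.output_shape
    node.op = rng.choice(candidates)
    if node.output_shape != old_shape:     # restore the expected output shape
        graph.append_reshape(node, old_shape)
    return graph
```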
Crossover. In general, crossover is not applied in NAS algorithms: when we consider high-level operators, it is rarely possible to find a split point where the shapes fit. However, in our case, the crossover is beneficial and allows more flexibility. Algorithm 2 and figure 4 detail the crossover procedure. We perform a crossover between two computation graphs in our population. Because all the variants start from the same point, we have more chances to find a split point. We perform a pre-order traversal of the two computation graphs and store all the possible split points. We randomly select a split point between each pair of computation graphs and generate offspring.
Multi-objective Fitness Function. The evaluation is specific to the given model and task. We do not generalize the resulting operation to multiple standard models because our goal is to adapt the network for a given hardware platform in a practical time. This allows a more flexible and multi-objective fitness function.
The fitness function evaluates the performance of the adapted operator, formulated in equation 2. In our methodology, we consider hardware efficiency with multiple objectives. Our definition considers latency and the number of parameters. But one can add other objectives such as energy consumption or memory occupancy. We rely on the crowding distance [23] to minimize multiple objectives under an accuracy constraint. The crowding distance is calculated for each solution in a Pareto front and is based on the distances between neighboring solutions in the objective space. The solutions with larger crowding distances are preferred in the selection process, as they represent areas of the objective space with lower solution density, and hence are more diverse and representative of the Pareto front.
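For concreteness, a standard implementation of the crowding distance over a Pareto front (plain Python, objectives to be minimized) is sketched below.

```
def crowding_distance(front):
    # front: list of objective vectors, e.g. [latency, num_params] per candidate
    n = len(front)
    if n == 0:
        return []
    m = len(front[0])
    dist = [0.0] * n
    for k in range(m):
        order = sorted(range(n), key=lambda i: front[i][k])
        lo, hi = front[order[0]][k], front[order[-1]][k]
        dist[order[0]] = dist[order[-1]] = float("inf")   # boundary solutions are always kept
        if hi == lo:
            continue
        for j in range(1, n - 1):
            dist[order[j]] += (front[order[j + 1]][k] - front[order[j - 1]][k]) / (hi - lo)
    return dist
```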
Figure 3: Detailed computation graph of the standard convolution 2D including the possible mutations applied to it.
```
Input: Two computation graphs of two operators (\(o1\) and \(o2\))
split_points = []
Stack s = Stack()
Push (\(o1\), \(o2\)) to s  # pre-order traversal of both computation graphs
while s not empty do
    Pop a node pair (\(o1\), \(o2\)) from the top of the stack
    if shape(\(o1\).output) == shape(\(o2\).input) then
        Add (\(o1\), \(o2\)) to split_points  # add a possible split point
    endif
    for child of \(o1\) do
        Add (child, \(o2\)) to s
    endfor
    for child of \(o2\) do
        Add (\(o1\), child) to s
    endfor
endwhile
Uniformly select (\(o1\), \(o2\)) from split_points  # randomly select a split point among all the possibilities
Perform a merge as illustrated in Figure 4
```
**Algorithm 2** Crossover procedure
During the search, we want to maximize the hardware efficiency of the adapted operator while keeping the difference between the loss of the original model \(m\) and the model with the adapted operator, denoted as \(m_{adapted}\), minimal. We add a small value, \(\epsilon\), to ensure exploration. We fine-tune the network after adapting the operator for a few epochs. This fine-tuning is done with all the other operator's weights frozen.
The operator's latency is computed with the difference between the original model's latency and the latency of the adapted model. The number of parameters can be reduced or increased by adding weight input to the computation graph.
\[\min_{o}\;\big(LAT(o),\,PARAM(o)\big)\quad\text{subject to}\quad ACC(m_{adapted})>ACC(m)-\epsilon \tag{2}\]
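A sketch of the corresponding constrained evaluation is given below; evaluate_accuracy, measure_latency, and count_params are placeholder helpers standing in for the fine-tuning and on-device measurements described above.

```
def fitness(model_adapted, model_original, epsilon=0.01):
    # accuracy constraint of Eq. (2): reject candidates that lose more than epsilon
    if evaluate_accuracy(model_adapted) <= evaluate_accuracy(model_original) - epsilon:
        return None                                       # invalid candidate
    lat = measure_latency(model_adapted) - measure_latency(model_original)
    par = count_params(model_adapted) - count_params(model_original)
    return (lat, par)                                     # objectives to be minimized jointly
```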
## 4 Experiments
### Experiment Settings and Implementation Details
We first conducted our experiments on two edge devices: Raspberry Pi 3 Model B and Redmi Note 7S mobile phone. The Raspberry Pi 3 Model B is equipped with a Broadcom BCM2837 SoC with a 1.2 GHz quad-core ARM Cortex-A53 CPU, and 1GB RAM, and runs the Raspbian operating system. The Redmi Note 7S mobile phone is equipped with a Qualcomm Snapdragon 845 SoC with an octa-core CPU and 8GB RAM, running the Android 10 operating system.
To evaluate the performance of our proposed method, we used three popular deep learning models: ResNet18 [22], InceptionV3 [24], and MobileNetV2 [25]. We implemented our approach using Python 3.7 and the PyTorch 1.8.1 deep learning framework. All three architectures were initially trained for Imagenet. The experiment goal is to adapt them for edge devices by changing the most inefficient operators. We measured the accuracy of each model on the validation set and recorded each model's latency and energy consumption during inference. We averaged these numbers for 100 inferences to correctly estimate
Figure 4: Illustration of the cross-over operation.
hardware efficiency. The latency and energy consumption are measured with an inference batch size of 1. For fine-tuning, we use SGD with a mini-batch size of 128. The learning rate is set to 0.003. We use a weight decay of 0.0001 and a momentum of 0.9.
The search is set to do 50 iterations per operator replacement. The stopping criterion is the modification of at least 10 layers in the model. The probability of mutation is set to 0.8 and the cross-over probability to 0.6. We use an epsilon of 1%, i.e., assume that a 1% drop in accuracy is acceptable. The epsilon should be tailored to the dataset and task at hand. Empirical tuning was done to select these values.
Due to the on-search fine-tuning and hardware efficiency computation on-device, our search takes about 1h04min. This time is highly practical as this adaptation is only done once.
Search Setup. The search itself is performed on a much more compute-intensive setup. It was conducted using an NVIDIA RTX 3070 GPU, a high-performance graphics processing unit known for its advanced parallel computing capabilities. The GPU was installed in a workstation equipped with an Intel Core i9 processor and 32 gigabytes of RAM, ensuring sufficient computational resources for the search process.
### Optimizing an architecture for Edge Devices
Table 2 presents the overall hardware efficiency improvement achieved by applying GOS to the evaluated models on both edge devices. Our operator replacement method consistently outperformed the original models with an average speedup of 3.17. Notably, our search was able to find a variant that improved the accuracy of ResNet models by 6.13% and 5.34% for Raspberry Pi and Redmi Note 7S, respectively.
Interestingly, InceptionV3 was found to be unsuitable for deployment on Raspberry Pi due to its large network size. To tackle this issue, our search began optimizing by selecting operators that use the largest amount of parameters, which led to a reduction in the number of parameters and enabled the discovery of a deployable variant.
Although our search space does not directly optimize energy consumption, we observed that our models presented lower energy consumption due to the reduction in the number of parameters and operations.
Furthermore, GOS was able to find a variant of MobileNetV3 that is 2.2x faster with only a minor accuracy drop of 0.4%, even though the original model was already optimized for mobile devices. Overall, our search consistently outperformed the original models, indicating the effectiveness of GOS in achieving hardware efficiency improvements.
In comparing our strategy to other HW-NAS approaches, namely Once-for-All [26] and FBNetV3 [1], we found that our operator replacement method yielded superior results. Our approach consistently outperformed Once-for-All and FBNetV3, showcasing an average speedup of 1.26. This performance advantage highlights the effectiveness of our method in optimizing neural architectures specifically for hardware constraints and further solidifies the value of GOS in achieving superior hardware efficiency improvements. Note that our method can also be used as a specialization phase after the use of these high-level NAS.
Analysis of Resulting Operations. In this paragraph, we discuss the novel operators that were generated through our operator search method and the improvements they bring to the models. Table 3 presents the novel equations for the most efficient operators that replaced the standard convolution 2D, batch normalization, and activation functions. Table 4 summarizes the notations. Our discussion is focused on each device separately.
In general, the last convolution 2D operators of the models are the most inefficient ones. Therefore, in all the models, we automatically optimized these operators using GOS. For the Raspberry Pi device, we modified these operators by adding a dilation rate to the convolutions, similar to dilated convolutions [29]. However, in our operator, the dilation rate is applied within the filter matrix itself, by adding 1, 2, or 3 zeroed columns between different columns of the filter matrix. This modification enables the operator to have a larger receptive field without increasing its size, which can be helpful in capturing features at different scales in an image. This operator is particularly efficient in Raspberry Pi, which has limited computational resources, as it reduces the number of operations needed to process the input.
On the other hand, for the Redmi Note 7S, the model's last convolution 2D operators were modified to a depthwise convolution [30]. Similarly to the Raspberry Pi case, the dilation rate is applied here as well. The use of dilated filters and depthwise convolution allowed for an increase in hardware efficiency. It is worth noting that we did not start with a depthwise separable convolution, except for MobileNetV3; instead, our operator search method converges to similar operations. In addition, for resnet18, the search appended a pooling layer at the end of the convolutions to further reduce the feature map size and improve latency. Interestingly, this did not negatively impact the accuracy. However, when a similar operator was tested on InceptionV3 and MobileNetV3, accuracy drops of 5% and 6.7%, respectively, were observed.
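To make the "dilation inside the filter matrix" concrete, the sketch below inserts zeroed columns between the columns of a kernel, enlarging its receptive field without adding parameters. This is an illustrative reading of the generated operator, not its exact generated code.

```
import torch

def insert_zero_columns(kernel, gap=1):
    # kernel: (C_out, C_in, kh, kw); insert `gap` zero columns between kernel columns
    C_out, C_in, kh, kw = kernel.shape
    new_kw = kw + gap * (kw - 1)
    out = kernel.new_zeros((C_out, C_in, kh, new_kw))
    out[..., ::gap + 1] = kernel          # original columns land on every (gap+1)-th position
    return out
```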
The first convolutions are particularly different from the last ones because of the input shape. In the first convolutions, the channel size is smaller, while the height and width of the feature maps are large. The opposite happens at the end of the model. The first convolutions, even in MobileNetV3, were turned into standard convolutions. The search only changed the hyperparameters of these operators, using 5x5 kernels for some and adding different padding.
The generated batch normalization operation uses a polynomial regression to regress the batch normalization values after the standardization. By incorporating the polynomial regression into the batch normalization equation, this method can improve the accuracy of the normalization while maintaining a fast computation time.
The search algorithm almost never changes the activation functions, as they are usually fast and already efficient. However, we forced the model to change the activation and look for a more efficient version. The resulting equation is shown in table 3. The equation is a leaky version of ReLU. When removing the activation functions from the list of instructions, the search
failed to find a differentiable equation.
Effect of the number of instructions. In the previous experiments, we fixed the maximum number of instructions per operator to 20. Here, we justify this value and analyze the effect of changing the number of instructions on operator generation, using the same search space and fitness evaluation as described in Section 4.1.
We varied the number of instructions used to define each operator ranging from 5 to 40 instructions with a step of 5 and compared the resulting architectures' performance. Specifically, we evaluated the accuracy and inference time of the architectures on the Imagenet dataset using the same hardware setup as in the previous experiments. Figure 5 shows the results.
The results show that increasing the number of instructions used to define each operator generally improves the architectures' performance. The improvement stabilizes at 20 instructions, where we obtain the results shown in table 2. Below 20, the operators are poorly implemented and the accuracy drops. Above 25, the instruction set is too large and the operators apply redundant instructions, which increases the latency. In addition, the search time (highlighted in green) increases with the maximum number of
\begin{table}
\begin{tabular}{|p{56.9pt}|p{56.9pt}|p{56.9pt}|p{56.9pt}|p{56.9pt}|p{56.9pt}|p{56.9pt}|} \hline Edge Device & **Model** & **Variant** & **\# Parameters** & **Top-1 Accuracy (\%)** & **Latency (ms)** & **Energy (J)** & **Speedup** \\ \hline \hline \multirow{4}{*}{**Baselines**} & **Resnet18 [22]** & Original & 11M & 69.3 & 382.54 & 1320 & 5.76 \\ \cline{2-9} & & GOS & 9.3M & 75.43 & 66.32 & 220 & \\ \cline{2-9} & **Inceptionv3 [27]** & Original & 25M & 78.2 & - & - & - \\ \cline{2-9} & & GOS & 7.2M & 79.47 & 101.3 & 438.3 & - \\ \cline{2-9} & **MobileNetV3 [28]** & Original & 2.9M & 75.2 & 94.32 & 348 & 2.82 \\ \cline{2-9} & **GOS** & 2.9M & 74.32 & 33.44 & 253 & \\ \cline{2-9} & **FBNetV3 [1]** & Original & 5.3M & 79.1 & 25.4 & 238 & 1.52 \\ \cline{2-9} & **GOS** & 5.1M & 83.4 & 16.7 & 187 & & \\ \cline{2-9} & **OFA [26]** & Original & 4.9M & 74.2 & 22.3 & 211 & 1.16 \\ \cline{2-9} & **Resnet18 [22]** & Original & 11M & 69.3 & 93.43 & 119.4 & 4.51 \\ \cline{2-9} & **Inceptionv3 [27]** & Original & 25M & 78.2 & 83.5 & 132.6 & 3.72 \\ \cline{2-9} & **MobileNetV3 [28]** & GOS & 23.4M & 77.9 & 22.4 & 104.5 & 3.72 \\ \cline{2-9} & **MobileNetV3 [28]** & Original & 2.9M & 75.2 & 76.3 & 76.54 & 2.23 \\ \cline{2-9} & **GOS** & 2.6M & 74.8 & 34.2 & 78.43 & & \\ \cline{2-9} & **FBNetV3 [1]** & Original & 5.3M & 79.1 & 21.6 & 67.9 & 1.18 \\ \cline{2-9} & **GOS** & 4.8M & 81.4 & 18.3 & 87.3 & & \\ \cline{2-9} & **OFA [26]** & Original & 4.6M & 76.5 & 34.6 & 56.42 & 1.21 \\ \cline{2-9} & **OFA [26]** & GOS & 3.7M & 83.4 & 28.5 & 58.3 & \\ \hline \end{tabular}
\end{table}
Table 2: Performance comparison of original models and adapted models on Raspberry Pi 3 and Redmi Note 7S
Figure 5: Tuning of the maximum number of instructions per operator while searching for resnet18 GOS variant on Raspberry Pi.
instructions per operator. This is due to the increased latency and fine-tuning time induced by more complex and redundant operators.
### Use Case: Pulse Rate Estimation
The ability to estimate pulse rate continuously is a critical feature in heart attack detection. Pulse rate estimation is also essential for measuring workout intensity during exercise and for tracking resting heart rate, which is often used to assess cardiovascular fitness. Using mobile wearable devices provides valuable insights into a wearer's health. Due to the limited hardware resources, the model needs to be small and fast to provide real-time results. This task also requires efficient processing of sensor data, which is a critical aspect of hardware-aware NAS. In addition, sensed information is highly personal and requires edge inference with a fast and lightweight machine learning model. Typically, wearable devices sense an underlying signal, such as Photoplethysmography (PPG) and raw motion data, to estimate pulse rate. Complex algorithms can then process raw data into various activity classifications or step counts. The algorithms used for this purpose range from simple linear regression to complex deep learning models, such as Convolutional Neural Networks (CNNs) and Long Short-Term Memory (LSTM) models. In this use case, we focus on estimating the beats per minute (BPM) based on PPG and accelerometer raw data.
During this experiment, the same previously used search hyperparameters are applied.
Dataset. For this task, we are using the Troika dataset [18]. Troika is a publicly available dataset and contains measurements from three sensors to estimate the heart rate of the wearer: an ECG sensor, a PPG sensor, and an accelerometer sensor. The dataset was collected in a study where participants were asked to perform a set of activities while wearing the sensors, including running, cycling, and sitting. The dataset contains 12 recordings from 8 participants, aged 18 to 35, with each recording lasting 5 minutes. The ground truth heart rate for each recording was obtained from the ECG sensor, which is considered the most accurate method for measuring heart rate.
* _RF_Model_: The first block consists of a bandpass filter and a Fourier transform. The PPG signal contains information about the blood flow in the capillaries. This signal is a combination of various frequencies, including the pulse rate. By applying a bandpass filter to the PPG signal, frequencies outside the range of interest are eliminated. The Fourier transform is then applied to the filtered signal to extract the characteristic frequencies that correspond to the pulse rate. This process helps to remove noise and artifacts from the signal and facilitates accurate estimation of the pulse rate. The second block consists of a random forest regressor [31]. While this first model is fast, it is not optimal in terms of performance.
* _PPG_NAS_Model_: PPG_NAS [17] is a dedicated NAS for PPG signal analysis and pulse rate estimation. The authors generated an optimized model for pulse rate estimation. The model consists of a 1D convolution layer followed by 2 LSTM layers and a final fully-connected layer. The architecture is designed to minimize the number of parameters and maximize the accuracy of pulse rate estimation. This is a state-of-the-art model in terms of the accuracy of the regression and hardware efficiency on wristband devices.
Results. Table 5 shows the overall average absolute error (AAE) results on multiple subjects and under different actions: running (\(T1\)), cycling (\(T2\)), and sitting (\(T3\)), as well as their latency and number of parameters. We additionally compare our results to other state-of-the-art models, namely PPG_NAS [17], CNN-LSTM [16], and DeepHeart [15].
Over all subjects and actions, our models outperform their
\begin{table}
\begin{tabular}{|l|l|l|l|l|l|l|l|} \hline \multirow{2}{*}{**Action**} & \multirow{2}{*}{**Subject**} & **RF** & **PPG_NAS** & \multirow{2}{*}{**RF\_Model**} & **PPG_NAS** & **CNN-LSTM** & **DeepHeart** \\ & & **Optimized** & **Optimized** & **RF\_Model** & **[**17**]** & **[**16**]** & **[**15**]** \\ \hline T1 & 1 & 1.43 & 0.8 & 5.9 & 0.95 & 0.47 & 1.47 \\ \hline T1 & 2 & 2.08 & 1.33 & 1.57 & 1.22 & 3.88 & 2.94 \\ \hline T1 & 3 & 3.76 & 0.06 & 4.24 & 0.43 & 1.52 & 0.47 \\ \hline T1 & 4 & 2.08 & 0.69 & 8.68 & 0.69 & 2.31 & 1.02 \\ \hline T1 & 5 & 1.05 & 0.83 & 2.74 & 0.72 & 1.72 & 2.66 \\ \hline T1 & 6 & 4.24 & 0.72 & 4.49 & 0.49 & 1.47 & 0.75 \\ \hline T1 & 7 & 1.51 & 0.71 & 4.8 & 0.99 & 2.85 & 3.45 \\ \hline T1 & 8 & 2.57 & 1.3 & 10.81 & 0.87 & 2.18 & 2.48 \\ \hline T1 & 9 & 3.87 & 1.44 & 7.41 & 1.06 & 4.9 & 0.54 \\ \hline T1 & 10 & 4.49 & 0.98 & 11.18 & 0.64 & 0.34 & 0.72 \\ \hline T1 & 11 & 3.7 & 0.87 & 20.16 & 1.01 & 4.46 & 1.06 \\ \hline T1 & 12 & 2.63 & 0.77 & 5.37 & 0.67 & 1.79 & 0.73 \\ \hline T2 & 13 & 5.24 & 1.96 & 5.56 & 1.62 & 3.01 & 4.8 \\ \hline T2 & 14 & 4.2 & 1.84 & 23.32 & 1.95 & 7.6 & 2.94 \\ \hline T2 & 15 & 9.89 & 1.24 & 9.92 & 0.59 & 1.58 & 0.11 \\ \hline T3 & 16 & 5.22 & 0.57 & 5.49 & 0.61 & 0.9 & 1.63 \\ \hline T3 & 17 & 1.32 & 1.14 & 1.58 & 1.32 & 6.1 & 1.84 \\ \hline T3 & 18 & 1.59 & 0.48 & 5.98 & 0.55 & 0.31 & 1.64 \\ \hline T3 & 19 & 0.2 & 0.54 & 0.61 & 0.47 & 0.12 & 0.18 \\ \hline T3 & 21 & 2.52 & 1.18 & 4.65 & 0.39 & 0.38 & 0.06 \\ \hline T3 & 22 & 0.83 & 0.93 & 4.23 & 0.83 & 1.26 & 2.25 \\ \hline T2 & 23 & 2.88 & 1.54 & 8.17 & 1.38 & 4.26 & 0.94 \\ \hline
**All** & 3.13 & 1.03 & 7.34 & 0.93 & 2.51 & 1.68 \\ \hline
**Latency (ms)** & 2.38 & 2.68 & 1.64 & 5.6 & 11.8 & 13.54 \\ \hline
**Number of** **parameters (M)** & 0.08 & 0.564 & 0.02 & 1.1 & 3.3 & 4.4 \\ \hline \end{tabular}
\end{table}
Table 5: Results of Average Absolute Error for Pulse Rate estimation on TROIKA Dataset [18]
state-of-the-art counterparts with fewer parameters and lower latency.
Figure 6 shows the final architectures proposed by our Grassroots operator search. The optimized final models of
RF_Model_optimized and PPG_NAS_optimized were obtained by modifying their respective base models. RF_Model_optimized underwent two stages of modifications. First, the fast Fourier transform was altered to extract 30 points instead of the 10 peaks in the original model. Additionally, a linear layer was added to act as a smoother filter that selects the 10 most significant peaks. In the second stage, the random forest regressor was replaced with a multi-layer perceptron (MLP) using a hyperbolic tangent (tanh) activation function. While the RandomForestRegressor is highly efficient, it reduces the model's accuracy, so the search favored MLP. As for PPG_NAS_optimized, the LSTM layers in the original model were replaced with gated recurrent units (GRUs), which use fewer parameters and do not affect the performance. The convolution layer was also modified to use a dilated-like convolution (as shown in the table), and the final linear function was adjusted to have no activation at the end. These optimizations resulted in more accurate and efficient models for pulse rate estimation.
## 5 Conclusion
In conclusion, we have presented a novel approach for optimizing neural network architectures for resource-constrained devices. Our approach leverages the use of mathematical equations to replace common operations such as convolution, batch normalization, and activation functions with more efficient ones, resulting in models that are optimized for low-power devices such as Raspberry Pi and mobile phones. We demonstrated the effectiveness of our approach through experiments on popular architectures, including ResNet18, InceptionV3, and MobileNetV3, achieving significant improvements in inference time and energy consumption compared to the original models. Additionally, we applied GOS to a real-world healthcare problem, namely pulse rate estimation, in which we presented a 2x faster network with a 0.12 average error drop. Overall, our results highlight the potential of our approach for creating efficient neural networks for resource-constrained devices.
|
2309.09409 | Improving Axial Resolution of Optical Resolution Photoacoustic
Microscopy with Advanced Frequency Domain Eigenspace Based Minimum Variance
Beamforming Method | Optical resolution photoacoustic microscopy (OR-PAM) leverages optical
focusing and acoustic detection for microscopic optical absorption imaging.
Intrinsically it owns high optical lateral resolution and poor acoustic axial
resolution. Such anisometric resolution hinders good 3-D visualization; thus
2-D maximum amplitude projection images are commonly presented in the
literature. Since its axial resolution is limited by the bandwidth of acoustic
detectors, ultrahigh frequency, and wideband detectors with Wiener
deconvolution have been proposed to address this issue. Nonetheless, they also
introduce other issues such as severe high-frequency attenuation and limited
imaging depth. In this work, we view axial resolution improvement as an axial
signal reconstruction problem, and the axial resolution degradation is caused
by axial sidelobe interference. We propose an advanced frequency-domain
eigenspace-based minimum variance (F-EIBMV) beamforming technique to suppress
axial sidelobe interference and noises. This method can simultaneously enhance
the axial resolution and contrast of OR-PAM. For a 25-MHz OR-PAM system, the
full-width at half-maximum of an axial point spread function decreased
significantly from 69.3 $\mu$m to 16.89 $\mu$m, indicating a significant
improvement in axial resolution. | Yu-Hsiang Yu, Meng-Lin Li | 2023-09-18T00:36:45Z | http://arxiv.org/abs/2309.09409v1 | Improving Axial Resolution of Optical Resolution Photoacoustic Microscopy with Advanced Frequency Domain Eigenspace Based Minimum Variance Beamforming Method
###### Abstract
Optical resolution photoacoustic microscopy (OR-PAM) leverages optical focusing and acoustic detection for microscopic optical absorption imaging. Intrinsically it owns high optical lateral resolution and poor acoustic axial resolution. Such anisometric resolution hinders good 3-D visualization; thus 2-D maximum amplitude projection images are commonly presented in the literature. Since its axial resolution is limited by the bandwidth of acoustic detectors, ultrahigh frequency, and wideband detectors with Wiener deconvolution have been proposed to address this issue. Nonetheless, they also introduce other issues such as severe high-frequency attenuation and limited imaging depth. In this work, we view axial resolution improvement as an axial signal reconstruction problem, and the axial resolution degradation is caused by axial sidelobe interference. We propose an advanced frequency-domain eigenspace-based minimum variance (F-EIBMV) beamforming technique to suppress axial sidelobe interference and noises. This method can simultaneously enhance the axial resolution and contrast of OR-PAM. For a 25-MHz OR-PAM system, the full-width at half-maximum of an axial point spread function decreased significantly from 69.3 \(\upmu\)m to 16.89 \(\upmu\)m, indicating a significant improvement in axial resolution.
photoacoustic microscopy, axial resolution, minimum variance beamformation
## I Introduction
Optical resolution photoacoustic microscopy (OR-PAM) utilizes optical focusing to obtain a fine lateral resolution of around 1 to 4 \(\upmu\)m. Therefore, OR-PAM is capable of producing detailed images, such as images of the normalized total hemoglobin concentration in a mouse ear [1] and of blood oxygenation in the mouse brain with an intact skull [2]. However, in most of the literature, the images produced by OR-PAM are presented as projection views. This is due to the poor acoustic axial resolution, which is determined by the acoustic transducer bandwidth. The axial resolution of OR-PAM is approximately 10 to 100 \(\upmu\)m, making structures along the depth direction hard to identify in three-dimensional images. Therefore, we aim to improve the axial resolution to yield usable three-dimensional images.
In order to enhance axial resolution, C. Zhang _et al._[3] use ultrahigh-frequency detectors (above 100 MHz) together with the Wiener deconvolution method. D. Cai _et al._[4] employ dual-view OR-PAM and Richardson-Lucy deconvolution to improve axial and lateral resolution. However, these methods require extra equipment to build the system.
We view the axial resolution improvement as an axial signal reconstruction problem, and the axial resolution degradation is caused by axial sidelobe interference. In order to reconstruct the axial signal with higher axial resolution, we need to suppress axial sidelobe interference. In array beamforming, there have been numerous studies dedicated to resolving the sidelobe suppression problem. Our objective is to leverage advanced beamforming techniques to effectively suppress axial sidelobe interference, thereby enabling us to reconstruct axial signals with significantly improved axial resolution.
In addition to sidelobe interference, electronic noise is also a contributing factor. To address these problems, we draw inspiration from eigenspace-based minimum variance beamformer (EIBMV) [5]. The idea is to utilize the eigenstructure of the covariance matrix to separate axial mainlobe contribution from sidelobe interference and noises.
This adaptation of EIBMV to the frequency domain enables us to reconstruct the time domain signal of OR-PAM. Our approach is called frequency domain eigenspace-based minimum variance reconstruction (F-EIBMV).
## II Materials and Methods
### _Inverse Discrete Fourier Transform (IDFT)_
The OR-PAM signal that we want to reconstruct is a time-domain signal, and the most intuitive way to reconstruct the signal is through inverse discrete Fourier transform as follows.
\[x[n]=\frac{1}{N}\sum_{k=0}^{N-1}X[k]e^{j\frac{2\pi kn}{N}} \tag{1}\]
We can represent the IDFT in matrix form as in Eq. (2).
\[x[n]=\overline{W}^{H}\overline{X}^{t} \tag{2}\]
\[\overline{W}=\frac{1}{N}[1\ 1\cdots 1]^{T} \tag{3}\]
\[\overline{X}^{t}=[X[0]e^{j\frac{2\pi\cdot 0\cdot n}{N}}\ X[1]e^{j\frac{2\pi\cdot 1\cdot n}{N}}\ \cdots\ X[N-1]e^{j\frac{2\pi(N-1)n}{N}}]^{T} \tag{4}\]
\(\overline{W}\) is the weight vector, which can be viewed as the apodization in array beamforming. For the standard IDFT the apodization is uniform. \(\overline{X}^{t}\) is the vector of frequency-domain signals after phase compensation. The main idea is to manipulate the apodization of the IDFT to suppress axial sidelobe interference and noises.
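To make the apodization viewpoint concrete, the following is a minimal NumPy sketch (our illustration, not code from the paper): with uniform weights, the weighted sum over the phase-compensated spectrum reproduces the standard IDFT sample, and the weight vector is exactly the quantity that the frequency-domain beamformers below will redesign.

```python
import numpy as np

# Sketch: one IDFT sample as an apodized sum over the phase-compensated
# spectrum (Eqs. (1)-(4)). Uniform weights w = 1/N reproduce the plain IDFT.
rng = np.random.default_rng(0)
N = 64
x_true = rng.standard_normal(N)                 # a toy time-domain signal
X = np.fft.fft(x_true)                          # its spectrum

def idft_sample(X, n, w=None):
    N = len(X)
    k = np.arange(N)
    X_comp = X * np.exp(1j * 2 * np.pi * k * n / N)   # phase compensation
    if w is None:
        w = np.full(N, 1.0 / N)                       # uniform apodization
    return np.vdot(w, X_comp)                         # w^H X'

assert np.allclose(idft_sample(X, 50), x_true[50])    # matches the IDFT
```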
### _Inverse Discrete Fourier Transform Matrix_
Fig. 1(a) is a time-domain axial signal without axial sidelobe interference. Fig. 1(b) is the corresponding IDFT matrix; there is a white horizontal line located at n = 50, which is also the position of the time-domain axial signal. Fig. 1(c) shows the signal value of each frequency at n = 50, which exhibits high spectral coherence.
On the other hand, Fig. 2(a) is a time-domain axial signal with axial sidelobe interference. Three gray lines appear in Fig. 2(b). If we again plot the signal value at n = 50, we can observe from Fig. 2(c) that it fluctuates, because the original axial signal is affected by the interference. Fig. 2(c) can be viewed as a DC term plus interference, and we want to design the apodization to suppress such interference.
### _Frequency Domain Minimum Variance Method_
Sidelobe suppression has been studied extensively in array beamforming, and one of the most commonly used methods is the minimum variance beamformer. This is why we adapt the minimum variance beamforming technique to the frequency domain.
The weight of the frequency domain minimum variance method is calculated by minimizing the power, as follows.
\[\begin{split}\overline{W}_{MV}&=\arg\min_{\overline{W}}E\left\{|x[n]|^{2}\right\}\\ &=\arg\min_{\overline{W}}E\left\{|\overline{W}^{H}\overline{X}^{t}|^{2}\right\}\\ &=\arg\min_{\overline{W}}\overline{W}^{H}E\left\{\overline{X}^{t}\overline{X}^{tH}\right\}\overline{W}\\ &=\arg\min_{\overline{W}}\overline{W}^{H}R\overline{W},\quad\text{subject to }\overline{W}^{H}\overline{d}=1\end{split} \tag{5}\]
The solution to Eq. (5) is
\[\overline{W}_{MV}=\frac{R^{-1}\overline{d}}{\overline{d}^{H}R^{-1}\overline{d}} \tag{6}\]
The calculation is essentially identical to that of the original minimum variance array beamformer. The difference in the frequency-domain minimum variance method is that the covariance matrix R is computed from the spectrum after phase compensation.
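As an illustration of Eq. (6), here is a hedged NumPy sketch of the frequency-domain MV weights; the diagonal loading `eps` and the all-ones steering vector `d` (motivated by the DC interpretation in Fig. 2(c)) are our assumptions, and the estimation of R from data is not detailed in the text.

```python
import numpy as np

def mv_weights(R, d, eps=1e-3):
    """Frequency-domain MV weights of Eq. (6): w = R^{-1} d / (d^H R^{-1} d).
    A small diagonal loading `eps` (an assumption, not from the text) keeps
    the solve stable when R is estimated from few samples."""
    N = R.shape[0]
    Rl = R + eps * np.trace(R).real / N * np.eye(N)
    Rinv_d = np.linalg.solve(Rl, d)
    return Rinv_d / np.vdot(d, Rinv_d)

# Toy check with a random Hermitian covariance: the constraint w^H d = 1 holds.
rng = np.random.default_rng(1)
A = rng.standard_normal((8, 8)) + 1j * rng.standard_normal((8, 8))
R = A @ A.conj().T / 8
d = np.ones(8)                       # assumed steering vector (DC component)
w = mv_weights(R, d)
assert np.isclose(np.vdot(w, d), 1.0)
```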
### _Frequency Domain Eigenspace Based Minimum Variance Reconstruction (F-EIBMV)_
In addition to axial sidelobe interference, there are electronic noises in the OR-PAM system. We use eigendecomposition to separate the axial mainlobe signal from the noises.
Following the eigenspace-based minimum variance beamformer [5], we perform an eigendecomposition of the covariance matrix R.
\[\text{R}=\text{V}\Lambda\text{V}^{H} \tag{7}\]
where the eigenvalues are \(\Lambda=\text{diag}[\lambda_{1},\lambda_{2},\cdots,\lambda_{N-1}]\), \(\lambda_{1}\geq\lambda_{2}\geq\cdots\geq\lambda_{N-1}\), and \(\text{V}=[\textbf{v}_{1},\textbf{v}_{2},\cdots,\textbf{v}_{N-1}]\) are the eigenvectors. Then, the eigenvectors with the largest eigenvalues are used to construct the signal subspace \(\boldsymbol{E}_{s}\).
\[\boldsymbol{E}_{s}=[\textbf{v}_{1},\cdots,\textbf{v}_{\text{Num}}] \tag{8}\]
Num is the number of eigenvectors that contain the axial mainlobe. Therefore, by choosing a sufficient number of eigenvectors, we are able to reconstruct the axial signal without axial sidelobe interference and noises.
Next, the original frequency-domain minimum variance weight \(\overline{W}_{MV}\) is projected onto the signal subspace, yielding the desired frequency-domain eigenspace-based minimum variance weight \(\overline{W}_{EIBMV}\).
\[\overline{W}_{EIBMV}=\boldsymbol{E}_{s}\boldsymbol{E}_{s}^{H}\overline{W}_{MV} \tag{9}\]
Finally, the original uniform weight in Eq. (2) is replaced with the frequency-domain eigenspace-based minimum variance weight.
\[x[n]=\overline{W}_{EIBMV}^{H}\overline{X}^{t} \tag{10}\]
By doing so, we can reconstruct OR-PAM's time-domain signal with suppressed axial sidelobe interference and noises, which enhances the axial resolution and imaging contrast.
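A minimal sketch of the projection step of Eqs. (7)-(9) is given below; the eigenvalue-threshold heuristic for choosing the number of signal-subspace eigenvectors (`frac` times the largest eigenvalue) is our assumption, since the text only states that a sufficient number should be kept.

```python
import numpy as np

def eibmv_weights(R, w_mv, num=None, frac=0.5):
    """Project the MV weight onto the signal subspace, Eqs. (7)-(9).
    If `num` is not given, keep the eigenvectors whose eigenvalue exceeds
    `frac` times the largest one (a common heuristic, assumed here)."""
    eigvals, eigvecs = np.linalg.eigh(R)          # ascending eigenvalues
    eigvals, eigvecs = eigvals[::-1], eigvecs[:, ::-1]
    if num is None:
        num = int(np.sum(eigvals > frac * eigvals[0]))
    Es = eigvecs[:, :num]                         # signal subspace E_s
    return Es @ (Es.conj().T @ w_mv)              # E_s E_s^H w_MV

# Reconstruction of one axial sample then follows Eq. (10):
#   x_n = np.vdot(eibmv_weights(R, w_mv), X_comp)
```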
### _Optical resolution photoacoustic microscopy (OR-PAM)_
Fig. 3 shows the setup of our 25 MHz OR-PAM system. The 532 nm pulsed laser is guided through optical fibers to a Galvo for two-dimensional laser scanning and subsequently focused using an objective lens. The focused beam is directed through a 3D-printed photoacoustic beam combiner and excites the sample. When the laser's energy is absorbed by the sample, the local spot experiences thermoelastic expansion and
Fig. 1: Axial signal without axial sidelobe interference.
Fig. 2: Axial signal with axial sidelobe interference.
generates a pressure wave (or acoustic wave) that propagates outward. A glass slide set at a 45-degree angle is positioned at the center of the beam combiner, enabling laser transmission while reflecting the photoacoustic signal onto a 25 MHz ultrasound transducer.
The photoacoustic signal is converted into electrical signals for imaging. The signal received by the transducer is first pre-amplified by a low-noise amplifier, then filtered and further amplified by a pulser/receiver. Finally, it is digitized by a data acquisition card operating at a sampling rate of 200 MS/s and stored in a computer.
## III Results
To assess the axial resolution improvement, we measure the axial point spread function of the 25 MHz OR-PAM system by directing the laser onto an air-force resolution target. Because the target is a thin film, we can extract an A-line to evaluate the full-width at half-maximum and estimate the axial resolution.
In Fig. 4, with the frequency domain minimum variance (F-MV) method, the axial sidelobe interference is suppressed, resulting in an improvement in axial resolution. Moreover, by applying the proposed method, the electronic noises can be further reduced by at least 20 dB. The overall image contrast will be enhanced because of the reduced noise levels.
The full-width at half-maximum (FWHM) of the original axial point spread function (PSF) is 69.3 \(\upmu\)m, and that of the axial PSF after applying F-MV is 18.38 \(\upmu\)m. For the proposed method, the FWHM is 16.89 \(\upmu\)m, which is approximately 4 times narrower than the original axial PSF.
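For reference, here is a small sketch of how an FWHM value such as those quoted above can be read off a sampled axial PSF envelope; it is our illustration and assumes the envelope (e.g., from a Hilbert transform) has already been computed.

```python
import numpy as np

def fwhm(z, envelope):
    """Full-width at half-maximum of a sampled axial PSF envelope, with linear
    interpolation of the two half-maximum crossings. Assumes the crossings lie
    strictly inside the measurement window."""
    a = np.abs(np.asarray(envelope, dtype=float))
    half = a.max() / 2.0
    idx = np.where(a >= half)[0]
    i0, i1 = idx[0], idx[-1]
    z_left = np.interp(half, [a[i0 - 1], a[i0]], [z[i0 - 1], z[i0]])
    z_right = np.interp(half, [a[i1 + 1], a[i1]], [z[i1 + 1], z[i1]])
    return z_right - z_left
```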
## Acknowledgments
The authors appreciate the support of National Science and Technology Council, Taiwan (MOST 110-2221-E-007-011 - MY3, NSTC 111-2321-B-002-016- and NSTC 112-2321-B-002-025-) and Brain Research Center, NTHU, under the Higher Education Sprout Project, funded by the Ministry of Education, Taiwan.
|
2309.12997 | Scaling Limits of the Wasserstein information matrix on Gaussian Mixture
Models | We consider the Wasserstein metric on the Gaussian mixture models (GMMs),
which is defined as the pullback of the full Wasserstein metric on the space of
smooth probability distributions with finite second moment. It derives a class
of Wasserstein metrics on probability simplices over one-dimensional bounded
homogeneous lattices via a scaling limit of the Wasserstein metric on GMMs.
Specifically, for a sequence of GMMs whose variances tend to zero, we prove
that the limit of the Wasserstein metric exists after certain renormalization.
Generalizations of this metric in general GMMs are established, including
inhomogeneous lattice models whose lattice gaps are not the same, extended GMMs
whose mean parameters of Gaussian components can also change, and the
second-order metric containing high-order information of the scaling limit. We
further study the Wasserstein gradient flows on GMMs for three typical
functionals: potential, internal, and interaction energies. Numerical examples
demonstrate the effectiveness of the proposed GMM models for approximating
Wasserstein gradient flows. | Wuchen Li, Jiaxi Zhao | 2023-09-22T16:57:44Z | http://arxiv.org/abs/2309.12997v1 | # Scaling limits of the Wasserstein information matrix on Gaussian mixture models
###### Abstract.
We consider the Wasserstein metric on the Gaussian mixture models (GMMs), which is defined as the pullback of the full Wasserstein metric on the space of smooth probability distributions with finite second moment. It derives a class of Wasserstein metrics on probability simplices over one-dimensional bounded homogeneous lattices via a scaling limit of the Wasserstein metric on GMMs. Specifically, for a sequence of GMMs whose variances tend to zero, we prove that the limit of the Wasserstein metric exists after certain renormalization. Generalizations of this metric in general GMMs are established, including inhomogeneous lattice models whose lattice gaps are not the same, extended GMMs whose mean parameters of Gaussian components can also change, and the second-order metric containing high-order information of the scaling limit. We further study the Wasserstein gradient flows on GMMs for three typical functionals: potential, internal, and interaction energies. Numerical examples demonstrate the effectiveness of the proposed GMM models for approximating Wasserstein gradient flows.
Key words and phrases: Wasserstein Information Matrix; Gaussian Mixture Model; Scaling Limit; Asymptotic Analysis; Gradient Flow.
W. Li's work is supported by AFOSR MURI FP 9550-18-1-502, AFOSR YIP award No. FA9550-23-1-0087, NSF RTG: 2038080, and NSF DMS-2245097.
Wasserstein-2 distance. Moreover, numerical experiments are conducted on barycenters in GMMs to visualize the geodesics.
As a graphical model, a GMM can also be analyzed through the lens of the probability simplex [5]. In the literature, researchers have attempted to establish an optimal transport theory on graphs [25]. [10] studies the scaling limit of the Wasserstein metric on graphs using the Benamou-Brenier formulation and establishes lower and upper bounds of the limiting metric. In [25], the author defines a metric over the probability simplex associated with a fixed Markov chain. Under this metric, the Markov chain can be identified with the gradient flow of entropy. Furthermore, in [11], the same authors prove that in a 1D periodic setting, discrete transport metrics converge to a limiting transport metric with a non-trivial effective mobility. Compared to previous work, we focus on 1D models and use the closed-form formula for the Wasserstein metric in 1D [33] to define the WIMs for GMMs. Therefore, we can obtain more quantitative information about the Wasserstein metric on these models and at the same time prove the existence of the scaling limit of the Wasserstein metric.
The paper is organized as follows. In Section 2, we establish the scaling limits of Fisher and Wasserstein information matrices. Next, we generalize results to general models, such as inhomogeneous GMMs, higher-order GMM metrics, and extended GMMs in Section 3. Last, in Section 4, we derive the gradient flows on these scaling Wasserstein metrics and relate them to gradient flows on density manifold. Numerical experiments and further discussions are left to Section 5 and Section 6, respectively.
## 2. A scaling limit of Fisher-Rao and Wasserstein metrics in Gaussian mixture models
In this section, we study a scaling limit of the Fisher and Wasserstein metrics on 1D GMMs. They are defined via the pullback of the metric on the density manifold. The scaling limit is defined as the limit when the variances of the Gaussian components tend to 0.
### Information matrices and Gaussian mixture models
Let \(\mathscr{X}\subset\mathbb{R}^{n}\) be a compact manifold with a smooth Riemannian structure. This can be the computational domain of a scientific application or the sample space associated with statistical inference problems. Let \(\mathscr{P}^{2,\infty}(\mathscr{X})\) denote the space of probability distributions over \(\mathscr{X}\) which are absolutely continuous w.r.t. the Lebesgue measure on \(\mathscr{X}\) and have smooth positive density functions. Moreover, one further requires these distributions to have bounded second moments, which can be calculated via the embedding of \(\mathscr{X}\) into \(\mathbb{R}^{n}\). For those who do not have much familiarity with the Riemannian geometry, \(\mathscr{X}\) can be simply regarded as the whole Euclidean space \(\mathbb{R}^{n}\). Given a metric tensor \(g\) on \(\mathscr{P}^{2,\infty}(\mathscr{X})\), we call \((\mathscr{P}^{2,\infty}(\mathscr{X}),g)\) density manifold [16, 23]. Consider a parameter space \(\Theta\subset\mathbb{R}^{d}\) and a parameterization function
\[\rho\colon\Theta\to\mathscr{P}^{2,\infty}(\mathscr{X}),\quad\theta\mapsto\rho _{\theta}(x)\]
which can also be viewed as \(\rho\colon\mathscr{X}\times\Theta\to\mathbb{R}\). We assume the distributions have smooth density functions. The image of \(\Theta\) under the mapping \(\rho\), i.e., \(\rho(\Theta)\) is named a statistical model. Suppose that \(f,g\) are two vector-valued functions on \(\mathscr{X}\), we denote \(\langle f,h\rangle=\int_{\mathscr{X}}(f(x),h(x))dx\) as the \(L^{2}(\mathscr{X})\) inner product, where \(dx\) refers to the Lebesgue measure on \(\mathscr{X}\). We denote \((v,w)=v\cdot w\) as the (pointwise) Euclidean inner product of two vectors. On an arbitrary parametric space \(\rho:\Theta\to\mathscr{P}\left(\mathscr{X}\right)\), we define a pull-back metric \(g^{\Theta}\) and an
information matrix \(G\) associated to a metric \(g\) on density manifold \(\mathcal{P}\left(\mathcal{X}\right)\). See a related study in [22].
**Definition 1** (Pull-back metric & information matrix).: _Consider density manifold \(\left(\mathcal{P}\left(\mathcal{X}\right),g\right)\) with a metric tensor \(g\), i.e. \(\forall\nu\in\mathcal{P}\left(\mathcal{X}\right),g\left(\nu\right):T_{\nu} \mathcal{P}\left(\mathcal{X}\right)\times T_{\nu}\mathcal{P}\left(\mathcal{X }\right)\rightarrow\mathbb{R}\) bilinear, and a smoothly parametrized space \(\rho:\Theta\rightarrow\mathcal{P}\left(\mathcal{X}\right)\) with parameter \(\theta\in\Theta\subset\mathbb{R}^{d}\). Then the pull-back metric \(g^{\Theta}\) of \(g\) onto this parameter space \(\Theta\) is given by_
\[g^{\Theta}\left(\rho_{\theta}\right) :T_{\rho_{\theta}}\Theta\times T_{\rho_{\theta}}\Theta\rightarrow \mathbb{R}\] \[g^{\Theta}\left(\rho_{\theta}\right)\left(v_{1},v_{2}\right) =g\left(\rho_{\theta}\right)\left(v_{1},v_{2}\right),\forall v_{1},v_{2} \in T_{\rho_{\theta}}\Theta\subset T_{\rho_{\theta}}\mathcal{P}\left(\mathcal{ X}\right).\]
_Denote the information matrix associated with this model as \(G\left(\theta\right)\subset\mathbb{R}^{d\times d},\forall\theta\in\Theta\)_
\[G(\theta)_{ij}=g^{\Theta}\left(\rho_{\theta}\right)\left(\partial_{\theta_{i} }\rho_{\theta},\partial_{\theta_{j}}\rho_{\theta}\right).\]
In this paper, we restrict our attention to the special one-dimensional sample space \(\mathcal{X}=\mathbb{R}\). Consider an 1D GMM \(\rho:\Theta\rightarrow\mathcal{P}\left(\mathbb{R}\right),\Theta\subset \mathbb{R}^{N-1}\) with
\[\theta\mapsto\rho_{\theta}=\sum_{i=1}^{N-1}\theta_{i}\left(\rho_{i+1}-\rho_{i }\right)+\rho_{1},\qquad 1>\theta_{1}>\cdots>\theta_{N-1}>0,\quad\forall i=1,...,N-1,\] (GMM)
where each component is fixed to be a Gaussian \(\rho_{i}(x)=\frac{1}{\sqrt{2\pi}\sigma_{i}}e^{-\frac{\left(x-\mu_{i}\right)^{2}}{2\sigma_{i}^{2}}},i=1,2,...,N\) and the components are listed in increasing order of their means \(\mu_{1}<\mu_{2}<...<\mu_{N-1}<\mu_{N}\). As mentioned before, we consider the simplified model where only the mixing coefficients are allowed to vary. In Section 3.3, we will relax this constraint and consider GMMs with more degrees of freedom. We further postulate \(\theta_{0}=1,\theta_{N}=0\). The parameters \(\theta_{i}\) do not have any probabilistic meaning; we thus introduce another group of coordinate variables \(p_{i},i=1,2,...,N\) for GMMs, i.e.
\[\sum_{i=1}^{N-1}\theta_{i}\left(\rho_{i+1}-\rho_{i}\right)+\rho_{1}=\sum_{i=1 }^{N}\left(\theta_{i-1}-\theta_{i}\right)\rho_{i}=\sum_{i=1}^{N}p_{i}\rho_{i}.\]
Written in this form, the GMM has a close relation to the probability simplex, i.e. \(\left\{p_{i},i\in\left[N\right]\right\}\) represents a point in the probability simplex. [5] uses this connection heavily to study the properties of optimal transport on GMMs. Consequently, any metric defined on GMMs can also be viewed as a metric on the probability simplex. Throughout the paper, we will use the coordinates \(\theta_{i}\) and \(p_{i}\) interchangeably, as the \(\theta\)-coordinates simplify the presentation of the WIM while the \(p\)-coordinates are easier to interpret. To simplify the derivation, we postulate a homogeneity assumption, i.e. Assumption 1, on the model in this section and call it the 1D homogeneous GMM.
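For concreteness, a small NumPy/SciPy sketch (our illustration) of the change of coordinates \(p_{i}=\theta_{i-1}-\theta_{i}\) and of the evaluation of \(\rho_{\theta}\) for a homogeneous GMM:

```python
import numpy as np
from scipy.stats import norm

def theta_to_p(theta):
    """Mixture weights p_i = theta_{i-1} - theta_i, with theta_0 = 1, theta_N = 0.
    Requires 1 > theta_1 > ... > theta_{N-1} > 0 so that all p_i are positive."""
    t = np.concatenate(([1.0], np.asarray(theta, dtype=float), [0.0]))
    return -np.diff(t)

def gmm_density(x, theta, mu, sigma):
    """Density of the homogeneous GMM with components N(mu_i, sigma^2)."""
    p = theta_to_p(theta)
    return sum(pi * norm.pdf(x, mi, sigma) for pi, mi in zip(p, mu))

theta = np.array([0.7, 0.3])              # N = 3 components
assert np.allclose(theta_to_p(theta), [0.3, 0.4, 0.3])
```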
**Assumption 1** (Homogeneity).: _The standard variances of all the components coincide and the differences between adjacent means of Gaussian components are the same:_
\[\sigma_{i} =\sigma,\qquad i=1,2,...,N-1,N,\] \[\mu_{i}-\mu_{i-1} =d,\qquad i=2,3,...,N-1,N.\]
In the following examples, we investigate the Fisher and Wasserstein information matrices of Gaussian mixture models.
**Example 1** (Fisher geometry of Gaussian mixture model).: _One can identify the tangent space of density manifold \(\mathcal{P}\) at arbitrary distribution \(\rho\) with \(C_{0}^{\infty}\):_
\[T_{\rho}\mathcal{P}(\mathcal{X})\simeq C_{0}^{\infty}(\mathcal{X})=\{f\in C^{ \infty}(\mathcal{X})|\int_{\mathcal{X}}f(x)dx=0\}, \tag{1}\]
_where \(C^{\infty}(\mathcal{X})\) is the space of smooth functions on \(\mathcal{X}\). Then, the Fisher-Rao metric is given as_
\[\left(\text{Fisher-Rao}\right):g_{F}\left(v_{1},v_{2}\right)=\int_{\mathcal{X}} \frac{v_{1}(x)v_{2}(x)}{\rho(x)}dx,\quad v_{1},v_{2}\in T_{\rho}\mathcal{P} \left(\mathcal{X}\right),\]
_where we omit the dependence of \(g_{F}\left(\rho\right)\) on \(\rho\). For a 1D GMM eq. (4), the Fisher information matrix is given by_
\[\left(G_{F}\left(\theta\right)\right)_{ij}=\int_{\mathbb{R}}\frac{\partial_{ \theta_{i}}\rho_{\theta}(x)\partial_{\theta_{j}}\rho_{\theta}(x)}{\rho_{ \theta}(x)}dx=\int_{\mathbb{R}}\frac{\left(\rho_{i+1}(x)-\rho_{i}(x)\right) \left(\rho_{j+1}(x)-\rho_{j}(x)\right)}{\rho_{\theta}(x)}dx. \tag{2}\]
**Example 2** (Wasserstein geometry of Gaussian mixture model).: _Based on the identification eq. (1) of the tangent space in the last example, we have_
\[\left(\text{Wasserstein}\right):g_{W}\left(v_{1},v_{2}\right)=\int_{\mathcal{X} }v_{1}(x)\Phi_{2}(x)dx,\ v_{2}(x)+\nabla\cdot\left(\rho(x)\nabla\Phi_{2}(x) \right)=0,\quad v_{1},v_{2}\in T_{\rho}\mathcal{P}\left(\mathcal{X}\right),\]
_Moreover, when we focus on 1D cases, i.e. \(\mathcal{X}=\mathbb{R}\), the metric for Wasserstein geometry has a closed-form solution, namely_
\[g_{W}\left(v_{1},v_{2}\right)=\int_{\mathbb{R}}\frac{F_{1}(x)F_{2}(x)}{\rho(x )}dx,\quad F_{i}\left(x\right)=\int_{-\infty}^{x}v_{i}\left(s\right)ds,i=1,2.\]
_Hence, for a 1D GMM eq. (4), the WIMs are provided as_
\[\left(G_{W}\left(\theta\right)\right)_{ij}=\int_{\mathbb{R}}\frac{\partial_{ \theta_{i}}F_{\theta}(x)\partial_{\theta_{j}}F_{\theta}(x)}{\rho_{\theta}(x)} dx=\int_{\mathbb{R}}\frac{\left(F_{i+1}(x)-F_{i}(x)\right)\left(F_{j+1}(x)-F_{j}(x) \right)}{\rho_{\theta}(x)}dx, \tag{3}\]
_where \(F_{i}\) is the cumulative distribution function of density \(\rho_{i}\)._
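The WIM of eq. (3) can be evaluated numerically for moderate \(\sigma\); the following sketch (our illustration, using direct quadrature over a truncated interval) indicates how, though for very small \(\sigma\) the integrand becomes sharply peaked and the quadrature needs care.

```python
import numpy as np
from scipy.stats import norm
from scipy.integrate import quad

def wasserstein_im(p, mu, sigma):
    """Numerical WIM of eq. (3) for a 1D GMM with weights p, means mu and a
    common standard deviation sigma (theta-parameterization, size (N-1)x(N-1))."""
    p, mu = np.asarray(p, float), np.asarray(mu, float)
    N = len(p)
    rho = lambda x: np.dot(p, norm.pdf(x, mu, sigma))
    F = lambda x, i: norm.cdf(x, mu[i], sigma)
    lo, hi = mu[0] - 8 * sigma, mu[-1] + 8 * sigma   # integrand is negligible outside
    G = np.zeros((N - 1, N - 1))
    for i in range(N - 1):
        for j in range(N - 1):
            f = lambda x, i=i, j=j: (F(x, i + 1) - F(x, i)) * (F(x, j + 1) - F(x, j)) / rho(x)
            G[i, j], _ = quad(f, lo, hi, limit=200)
    return G
```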
### The scaling limit of information matrices
Although the analytic formula for the WIM in 1D GMM exists, the integral eq. (3) is impossible to evaluate explicitly. Thus, we consider the following scaling limit.
**Definition 2** (Scaling limit of the information matrices in GMMs).: _Let \(\Theta(\sigma^{2})\subset\mathcal{P}(\mathbb{R})\) denote a family of GMMs indexed by the variance of Gaussian components, i.e., they share the same means for each component but different variance \(\sigma^{2}\):_
\[\theta\mapsto\rho(\cdot|\theta,\sigma^{2})=\sum_{i=1}^{N-1}\theta_{i}\left( \rho(\cdot|\mu_{i+1},\sigma^{2})-\rho(\cdot|\mu_{i},\sigma^{2})\right)+\rho( \cdot|\mu_{1},\sigma^{2}), \tag{4}\]
_where \(\rho(\cdot|\mu_{i},\sigma^{2})=\mathcal{N}(\mu_{i},\sigma^{2})\). Denote by \(G\left(\theta;\sigma\right)\) the information matrix associated with the model \(\Theta(\sigma^{2})\); in our context, this can be either the Fisher \(G_{F}\left(\theta;\sigma\right)\) or the Wasserstein \(G_{W}\left(\theta;\sigma\right)\) matrix. Consider the limit as \(\sigma^{2}\to 0\): if there exists a function \(K(\sigma)\) of \(\sigma\) such that the following limit exists,_
\[\lim_{\sigma\to 0}\frac{G\left(\theta;\sigma\right)}{K(\sigma)}=\widetilde{G} \left(\theta\right). \tag{5}\]
_We call the limit matrix \(\widetilde{G}\left(\theta\right)\) the scaling limit of the information matrices in GMMs, or briefly scaling information matrices._
There are two reasons to consider this scaling limit. The first one is that both the Fisher and the Wasserstein geometry possess a well-defined scaling limit, which is one of the main discoveries of this work. Another motivation is the following: as the standard deviation \(\sigma\) of a Gaussian \(\mathcal{N}\left(\mu,\sigma^{2}\right)\) tends to zero, the distribution converges weakly to a Dirac measure \(\delta_{\mu}\) centered at its mean \(\mu\). Specifically, Gaussian mixture models converge weakly to Dirac mixture models. Consequently, this limiting behavior of the information matrices characterizes the corresponding geometry on the Dirac mixture model, which can also be understood as a probability simplex or a discrete graphical model. If the limit exists, it is a candidate for the Wasserstein metric on the probability simplex, which is important in both the optimal transport and graph theory communities [9, 20, 25, 32]. In comparison, our scaling metric in Definition 2 differs from [25] in the sense that the metric \(\widetilde{G}\left(\theta\right)\) depends only on the structure of the graph, while theirs also depends on the transition kernels of jump processes. Moreover, a scaling limit of the Wasserstein geometry is also studied in [11], but in the opposite direction: while they explore the convergence of optimal transport on a grid to the continuous setting as the grid size tends to 0, we define optimal transport on graphs by approximating it via continuous models.
As a warm-up, we calculate the scaling limit of the Fisher-Rao metric, which is simple in the sense that one can directly set the scaling factor \(K(\sigma)\) to \(1\) to obtain the desired limit.
**Theorem 2**.: _For a 1D homogeneous GMM, the scaling limit of Fisher information matrices is given by_
\[\lim_{\sigma\to 0}\left(G_{F}\left(\theta;\sigma\right)\right)_{ij}=\begin{cases} \dfrac{1}{p_{i}}+\dfrac{1}{p_{i+1}},&i=j,\\ -\dfrac{1}{p_{i}},&i=j+1,\\ -\dfrac{1}{p_{i+1}},&j=i+1,\\ 0,&\text{otherwise,}\end{cases}\] (F-limit)
_or in matrix form_
\[G_{\widetilde{F}}\left(\theta\right)=\lim_{\sigma\to 0}G_{F}\left(\theta;\sigma\right)=\begin{pmatrix}\frac{1}{p_{1}}+\frac{1}{p_{2}}&-\frac{1}{p_{2}}&0&\cdots&0&0\\ -\frac{1}{p_{2}}&\frac{1}{p_{2}}+\frac{1}{p_{3}}&-\frac{1}{p_{3}}&\cdots&0&0\\ 0&-\frac{1}{p_{3}}&\frac{1}{p_{3}}+\frac{1}{p_{4}}&\cdots&0&0\\ \vdots&\vdots&\vdots&\ddots&\vdots&\vdots\\ 0&0&0&\cdots&-\frac{1}{p_{N-1}}&\frac{1}{p_{N-1}}+\frac{1}{p_{N}}\end{pmatrix}.\] (F-limit-matrix)
Proof.: Recall that for a 1D homogeneous GMM, the Fisher information matrix reduces to eq. (2). For simplification, we omit the dependence of \(\rho_{\theta}\left(x\right)\) on \(\sigma\). Readers should not be confused when we take the limit of the above quantity as \(\sigma\) tends to \(0\). Thus, it suffices to prove the following relation
\[\lim_{\sigma\to 0}\int\frac{\rho_{i}\left(x\right)\rho_{j}\left(x\right)}{\rho_{ \theta}\left(x\right)}dx=\frac{\delta_{ij}}{p_{i}},\]
where \(\delta_{ij}\) are Kronecker symbols. Since a sequence of Gaussians with the same mean \(\mu_{i}\) and variances shrinking to \(0\) converges weakly to the Dirac measure at \(\mu_{i}\), i.e.
\(\mathcal{N}\left(\mu_{i},\sigma\right)\overset{w}{\longrightarrow}\delta_{\mu_{i}}\). We have
\[\lim_{\sigma\to 0}\int\frac{\rho_{i}\left(x\right)\rho_{j} \left(x\right)}{\rho_{\theta}\left(x\right)}dx\] \[= \lim_{\sigma\to 0}\mathbb{E}_{X\sim\rho_{i}}\frac{\rho_{j} \left(X\right)}{\rho_{\theta}\left(X\right)}\] \[= \mathbb{E}_{X\sim\delta_{\mu_{i}}}\lim_{\sigma\to 0}\frac{\rho_{j} \left(X\right)}{\rho_{\theta}\left(X\right)}\] \[= \lim_{\sigma\to 0}\frac{\rho_{j}\left(\mu_{i}\right)}{\rho_{ \theta}\left(\mu_{i}\right)}=\frac{\delta_{ij}}{p_{i}}.\]
We may exchange the expectation and the limit since the integrand \(\frac{\rho_{j}\left(X\right)}{\rho_{\theta}\left(X\right)}\) is bounded from above by \(\frac{1}{p_{j}}\) and thus uniformly integrable.
**Remark 1**.: _Readers who are familiar with Fisher information geometry can realize that the scaling limit eq. (F-limit-matrix) obtained is exactly the Fisher information matrix on the probability simplex_
\[\circ-\circ-\circ-\cdots-\circ-\circ,\]
_under the \(\theta\)-parameterization eq. (4), except that the Gaussian components are replaced by Dirac measures on each node. In other words, the scaling limit of the Fisher-Rao metric on continuous models coincides with the Fisher geometry on discrete models. This identification in Fisher geometry implies that the definition via the scaling limit is at least canonical in Fisher geometry, and motivates us to study the counterpart in Wasserstein geometry in the next subsection._
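As a quick sanity check of eq. (F-limit-matrix), the tridiagonal limit matrix can be assembled directly from the weights \(p_{i}\) (a small sketch of ours):

```python
import numpy as np

def fisher_limit(p):
    """Tridiagonal scaling limit of the Fisher information matrix,
    eq. (F-limit-matrix): 1/p_i + 1/p_{i+1} on the diagonal, -1/p_{i+1} next to it."""
    inv = 1.0 / np.asarray(p, dtype=float)
    main = inv[:-1] + inv[1:]
    off = -inv[1:-1]
    return np.diag(main) + np.diag(off, 1) + np.diag(off, -1)
```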
### The scaling limit of Wasserstein metric
In this section, we study the scaling limits of WIMs \(G_{\widetilde{W}}\). We put this derivation into a general scope by considering not only the scaling behavior of GMMs, but also of all mixture models of the form \(\rho\left(x;\theta\right)=\sum_{i=1}^{N}p_{i}\rho\left(x-\mu_{i}\right)\) under the scaling \(\rho\left(x;\sigma\right)=\sigma\rho\left(\sigma x\right)\), where \(\rho(\cdot)\) is a probability density. Specifically, we will perform detailed calculations for the Gaussian and Laplace families and remark on general families.
Recall that for 1D models the Wasserstein inner product is given by eq. (3). However, in mixture models, the term \(\partial_{\theta_{i}}F_{\theta}\left(x\right)=F_{i+1}\left(x\right)-F_{i}\left(x\right)\) behaves as shown in Fig. 1: we plot the density function of a 2-mixture model and \(F_{2}\left(x\right)-F_{1}\left(x\right)\). It is easy to observe that on a large part of the interval \(\left[\mu_{1},\mu_{2}\right]\), the function \(\partial_{\theta_{i}}F_{\theta}\left(x\right)\) stays close to 1 while \(\rho_{\theta}\left(x\right)\) is relatively small, as \(\sigma\) tends to 0. Consequently, as the scaling parameter \(\sigma\) tends to 0, eq. (3) blows up, i.e. \(\lim_{\sigma\to 0}G_{W}\left(\theta;\sigma\right)\) does not exist. This indicates the importance of considering the scaling limit eq. (5) in Definition 2, and the primary goal becomes quantifying the scaling factor \(K(\sigma)\). The main result is as follows.
**Theorem 3**.: _For a 1D homogeneous GMM with difference between adjacent components given by \(d\), the scaling limit of WIMs is given by_
\[\lim_{\sigma\to 0}\frac{\left(G_{W}\left(\theta;\sigma\right)\right)_{ ij}}{K\left(\sigma\right)}=\frac{\delta_{ij}}{\sqrt{p_{i}p_{i+1}}},\quad K \left(\sigma\right)=\sqrt{2\pi^{3}}\frac{\sigma^{3}}{d}e^{\frac{d^{2}}{8\sigma^ {2}}}.\] (naive-W-limit)
_In matrix form,_
\[G_{\widetilde{W}}\left(\theta\right)=\lim_{\sigma\to 0}\frac{G_{W}\left(\theta;\sigma\right)}{K\left(\sigma\right)}=\begin{pmatrix}\frac{1}{\sqrt{p_{1}p_{2}}}&0&0&\cdots&0&0\\ 0&\frac{1}{\sqrt{p_{2}p_{3}}}&0&\cdots&0&0\\ 0&0&\frac{1}{\sqrt{p_{3}p_{4}}}&\cdots&0&0\\ \vdots&\vdots&\vdots&\ddots&\vdots&\vdots\\ 0&0&0&\cdots&0&\frac{1}{\sqrt{p_{N-1}p_{N}}}\end{pmatrix}.\] (naive-W-limit-matrix)
**Remark 2**.: _Comparing eq. (naive-W-limit-matrix) and eq. (F-limit-matrix), the key difference is that the scaling limit of the WIM requires an additional rescaling factor \(K(\sigma)\). The reason is that Fisher geometry is invariant under reparameterization (indeed, the Fisher-Rao metric can be characterized as the unique metric satisfying a kind of invariance under variable transformations), but Wasserstein geometry clearly does not satisfy the same property._
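For numerical experiments, the scaling limit of Theorem 3 amounts to the following two ingredients (a sketch of ours; for moderate \(\sigma\) one can compare \(G_{W}(\theta;\sigma)/K(\sigma)\), computed as in the earlier quadrature sketch, against this diagonal matrix, keeping in mind that the convergence in \(\sigma\) is slow):

```python
import numpy as np

def wasserstein_limit(p):
    """Diagonal scaling limit of the WIM, eq. (naive-W-limit-matrix)."""
    p = np.asarray(p, dtype=float)
    return np.diag(1.0 / np.sqrt(p[:-1] * p[1:]))

def K(sigma, d):
    """Scaling factor of Theorem 3: sqrt(2 pi^3) sigma^3 / d * exp(d^2 / (8 sigma^2))."""
    return np.sqrt(2 * np.pi**3) * sigma**3 / d * np.exp(d**2 / (8 * sigma**2))
```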
The rest of this section is devoted to the proof of Theorem 3, which contains a detailed analysis using Laplace's method. Before diving into the technical proofs, we provide a brief overview of the proof idea. We analyze the WIM via the following reduction
\[\left(G_{W}\left(\theta\right)\right)_{ii}=\int_{\mathbb{R}}\frac{\left(F_{i}-F _{i+1}\right)^{2}}{\rho_{\theta}}dx\stackrel{{\Delta_{1}}}{{ \Longrightarrow}}\int_{\mu_{i}}^{\mu_{i+1}}\frac{dx}{p_{i}\rho_{i}+p_{i+1} \rho_{i+1}}\stackrel{{\Delta_{2}}}{{\Longrightarrow}}\text{ Laplace's method}. \tag{6}\]
In the \(\Delta_{1}\) reduction, we restrict the integration domain from \(\mathbb{R}\) to \([\mu_{i},\mu_{i+1}]\), which contains the most significant part of the numerator \(F_{i}(x)-F_{i+1}(x)\), cf. Fig. 1. Meanwhile, we simplify the total density \(\rho_{\theta}\) to \(p_{i}\rho_{i}+p_{i+1}\rho_{i+1}\), its two nearest components, and \((F_{i}(x)-F_{i+1}(x))^{2}\) to \(1\). In the next reduction step \(\Delta_{2}\), we take the limit \(\sigma\to 0\) and deduce the asymptotic formula for the integral using a variant of Laplace's method [28].
The rigorous derivations of the \(\Delta_{1}\) and \(\Delta_{2}\) reductions are summarized in the following two propositions.
Figure 1. This figure plots an example of the function \(\partial_{\theta_{i}}F_{\theta}\left(x\right)\) for a GMM.
**Proposition 1** (\(\Delta_{1}\) reduction).: _Consider the WIM on the homogeneous GMM Assumption 1 with variance \(\sigma^{2}\) and gap \(d\). The non-diagonal term has the following upper bound:_
\[0\leq\left(G_{W}\right)_{ij}\leq\frac{3d\sigma^{2}M}{\min_{i}p_{i}}\left(1+ \frac{2\sigma^{2}}{3d^{2}}e^{-d^{2}/2\sigma^{2}}\right),\quad i\neq j, \tag{7}\]
_and the diagonal term can be bounded as_
\[\frac{\sigma^{4}M^{2}}{2\max\{p_{i},p_{i+1}\}}+\left(1-\sqrt{\frac{2\sigma}{ \pi}}e^{-1/2\sigma}-\frac{e^{-d^{2}/2\sigma^{2}}}{\min\{p_{i},p_{i+1}\}} \right)I_{i}\leq\left(G_{W}\right)_{ii}\leq\frac{\sigma^{4}M^{2}}{2\min\{p_{i},p_{i+1}\}}+I_{i}, \tag{8}\]
_with the integral \(I_{i}\) given by_
\[I_{i}=\int_{\mu_{i}}^{\mu_{i+1}}\frac{dx}{p_{i+1}\rho_{i+1}+p_{i}\rho_{i}}, \quad i\in[N-1]. \tag{9}\]
A glance at the bounds of the off-diagonal and diagonal terms indicates that the two classes of matrix elements have significantly different scales. While the off-diagonal terms tend to \(0\) with speed \(O(\sigma^{2})\) as \(\sigma\to 0\), the diagonal terms tend to \(\infty\) like the asymptotic integral \(I_{i}\), which will be shown to grow with order \(e^{d^{2}/8\sigma^{2}}\). In a word, the WIM rapidly becomes diagonally dominant as \(\sigma\to 0\), in line with the conclusion of Theorem 3.
To deal with the reduction \(\Delta_{2}\), we have the following variant of Laplace's method. To keep the presentation simple, we focus on two specific density families, namely Gaussian and Laplace, and obtain their asymptotic analysis. Meanwhile, we establish a slightly more general version which can also handle the generalized models of Section 3.
**Proposition 2** (\(\Delta_{2}\) reduction).: _We have the following asymptotics of the eq. (9)_
\[\begin{split}&\lim_{\sigma\to 0}\frac{\int_{0}^{d}\frac{dx}{p_{i} \rho\left(x;\sigma\right)+p_{i+1}\rho\left(d-x;k\sigma\right)}}{\frac{\sqrt{2 \pi}\sigma^{3}}{p_{i}l}e^{\frac{1}{2}\left(l/\sigma\right)^{2}}}=g\left(k \right),\quad\text{(Gaussian)}\\ &\lim_{\sigma\to 0}\frac{\int_{0}^{d}\frac{dx}{p_{i}\rho \left(x;\sigma\right)+p_{i+1}\rho\left(d-x;k\sigma\right)}}{\frac{2\sigma^{2} }{p_{i}}e^{l/\sigma}}=g\left(k\right),\quad\text{(Laplace)}\end{split} \tag{10}\]
_where \(g\left(k\right)=\int_{0}^{\infty}\frac{dy}{1+y^{\left(k+1\right)/k}}\) and the parameter \(l\) satisfies the following matching condition_
\[p_{i}\rho\left(l;\sigma\right)=p_{i+1}\rho\left(d-l;k\sigma\right). \tag{11}\]
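The constant \(g(k)=\int_{0}^{\infty}dy/(1+y^{(k+1)/k})\) is easy to evaluate numerically; in particular \(g(1)=\pi/2\), the value used in Eqs. (12) and (14). A short check (our sketch):

```python
import numpy as np
from scipy.integrate import quad

def g(k):
    """g(k) = int_0^inf dy / (1 + y^{(k+1)/k}) from Proposition 2."""
    val, _ = quad(lambda y: 1.0 / (1.0 + y ** ((k + 1.0) / k)), 0, np.inf)
    return val

assert np.isclose(g(1.0), np.pi / 2)   # the value used in Eqs. (12) and (14)
```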
Proof of Theorem 3.: It suffices to prove the result eq. (naive-W-limit) elementwise. For the off-diagonal terms \(\left(G_{W}\right)_{ij},i\neq j\), by Proposition 1, we conclude
\[\lim_{\sigma\to 0}\frac{\left(G_{W}\right)_{ij}}{K\left(\sigma\right)}\leq \lim_{\sigma\to 0}\frac{\frac{3d\sigma^{2}M}{\min_{i}p_{i}}\left(1+\frac{2 \sigma^{2}}{3d^{2}}e^{-d^{2}/2\sigma^{2}}\right)}{K\left(\sigma\right)}=0.\]
For diagonal terms, similar reduction via Proposition 1 deduces
\[\lim_{\sigma\to 0}\frac{\left(G_{W}\right)_{ii}}{K\left(\sigma\right)}=\lim_{ \sigma\to 0}\frac{\int_{\mu_{i}}^{\mu_{i+1}}\frac{dx}{\frac{p_{i}\rho_{i}+p_{i+1} \rho_{i+1}}{K\left(\sigma\right)}}}{K\left(\sigma\right)}.\]
Applying Proposition 2 to the case \(k=1\), we obtain
\[\lim_{\sigma\to 0}\frac{\int_{\mu_{i}}^{\mu_{i+1}}\frac{dx}{p_{i}\rho_{i}+p_{i+1} \rho_{i+1}}}{\frac{\sqrt{2\pi}\sigma^{3}}{p_{i}l}e^{\frac{1}{2}\left(l/\sigma \right)^{2}}}=g\left(1\right)=\frac{\pi}{2}, \tag{12}\]
where \(l\) is given by eq. (32). Using this, we obtain
\[\frac{\sqrt{2\pi}\sigma^{3}}{p_{i}l}e^{\frac{1}{2}\left(l/\sigma\right)^{2}}= \frac{2\sqrt{2\pi}\sigma^{3}}{p_{i}d}\sqrt{\frac{p_{i}}{p_{i+1}}}e^{\frac{1}{2 }\left(d/2\sigma\right)^{2}}+O(\sigma^{2}). \tag{13}\]
Combining with previous results we conclude the proof.
**Remark 3** (Scaling limit of WIMs in Laplace mixture models).: _For the Laplace family, the general workflow remains the same. Although the estimate for the reduction \(\Delta_{1}\) is not proved for this case (Proposition 1 is only for GMMs), we note that corresponding bounds can be obtained for this family without much effort. Applying Proposition 2 to the case \(k=1\), we obtain_
\[\lim_{\sigma\to 0}\frac{\int_{\mu_{i}}^{\mu_{i+1}}\frac{dx}{p_{i}\rho_{i}+p_{i+1} \rho_{i+1}}}{\frac{2\sigma^{2}}{p_{i}}e^{l/\sigma}}=g\left(1\right)=\frac{\pi }{2}, \tag{14}\]
_where \(l\) is given as follows using the matching condition eq. (11)._
\[l=\frac{d}{k+1}+\frac{k\sigma}{k+1}\log\frac{kp_{i}}{p_{i+1}}. \tag{15}\]
_Via elementary calculation, we derive_
\[\frac{2\sigma^{2}}{p_{i}}e^{l/\sigma}=\frac{2\sigma^{2}}{p_{i}}\left(\frac{p_ {i}k}{p_{i+1}}\right)^{k/(k+1)}e^{\frac{d}{(k+1)\sigma}}=\frac{2\sigma^{2}}{ \sqrt{p_{i}p_{i+1}}}e^{\frac{d}{2\sigma}}.\]
_By choosing \(K\left(\sigma\right)=\pi\sigma^{2}e^{\frac{d}{2\sigma}}\), we conclude that the scaling limit of \(G_{W}\left(\theta;\sigma\right)\) in Laplace mixture models is given by_
\[\lim_{\sigma\to 0}\frac{\left(G_{W}\left(\theta;\sigma\right)\right)_{ii}}{\pi \sigma^{2}e^{\frac{d}{2\sigma}}}=\frac{1}{\sqrt{p_{i}p_{i+1}}},\]
_which coincides with eq. (naive-W-limit-matrix) obtained from GMMs._
Proof of Proposition 1.: Throughout the proof we will use the following basic relation between the probability density function and the cumulative distribution function \(\Phi(x)\) of the standard Gaussian distribution:
\[\Phi(-x),1-\Phi(x)\leq\frac{1}{\sqrt{2\pi}x}e^{-x^{2}/2},\quad\forall x>0. \tag{16}\]
Specifically, we obtain a constant without dependence on \(x\) by dividing into two cases \(|x|\leq 1,|x|>1\). We have
\[\begin{split}\frac{\max\{\Phi(-x),1-\Phi(x)\}}{\rho(x)}& <1,\quad x>1,\\ \frac{\max\{\Phi(-x),1-\Phi(x)\}}{\rho(x)}&\leq \frac{\frac{1}{2}}{\frac{1}{\sqrt{2\pi}}e^{-1^{2}/2}}=\frac{\sqrt{2\pi e}}{2}: =M,\quad 0<x<1.\end{split} \tag{17}\]
This concludes that the constant \(M\) can bound the LHS over \(\mathbb{R}\). For general Gaussian with zero mean and \(\sigma^{2}\) variance, we have for \(x<0\)
\[\Phi(x;\sigma)=\Phi(x/\sigma)\leq\frac{1}{\sqrt{2\pi}}\frac{\sigma}{|x|}e^{-x^{2 }/2\sigma^{2}}, \tag{18}\]
which derives the following bound over \(\mathbb{R}\):
\[\frac{\max\{\Phi(-x;\sigma),1-\Phi(x;\sigma)\}}{\rho(x;\sigma)}<\sigma^{2}M, \quad x\geq 0. \tag{19}\]
Now we can bound the matrix elements of the WIM using this inequality.
_Bound the off-diagonal terms_: The integrand of the off-diagonal term has the following form:
\[g_{ij}(x):=\frac{\left(F_{i}(x)-F_{i+1}(x)\right)\left(F_{j}(x)-F_{j+1}(x) \right)}{\rho_{\theta}(x)},\quad i<j. \tag{20}\]
Using the fact that
\[\begin{split}&|F_{i}(x)-F_{i+1}(x)|<1\quad x\in\mathbb{R},\quad |F_{i}(x)-F_{i+1}(x)|<1-F_{i+1}(x)\leq\sigma^{2}M\rho_{i+1}(x),\quad x\geq\mu_ {i+1},\\ &|F_{j}(x)-F_{j+1}(x)|<1,\quad x\in\mathbb{R},\quad|F_{j}(x)-F_{ j+1}(x)|<F_{j}(x)\leq\sigma^{2}M\rho_{j}(x),\quad x\leq\mu_{j},\end{split} \tag{21}\]
we have
\[g_{ij}(x)\leq\left\{\begin{aligned} &|F_{i}(x)-F_{i+1}(x)|\,\frac{|F_{j}(x)-F_{j+1}(x)|} {\rho_{\theta}(x)}\leq\frac{|F_{j}(x)-F_{j+1}(x)|}{p_{j}\rho_{j}(x)}=\frac{ \sigma^{2}M}{p_{j}},\quad x\leq\mu_{j},\\ &|F_{j}(x)-F_{j+1}(x)|\,\frac{|F_{i}(x)-F_{i+1}(x)|}{\rho_{\theta }(x)}\leq\frac{|F_{i}(x)-F_{i+1}(x)|}{p_{i+1}\rho_{i+1}(x)}=\frac{\sigma^{2}M }{p_{i+1}},\quad x\geq\mu_{i+1}.\end{aligned}\right. \tag{22}\]
Notice that \(i<j\) implies \(\mu_{j}\geq\mu_{i+1}\), so the two intervals \((-\infty,\mu_{j}]\) and \([\mu_{i+1},\infty)\) cover \(\mathbb{R}\), which provides a global bound on the integrand \(g_{ij}(x)\). Decomposing the integration domain as follows
\[\begin{split}\int_{-\infty}^{\infty}g_{ij}(x)dx=& \int_{-\infty}^{\mu_{i-1}}g_{ij}(x)dx+\int_{\mu_{i-1}}^{\mu_{j+2}}g_{ij}(x)dx +\int_{\mu_{j+2}}^{\infty}g_{ij}(x)dx\\ \leq&\int_{-\infty}^{\mu_{i-1}}\frac{F_{i}(x)}{p_{i- 1}\rho_{i-1}(x)}dx+\int_{\mu_{j+2}}^{\infty}\frac{1-F_{j+1}(x)}{p_{j+2}\rho_ {j+2}(x)}dx+\int_{\mu_{i-1}}^{\mu_{j+2}}g_{ij}(x)dx\\ \leq&\frac{\sigma^{4}M}{dp_{i-1}}e^{-d^{2}/2\sigma^{ 2}}+\frac{\sigma^{4}M}{dp_{j+2}}e^{-d^{2}/2\sigma^{2}}+\frac{3\sigma^{2}dM}{ \min\{p_{j},p_{i+1}\}},\end{split} \tag{23}\]
where we use the estimate
\[\int_{-\infty}^{\mu_{i-1}}\frac{F_{i}(x)}{p_{i-1}\rho_{i-1}(x)}dx\leq\frac{ \sigma^{2}M}{p_{i-1}}\int_{-\infty}^{\mu_{i-1}}\frac{e^{-(x-\mu_{i})^{2}/2 \sigma^{2}}}{e^{-(x-\mu_{i-1})^{2}/2\sigma^{2}}}dx=\frac{\sigma^{4}M}{dp_{i-1 }}e^{-d^{2}/2\sigma^{2}}. \tag{24}\]
_Bound the diagonal terms_: Focusing on eq. (6), our goal is to reduce the complicated integral over \(\mathbb{R}\) to a much simpler integral over the interval \([\mu_{i},\mu_{i+1}]\). We achieve this goal via
three steps: first, we restrict the integration domain to \(\left[\mu_{i},\mu_{i+1}\right]\) as follows
\[\int_{\mathbb{R}}\frac{\left(F_{i}-F_{i+1}\right)^{2}}{\rho_{\theta }}dx-\int_{\mu_{i}}^{\mu_{i+1}}\frac{\left(F_{i}-F_{i+1}\right)^{2}}{\rho_{ \theta}}dx\] \[= \int_{-\infty}^{\mu_{i}}\frac{\left(F_{i}-F_{i+1}\right)^{2}}{\rho _{\theta}}dx+\int_{\mu_{i+1}}^{\infty}\frac{\left(F_{i}-F_{i+1}\right)^{2}}{ \rho_{\theta}}dx \tag{25}\] \[\leq \int_{-\infty}^{\mu_{i}}\frac{\sigma^{4}M^{2}\rho_{i}^{2}(x)}{p_ {i}\rho_{i}(x)}dx+\int_{\mu_{i+1}}^{\infty}\frac{\sigma^{4}M^{2}\rho_{i+1}^{2} (x)}{p_{i+1}\rho_{i+1}(x)}dx=\sigma^{4}M^{2}\left(\frac{1}{2p_{i}}+\frac{1}{2p _{i+1}}\right).\]
Secondly, we simplify the denominator as
\[\frac{1}{M_{1}}\int_{\mu_{i}}^{\mu_{i+1}}\frac{\left(F_{i}-F_{i+1}\right)^{2} }{p_{i}\rho_{i}+p_{i+1}\rho_{i+1}}dx\leq\int_{\mu_{i}}^{\mu_{i+1}}\frac{\left( F_{i}-F_{i+1}\right)^{2}}{\rho_{\theta}}dx\leq\int_{\mu_{i}}^{\mu_{i+1}}\frac{ \left(F_{i}-F_{i+1}\right)^{2}}{p_{i}\rho_{i}+p_{i+1}\rho_{i+1}}dx, \tag{26}\]
with
\[M_{1}=\max_{x\in\left[\mu_{i},\mu_{i+1}\right]}\frac{\rho_{\theta}(x)}{p_{i} \rho_{i}(x)+p_{i+1}\rho_{i+1}(x)}\leq 1+\frac{e^{-d^{2}/2\sigma^{2}}}{\min\{p_{i},p_{ i+1}\}}. \tag{27}\]
Lastly, we deal with the numerator:
\[\int_{\mu_{i}}^{\mu_{i+1}}\frac{\left(F_{i}-F_{i+1}\right)^{2}}{p _{i}\rho_{i}+p_{i+1}\rho_{i+1}}dx-\int_{\mu_{i}+\sqrt{\sigma}}^{\mu_{i+1}- \sqrt{\sigma}}\frac{\left(F_{i}-F_{i+1}\right)^{2}}{p_{i}\rho_{i}+p_{i+1}\rho _{i+1}}dx\] \[= \int_{\mu_{i}}^{\mu_{i}+\sqrt{\sigma}}\frac{\left(F_{i}-F_{i+1} \right)^{2}}{p_{i}\rho_{i}+p_{i+1}\rho_{i+1}}dx+\int_{\mu_{i+1}-\sqrt{\sigma}} ^{\mu_{i+1}}\frac{\left(F_{i}-F_{i+1}\right)^{2}}{p_{i}\rho_{i}+p_{i+1}\rho_{ i+1}}dx \tag{28}\] \[\leq \frac{\sqrt{\sigma}}{\frac{p_{i}}{\sqrt{2\pi\sigma}}e^{-\left( \sqrt{\sigma}\right)^{2}/2\sigma^{2}}}+\frac{\sqrt{\sigma}}{\frac{p_{i+1}}{ \sqrt{2\pi\sigma}}e^{-\left(\sqrt{\sigma}\right)^{2}/2\sigma^{2}}}=\left( \frac{1}{p_{i}}+\frac{1}{p_{i+1}}\right)\sqrt{2\pi\sigma^{3}}e^{1/2\sigma}.\]
In the interval \(\left[\mu_{i}+\sqrt{\sigma},\mu_{i+1}-\sqrt{\sigma}\right]\), we have following estimation:
\[1-\sqrt{\frac{2\sigma}{\pi}}e^{-1/2\sigma}\leq(1-\sqrt{\frac{\sigma}{2\pi}}e^ {-1/2\sigma})^{2}\leq\left(\Phi(\sqrt{\sigma}/\sigma)-\Phi(-\sqrt{\sigma}/ \sigma)\right)^{2}=\left(F_{i}(x)-F_{i+1}(x)\right)^{2}\leq 1, \tag{29}\]
which translates to
\[\left(1-\sqrt{\frac{2\sigma}{\pi}}e^{-1/2\sigma}\right)\int_{\mu_{i}+\sqrt{ \sigma}}^{\mu_{i+1}-\sqrt{\sigma}}\frac{\left(F_{i}-F_{i+1}\right)^{2}}{p_{i} \rho_{i}+p_{i+1}\rho_{i+1}}dx\leq\int_{\mu_{i}+\sqrt{\sigma}}^{\mu_{i+1}- \sqrt{\sigma}}\frac{\left(F_{i}-F_{i+1}\right)^{2}}{p_{i}\rho_{i}+p_{i+1}\rho _{i+1}}dx. \tag{30}\]
Now, combining eq. (25), eq. (26), and eq. (30) together we conclude the bound of the diagonal term.
Proof of Proposition 2.: The proof technique is a variant of the Laplace's method in asymptotic analysis. We first prove the Laplace case as follows
\[\begin{split}&\lim_{\sigma\to 0}2\sigma\int_{0}^{d}\frac{dx}{p_{i}e^{-x/ \sigma}+\frac{p_{i+1}}{k}e^{-(d-x)/k\sigma}}\\ =&\lim_{\sigma\to 0}2\sigma\int_{-l}^{d-l}\frac{dx}{p_{i}e^{-(x +l)/\sigma}+\frac{p_{i+1}}{k}e^{-(d-l-x)/k\sigma}}\\ =&\lim_{\sigma\to 0}2\sigma^{2}\int_{-\infty}^{\infty} \frac{du}{p_{i}e^{-u-l/\sigma}+\frac{p_{i+1}}{k}e^{u/k-(d-l)/k\sigma}}\\ =&\lim_{\sigma\to 0}\frac{2\sigma^{2}}{p_{i}}e^{l/ \sigma}\int_{-\infty}^{\infty}\frac{du}{e^{-u}+e^{u/k}}\\ =&\lim_{\sigma\to 0}\frac{2\sigma^{2}}{p_{i}}e^{l/ \sigma}\int_{-\infty}^{\infty}\frac{e^{u}du}{1+e^{u(k+1)/k}}\\ =&\lim_{\sigma\to 0}\frac{2\sigma^{2}}{p_{i}}e^{l/ \sigma}\int_{0}^{\infty}\frac{dy}{1+y^{(k+1)/k}}.\end{split} \tag{31}\]
In the first, the second, and the last deduction, we use the changes of variables \(x\to x+l\), \(x\to u=\frac{x}{\sigma}\), and \(u\to y=e^{u}\), respectively. The first approximation we use is changing the integration domain from \([-l,d-l]\) to \((-\infty,\infty)\), which can be absorbed into an \(O(1)\) factor. Compared with the main part, this is sufficiently small to be ignored.
Next, we prove the asymptotic formula in the Gaussian case following the same derivation as the Laplace case:
\[\begin{split}&\lim_{\sigma\to 0}\sqrt{2\pi}\sigma\int_{0}^{d} \frac{dx}{p_{i}e^{-x^{2}/2\sigma^{2}}+\frac{p_{i+1}}{k}e^{-(d-x)^{2}/2(k\sigma )^{2}}}\\ =&\lim_{\sigma\to 0}\sqrt{2\pi}\sigma\int_{-l}^{d-l} \frac{dx}{p_{i}e^{-(x+l)^{2}/2\sigma^{2}}+\frac{p_{i+1}}{k}e^{-(d-l-x)^{2}/2(k \sigma)^{2}}}\\ =&\lim_{\sigma\to 0}\sqrt{2\pi}\sigma^{2}\int_{- \infty}^{\infty}\frac{du}{p_{i}e^{-(u+l/\sigma)^{2}/2}+\frac{p_{i+1}}{k}e^{-(( d-l)/\sigma-u)^{2}/2k^{2}}}\\ =&\lim_{\sigma\to 0}\frac{\sqrt{2\pi}\sigma^{2}}{p_{i}}e^{ \frac{1}{2}(l/\sigma)^{2}}\int_{-\infty}^{\infty}\frac{du}{e^{-u^{2}/2-ul/ \sigma}+e^{u(d-l)/(k^{2}\sigma)-(u/k)^{2}/2}}\\ =&\lim_{\sigma\to 0}\frac{\sqrt{2\pi}\sigma^{2}}{p_{i}}e^{ \frac{1}{2}(l/\sigma)^{2}}\int_{-\infty}^{\infty}\frac{du}{e^{-ul/\sigma}+e^{u (d-l)/(k^{2}\sigma)}}\\ =&\lim_{\sigma\to 0}\frac{\sqrt{2\pi}\sigma^{2}}{p_{i}}e^{ \frac{1}{2}(l/\sigma)^{2}}\int_{-\infty}^{\infty}\frac{e^{ul/\sigma}du}{1+e^{u (d-l)/(k^{2}\sigma)+ul/\sigma}}\\ =&\lim_{\sigma\to 0}\frac{\sqrt{2\pi}\sigma^{3}}{p_{i}l}e^{ \frac{1}{2}(l/\sigma)^{2}}\int_{0}^{\infty}\frac{dy}{1+y^{1+(d-l)/k^{2}l}}\\ =&\lim_{\sigma\to 0}\frac{\sqrt{2\pi}\sigma^{3}}{p_{i}l}e^{ \frac{1}{2}(l/\sigma)^{2}}\int_{0}^{\infty}\frac{dy}{1+y^{(k+1)/k}}\end{split}\]
Most of the derivation follows exactly as in the Laplace case. There are two different steps: one is the fourth step, where we drop the quadratic terms \(-u^{2}/2\) and \(-\left(u/k\right)^{2}/2\), which
do not depend on the large variable \(l/\sigma\). This is a standard technique for reducing asymptotic integrals, cf. [28]. Later, we will use a perturbation analysis of this term in Lemma 1 to derive the higher-order scaling Wasserstein metric in Theorem 5. Moreover, in the last step, we use the fact that the integral converges uniformly over \(t\) together with the following limit:
\[\lim_{t\to\infty}\frac{d-l}{k^{2}l}=\lim_{\sigma\to 0}\frac{d-l}{k^{2}l}=\frac{1}{k},\]
which is derived using eq. (11) to obtain
\[l=\frac{d}{k+1}+\frac{k\sigma^{2}}{d}\log\frac{kp_{i}}{p_{i+1}}+o\left(\sigma ^{2}\right). \tag{32}\]
This is the second difference from the proof for the Laplace mixture model and will also be studied in Lemma 1 for the higher-order metric.
## 3. Generalizations of the scaling Wasserstein information matrices
The derivation of the scaling limit in Section 2 relies on the restrictive Assumption 1 and other constraints. It can only be applied to the 1D case where the gaps between adjacent components are all the same, which does not hold in most cases. In this section, we study several generalizations of this model in separate subsections.
### Inhomogeneous Gap
In this section, we first consider the case where the gaps between adjacent means are not all the same, i.e., the \(\mu_{i+1}-\mu_{i}=d_{i}\) are not constant. We call these models 1D inhomogeneous GMMs.
We first argue why the method for the homogeneous case does not generalize to the current setting: if we apply the asymptotic formula from the homogeneous case, the factors \(K_{i}\left(\sigma\right)=\sqrt{2\pi^{3}}\frac{\sigma^{3}}{d_{i}}e^{\frac{1}{2}\left(\frac{d_{i}}{2\sigma}\right)^{2}}\) appearing in the scaling limits of \(\left(G_{W}\left(\theta;\sigma\right)\right)_{i,i}\) are not the same for all components. Consequently, different elements of the WIM have different scaling factors in the limit \(\sigma\to 0\), and there does not exist a scaling factor \(K(\sigma)\) such that eq. (naive-W-limit-matrix) holds with a non-trivial limit. Therefore, we consider GMMs in which each component's variance has a different scaling behavior, i.e., the \(\sigma_{i}\) are not the same but are related by certain quantities. We will show that the WIMs of such models behave well under this scaling limit. The corresponding scaling limit is stated below.
**Theorem 4**.: _For an 1D inhomogeneous GMM with the gaps, variances given by \(\mu_{i+1}-\mu_{i}=d_{i},\sigma_{i}\) respectively, suppose the following relation holds:_
\[\frac{d_{1}}{s_{1}+s_{2}}=\frac{d_{2}}{s_{2}+s_{3}}=\cdots=\frac{d_{N-1}}{s_{ N-1}+s_{N}}=d,\quad\sigma_{i}=s_{i}\sigma\quad\forall i\in[N]. \tag{33}\]
_The scaling limit of WIMs is given by_
\[\lim_{\sigma\to 0}\frac{\left(G_{W}\left(\theta;\sigma\right)\right)_{ij}}{K_{ in}\left(\sigma\right)}=\delta_{ij}g\left(s_{i+1}/s_{i}\right)\left(\frac{s_{i+1}}{p _{i+1}}\right)^{\frac{s_{i+1}}{s_{i+1}+s_{i}}}\left(\frac{s_{i}}{p_{i}}\right) ^{\frac{s_{i}}{s_{i+1}+s_{i}}}s_{i},\qquad\qquad\text{(in-W-limit)}\]
_with_
\[K_{in}\left(\sigma\right)=\frac{\sqrt{2\pi}\sigma^{3}}{d}e^{\frac{1}{2}\left( \frac{d}{\sigma}\right)^{2}},\]
It is simple to check that the inhomogeneous scaling limit eq. (in-W-limit) contains the homogeneous scaling limit eq. (naive-W-limit) as a special case. Suppose Assumption 1 holds; then a solution of the matching condition eq. (33) is given by \(s_{i}=k_{i}=1,\sigma_{i}=\sigma,d_{i}=2d,\ \forall i\in[N]\), and we have
\[\lim_{\sigma\to 0}\frac{\left(G_{W}\left(\theta;\sigma\right)\right)_{ii}}{K_{ in}\left(\sigma\right)}=g\left(1\right)\left(\frac{1}{p_{i+1}}\right)^{ \frac{1}{2}}\left(\frac{1}{p_{i}}\right)^{\frac{1}{2}}\rightarrow\frac{\pi/2} {\sqrt{p_{i}p_{i+1}}}.\]
Comparing with eq. (naive-W-limit), the additional \(\pi/2\) factor comes from our different definitions of \(K(\sigma)\) and \(K_{in}(\sigma)\). The calculation of the scaling limit requires the full strength of Proposition 2 to deal with inhomogeneous gaps.
Proof of Theorem 4.: Using the same argument as in the proof of Theorem 3, we can focus on the diagonal terms of the WIM and consider the integral \(\int_{0}^{d_{i}}\frac{dx}{p_{i}\rho(x;\sigma_{i})+p_{i+1}\rho(d_{i}-x;\sigma_{i+1})}\). Next, we use Proposition 2 with \(\sigma=\sigma_{i},l=l_{i},k=k_{i}=s_{i+1}/s_{i}\) to conclude that
\[\lim_{\sigma\to 0}\frac{\int_{\mu_{i}}^{\mu_{i+1}}\frac{dx}{p_{i}\rho(x- \mu_{i};\sigma_{i})+p_{i+1}\rho(x-\mu_{i+1};\sigma_{i+1})}}{\frac{\sqrt{2\pi} \sigma_{i}^{3}}{p_{i}l_{i}}e^{\frac{1}{2}\left(l_{i}/\sigma_{i}\right)^{2}}}=g \left(k_{i}\right), \tag{34}\]
where \(l_{i}\) is given by eq. (32). Using eq. (32) and eq. (33), we obtain
\[\frac{\sqrt{2\pi}\sigma_{i}^{3}}{p_{i}l_{i}}e^{\frac{1}{2}\left(l_{i}/\sigma_ {i}\right)^{2}}=\left(\frac{\sqrt{2\pi}s_{i}^{2}\sigma^{3}}{p_{i}d}+O(\sigma^ {5})\right)\left(\frac{s_{i+1}p_{i}}{s_{i}p_{i+1}}\right)^{\frac{s_{i+1}}{s_{ i}+s_{i+1}}}e^{\frac{1}{2}\left(d/\sigma\right)^{2}}. \tag{35}\]
Combining with previous results we conclude the proof.
We use the following example to illustrate the geometric structure of the scaling Wasserstein metric on inhomogeneous GMMs.
**Example 3** (2-d inhomogeneous GMM).: _We consider here a 3-component inhomogeneous GMM with \(\mu_{1}=-1,\mu_{2}=0,\mu_{3}=2\), i.e. \(d_{1}=1\neq 2=d_{2}\). Suppose first that we choose the same variance for all the components, i.e. each component is \(\mathcal{N}(\mu_{i},\sigma)\). Then, following the discussion of homogeneous models, the scaling WIM is given by_
\[\lim_{\sigma\to 0}\frac{\left(G_{W}\left(\theta;\sigma \right)\right)_{11}}{\sqrt{2\pi^{3}}\sigma^{3}/d_{1}e^{\frac{1}{2}\left(\frac{ d_{1}}{2\sigma}\right)^{2}}}=\frac{1}{\sqrt{p_{1}p_{2}}},\] \[\lim_{\sigma\to 0}\frac{\left(G_{W}\left(\theta;\sigma \right)\right)_{22}}{\sqrt{2\pi^{3}}\sigma^{3}/d_{2}e^{\frac{1}{2}\left(\frac {d_{2}}{2\sigma}\right)^{2}}}=\frac{1}{\sqrt{p_{2}p_{3}}}.\]
_However, plugging \(d_{1}=1,d_{2}=2\) into the scaling limits of \(\left(G_{W}\left(\theta;\sigma\right)\right)_{11},\left(G_{W}\left(\theta;\sigma\right)\right)_{22}\) shows that they diverge at different speeds \(e^{\frac{1}{2}\left(\frac{d_{1}}{2\sigma}\right)^{2}},e^{\frac{1}{2}\left(\frac{d_{2}}{2\sigma}\right)^{2}}\) as \(\sigma\to 0\). Consequently, we cannot find a common scaling factor \(K\left(\sigma\right)\) to normalize the metric in the homogeneous sense, c.f. eq. (5). Therefore, we need to choose different variance scalings so that a common scaling factor exists. To start with, we choose the variances of the first and the second components to be the same, namely_
\[\sigma_{1}=\sigma_{2}=\sigma.\]
_Then we know that the matrix element \(\left(G_{W}\left(\theta;\sigma\right)\right)_{11}\) has the following scaling limit_
\[\lim_{\sigma\to 0}\frac{\left(G_{W}\left(\theta;\sigma\right)\right)_{11}}{\frac{\sqrt{2\pi^{3}}\sigma^{3}}{d_{1}}e^{\frac{1}{2}\left(\frac{d_{1}}{2\sigma}\right)^{2}}}=\frac{1}{\sqrt{p_{1}p_{2}}}.\]
_Next, we need to choose a specific \(\sigma_{3}\) such that the matrix element \(\left(G_{W}\left(\theta;\sigma\right)\right)_{22}\) has the same scaling factor. Using the conclusion for inhomogeneous models, Theorem 4, we conclude that_
\[\lim_{\sigma\to 0}\frac{\left(G_{W}\left(\theta;\sigma\right)\right)_{22}}{ \frac{\sqrt{2\pi}\sigma^{3}\left(\frac{\sigma_{3}+\sigma}{\sigma}\right)}{d_ {2}}e^{\frac{1}{2}\left(\frac{d_{2}}{\sigma+\sigma_{3}}\right)^{2}}}=\frac{ \left(\frac{\sigma_{3}}{\sigma}\right)^{\frac{\sigma_{3}}{\sigma+\sigma_{3}} }}{p_{2}^{\frac{\sigma}{\sigma+\sigma_{3}}}p_{3}^{\frac{\sigma_{3}}{\sigma+ \sigma_{3}}}}\left(g\left(\frac{\sigma_{3}}{\sigma}\right)+\frac{\sigma_{3}}{ \sigma}f\left(\frac{\sigma_{3}}{\sigma}\right)\right).\]
_Thus we require_
\[\frac{d_{2}}{\sigma+\sigma_{3}}=\frac{d_{1}}{2\sigma},\]
_which is exactly the matching condition eq. (33) we stated before. With \(d_{2}=2d_{1}\), we have_
\[\sigma_{3}=3\sigma.\]
_The scaling limit of the metric inner product can be further given as_
\[\lim_{\sigma\to 0}\frac{\left(G_{W}\left(\theta;\sigma\right) \right)_{11}}{2\sqrt{2\pi}\sigma^{3}e^{1/8\sigma^{2}}}=\frac{\pi}{2\sqrt{p_{1 }p_{2}}},\] \[\lim_{\sigma\to 0}\frac{\left(G_{W}\left(\theta;\sigma\right) \right)_{22}}{2\sqrt{2\pi}\sigma^{3}e^{1/8\sigma^{2}}}=\frac{3^{\frac{3}{4}} }{p_{2}^{\frac{1}{4}}p_{3}^{\frac{3}{4}}}g(3).\]
_Combining all the results at hand, we conclude that the limit metric for this GMM is given by_
\[G_{\widetilde{W}}^{\left(in\right)}\left(\theta\right)=\lim_{\sigma\to 0} \frac{G_{W}\left(\theta;\sigma\right)}{2\sqrt{2\pi}\sigma^{3}e^{1/8\sigma^{2}} }=\begin{pmatrix}\frac{\pi}{2\sqrt{p_{1}p_{2}}}&0\\ 0&\frac{3^{\frac{3}{4}}}{p_{2}^{\frac{1}{4}}p_{3}^{\frac{3}{4}}}g(3)\end{pmatrix}.\]
### A Second-order Metric
In deriving the scaling limit eq. (naive-W-limit), we take the limit \(\sigma\to 0\) and analyze the leading-order term of the integral which defines the metric. In this subsection, we present a more accurate analysis which also addresses the second-order term in the metric integral. We use this information to form the second extended model, i.e. the second-order metric.
More intuitively speaking, we expand the metric \(G\left(\theta,\sigma\right)\) w.r.t. \(\sigma\), i.e.
\[G\left(\theta,\sigma\right)=G\left(\theta\right)+\frac{\partial G\left(\theta, \sigma\right)}{\partial\sigma}g\left(\sigma\right)+o\left(g\left(\sigma\right) \right).\]
Theorem 3 in the first section can be viewed as calculating \(\lim_{\sigma\to 0}G\left(\theta,\sigma\right)=G\left(\theta\right)\). In this section, we try to determine \(\frac{\partial G\left(\theta,\sigma\right)}{\partial\sigma}\), which should be formally understood as a "derivative" of \(G\) w.r.t. \(\sigma\); here \(g\left(\sigma\right)\) denotes the order of the first expansion term. Notice that the above formula is merely an illustration of the main idea rather than the rigorous formulation, which is deduced below. As before, we first state the result here.
**Theorem 5**.: _With the same homogeneous setting and scaling factor \(K(\sigma)=\sqrt{2\pi^{3}}\frac{\sigma^{3}}{d}e^{\frac{d^{2}}{8\sigma^{2}}}\) as in Theorem 3, the scaling WIM of a 1D GMM has the following expansion_
\[\begin{split}\frac{G_{\widetilde{W}}^{(2)}\left(\theta;\sigma \right)}{K\left(\sigma\right)}=&\begin{pmatrix}\frac{1+\left(\frac {\pi^{2}}{2}+\frac{4}{\pi}\log\frac{p_{1}}{p_{2}}g^{\prime}(1)+\frac{2}{\pi} \log^{2}\frac{p_{1}}{p_{2}}\right)\frac{\sigma^{2}}{d^{2}}}{\sqrt{p_{1}p_{2}}} &\dots&0\\ 0&\dots&0\\ \vdots&\ddots&\vdots\\ 0&\dots&\frac{1+\left(\frac{\pi^{2}}{2}+\frac{4}{\pi}\log\frac{p_{N-1}}{p_{N }}g^{\prime}(1)+\frac{2}{\pi}\log^{2}\frac{p_{N-1}}{p_{N}}\right)\frac{\sigma ^{2}}{d^{2}}}{\sqrt{p_{N-1}p_{N}}}\end{pmatrix}\\ =& G_{\widetilde{W}}\left(\theta\right)+\frac{\sigma^{2}}{d^{2}} \begin{pmatrix}\frac{\frac{\pi^{2}}{2}+\frac{4}{\pi}\log\frac{p_{1}}{p_{2}}g^ {\prime}(1)+\frac{2}{\pi}\log^{2}\frac{p_{1}}{p_{2}}}{\sqrt{p_{1}p_{2}}}&\dots &0\\ 0&\dots&0\\ \vdots&\ddots&\vdots\\ 0&\dots&\frac{\frac{\pi^{2}}{2}+\frac{4}{\pi}\log\frac{p_{N-1}}{p_{N}}g^{ \prime}(1)+\frac{2}{\pi}\log^{2}\frac{p_{N-1}}{p_{N}}}{\sqrt{p_{N-1}p_{N}}} \end{pmatrix},\]
_where the higher-order term \(O(\sigma^{4})\) is omitted._
One should understand this new result as a refinement of the old one, eq. (naive-W-limit-matrix). A similarity shared by this higher-order Wasserstein metric eq. (2-W-limit) and the first-order metric eq. (naive-W-limit-matrix) is that both of them are diagonal. Consequently, inverting the metric tensor to obtain \(G_{W}^{-1}\) is straightforward, making it easy to obtain the gradient flow equations in these models, which will be discussed in Section 4. Moreover, the appearance of the fraction \(\frac{\sigma}{d}\) is natural, as it describes the relative size of the Gaussian components w.r.t. the underlying grid spacing.
To begin with, we first develop a key estimate frequently used in the derivation below.
**Lemma 1**.: _We have the following perturbation expansion_
\[\begin{split}&\int_{-\infty}^{\infty}\frac{du}{e^{-u^{2}/2-tu}+e ^{tu/k_{1}-\left(u/k_{2}\right)^{2}/2}}\\ =&\int_{-\infty}^{\infty}\frac{du}{e^{-ut}+e^{tu/k_{1}}}+ \frac{1+1/k_{2}^{2}}{4t^{3}}g_{2}(k_{1})+O\left(\frac{1}{t^{4}}\right),\end{split}\]
_where \(g_{2}(k_{1})=\int_{0}^{\infty}\frac{\log^{2}v}{1+v^{(k_{1}+1)/k_{1}}}dv\)._
Proof.: We first expand the ratio of the two denominators in powers of \(u\):
\[\begin{split}&\frac{e^{-u^{2}/2-tu}+e^{tu/k_{1}-\left(u/k_{2} \right)^{2}/2}}{e^{-ut}+e^{tu/k_{1}}}\\ =&\frac{e^{-tu}\left[1-\frac{1}{2}u^{2}+O(u^{4}) \right]+e^{tu/k_{1}}\left[1-\frac{1}{2}\left(u/k_{2}\right)^{2}+O(u^{4}) \right]}{e^{-ut}+e^{tu/k_{1}}}\\ =& 1+\frac{-e^{-tu}(\frac{1}{2}u^{2}+O(u^{4}))-e^{tu/k_{1 }}(\frac{1}{2}\left(u/k_{2}\right)^{2})+O(u^{4})}{1-ut+\frac{1}{2}(ut)^{2}+1+tu /k_{1}+\frac{1}{2}(tu/k_{1})^{2}+O(u^{3})}\\ =& 1-\frac{1+1/k_{2}^{2}}{4}u^{2}+O(u^{3}).\end{split} \tag{36}\]
Consequently, we have
\[\begin{split}&\int_{-\infty}^{\infty}\frac{du}{e^{-u^{2}/2-tu}+e^{ tu/k_{1}-(u/k_{2})^{2}/2}}\\ =&\int_{-\infty}^{\infty}\frac{1}{1-\frac{1+1/k_{ 2}^{2}}{4}u^{2}+O(u^{3})}\frac{du}{e^{-ut}+e^{tu/k_{1}}}\\ =&\int_{-\infty}^{\infty}\frac{du}{e^{-ut}+e^{tu/k_ {1}}}+\int_{-\infty}^{\infty}\frac{\frac{1+1/k_{2}^{2}}{4}u^{2}+O(u^{3})}{e^{ -ut}+e^{tu/k_{1}}}du\\ =&\int_{-\infty}^{\infty}\frac{du}{e^{-ut}+e^{tu/k_ {1}}}+\frac{1+1/k_{2}^{2}}{4t^{3}}g_{2}(k_{1})+O(\frac{1}{t^{4}}),\end{split} \tag{37}\]
where the last derivation is based on the following calculation
\[\int_{-\infty}^{\infty}\frac{u^{m}}{e^{-ut}+e^{tu/k_{1}}}du=\int_{-\infty}^{ \infty}\frac{u^{m}e^{tu}}{1+e^{tu(k_{1}+1)/k_{1}}}du=\frac{1}{t^{m+1}}\int_{0 }^{\infty}\frac{\log^{m}v}{1+v^{(k_{1}+1)/k_{1}}}dv. \tag{38}\]
Proof of Theorem 5.: The second order expansion consists of three parts we will calculate separately. Firstly, we apply Lemma 1 to the homogeneous case where \(k=1,t=\frac{l}{\sigma},k_{1}=\frac{l}{d-l}=1+O(\sigma^{2}),k_{2}=k=1\) to obtain
\[\begin{split}&\int_{-\infty}^{\infty}\frac{du}{e^{-u^{2}/2-ul/ \sigma}+e^{u(d-l)/(k^{2}\sigma)-(u/k)^{2}/2}}\\ =&\int_{-\infty}^{\infty}\frac{du}{e^{-ul/\sigma}+e^ {u(d-l)/(k^{2}\sigma)}}+\frac{\sigma^{3}}{2l^{3}}\frac{\pi^{3}}{8}+O(\sigma^{ 4}),\end{split}\]
where the integral \(g_{2}(1)\) can be computed explicitly to obtain \(\frac{\pi^{3}}{8}\)[12]. The first term is the main part, which integrates to \(\frac{\sigma\pi}{2l}\) in the proof of eq. (naive-W-limit).
The second approximation appears when we derive
\[\int_{0}^{\infty}\frac{dy}{1+y^{1+(d-l)/k^{2}l}}\rightarrow\int_{0}^{\infty} \frac{dy}{1+y^{(k+1)/k}}. \tag{39}\]
Recall that the expansion of \(l\) in \(\sigma\) is given by eq. (32); hence we have
\[\begin{split}&\int_{0}^{\infty}\frac{dy}{1+y^{1+(d-l)/k^{2}l}}- \int_{0}^{\infty}\frac{dy}{1+y^{(k+1)/k}}\\ =& g(k+\frac{k(k+1)^{2}\sigma^{2}}{d^{2}}\log\frac{ kp_{i}}{p_{i+1}})-g(k)\\ =& g(1+\frac{4\sigma^{2}}{d^{2}}\log\frac{p_{i}}{p_{ i+1}})-g(1)=\frac{4\sigma^{2}}{d^{2}}\log\frac{p_{i}}{p_{i+1}}\frac{dg}{dx} \Big{|}_{x=1}+O(\sigma^{4}).\end{split} \tag{40}\]
This finishes part of the second order expansion of \(I_{i}\), i.e.
\[I_{i}=\frac{\sqrt{2\pi}\sigma^{2}}{p_{i}}\frac{\sigma}{l}e^{\frac{1}{2}(l/ \sigma)^{2}}\left(\frac{\pi}{2}+\frac{\sigma^{2}\pi^{3}}{4d^{2}}+\frac{4\sigma ^{2}}{d^{2}}\log\frac{p_{i}}{p_{i+1}}\frac{dg}{dx}\Big{|}_{x=1}+O(\sigma^{4}) \right). \tag{41}\]
Now, we work on the expansion of the factor containing \(l\), as it can be expanded as a power series in \(\sigma\). In the homogeneous scenario, eq. (32) reduces to the exact formula
\[l=\frac{d}{2}+\frac{\sigma^{2}}{d}\log\frac{p_{i}}{p_{i+1}}. \tag{42}\]
Plugging in, one obtains
\[\frac{\sigma}{l}e^{\frac{1}{2}(l/\sigma)^{2}}=\frac{2\sigma}{d}\sqrt{\frac{p_{ i}}{p_{i+1}}}e^{d^{2}/8\sigma^{2}+\left(\frac{\sigma}{d}\log\frac{p_{i}}{p_{i+1}} \right)^{2}/2}\left(1-\frac{2\sigma^{2}}{d^{2}}\log\frac{p_{i}}{p_{i+1}}+O( \sigma^{4})\right) \tag{43}\]
Consequently, we have
\[I_{i}=\sqrt{2\pi^{3}}\frac{\sigma^{3}}{d}e^{d^{2}/8\sigma^{2}}\frac{1}{\sqrt{p_ {i}p_{i+1}}}\left(1+\left(\frac{\pi^{2}}{2}+\frac{4}{\pi}\log\frac{p_{i}}{p_{ i+1}}\frac{dg}{dx}\Big{|}_{x=1}+\frac{2}{\pi}\log^{2}\frac{p_{i}}{p_{i+1}} \right)\frac{\sigma^{2}}{d^{2}}+O(\sigma^{4})\right). \tag{44}\]
Recall that in the proof of Proposition 2 we compare the derivations in the Gaussian and Laplace cases. The approximation in Lemma 1 does not arise in the Laplace case, as its exponent is linear in \(x\). Similarly, the approximation related to \(\frac{dg}{dx}\) is also not present. Lastly, as the parameter \(l\) can be derived in closed form in the Laplace case, in contrast to the expansion of \(l\) in \(\sigma\) in the Gaussian case, c.f. eq. (15), eq. (32), the scaling factor of the Laplace mixture model is also exact. Consequently, we conclude that in the Laplace case the scaling WIM is accurate to any order of \(\sigma\), i.e. all the higher-order expansions vanish. The difference between the Laplace and Gaussian cases also indicates that the second-order metric is not universal for all mixture models. Meanwhile, the same technique is applicable to obtain the metric to any order of accuracy.
### An extended 1D GMM
In this section, as promised, we consider an extended GMM which is more akin to the GMMs used in the statistical community [31], in the sense that the mean parameters are also allowed to vary. Recall that in eq. (GMM) each component \(\rho_{i}(x)=\frac{1}{\sqrt{2\pi}\sigma_{i}}e^{-(x-\mu_{i})^{2}/2\sigma_{i}^{2}}\) is fixed a priori and only the \(\theta\)-parameters can vary in the model. In this extended model, the mean variables \(\mu_{i}\) are also included in the model's parameters, thus allowing each component to move along the real axis. We call this the "1D extended GMM". Considering its tangent space over \((\theta,\mu)\), it has tangent vectors \(\frac{\partial}{\partial\mu_{i}}\) associated with each mean parameter. Thus, on this GMM, a basis of the tangent space is given by \(\Big{\{}\frac{\partial}{\partial\mu_{i}},i\in[N],\frac{\partial}{\partial\theta_{j}},j\in[N-1]\Big{\}}\), which contains exactly a basis of the original GMM, i.e. \(\Big{\{}\frac{\partial}{\partial\theta_{j}},j\in[N-1]\Big{\}}\). We will analyze the WIM associated with this larger basis. The results are summarized as follows.
**Theorem 6**.: _The WIM of the 1D extended homogeneous GMM is given in the following block form_
\[\begin{split}\lim_{\sigma\to 0}G_{\widetilde{W}}^{(ext)}\left(\theta,\mu;\sigma\right)&=\begin{pmatrix}\left(G_{\widetilde{W}}^{(ext)}\right)_{\theta\theta}&\left(G_{\widetilde{W}}^{(ext)}\right)_{\theta\mu}\\ \left(G_{\widetilde{W}}^{(ext)}\right)_{\mu\theta}&\left(G_{\widetilde{W}}^{(ext)}\right)_{\mu\mu}\end{pmatrix},\\ \left(G_{\widetilde{W}}^{(ext)}\right)_{\theta\theta}=K\left(\sigma\right)&\begin{pmatrix}\frac{1}{\sqrt{p_{1}p_{2}}}&0&\cdots&0\\ 0&\frac{1}{\sqrt{p_{2}p_{3}}}&\cdots&0\\ \vdots&\vdots&\ddots&\vdots\\ 0&0&\cdots&\frac{1}{\sqrt{p_{N-1}p_{N}}}\end{pmatrix},\quad\left(G_{\widetilde{W}}^{(ext)}\right)_{\mu\mu}=\begin{pmatrix}p_{1}&0&\cdots&0\\ 0&p_{2}&\cdots&0\\ \vdots&\vdots&\ddots&\vdots\\ 0&0&\cdots&p_{N}\end{pmatrix},\\ \left(\left(G_{\widetilde{W}}^{(ext)}\right)_{\mu\theta}\right)^{T}=\left(G_{\widetilde{W}}^{(ext)}\right)_{\theta\mu}=&\begin{pmatrix}\frac{\mu_{2}-\mu_{1}}{2}&\frac{\mu_{2}-\mu_{1}}{2}&0&\cdots&0&0\\ 0&\frac{\mu_{3}-\mu_{2}}{2}&\frac{\mu_{3}-\mu_{2}}{2}&\cdots&0&0\\ \vdots&\vdots&\vdots&\ddots&\vdots&\vdots\\ 0&0&0&\cdots&\frac{\mu_{N-1}-\mu_{N-2}}{2}&0\\ 0&0&0&\cdots&\frac{\mu_{N}-\mu_{N-1}}{2}&\frac{\mu_{N}-\mu_{N-1}}{2}\end{pmatrix}.\end{split} \tag{45}\]
Let us take a closer look at each block. The left upper block \(\left(G_{\widetilde{W}}^{(ext)}\right)_{\theta\theta}\) is significantly greater than the other three blocks as \(\sigma\to 0\). In other words, the WIM of this model has a "multiscale" feature. Simply dividing the whole matrix by the factor \(K(\sigma)\) would result in a degenerate matrix. If we want to obtain a nondegenerate scaling limit of the WIM, it suffices to rescale the tangent vectors \(\frac{\partial}{\partial\theta_{j}}\) by the factor \(\frac{1}{\sqrt{K(\sigma)}}\) while leaving \(\frac{\partial}{\partial\mu_{i}}\) unchanged. Consequently, the "rescaled" WIM for the extended GMM with tangent vectors given by \(\frac{1}{\sqrt{K(\sigma)}}\frac{\partial}{\partial\theta_{i}},i=1,2,\cdots,N-1\) and \(\frac{\partial}{\partial\mu_{i}}\) reads
\[\begin{pmatrix}\begin{pmatrix}\frac{1}{\sqrt{p_{1}p_{2}}}&0&\cdots&0\\ 0&\frac{1}{\sqrt{p_{2}p_{3}}}&\cdots&0\\ \vdots&\vdots&\ddots&\vdots\\ 0&0&\cdots&\frac{1}{\sqrt{p_{N-1}p_{N}}}\end{pmatrix}&\mathbf{0}\\ \mathbf{0}&\begin{pmatrix}p_{1}&0&\cdots&0\\ 0&p_{2}&\cdots&0\\ \vdots&\vdots&\ddots&\vdots\\ 0&0&\cdots&p_{N}\end{pmatrix}\end{pmatrix}.\]
Notice that the rescaled WIM is also a diagonal matrix, consistent with all the previous scaling limits. The off-diagonal terms vanish due to the scaling factor. Before the proof of Theorem 6, we state a relation between the two subsets of the basis, \(\left\{\frac{\partial}{\partial\mu_{i}}\right\}\) and \(\left\{\frac{\partial}{\partial\theta_{j}}\right\}\), in terms of the Fisher-Rao geometry and the Wasserstein geometry. This result is used in the derivation of the scaling limit of the WIM in the extended GMM.
**Theorem 7**.: _For the extended GMM, we have the following relationship between the WIM of the set of tangent vectors \(\frac{\partial}{\partial\mu_{i}}\) and the Fisher information matrix of the set of tangent vectors \(\frac{\partial}{\partial\theta_{i}}\)_
\[G_{F}=\Sigma G_{W}\Sigma^{T}, \tag{46}\]
_where \((G_{F})_{ij}=g_{F}\left(\frac{\partial}{\partial\theta_{i}},\frac{\partial}{\partial\theta_{j}}\right)\), \((G_{W})_{ij}=g_{W}\left(\frac{\partial}{\partial\mu_{i}},\frac{\partial}{\partial\mu_{j}}\right)\) and the matrix \(\Sigma\in\mathbb{R}^{N-1,N}\) appearing above is given by_
\[\Sigma=\begin{pmatrix}-\frac{1}{p_{1}}&\frac{1}{p_{2}}&0&\cdots&0&0\\ 0&-\frac{1}{p_{2}}&\frac{1}{p_{3}}&\cdots&0&0\\ \vdots&\vdots&\ddots&\cdots&\cdots&\cdots\\ 0&0&\cdots&-\frac{1}{p_{N-2}}&\frac{1}{p_{N-1}}&0\\ 0&0&0&\cdots&-\frac{1}{p_{N-1}}&\frac{1}{p_{N}}\end{pmatrix}.\]
Proof of Theorem 6.: The Wasserstein score functions associated with this basis are given by
\[\nabla\Phi_{\theta_{i}}^{W}(x)=\frac{F_{i}(x)-F_{i+1}(x)}{\rho(x)},\quad \forall i=1,2,...,N-1,\]
\[\nabla\Phi_{\mu_{i}}^{W}(x)=p_{i}\frac{\rho_{i}(x)}{\rho(x)},\quad\forall i=1,2,...,N.\]
We give the metric tensor \(G_{W}\) in block form; its left-upper block has already been computed in the ordinary GMM setting, i.e. Theorem 3. For the right-lower block, we compute
\[\lim_{\sigma\to 0}g_{W}\left(\partial_{\mu_{i}},\partial_{\mu_{j}}\right)\] \[= \lim_{\sigma\to 0}p_{i}p_{j}\int\frac{\rho_{i}(x)\rho_{j}(x)}{\rho(x)}dx\] \[= \lim_{\sigma\to 0}p_{j}\int_{(\mu_{i}-\epsilon,\mu_{i}+ \epsilon)}\rho_{j}(x)dx\] \[= \begin{cases}0,\quad i\neq j,\\ p_{i},\quad i=j,\end{cases}\]
where the \(\epsilon\) appearing in the formula is a constant depending on \(\sigma\) such that \(\lim_{\sigma\to 0}\epsilon=0\). Notice that the derivation above is similar to the scaling limit of the Fisher-Rao metric. This is attributed to the linear relation between the Wasserstein geometry and the Fisher-Rao geometry stated in Theorem 7.
The last part is to compute the Wasserstein inner product between these two basis sets. With the help of the closed forms of these two Wasserstein score functions, we conclude that \(\lim_{\sigma\to 0}\mathbb{E}[\nabla\Phi_{\mu_{i}}^{W}\cdot\nabla\Phi_{\theta_{j}}^{W}]=0\) for \(i<j\) or \(j<i-1\). The proof is simply a standard application of the decomposition trick stated in [22]. For the remaining terms, we have
\[\lim_{\sigma\to 0}g_{W}\left(\partial_{\mu_{i}},\partial_{\theta_{i}}\right)\] \[= \lim_{\sigma\to 0}p_{i}\int\frac{\rho_{i}(x)}{\rho(x)}\left(F_{i}(x)-F_ {i+1}(x)\right)dx\] \[= \lim_{\sigma\to 0}p_{i}\int_{\mu_{i}}^{\mu_{i+1}}\frac{\rho_{i}(x)}{ \rho}dx\] \[= \frac{\mu_{i+1}-\mu_{i}}{2},\]
where the second equality holds by the weak convergence of the Gaussian \(\mathcal{N}\left(\mu,\sigma\right)\rightarrow\delta_{\mu}\) as \(\sigma\to 0\). The last equality holds by decomposing the integral and estimating each part:
\[\lim_{\sigma\to 0}p_{i}\int_{\mu_{i}}^{\frac{\mu_{i+1}+\mu_{i}}{2}}\frac{\rho_{i}(x)}{\rho(x)}dx\rightarrow\lim_{\sigma\to 0}\int_{\mu_{i}}^{\frac{\mu_{i+1}+\mu_{i}}{2}}dx=\frac{\mu_{i+1}-\mu_{i}}{2},\] \[\lim_{\sigma\to 0}p_{i}\int_{\frac{\mu_{i+1}+\mu_{i}}{2}}^{\mu_{i+1}}\frac{\rho_{i}(x)}{\rho(x)}dx\rightarrow\lim_{\sigma\to 0}\int_{\frac{\mu_{i+1}+\mu_{i}}{2}}^{\mu_{i+1}}0\cdot dx=0.\]
Proof of Theorem 7.: Using the language of our previous paper, the Fisher score functions corresponding to the tangent vectors \(\frac{\partial}{\partial\theta_{i}}\) are given by
\[\Phi_{F}^{i}(x)=\frac{\rho_{i+1}(x)-\rho_{i}(x)}{\rho(x)},\quad\forall i=1,2,...,N-1.\]
Meanwhile, the Wasserstein score functions
\[\nabla\Phi_{W}^{i}(x)=p_{i}\frac{\rho_{i}(x)}{\rho(x)},\quad\forall i=1,2,...,N,\]
are associated with the tangent vectors \(\frac{\partial}{\partial\mu_{i}}\). Consequently, we conclude that the Fisher score functions and the Wasserstein score functions are connected via the following linear relation
\[\Phi_{F}^{i}=\Sigma\nabla\Phi_{W}^{i}.\]
Thus, by the dual formulation of the statistical metric tensor [22], namely
\[G_{F}=\mathbb{E}\left[\Phi_{F}\cdot\Phi_{F}^{T}\right],\quad G_{W}=\mathbb{E} \left[\nabla\Phi_{W}\cdot\nabla\Phi_{W}^{T}\right],\]
we conclude that the linear relation between the FIM and WIM holds.
**Remark 4**.: _The above theorem points out that at each point \(\theta\) of this parametric statistical model, we have two disjoint tangent subspaces \(V_{\theta}^{F}\) and \(V_{\theta}^{W}\), with \(V_{\theta}^{F}\cap V_{\theta}^{W}=\{0\}\). The WIM \(G_{W}\) on \(V_{\theta}^{W}\) has a linear relation with the Fisher information matrix on \(V_{\theta}^{F}\)._
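Theorem 7 is an exact algebraic relation between the two matrices, so it can be checked directly by numerical quadrature. The following minimal sketch (with hypothetical means, weights and a common variance) assembles \(G_{F}\), \(G_{W}\) and \(\Sigma\) from the score-function formulas above and verifies eq. (46).

```python
import numpy as np
from scipy.integrate import quad
from scipy.stats import norm

mu, sig, p = np.array([-1.0, 0.5, 2.0]), 0.6, np.array([0.3, 0.5, 0.2])
N = len(p)
rho_i = lambda i, x: norm.pdf(x, mu[i], sig)             # Gaussian components
rho = lambda x: sum(p[i] * rho_i(i, x) for i in range(N))

# G_W from the Wasserstein score functions p_i rho_i / rho
GW = np.array([[quad(lambda x: p[i] * p[j] * rho_i(i, x) * rho_i(j, x) / rho(x),
                     -15, 15)[0] for j in range(N)] for i in range(N)])
# G_F from the Fisher score functions (rho_{i+1} - rho_i) / rho
GF = np.array([[quad(lambda x: (rho_i(i + 1, x) - rho_i(i, x))
                             * (rho_i(j + 1, x) - rho_i(j, x)) / rho(x),
                     -15, 15)[0] for j in range(N - 1)] for i in range(N - 1)])
Sigma = np.zeros((N - 1, N))
for i in range(N - 1):
    Sigma[i, i], Sigma[i, i + 1] = -1.0 / p[i], 1.0 / p[i + 1]
print(np.allclose(GF, Sigma @ GW @ Sigma.T, rtol=1e-6))   # expect True
```

Up to quadrature error the identity holds exactly, independently of the value of \(\sigma\).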
## 4. Gradient flows on various scaling Wasserstein geometry
Gradient flows have been intensively studied in the optimal transport community for several decades [14, 25]. In this section, we study gradient flows under the scaling Wasserstein metric. We derive explicit forms of gradient flows under this scaling geometry and consider their connection with gradient flows on the density manifold. We first give a brief summary of gradient flows on the density manifold; see [3, 29] for detailed studies.
### Review of gradient flows on density manifold
First, we provide the general formulation of the Wasserstein gradient flow on the density manifold. Given a functional \(\mathcal{E}:\mathcal{P}\rightarrow\mathbb{R}\) on the density manifold, its Wasserstein gradient is given by [33]

\[\nabla_{W}\mathcal{E}(\rho)=-\nabla\cdot\left(\rho\nabla\left(\frac{\delta\mathcal{E}}{\delta\rho}\right)\right). \tag{47}\]
Consequently, the gradient flow of a potential functional \(\mathcal{V}(\rho)=\mathbb{E}_{X\sim\rho}V(X)\) is given by
\[\partial_{t}\rho(x)=-\nabla_{W}\mathcal{V}(\rho)=\nabla\cdot\left(\rho\nabla \left(\frac{\delta\mathcal{V}}{\delta\rho}\right)\right)=\nabla\cdot\left(\rho \nabla V(x)\right). \tag{48}\]
Moreover, the gradient flow of the negative entropy functional \(\mathcal{H}(\rho)=\mathbb{E}_{X\sim\rho}\log\rho(X)\) is given by the heat equation
\[\partial_{t}\rho(x)=\nabla\cdot\left(\rho\nabla\left(\frac{\delta\mathcal{H}}{\delta\rho}\right)\right)=\nabla\cdot\left(\rho\nabla(\log\rho(x)+1)\right)=\Delta\rho. \tag{49}\]
In the next subsection, we will derive the parametric gradient flow equations under the scaling Wasserstein metric.
### Gradient flows on scaling Wasserstein metric
In this section, we derive analytic forms of gradient flows in the different scaling Wasserstein geometries introduced before. We first state the gradient flow equation of a general functional on a parametric model \(\Theta\subset\mathbb{R}^{N-1}\).
**Proposition 3**.: _Consider the parametric model \(\Theta\subset\mathbb{R}^{N-1}\) with the scaling Wasserstein metric \(G_{\widetilde{W}}\left(\theta\right)\) eq. (5) defined on it. Given a function \(V:\Theta\rightarrow\mathbb{R}\) on this space, the gradient flow of this function is given by_
\[\begin{split}\dot{\theta}&\;=-G_{\widetilde{W}}^{-1} \left(\theta\right)\nabla_{\theta}V\left(\theta\right),\\ \dot{\theta}_{i}&\;=-\sqrt{p_{i}p_{i+1}}\left(\nabla _{\theta}V\left(\theta\right)\right)_{i},\\ \dot{p}_{i}&\;=-\sqrt{p_{i}p_{i-1}}\left(\nabla_{ \theta}V\left(\theta\right)\right)_{i-1}+\sqrt{p_{i}p_{i+1}}\left(\nabla_{ \theta}V\left(\theta\right)\right)_{i}.\end{split} \tag{50}\]
Proof.: The first equation is exactly the definition of gradient flows on a Riemannian manifold. The second equation is a scalar version of the first one, where \(\left(\nabla_{\theta}V\left(\theta\right)\right)_{i}\) refers to its \(i\)-th component. This formula is simplified due to the fact that the scaling WIMs are diagonal in the homogeneous case, c.f. Theorem 3.
However, the parameters \(\theta_{i}\) do not have a probabilistic meaning, in contrast to the \(p_{i}\); we therefore rewrite the above gradient flow equation in terms of the parameters \(p_{i}\), which are interpreted as the weights of the components:
\[\begin{split}\dot{p}_{i}&\;=\dot{\theta}_{i-1}- \dot{\theta}_{i}\\ &\;=-\sqrt{p_{i}p_{i-1}}\left(\nabla_{\theta}V\left(\theta\right) \right)_{i-1}+\sqrt{p_{i}p_{i+1}}\left(\nabla_{\theta}V\left(\theta\right) \right)_{i}.\end{split}\]
Notice that if we divide the metric tensor \(G_{W}\) by the factor \(K(\sigma)\), the solution of the gradient flow is changed only by a time reparameterization: namely, suppose the solution of
\[\dot{\theta}=-G_{\widetilde{W}}\left(\theta\right)^{-1}\nabla_{\theta}H,\]
is given by \(\theta\left(t\right)\). Then, the solution of the scaling gradient flow
\[\dot{\theta}=-G_{\widetilde{W}}\left(\theta\right)^{-1}K\left(\sigma\right) \nabla_{\theta}H,\]
is exactly \(\widetilde{\theta}\left(t\right)=\theta\left(K\left(\sigma\right)t\right)\). Consequently, we have the freedom to discard this scaling factor when studying the properties of the gradient flows.
Using this, we can write out the analytic form of arbitrary gradient flows on the probability simplex under the scaling Wasserstein metric. In [33], the author introduced three important classes of energies, namely the internal energy \(\mathcal{U}\), the potential energy \(\mathcal{V}\), and the interaction energy \(\mathcal{W}\), on the density manifold, and carefully derived their corresponding gradient flow equations. In the same spirit as [33], we list the analytic forms of the three classes of energies and their gradient flows. To achieve this, we have to calculate the corresponding energy forms on the probability simplex and then apply eq. (50). We take the potential functional as an example: viewing a point on the probability simplex as a mixture of Dirac measures, i.e. \(\{p_{i},i\in[N]\}\rightarrow\sum_{i=1}^{N}p_{i}\delta_{\mu_{i}}\), we can calculate the value of the potential functional at this point as
\[\mathcal{V}(\{p_{i}\})=\mathcal{V}(\sum_{i=1}^{N}p_{i}\delta_{\mu_{i}})=\int V (x)\sum_{i=1}^{N}p_{i}\delta_{\mu_{i}}(x)dx=\sum_{i=1}^{N}p_{i}V(\mu_{i}). \tag{51}\]
One can obtain the other two energies on the probability simplex via the same procedure. We list without derivation all the energy formulas and the corresponding gradient flows below (a small implementation sketch follows the display):
\[\mathcal{U}\left(\rho\right) =\int U\left(\rho\left(x\right)\right)dx=\sum_{i=1}^{N}U\left(p_{ i}\right),\] \[\dot{p_{i}} =-\sqrt{p_{i}p_{i-1}}\left(U^{\prime}\left(p_{i}\right)-U^{ \prime}\left(p_{i-1}\right)\right)+\sqrt{p_{i}p_{i+1}}\left(U^{\prime}\left(p_ {i+1}\right)-U^{\prime}\left(p_{i}\right)\right),\] \[\mathcal{V}\left(\rho\right) =\int V\left(x\right)\rho(x)dx=\sum_{i=1}^{N}V(\mu_{i})p_{i},\] \[\dot{p_{i}} =-\sqrt{p_{i}p_{i-1}}\left(V_{i}-V_{i-1}\right)+\sqrt{p_{i}p_{i+ 1}}\left(V_{i+1}-V_{i}\right),\] \[\mathcal{W}\left(\rho\right) =\frac{1}{2}\int\int W\left(x-y\right)\rho\left(x\right)\rho\left( y\right)dxdy=\frac{1}{2}\sum_{i,j=1}^{N}W(\mu_{i}-\mu_{j})p_{i}p_{j}.\] \[\dot{p_{i}} =-\sqrt{p_{i}p_{i-1}}\left(\sum_{k=1}^{N}W_{ik}p_{k}-\sum_{k=1}^{ N}W_{i-1,k}p_{k}\right)+\sqrt{p_{i}p_{i+1}}\left(\sum_{k=1}^{N}W_{i+1,k}p_{k}- \sum_{k=1}^{N}W_{ik}p_{k}\right). \tag{52}\]
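All three flows above share the same structure: \(\dot{p}_{i}\) is a weighted difference of a nodal quantity, namely \(U^{\prime}(p_{i})\), \(V(\mu_{i})\), or \(\sum_{k}W(\mu_{i}-\mu_{k})p_{k}\). A single helper therefore covers all three cases; the following is a minimal sketch (boundary terms are simply dropped, as in Example 4 below, and the total mass \(\sum_{i}p_{i}\) is conserved by construction).

```python
import numpy as np

def simplex_flow_rhs(p, phi):
    """RHS of eq. (52); phi[i] is U'(p_i), V(mu_i) or sum_k W(mu_i - mu_k) p_k."""
    dp = np.zeros_like(p)
    for i in range(len(p)):
        if i > 0:                       # exchange with the left neighbor
            dp[i] -= np.sqrt(p[i] * p[i - 1]) * (phi[i] - phi[i - 1])
        if i < len(p) - 1:              # exchange with the right neighbor
            dp[i] += np.sqrt(p[i] * p[i + 1]) * (phi[i + 1] - phi[i])
    return dp

# Example: one forward-Euler step of the entropy flow, U(p) = p log p.
p = np.array([0.5, 0.3, 0.2])
p = p + 1e-3 * simplex_flow_rhs(p, np.log(p) + 1.0)
```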
#### 4.2.1. Gradient flow of entropy in GMM
In this subsection, we use the entropy functional, which belongs to the class of internal energies, as an example, and leave the other derivations to interested readers. As illustrated in Section 4.1, the gradient flow of the entropy functional is nothing but the heat equation, and it will also be the numerical scenario investigated in Section 5.
Recall the discrete entropy functional on a probability simplex is given by
\[H(p)=\sum_{i=1}^{N}p_{i}\log p_{i}, \tag{53}\]
i.e. \(U(p_{i})=p_{i}\log p_{i}\). Differentiating gives \(U^{\prime}(p_{i})=\log p_{i}+1\); using eq. (52), one obtains
\[\dot{p}_{i}=-\sqrt{p_{i}p_{i-1}}\log\frac{p_{i}}{p_{i-1}}+\sqrt{p_{i}p_{i+1}} \log\frac{p_{i+1}}{p_{i}}. \tag{54}\]
This parametric gradient flow equation can be interpreted from the perspective of a Markov transition kernel. We use 2D dynamics to illustrate this point.
**Example 4** (2D dynamics).: _For example, we write out the exact formula of entropy gradient flow for a probability simplex with three components_
\[\begin{cases}\dot{p_{1}}=&\sqrt{p_{1}p_{2}}\log\frac{p_{2}}{p_{1}},\\ \dot{p_{2}}=&-\sqrt{p_{1}p_{2}}\log\frac{p_{2}}{p_{1}}+\sqrt{p_{2}p_{3}}\log \frac{p_{3}}{p_{2}},\\ \dot{p_{3}}=&-\sqrt{p_{2}p_{3}}\log\frac{p_{3}}{p_{2}}.\end{cases}\]
_We can also write out the evolution formula for this process in the matrix form, viewing the matrix as a Markovian kernel_
\[\begin{pmatrix}\dot{p_{1}}\\ \dot{p_{2}}\\ \dot{p_{3}}\end{pmatrix}=\begin{pmatrix}0&\sqrt{\frac{p_{1}}{p_{2}}}\log\frac {p_{2}}{p_{1}}&0\\ \sqrt{\frac{p_{2}}{p_{1}}}\log\frac{p_{1}}{p_{2}}&0&\sqrt{\frac{p_{2}}{p_{3}}} \log\frac{p_{3}}{p_{2}}\\ 0&\sqrt{\frac{p_{3}}{p_{2}}}\log\frac{p_{2}}{p_{3}}&0\end{pmatrix}\begin{pmatrix} p_{1}\\ p_{2}\\ p_{3}\end{pmatrix}. \tag{55}\]
_Notice that the Markovian kernel here depends on the density and thus is not temporally homogeneous._
_An observation shows that the unique equilibrium state for this Markov jump process is \(p_{1}=p_{2}=p_{3}=\frac{1}{3}\), and this fact actually holds for an arbitrary number of components, which is consistent with the continuous case; a quick numerical check is sketched after this example._
_Introducing the new parameters_
\[a=\log\frac{p_{1}}{p_{2}},b=\log\frac{p_{2}}{p_{3}},\]
_the Markovian kernel transforms into_
\[M_{G}=\begin{pmatrix}0&\sqrt{\frac{p_{1}}{p_{2}}}\log\frac{p_{2}}{p_{1}}&0\\ \sqrt{\frac{p_{2}}{p_{1}}}\log\frac{p_{1}}{p_{2}}&0&\sqrt{\frac{p_{2}}{p_{3}}} \log\frac{p_{3}}{p_{2}}\\ 0&\sqrt{\frac{p_{3}}{p_{2}}}\log\frac{p_{2}}{p_{3}}&0\end{pmatrix}=\begin{pmatrix} 0&-ae^{\frac{a}{2}}&0\\ ae^{-\frac{a}{2}}&0&-be^{\frac{b}{2}}\\ 0&be^{-\frac{b}{2}}&0\end{pmatrix}.\]
_The evolution equations w.r.t. the parameters \(a,b\) can be formulated as_

\[\begin{cases}\dot{a}=&\frac{d}{dt}\log\frac{p_{1}}{p_{2}}=\frac{\dot{p_{1}}}{p_{1}}-\frac{\dot{p_{2}}}{p_{2}}=-a\left(e^{\frac{a}{2}}+e^{-\frac{a}{2}}\right)+be^{-\frac{b}{2}},\\ \dot{b}=&\frac{d}{dt}\log\frac{p_{2}}{p_{3}}=\frac{\dot{p_{2}}}{p_{2}}-\frac{\dot{p_{3}}}{p_{3}}=-b\left(e^{\frac{b}{2}}+e^{-\frac{b}{2}}\right)+ae^{\frac{a}{2}}.\end{cases}\]
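As a quick numerical check of the equilibrium observation in Example 4, forward-Euler integration of eq. (54) for three components (a minimal sketch, with an arbitrary initial state and step size) indeed relaxes to the uniform state:

```python
import numpy as np

p, dt = np.array([0.7, 0.2, 0.1]), 1e-3
for _ in range(20000):                   # integrate up to t = 20
    dp = np.zeros(3)
    for i in range(3):
        if i > 0:
            dp[i] -= np.sqrt(p[i] * p[i - 1]) * np.log(p[i] / p[i - 1])
        if i < 2:
            dp[i] += np.sqrt(p[i] * p[i + 1]) * np.log(p[i + 1] / p[i])
    p = p + dt * dp
print(p)                                  # approximately [1/3, 1/3, 1/3]
```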
Moreover, although our asymptotic analysis cannot handle situations where the sample space is high dimensional, it is simple to generalize the gradient flow formula to higher dimensional cases. Here, we formally state the parametric gradient flow equation of the 2D heat flow under this framework without a rigorous derivation. Consider a 2D grid \([0,N]\times[0,N]\) endowed with the natural graph structure, which appears frequently as the underlying grid of finite difference methods. Each node has four neighbors, two in the \(x\)-direction and two in the \(y\)-direction. Then, we can simply write the RHS of the 2D parametric heat equation as the sum of the RHS of two 1D parametric heat equations in the \(x\) and \(y\) directions, i.e.
\[\begin{split}\dot{p}_{ij}=&-\left(\sqrt{p_{i-1,j}p_{ij}}\log\frac{p_{ij}}{p_{i-1,j}}-\sqrt{p_{i+1,j}p_{ij}}\log\frac{p_{i+1,j}}{p_{ij}}\right.\\ &\qquad\left.+\sqrt{p_{i,j-1}p_{ij}}\log\frac{p_{ij}}{p_{i,j-1}}-\sqrt{p_{i,j+1}p_{ij}}\log\frac{p_{i,j+1}}{p_{ij}}\right).\end{split} \tag{56}\]
We denote this as the 2D parametric heat equation and investigate it as a numerical scheme in Section 5.
#### 4.2.2. Gradient flow of potential function in extended GMM
In this subsection, we will derive step-by-step the gradient flow of a potential functional on the extended GMM eq. (45).
Recall that for an extended GMM there exist two classes of parameters, the \(\theta_{i}\) and the \(\mu_{i}\). Therefore, we first take the gradient of the potential functional w.r.t. these parameters. In eq. (52), the discrete potential functional is written as
\[\mathcal{V}\left(\rho\right)=\sum_{i=1}^{N}V(\mu_{i})p_{i}. \tag{57}\]
Notice that the difference from homogeneous GMMs is that \(V(\mu_{i})\) is no longer a constant but rather depends on the parameter \(\mu_{i}\). Consequently, we have
\[\partial_{p_{i}}\mathcal{V}=V(\mu_{i}),\quad\partial_{\mu_{i}}\mathcal{V}=p_{ i}V^{\prime}(\mu_{i}). \tag{58}\]
Now, recalling that the WIM eq. (45) is block-wise, one has
\[\begin{split}\dot{\theta_{i}}&=-\left(G_{\widetilde {W}}^{ext}\right)_{\theta\theta,ii}^{-1}\partial_{\theta_{i}}V\left(\theta, \mu\right)-\sum_{j}\left(G_{\widetilde{W}}^{ext}\right)_{\theta\mu,ij}^{-1} \partial_{\mu_{j}}V\left(\theta,\mu\right),\\ \dot{\mu_{i}}&=-\left(G_{\widetilde{W}}^{ext}\right) _{\mu\mu,ii}^{-1}\partial_{\mu_{i}}V\left(\theta,\mu\right)-\sum_{j}\left(G_{ \widetilde{W}}^{ext}\right)_{\mu\theta,ij}^{-1}\partial_{\theta_{j}}V\left( \theta,\mu\right),\end{split} \tag{59}\]
where the notation \(\left(G_{\widetilde{W}}^{ext}\right)_{\theta\theta,ii}^{-1}\) refers to the \((i,i)\) element of the block \(\left(G_{\widetilde{W}}^{ext}\right)_{\theta\theta}^{-1}\). Combining all the ingredients, we conclude the following formula for the parametric gradient flow of the potential functional on the extended GMM:
\[\begin{split}\dot{\theta}_{i}=&-\frac{\sqrt{p_{i}p_ {i+1}}}{K\left(\sigma\right)}\left(\left(V\left(\mu_{i+1}\right)-V\left(\mu_{ i}\right)\right)-\frac{\mu_{i+1}-\mu_{i}}{2}\left(V^{\prime}\left(\mu_{i}\right)+V^{ \prime}\left(\mu_{i+1}\right)\right)\right),\\ &\quad i=1,2,\cdots,N-1,\\ \dot{\mu}_{i}=&-V^{\prime}\left(\mu_{i}\right)+\frac{1 }{K\left(\sigma\right)}\left(\sqrt{\frac{p_{i+1}}{p_{i}}}\frac{\mu_{i+1}-\mu_{ i}}{2}\left(V\left(\mu_{i+1}\right)-V\left(\mu_{i}\right)\right)\right.\\ &\quad\left.+\sqrt{\frac{p_{i-1}}{p_{i}}}\frac{\mu_{i}-\mu_{i-1} }{2}\left(V\left(\mu_{i}\right)-V\left(\mu_{i-1}\right)\right)\right),\quad i =2,3,\cdots,N-1.\end{split} \tag{60}\]
We will conduct some illustrative experiments on this model in the next section.
## 5. Numerical experiments
Starting from the parametric gradient flow equations derived in eq. (50), we show that these equations can be viewed as numerical schemes for certain partial differential equations. Moreover, these schemes form a sharp contrast with traditional methods such as the finite difference, finite element, and finite volume methods, in the sense that they originate from an approximation using GMMs. We verify the accuracy of these schemes on several partial differential equations.
### 1D heat equation
We start from the 1D heat equation. Consider a GMM with all its means forming a computational grid with spacing \(\Delta x\), i.e.
\[\mu_{i}=i\Delta x,\quad i=-n,-n+1,\cdots,n-1,n. \tag{61}\]
Here we shift the indices to make the computational domain \([-5,5]=[-n\Delta x,n\Delta x]\) symmetric w.r.t. \(0\). We use periodic boundary conditions on this domain and set the initial condition to be a Gaussian profile, i.e.
\[p_{i}=\frac{e^{-\frac{\mu_{i}^{2}}{2}}}{\sqrt{2\pi}}. \tag{62}\]
Recall that in eq. (54) we derived the parametric gradient flow equation of the entropy functional; dividing the RHS by the squared spatial discretization size \((\Delta x)^{2}\), we obtain
\[\dot{p}_{i}=-\frac{1}{(\Delta x)^{2}}\left(\sqrt{p_{i-1}p_{i}}\log\frac{p_{i}} {p_{i-1}}-\sqrt{p_{i+1}p_{i}}\log\frac{p_{i+1}}{p_{i}}\right). \tag{63}\]
Now, if we think of \(p_{i}\) as the density value at the grid point \(\mu_{i}=i\Delta x\), i.e. \(p_{i}=p(\mu_{i})\), then the RHS of eq. (63) can be thought of as a spatial discretization of the Laplacian operator. We implement this scheme and test it against the Crank-Nicolson scheme to demonstrate its accuracy in Figure 2. From now on, we use SW (scaling Wasserstein) to denote the solution obtained via the scaling Wasserstein metric and FD to denote the classical finite difference solution.
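For concreteness, the following is a minimal sketch of the SW scheme (63) on the grid just described, with periodic boundary conditions; the forward-Euler time stepping is an assumption, and the Crank-Nicolson benchmark used in Figure 2 is not reproduced here.

```python
import numpy as np

dx, dt, n = 0.1, 0.001, 50
x = np.arange(-n, n + 1) * dx                        # grid on [-5, 5], eq. (61)
p = np.exp(-x**2 / 2) / np.sqrt(2 * np.pi)           # Gaussian initial profile, eq. (62)

def sw_rhs(p):
    pm, pp = np.roll(p, 1), np.roll(p, -1)           # periodic neighbors
    return -(np.sqrt(pm * p) * np.log(p / pm)
             - np.sqrt(pp * p) * np.log(pp / p)) / dx**2   # eq. (63)

for _ in range(1000):                                # evolve up to t = 1
    p = p + dt * sw_rhs(p)
```

Any standard finite-difference solution of \(\partial_{t}\rho=\partial_{xx}\rho\) on the same grid can then be compared against \(p\), as in Figure 2.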
### 2D heat equation
Similarly, recall that in eq. (56) we formally derived the parametric gradient flow equation of the entropy functional in 2D. Again we divide the RHS by the squared spatial discretization size \((\Delta x)^{2}\) to obtain
\[\begin{split}\dot{p}_{ij}=&-\frac{1}{(\Delta x)^{2}}\left(\sqrt{p_{i-1,j}p_{ij}}\log\frac{p_{ij}}{p_{i-1,j}}-\sqrt{p_{i+1,j}p_{ij}}\log\frac{p_{i+1,j}}{p_{ij}}\right.\\ &\qquad\qquad\left.+\sqrt{p_{i,j-1}p_{ij}}\log\frac{p_{ij}}{p_{i,j-1}}-\sqrt{p_{i,j+1}p_{ij}}\log\frac{p_{i,j+1}}{p_{ij}}\right).\end{split}\]
Comparing with eq. (63), the 2D scheme can be viewed as a summation of two 1D schemes in the \(x\) and \(y\) directions, which manifests the fact that the Laplacian is separable, i.e. the 2D Laplacian is simply the sum of two 1D Laplacians. A simulation of this scheme on a periodic domain with a Gaussian initial distribution is presented in Figure 3; the solution derived from the scaling Wasserstein metric again shows high accuracy.
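Since the 2D stencil is just the 1D stencil applied along each axis, the corresponding right-hand side is a short extension of the 1D sketch above (again assuming periodic boundaries):

```python
import numpy as np

def sw_rhs_2d(P, dx):
    def stencil(Q, axis):
        Qm, Qp = np.roll(Q, 1, axis=axis), np.roll(Q, -1, axis=axis)
        return -(np.sqrt(Qm * Q) * np.log(Q / Qm)
                 - np.sqrt(Qp * Q) * np.log(Qp / Q)) / dx**2
    return stencil(P, 0) + stencil(P, 1)   # x-direction plus y-direction, as in eq. (56)
```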
### 1D transport equation on extended GMM
In this last numerical example, we present some preliminary experiments on the extended GMM of Section 3.3. We simulate a Wasserstein gradient flow of the potential function \(V\left(x\right)=\sin x\), i.e.
\[\partial_{t}\rho(x)-\nabla\cdot\left(\rho(x)\nabla V(x)\right)=0. \tag{64}\]
The discretization according to the extended GMM is presented in eq. (60). We use forward Euler for the temporal discretization. The initial data is given by \(\rho_{0}=0.2*\mathcal{N}\left(-1,0.1\right)+0.5*\mathcal{N}\left(0,0.1\right)+0.3*\mathcal{N}\left(3,0.1\right)\). The time step is set as \(\Delta t=0.01\) with \(5000\) iterations. As the minima of \(V(x)\) occur with period \(2\pi\), each Gaussian component will be driven to the nearest minimum by the gradient flow. Therefore, the first two components collapse to the common minimum \(x=-\frac{\pi}{2}\) while the rightmost component converges to \(x=\frac{3\pi}{2}\). We observe the mode degeneracy of the two means \(\mu_{1},\mu_{2}\) in this simulation.
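Because \(K(\sigma)\) in eq. (60) is exponentially large for small \(\sigma\), the \(1/K(\sigma)\) correction terms are negligible: the weights are essentially frozen and each mean approximately follows the plain gradient descent \(\dot{\mu}_{i}=-V^{\prime}(\mu_{i})\). The following sketch keeps only this assumed leading behavior, with the parameters stated above, and reproduces the described collapse of the modes.

```python
import numpy as np

mu, dt = np.array([-1.0, 0.0, 3.0]), 0.01
for _ in range(5000):
    mu = mu - dt * np.cos(mu)       # V(x) = sin x, so V'(x) = cos x
print(mu)                            # approximately [-pi/2, -pi/2, 3*pi/2]
```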
Figure 2. This figure plots a simulation of the 1D heat flow via the discretization introduced in this paper. The parameters of the simulation are given by \(\Delta x=0.1,\Delta t=0.001\). The initial distribution is given by \(\rho\left(x;0\right)=\frac{e^{-\frac{x^{2}}{2}}}{\sqrt{2\pi}}\) and we use periodic boundary condition. In the left figure, we plot the solution using the scheme derived from scaling Wasserstein distance in red line and benchmark solution in green dashed line. The benchmark finite difference solution is solved with Crank-Nicolson scheme with the same parameters. In the right figure, we plot the log of the difference between these two solutions. It can be observed that the solution derived from the scaling Wasserstein metric has high accuracy.
## 6. Discussion
In this paper, we study the Wasserstein pullback metric on Gaussian mixture models. Our key observation is that as the variance of the Gaussian components tends to \(0\), the scaling WIM reveals a diagonal structure. Based on this, we construct a metric structure and prove its convergence under the scaling limit in which the variance tends to \(0\). We establish its analytic form via asymptotic analysis and introduce other related models with inhomogeneous gaps, higher-order terms, and extra degrees of freedom for the mean value parameters. Next, the gradient flows on this metric space are introduced, focusing on their relations with numerical schemes for partial differential equations. We verify the correctness and accuracy of these schemes on the 1D and 2D heat equations and present some preliminary experiments on the extended Gaussian mixture models.
A systematic study of the scaling Wasserstein geometry is needed. Questions such as functional inequalities and geometric properties remain to be explored. Connections to spectral graph theory deserve to be studied. Besides, one also needs to establish a corresponding theory for high-dimensional sample spaces. The current method to obtain the
Figure 3. This figure plots a simulation of the 2D heat flow via the discretization introduced in this paper. The parameters of the simulation are given by \(\Delta x=0.1,\Delta t=0.001\). The initial distribution is given by \(\rho\left(x;0\right)=\frac{e^{-\frac{x^{2}+y^{2}}{2}}}{2\pi}\) and we use periodic boundary condition. In (a), we plot the snapshot of the density obtained via SW at different time step. In (b), we plot the density obtained via FD at the same time steps. In (c), we plot the difference between these two solutions. It can be observed that the solution derived from scaling Wasserstein metric has high accuracy.
conclusion on 1D sample spaces relies significantly on the closed-form solution of WIMs, which is often absent in high dimensions. Lastly, apart from gradient flows, Hamiltonian flows and related mathematical physics equations, including Schrodinger equations, Schrodinger bridge problems, etc., can also be studied in Gaussian mixture models. A detailed convergence analysis of the WIM from GMM models to the Wasserstein metric in density space is left for future work.
|
2302.00039 | Between Coherent and Constructible Local Langlands Correspondences | Refined forms of the local Langlands correspondence seek to relate
representations of reductive groups over local fields with sheaves on stacks of
Langlands parameters. But what kind of sheaves? Conjectures in the spirit of
Kazhdan-Lusztig theory (due to Vogan and Soergel) describe representations of a
group and its pure inner forms with fixed central character in terms of
constructible sheaves. Conjectures in the spirit of geometric Langlands (due to
Fargues, Zhu and Hellmann) describe representations with varying central
character of a large family of groups associated to isocrystals in terms of
coherent sheaves. The latter conjectures also take place on a larger parameter
space, in which Frobenius (or complex conjugation) is allowed a unipotent part.
In this article we propose a general mechanism that interpolates between
these two settings. This mechanism derives from the theory of cyclic homology,
as interpreted through circle actions in derived algebraic geometry. We apply
this perspective to categorical forms of the local Langlands conjectures for
both archimedean and non-archimedean local fields. In the nonarchimedean case,
we describe how circle actions relate coherent and constructible realizations
of affine Hecke algebras and of all smooth representations of $GL_n$, and
propose a mechanism to relate the two settings in general. In the archimedean
case, we explain how to use circle actions to derive the constructible local
Langlands correspondence (in the form due to Adams-Barbasch-Vogan and Soergel)
from a coherent form (a real counterpart to Fargues' conjecture): the tamely
ramified geometric Langlands conjecture on the twistor line, which we survey. | David Ben-Zvi, Harrison Chen, David Helm, David Nadler | 2023-01-31T19:19:21Z | http://arxiv.org/abs/2302.00039v1 | # Between Coherent and Constructible Local Langlands Correspondences
###### Abstract.
Refined forms of the local Langlands correspondence seek to relate representations of reductive groups over local fields with sheaves on stacks of Langlands parameters. But what kind of sheaves? Conjectures in the spirit of Kazhdan-Lusztig theory (due to Vogan and Soergel) describe representations of a group and its pure inner forms with fixed central character in terms of constructible sheaves. Conjectures in the spirit of geometric Langlands (due to Fargues, Zhu and Hellmann) describe representations with varying central character of a large family of groups associated to isocrystals in terms of coherent sheaves. The latter conjectures also take place on a larger parameter space, in which Frobenius (or complex conjugation) is allowed a unipotent part.
In this article we propose a general mechanism that interpolates between these two settings. This mechanism derives from the theory of cyclic homology, as interpreted through circle actions in derived algebraic geometry. We apply this perspective to categorical forms of the local Langlands conjectures for both archimedean and non-archimedean local fields. In the nonarchimedean case, we describe how circle actions relate coherent and constructible realizations of affine Hecke algebras and of all smooth representations of \(GL_{n}\), and propose a mechanism to relate the two settings in general. In the archimedean case, we explain how to use circle actions to derive the constructible local Langlands correspondence (in the form due to Adams-Barbasch-Vogan and Soergel) from a coherent form (a real counterpart to Fargues' conjecture): the tamely ramified geometric Langlands conjecture on the twistor line, which we survey.
2020 Mathematics Subject Classification: Primary 20G25
###### Contents
* 1 Overview: Sheaves on Langlands Parameters
* 2 Equivariant Sheaves on Loop Spaces
* 3 Nonarchimedean Local Langlands and Circle Actions
* 4 From Archimedean Local Langlands to Twistor Geometric Langlands
* A Foundations
## 1. Overview: Sheaves on Langlands Parameters
The fundamental theme of spectral decomposition in representation theory seeks to describe representations of a group \(G\) in terms of a dual object \(\widehat{G}\), which parametrizes (suitable) irreducible representations. Harmonic analysis asks for the spectral description of large representations of \(G\) (such as functions on \(G\)-spaces) as families of vector spaces (the multiplicity spaces) over \(\widehat{G}\). We might also seek to describe indecomposable or standard modules in terms of irreducibles and solve extension problems, or more generally describe families of representations, in terms of the geometry of \(\widehat{G}\).
This theme underlies much of geometric representation theory, where one seeks to describe representations as sheaves on a parameter space, and utilize the geometry of sheaves to solve representation-theoretic problems. There are two main types of sheaves that play this role: _constructible_ (taken in the broad sense to include perverse sheaves as well as \(\mathcal{D}\)-modules) and _coherent_ (used informally to include quasicoherent and ind-coherent sheaves).
**N.B.:** We restrict our attention completely to characteristic zero representation theory in this article, and work in a derived (\(\infty\)-categorical) rather than abelian setting, so that all categories will be \(k\)-linear dg categories (or \(k\)-linear stable \(\infty\)-categories) for a field \(k\supset\mathbb{Q}\).
### Constructible sheaves in representation theory
Let us recall two classic instances of the role of constructible sheaves, Kazhdan-Lusztig theory and Springer theory, which serve as prototypes for the local Langlands correspondence over archimedean and non-archimedean fields, respectively. In these and other situations of interest in geometric representation theory, we are dealing with equivariance for group actions with finitely many orbits, and can work equivalently with either Betti or de Rham versions of equivariant derived categories, i.e., with either constructible sheaves or \(\mathcal{D}\)-modules.
Kazhdan-Lusztig theory identifies category \(\mathcal{O}_{0}\) of highest-weight modules (with trivial infinitesimal character) for a complex Lie algebra \(\mathfrak{g}\) as the category of \(N\)-equivariant perverse sheaves on the flag variety \(\mathcal{B}=G/B\). This description sets up a bijection between irreducibles in category \(\mathcal{O}_{0}\) and the set of \(N\)-orbits on the flag variety (which are simply connected hence carry a unique local system). Much more significantly it lets one describe composition series of indecomposable modules in terms of the equivariant geometry of the flag variety, as captured by its category of sheaves (the subject of the Kazhdan-Lusztig conjecture).
In Springer theory, we consider complex representations of the Chevalley group \(G(\mathbb{F}_{q})\), and in particular those representations that admit a \(B(\mathbb{F}_{q})\)-fixed vector, the unipotent principal series representations. Springer theory realizes these representations inside the category of \(G\)-equivariant perverse sheaves on the nilpotent cone \(\mathcal{N}\subset\mathfrak{g}\). This goes as follows: unipotent principal series representations are identified with modules for the finite Hecke algebra
\[\mathcal{H}^{f}=\mathbb{C}[B(\mathbb{F}_{q})\backslash G(\mathbb{F}_{q})/B( \mathbb{F}_{q})]=\operatorname{End}_{G(\mathbb{F}_{q})}(\mathbb{C}[G(\mathbb{ F}_{q})/B(\mathbb{F}_{q})]),\]
which (after choosing a square-root of \(q\)) may be identified with the group algebra of the Weyl group \(W\) of \(G\) (though that identification is in a sense incidental to our story). Let
\(\mu:\widetilde{\mathcal{N}}=T^{*}\mathcal{B}\to\mathcal{N}\) denote the Springer resolution, and consider the Springer sheaf
\[\mathbf{S}=\mu_{*}\mathbb{C}_{\widetilde{\mathcal{N}}/G}[\dim(G/B)]\in\mathrm{ Perv}(\mathcal{N}/G).\]
There is an isomorphism
\[\mathcal{H}^{f}\simeq\mathrm{End}_{\mathcal{N}/G}(\mathbf{S})\]
between the finite Hecke algebra and the endomorphisms of the Springer sheaf. The Springer sheaf is projective (in fact the whole category \(\mathrm{Perv}(\mathcal{N}/G)\) is semisimple), whence we obtain a full embedding
\[\{\text{unipotent principal series of }G(\mathbb{F}_{q})\}\simeq\mathcal{H}^{f}\text{-mod} \stackrel{{\sim}}{{\longrightarrow}}\langle\mathbf{S}\rangle \subset\mathrm{Perv}(\mathcal{N}/G)\]
of Hecke-modules into perverse sheaves as the subcategory generated by the Springer sheaf. As a result, irreducible unipotent principal series representations are classified by nilpotent orbits together with _certain_ equivariant local systems on them, or equivalently representations of the component group of the centralizer of a nilpotent. (The missing local systems are accounted for by Lusztig's generalized Springer correspondence.)
### Constructible sheaves in the local Langlands program
The local Langlands correspondence in Vogan's formulation [10] proposes an analogous classification of representations of reductive groups over local fields. Let us fix a local field \(K\) and a field \(k\) of characteristic zero. We also fix for simplicity a connected split reductive group \(G\) over \(K\) with Langlands dual group \(\check{G}\), an algebraic group over \(k\). The local Langlands correspondence provides a map
\[\{\text{Irreducible smooth reps. of }G(K)\}\longrightarrow\mathcal{L}_{\check{G}}(K):=\{ \mathcal{L}_{K}\longrightarrow\check{G}\text{ continuous}\}/\check{G}\]
from representations to Langlands parameters, i.e., (continuous) representations of the Weil(-Deligne) group of \(K\) into \(\check{G}\). The fibers of this map, the _L-packets_, are expected to be parametrized by _certain_ irreducible representations of the component group of the centralizer of the corresponding Langlands parameter - i.e., by certain irreducible equivariant local systems. To see the missing local systems, one replaces the single group \(G(K)\) by the collection of its pure inner forms (which arises naturally when thinking sheaf-theoretically about representations of \(G(K)\), see e.g. [1] and Section 4.1 below).
Thus, we obtain a conjectural bijection between equivariant local systems on the orbit of a Langlands parameter and a collection of irreducible representations of pure inner forms of \(G(K)\). This led to Vogan's conjectures [10], providing a complete description on the level of Grothendieck groups of the representation theory of \(G(K)\) and its pure inner forms with fixed central character in terms of constructible sheaves on suitable spaces of Langlands parameters, in the spirit of Kazhdan-Lusztig theory and Springer theory. The relevant spaces of Langlands parameters involve a fixed infinitesimal character (as described in [10]), which means in particular fixing a semisimple conjugacy class in \(\check{G}\) in the non-archimedean setting (the class of a semisimple Frobenius element) or a semisimple conjugacy class in \(\check{\mathfrak{g}}\) in the archimedean setting (coming from the derivative of the action of the Weil group).
In the archimedean setting this picture was established by Adams-Barbasch-Vogan [1] (cf. Mason-Brown's lecture [11]). Moreover Soergel [1] conjectured a categorical enhancement of these results, asserting that categories of Harish-Chandra modules (with fixed infinitesimal character) for pure inner forms
of a real reductive group are Koszul dual to equivariant derived categories of constructible sheaves on the ABV spaces of Langlands parameters. Soergel's conjecture was proved in [11] for complex groups and \(SL_{2}(\mathbb{R})\) and in [10] for quasisplit blocks.
In the nonarchimedean setting, Kazhdan and Lusztig [12] and Ginzburg [13] established an affine counterpart to Springer theory, identifying the representation theory of the affine Hecke algebra (or equivalently unramified principal series representations of \(G(K)\)) in terms of (certain) equivariant constructible sheaves on the spaces of unipotent Langlands parameters. Lusztig [14] (following [15, 16]) completed this picture to an "affine generalized Springer correspondence", establishing the local Langlands correspondence between all unipotent representations of pure inner forms of \(G(K)\) for \(G\) adjoint, simple and unramified and all equivariant constructible sheaves on unipotent Langlands parameters. This work has recently been extended by Solleveld [17, 18] first to all unipotent representations of connected \(G\) and then [17, 18] to prove Vogan's \(p\)-adic Kazhdan-Lusztig conjecture for a much broader class of representations. The main input in all these developments is an identification of Hecke algebras associated to blocks of representations with Ext-algebras of suitable perverse sheaves. When combined with formality theorems as in [19, 20, 21, 22] this results in derived equivalences between categories of representations and categories of constructible sheaves on Langlands parameters.
One disadvantage of this constructible picture is that it appears inadequate to describe variation of the continuous parameters - indeed the spaces of Langlands parameters appearing in Vogan's conjectures vary discontinuously as a function of infinitesimal characters.
### Coherent sheaves in representation theory
The origin of the role of coherent sheaves is the identification of all modules for a commutative algebra \(Z\) as quasicoherent sheaves on \(\operatorname{Spec}(Z)\). If \(Z\) arises as the center of an associative algebra \(A\) then we obtain a localization functor from \(A\)-modules to quasicoherent sheaves on \(\operatorname{Spec}(Z)\). More abstractly a category \(\mathcal{C}\) sheafifies over the spectrum of its Bernstein center (or Hochschild cohomology), with Hom spaces localizing to quasicoherent sheaves, providing a spectral decomposition of \(\mathcal{C}\). Thus coherent sheaves appear naturally in contexts where it is important to study families of representations, in particular with varying central character. Among the many motivations in the setting of the local Langlands correspondence we mention connections with harmonic analysis (see e.g. [14]), \(K\)-theory and the Baum-Connes conjecture [1], and modular and integral representation theory [1, 2, 15].
The Langlands correspondence encodes ambitious algebraic spectral decompositions of this flavor, with spaces of Langlands parameters playing the role of central parameters. On one hand, the theory of (unramified) Hecke operators identifies a large commutative algebra of symmetries on automorphic forms (in classical Langlands) or automorphic sheaves (in geometric Langlands) for a reductive group \(G\). On the other hand, the spectrum of this commutative algebra is related (via the Satake correspondence) with a space of Langlands parameters into the dual group \(\check{G}\). We then seek to spectrally decompose the automorphic side in terms of algebraic geometry of Langlands parameters. This leads to the geometric Langlands conjecture (in any of its variants) for curves over algebraically closed fields, in which
a category of automorphic sheaves is identified with a category of (ind-)coherent sheaves on a stack of local systems on the curve.
Recently a new "coherent" categorical form of the local Langlands correspondence has emerged, inspired by the geometric Langlands program. in which categories of representations of groups over local fields are described via coherent sheaves on stacks of Langlands parameters (for details see the lectures of Fargues-Scholze, Emerton-Hellmann-Gee and Zhu in this summer school). In the archimedean setting such a coherent formulation was proposed as the tamely geometric Langlands correspondence over the twistor line in [1, 2], and will be discussed in this article. In the non-archimedean setting such a picture began to emerge from several different directions (see in particular [1, 2, 3, 4, 5, 6, 7]), leading to conjectures of Zhu [2], Hellmann [10]) and Fargues [11]. This perspective was profoundly developed in the monumental work of Fargues and Scholze [12], which in particular establishes the "automorphic-to-Galois" direction, showing that the automorphic category (and thus its subcategory of representations of \(G(K)\)) sheafifies over the stack of Langlands parameters. Upcoming work [10] of Hemo and Zhu applies the theory of categorical traces (as in [2]) and Bezrukavnikov's tamely ramified local geometric Langlands correspondence [1, 2] to establish a coherent local Langlands correspondence for unipotent representations (the principal series part of which is proved in [1]).
Categories of coherent sheaves on stacks are much larger than those of constructible sheaves. For example, at a point with stabilizer group \(H\), quasicoherent sheaves are indexed by all algebraic representations of \(H\) while constructible sheaves correspond only to representations of the component group of \(H\) (or in the derived setting to modules for chains on \(H\)). Thus we must consider a much larger representation theoretic side to match the entire categories of coherent sheaves on Langlands parameters (as opposed to distinguished full subcategories). Indeed, rather than the finite collection of pure inner forms appearing in Vogan's conjectures, the conjectures of [2, 10, 11] identify coherent sheaves on Langlands parameters with representations of an infinite family of groups. These groups, which are all (groups of points of) inner forms of Levis of \(G\), arise as the automorphism groups of \(G\)-isocrystals (or of points in a suitable stack of \(G\)-bundles) and are indexed by Kottwitz' set \(B(G)\).
### Relating algebra and topology
Broadly speaking, we have described the following two flavors of the local Langlands correspondence:
[Table: the two flavors of the local Langlands correspondence described above, one matching representations with constructible sheaves on spaces of Langlands parameters, the other matching them with coherent sheaves on stacks of Langlands parameters.]
In this article we relate these two flavors: specifically, we propose that the latter appears as the generic point in a deformation of a full subcategory of the former. In particular we propose a form of the constructible categories which does vary well with the infinitesimal character, and an operation which (conjecturally) cuts down the representation theory from all \(G\)-isocrystals to pure inner forms.
For \(X\) a smooth complex variety, the algebraic de Rham theorem identifies the cohomology of the constant sheaf \(H^{*}(X,\mathbb{C}_{X})\simeq\operatorname{Ext}^{*}_{Shv(X)}(\mathbb{C}_{X},\mathbb{C}_{X})\), i.e., the derived endomorphisms of the local system, with de Rham cohomology \(H^{*}(X,\Omega^{\bullet},d)\simeq\operatorname{Ext}^{*}_{\mathcal{D}_{X}}(\mathcal{O}_{X},\mathcal{O}_{X})\), i.e., the derived endomorphisms of the \(\mathcal{D}\)-module \(\mathcal{O}_{X}\) equipped with its natural flat connection. The Riemann-Hilbert correspondence generalizes this to an equivalence between the abelian category of local systems on \(X\) and the abelian category of vector bundles with regular flat connections on \(X\), and may be extended to an equivalence of categories between the constructible derived category \(\mathcal{S}hv_{c}(X;\mathbb{C})\) of \(X\) and a full subcategory \(\mathcal{D}_{rh}(X)\) (consisting of regular holonomic objects) of the derived category \(\mathcal{D}(X)\) of \(\mathcal{D}_{X}\)-modules.
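For instance, in the simplest nontrivial case \(X=\mathbb{G}_{m}=\operatorname{Spec}k[x,x^{-1}]\) (a standard computation, recorded here only for orientation), the algebraic de Rham complex gives
\[H^{*}_{dR}(\mathbb{G}_{m})\;=\;H^{*}\Big(k[x,x^{-1}]\xrightarrow{\ d\ }k[x,x^{-1}]\,dx\Big)\;\simeq\;k\ \oplus\ k\cdot\tfrac{dx}{x}\,[-1],\]
since \(d(x^{n})=nx^{n-1}dx\) hits every form except the multiples of \(dx/x\) (whose primitive \(\log x\) is not algebraic); this matches \(H^{*}(\mathbb{C}^{\times};\mathbb{C})\simeq H^{*}(S^{1};\mathbb{C})\). The Riemann-Hilbert correspondence just recalled is the categorical refinement of identifications of this kind.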
This equivalence extends to the equivariant setting via descent, i.e., to smooth stacks \(X\). Moreover in the situations of greatest interest in geometric representation theory, where in particular we are dealing with equivariant sheaves on schemes acted on with finitely many orbits, we can drop the modifiers "regular holonomic" and identify the equivariant constructible derived category with the derived category \(\mathcal{D}_{c}(X)\) of all coherent strongly equivariant \(\mathcal{D}\)-modules. Thus for example in the settings of Kazhdan-Lusztig or Springer theory as recounted above we can simply replace the use of constructible sheaves by \(\mathcal{D}\)-modules.
\(\mathcal{D}\)-modules themselves can be described as deformation quantizations of quasicoherent sheaves on the cotangent bundle \(\mathbb{T}^{*}_{X}\). The sheaf \(\mathcal{D}_{X}\) of differential operators is filtered by the order of operators, with associated graded \(\operatorname{gr}(\mathcal{D}_{X})\simeq\mathcal{O}_{\mathbb{T}^{*}_{X}}\) identified with symbols, i.e., functions on the cotangent bundle of \(X\). As a result the sheaf \(\mathcal{D}_{X}\) can be described as a deformation quantization of \(\mathcal{O}_{\mathbb{T}^{*}_{X}}\) with its standard symplectic form: the Rees construction on \(\mathcal{D}_{X}\) produces a \(\mathbb{G}_{m}\)-equivariant family of sheaves of algebras \(\mathcal{D}_{\hbar,X}\) for \(\hbar\in\mathbb{A}^{1}/\mathbb{G}_{m}\), recovering \(\mathcal{D}_{X}\) at \(\hbar\neq 0\) and \(\mathcal{O}_{\mathbb{T}^{*}_{X}}\) with Poisson bracket from the \(\hbar\)-linear term in the commutator at \(\hbar=0\). Passing to modules we see the category of \(\mathcal{D}_{X}\)-modules as the fiber at \(\hbar\neq 0\) of a \(\mathbb{G}_{m}\)-equivariant family of categories over \(\mathbb{A}^{1}\), degenerating to \(\operatorname{QC}(\mathbb{T}^{*}_{X})^{\mathbb{G}_{m}}\) at \(\hbar=0\).
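To see the Rees construction in the smallest case, take \(X=\mathbb{A}^{1}\) (a standard computation, spelled out here only for illustration): \(\mathcal{D}_{\mathbb{A}^{1}}=k\langle x,\partial_{x}\rangle\) with \([\partial_{x},x]=1\), filtered by order, and writing \(\xi=\hbar\partial_{x}\) for the degree-one Rees generator gives
\[\mathcal{D}_{\hbar,\mathbb{A}^{1}}\;=\;k\langle x,\xi,\hbar\rangle\big/\big([\xi,x]=\hbar,\ \hbar\text{ central}\big),\]
so that specializing at any \(\hbar\neq 0\) recovers \(\mathcal{D}_{\mathbb{A}^{1}}\), while at \(\hbar=0\) one gets the commutative ring \(k[x,\xi]=\mathcal{O}(\mathbb{T}^{*}_{\mathbb{A}^{1}})\), with the \(\hbar\)-linear term of the commutator recovering the standard Poisson bracket \(\{\xi,x\}=1\) on the cotangent bundle.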
The theory of cyclic homology due to Connes and Feigin-Tsygan, as seen through the eyes of derived algebraic geometry, provides a topological interpretation of the de Rham complex, whence of the category of \(\mathcal{D}\)-modules. Namely, consider the simplicial (or "animated") circle \(S^{1}=B\mathbb{Z}\) and the _derived loop space_
\[\mathcal{L}X=\operatorname{Map}(S^{1},X)=X\times_{X\times X}X.\]
On a scheme, the Hochschild-Kostant-Rosenberg theorem identifies \(\mathcal{L}X\) with the spectrum of the algebra of differential forms (in negative cohomological degrees), while the de Rham differential has the pleasant interpretation as the linearization of the loop-rotation \(S^{1}\)-action on \(\mathcal{L}X\), and is encoded by the Connes \(B\)-operator of degree \(-1\). The Rees parameter \(\hbar\in\mathbb{A}^{1}\) gets interpreted (up to a cohomological shift) as the \(S^{1}\)-equivariant parameter \(k[u]\simeq H^{*}_{S^{1}}(\operatorname{pt})\simeq H^{*}(\mathbb{CP}^{\infty})\). Thus (up to cohomological shift and renormalization) the \(\hbar\)-deformation from \(\operatorname{QC}(\mathbb{T}^{*}_{X})\) to \(\mathcal{D}\)-modules on \(X\) is identified with the \(k[u]\)-linear category of _cyclic sheaves_\(\operatorname{QC}^{!}(\mathcal{L}\mathcal{X})^{S^{1}}\) on \(\mathcal{L}X\):
\(S^{1}\)-equivariant (ind-)coherent sheaves on the loop space, i.e. a categorification of the theory of cyclic homology. Passing to _periodic cyclic sheaves_ - inverting the equivariant parameter \(u\) - we recover the base-change \({\mathcal{D}}_{X}\mbox{-}{\rm mod}\otimes_{k}k[u,u^{-1}]\) of the category of \({\mathcal{D}}_{X}\)-modules.
This cyclic perspective on \({\mathcal{D}}\)-modules may be considered a formal manipulation in the setting of smooth schemes, but takes on a completely different flavor in the setting of smooth stacks \(X\). The derived loop space \({\mathcal{L}}X\) is identified with the (derived) _inertia stack_ of \(X\), i.e., the stack of pairs of points and automorphisms
\[{\mathcal{L}}X\simeq\{x\in X,g\in{\rm Aut}(x)\},\]
which in the special case of finite orbit stacks is in fact an _ordinary_ (underived) stack. An extension of the Koszul duality pattern described above relates the category of \({\mathcal{D}}\)-modules on a smooth stack \(X\) to the circle action on the _formal completion_ of the loop space (where we replace \({\rm Aut}(x)\) by its formal group). But the full category of coherent sheaves on \({\mathcal{L}}X\) (with its circle action) gives a new and richer category, which enhances \({\mathcal{D}}\)-modules on \(X\) by adding new continuous parameters (the "central characters", appearing as eigenvalues of the loop \(g\)).
The message of this lecture is that such loop spaces and their variants arise naturally as stacks of Langlands parameters, and their categories of coherent sheaves provide natural spectral sides for categorical forms of the local Langlands correspondence. Crucially, this description endows these categories with circle actions. In the archimedean setting, the circle action appears from the geometric action of \(S^{1}\) on the twistor line. In the unipotent non-archimedean setting, the circle action arises from interpolating between the trace of Frobenius and the trace of the identity (loop space) by varying \(q\) (informally, it rotates the "circle" \({\rm Spec}({\mathbb{F}}_{q})\) with fundamental group generated by Frobenius). In general we only speculate about a general construction of circle actions reflecting a subtle aspect of the theory of traces of Frobenius.
The utility of the circle actions is that they provide the desired mechanism relating coherent sheaves to \({\mathcal{D}}\)-modules on Langlands parameter spaces with fixed central characters, i.e., interpolating between coherent and constructible Langlands correspondences. Namely the category of cyclic (i.e., \(S^{1}\)-equivariant) sheaves forms a family over the equivariant parameter \(u\). Specializing to \(u=0\) we obtain a full subcategory of (\(S^{1}\)-invariant) coherent sheaves, the setting of the "geometric-Langlands-type" local Langlands correspondence. On the other hand, for nonzero \(u\) we obtain the categories of \({\mathcal{D}}\)-modules that parametrize representations of pure inner forms via Vogan's local Langlands correspondence.
### Jordan decomposition
The key mechanism relating cyclic sheaves with representation theory is the "Jordan decomposition for loops", a stacky generalization of the Jordan decomposition for conjugacy classes in a reductive group which allows us to interpolate different central characters. Let us illustrate the most basic example. Consider the stack \(\check{G}/\check{G}\) (which we recall is the inertia or derived loops \(\check{G}/\check{G}={\mathcal{L}}(B\check{G})\) in the stack \(B\check{G}={\rm pt}\,/\check{G}\)). Taking invariant polynomials (i.e., passing to the affinization) defines a map
\[\chi:\check{G}/\check{G}\longrightarrow\check{G}/\!\!/\check{G}={\rm Spec}({\mathcal{O}}(\check{G})^{\check{G}})\simeq\check{H}/\!\!/W\]
to the space of continuous parameters.
If we interpret \(\check{G}/\!\!/\check{G}\) as semisimple conjugacy classes in \(\check{G}\), as in the constructible form of the local Langlands correspondence, we encounter an apparent discontinuity: picking a semisimple representative \(\alpha\in(\check{G})^{ss}\) of a class \([\alpha]\in\check{G}/\!\!/\check{G}\), the corresponding centralizers \(\check{G}(\alpha)\) depend discontinuously on the eigenvalues. Likewise the classifying stacks \(B\check{G}(\alpha)\) don't form a nice family over \(\check{H}/\!\!/W\).
However the Jordan decomposition in \(\check{G}\) corrects this by adding commuting unipotents: the fiber \(\chi^{-1}(\widehat{[\alpha]})\) over the formal neighborhood of \([\alpha]\) is identified with \(\check{G}(\alpha)^{u}/\check{G}(\alpha)\), the stack of unipotent conjugacy classes in the centralizer \(\check{G}(\alpha)\). This is a special case of a general construction, the _unipotent loop space_, a variant of the inertia defined for any stack in which we only allow _unipotent_ automorphisms: \(\mathcal{L}^{u}(B\check{G}(\alpha))=\check{G}(\alpha)^{u}/\check{G}(\alpha)\). In other words, if we augment \(B\check{G}(\alpha)\) to allow nontrivial unipotent classes in \(\check{G}(\alpha)\) (replacing it by its unipotent loops) then we obtain a good family, with total space the loops in \(B\check{G}\).
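For example, in the minimal case \(\check{G}=SL_{2}\) (spelled out here only for orientation), \(\chi\) is the trace map
\[\chi:\ SL_{2}/SL_{2}\longrightarrow SL_{2}/\!\!/SL_{2}\simeq\mathbb{A}^{1},\qquad [g]\longmapsto\operatorname{tr}(g).\]
Over \(t\neq\pm 2\) the fiber consists of a single regular semisimple class, so \(\chi^{-1}(t)\simeq B\check{G}(\alpha)\) with \(\check{G}(\alpha)\) a one-dimensional torus, and these classifying stacks jump at \(t=\pm 2\). Over the formal neighborhood of \(t=2\), where \(\alpha=1\) and \(\check{G}(\alpha)=SL_{2}\), the fiber is instead the stack of unipotent classes \(\chi^{-1}(\widehat{\{2\}})\simeq SL_{2}^{u}/SL_{2}\), with two orbits (the identity and the regular unipotent class), so the commuting unipotents exactly fill in the discontinuity of the centralizers.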
A crucial feature of unipotent automorphisms (or abstractly of unipotent loops) is that they carry a \(\mathbb{G}_{m}\) action contracting them to the identity. This action allows us to relate coherent sheaves on the full fiber \(\chi^{-1}(\widehat{[\alpha]})=\check{G}(\alpha)^{u}/\check{G}(\alpha)\) with those on its formal completion. Finally, Koszul duality relates cyclic sheaves on this formal completion with \(\mathcal{D}\)-modules on the stack \(B\check{G}(\alpha)\) itself. As a result the family of categories of \(\mathcal{D}\)-modules on \(B\check{G}(\alpha)\) obtains a uniform expression in terms of the circle action on the category of coherent sheaves on \(\check{G}/\check{G}\).
We will see similar phenomena occurring in both the archimedean and non-archimedean settings: by dropping the condition that the action of Frobenius or the archimedean Weil group be semisimple, we obtain versions of Langlands parameter spaces which smoothly interpolate different central characters, but still capture the same categories of constructible sheaves thanks to the application of contracting \(\mathbb{G}_{m}\) actions combined with \(S^{1}\)-equivariant localization. Moreover the categories of coherent sheaves on these larger Langlands parameter spaces turn out to be the ones needed for the geometric-Langlands-flavored formulations of the local Langlands correspondence.
### Outline
The rest of this paper is organized as follows: Section 2 is a brief and gentle introduction to derived loop spaces. In Sections 3 and 4 we describe the non-archimedean and archimedean local Langlands correspondences, respectively, from the perspective of loop spaces and circle actions, deferring to Section A an overview of the underlying technical mechanisms from derived algebraic geometry. In more detail:
In Section 2 we review the notion of derived loop spaces and circle actions and give an overview of the general pattern from cyclic homology and derived algebraic geometry that we will apply, postponing all technical details to the Appendix.
In Section 3 we describe the role of loop spaces in the unipotent non-archimedean local Langlands correspondence. We first describe some recent results in the categorical local Langlands program, realizing categories of representations as full subcategories of coherent sheaves. We then describe results and conjectures concerning the role of circle actions in the unipotent local Langlands correspondence as interpolating coherent and constructible forms of the correspondence. We then briefly speculate on the appearance of circle actions in the general non-archimedean local Langlands correspondence.
In Section 4 we describe the role of loop spaces in the archimedean local Langlands correspondence. In particular we give a simple stacky description of the ABV geometric parameter spaces [1] and a corresponding statement of Soergel's conjecture, following [10]. We then introduce the Langlands parameter stack associated to the twistor line \(\widetilde{\mathbb{P}}^{1}_{\mathbb{R}}\), which smoothly interpolates between the ABV parameter spaces for varying infinitesimal character, and extend Soergel's conjecture to this setting in terms of cyclic sheaves. On the automorphic side, we recover representations of pure inner forms from the stack of real parabolic bundles on the twistor line. We formulate the tamely ramified geometric Langlands conjecture for \(\widetilde{\mathbb{P}}^{1}_{\mathbb{R}}\) (following [1, 16]) and explain how it recovers Soergel's conjecture as its periodic cyclic deformation.
In the Appendix, Section A, we describe the key technical mechanisms underlying the paper. We relate categories of \(S^{1}\)-equivariant coherent sheaves on loop spaces (among which we find stacks of Langlands parameters) to categories of \(\mathcal{D}\)-modules (in particular on Vogan varieties) in three steps. The first is the Jordan decomposition of loops, a general pattern relating loop spaces of quotient stacks to unipotent loop spaces. Next, the contracting \(\mathbb{G}_{m}\)-action on unipotent loop spaces relates their sheaf theory to that of formal loops. Finally, we apply the Koszul duality between \(S^{1}\)-equivariant coherent sheaves on formal loop spaces and \(\mathcal{D}\)-modules.
## 2. Equivariant Sheaves on Loop Spaces
In this section, we summarize key statements about equivariant sheaves on loop spaces in derived algebraic geometry, the setting for the representation theoretic constructions in the following sections. We defer further details of the general theory to Section A.
We will work in a broadly applicable setup: a fixed field \(k\) of characteristic zero and a smooth Artin stack \(X\) over \(k\) with affine diagonal. In all cases of present interest, we may assume \(X=Y/G\), where \(Y\) is a smooth quasi-projective scheme with the action of an affine algebraic group \(G\).
### Variations on the theme of loops
We view the topological group \(S^{1}=B\mathbb{Z}=K(\mathbb{Z},1)\) as a locally constant group object in prestacks. We may view it as the suspension of \(S^{0}=\operatorname{Spec}k\coprod\operatorname{Spec}k\):
\[S^{1}\simeq\Sigma S^{0}=\operatorname{Spec}k\coprod_{\operatorname{Spec}k \coprod\operatorname{Spec}k}\operatorname{Spec}k.\]
This presentation of \(S^{1}\) as a colimit leads to a presentation of the mapping stack out of \(S^{1}\), i.e., the (derived) _loop space_, as a limit
\[\mathcal{L}X:=\operatorname{Map}(S^{1},X)=\operatorname{Map}(\operatorname{ Spec}k\coprod_{\operatorname{Spec}k\coprod\operatorname{Spec}k}\operatorname{Spec}k,X)=X \times_{X\times X}X.\]
Here the (derived) fiber product is the self-intersection of the diagonal of \(X\). Equivalently, the loop space is the (derived) inertia stack with \(k\)-points \(\mathcal{L}X(k)\) given by pairs \((x,\gamma)\) where \(x\in X(k)\) and \(\gamma\in\operatorname{Aut}(x)=(\{x\}\times_{X}\{x\})(k)\) is an automorphism of the point \(x\), modulo \(\operatorname{Aut}(x)\)-conjugacy.
**Example 2.1**.: We consider the following examples.
(1) For \(X\) a scheme, letting \(\mathcal{B}_{X}\) denote the sheafified cyclic bar complex on \(X\) viewed as a sheaf of dg algebras under the shuffle product, we have \(\mathcal{L}X=\operatorname{Spec}_{X}\mathcal{B}_{X}\) (a minimal worked computation in the simplest case appears after this example).
(2) For \(X=BG\) a classifying stack, the diagonal is smooth, and in particular flat. Thus, \(\mathcal{L}X\) is an underived stack, i.e. it is the classical inertia stack \(G/G\).
(3) For \(X=Y/G\) a global quotient stack, the loop space has the following presentation as a derived fiber product of stacks along representable maps:
\[\mathcal{L}(Y/G)\;\simeq\;(Y/G)\times_{(Y\times Y)/G}\big((Y\times G)/G\big),\]
where \(G\) acts diagonally on \(Y\times Y\) and \(Y\times G\), the map \((Y\times G)/G\to(Y\times Y)/G\) is induced by \((p,a)\) with \(a\) the action map and \(p\) the projection map, and the map \(Y/G\to(Y\times Y)/G\) is induced by the diagonal \(\Delta\). This stack has \(k\)-points
\[\mathcal{L}(Y/G)(k)=\{(y,g)\in Y(k)\times G(k)\mid g\cdot y=y\}/G(k).\]
(4) For a smooth finite orbit stack \(X\) (i.e. a stack with finitely many \(k\)-points), the derived loop space \(\mathcal{L}X\) is in fact an _ordinary_ (underived) stack, and is identified with the classical inertia stack of \(X\). This absence of derived structure does not depend on flatness of any of the morphisms in the fiber product presentation of Example 2.1(3), but rather on a dimension count, i.e. if \(X\) is a finite orbit stack then \(\mathcal{L}X\) is a union of irreducible components of (stacky) dimension \(0\), which is equal to the expected dimension.1
Footnote 1: Though this is well known, let us outline the argument for the uninitiated since it is fundamental. First, any fiber product \(X\times_{Y}Z\) may be written as the intersection of two copies of \(X\times Z\) (embedded via the two maps to \(Y\)) in \(X\times Y\times Z\), so we may assume our derived fiber product is a derived intersection \(X\cap_{Y}Z\). Locally, \(X,Z\) are cut out by regular sequences since everything is smooth, and since \(\operatorname{codim}(X\cap_{Y}Z)=\operatorname{codim}(X)+\operatorname{codim}(Z)\), the union of the two sequences is regular as well, thus the higher Tor's vanish.
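As a sanity check on these presentations, here is our own worked computation in the simplest possible case, \(X=\mathbb{A}^{1}=\operatorname{Spec}k[x]\). Resolving the diagonal module by the Koszul complex on the regular element \(x_{1}-x_{2}\in k[x_{1},x_{2}]\) gives
\[\mathcal{O}(\mathcal{L}\mathbb{A}^{1})\;\simeq\;k[x_{1}]\otimes^{\mathbf{L}}_{k[x_{1},x_{2}]}k[x_{2}]\;\simeq\;\Big(k[x]\xrightarrow{\ 0\ }k[x]\Big)\;\simeq\;k[x][\epsilon],\qquad\deg\epsilon=-1,\ \epsilon^{2}=0,\]
the two-term complex sitting in cohomological degrees \(-1,0\). Thus \(\mathcal{L}\mathbb{A}^{1}\) is the affine line thickened by a single odd function \(\epsilon=dx\): functions are differential forms placed in nonpositive degrees, anticipating the identification \(\mathcal{L}X\simeq\mathbb{T}_{X}[-1]\) of Section 2.1.2, and the loop rotation acts to first order by the de Rham differential \(x\mapsto dx\).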
#### 2.1.1. Structures on loops
The loop space \(\mathcal{L}X=\operatorname{Map}(S^{1},X)\) inherits many familiar algebraic structures from topology, of which we will focus on the following:
1. Restriction to a point \(pt\in S^{1}\) gives _projection to the base-point_\(\mathcal{L}X\to X\).
2. Pullback along \(S^{1}\to pt\) gives _constant loops_\(X\subset\mathcal{L}X\).
3. The \(S^{1}\)-action on itself by rotation gives an \(S^{1}\)-action on \(\mathcal{L}X\) by _loop rotation_ (see Section A.1).
It will not play a role in our development, but it is also worth mentioning that the collapse map \(S^{1}\to S^{1}\vee S^{1}\), inverse map \(S^{1}\to S^{1}\), and their compatibilities give a _relative group structure on \(\mathcal{L}X\to X\)_.
One of our main goals is to explicate the loop rotation action concretely. Among a coherent collection of structures, the first order part of such an action is an automorphism of the identity self-map of \(\mathcal{L}X\), i.e., a canonical automorphism of each point of \(\mathcal{L}X\). In a discrete setting, this structure is concrete and familiar:
**Example 2.2**.: For \(X=BG\) with \(G\) finite, we can identify the adjoint quotient \(\mathcal{L}X=G/G\) with the disjoint union \(\coprod_{[\alpha]}BG(\alpha)\) of classifying stacks of centralizers \(G(\alpha)\subset G\) of group elements \(\alpha\) ranging over conjugacy classes \([\alpha]\). For each component \(BG(\alpha)\), its space of automorphisms is the homotopy quotient \(\operatorname{Aut}(G(\alpha))/G\), where \(G\) acts by composition with conjugation, and the identity automorphism has stabilizer \(Z(G(\alpha))\). The \(S^{1}\)-action on the component \(BG(\alpha)\) is then given by the central element \(\alpha\in Z(G(\alpha))\), considered as an automorphism of the identity map of \(BG(\alpha)\), i.e. a canonical automorphism of any \(G(\alpha)\)-bundle.
#### 2.1.2. Odd tangent bundles
As is typical, we may attempt to understand complicated non-linear objects via their linearizations. The relevant linearization of the derived loop space is the _odd tangent bundle_
\[\mathbb{T}_{X}[-1]=\operatorname{Spec}_{X}\operatorname{Sym}_{X}^{\bullet}\big(\Omega^{1}_{X}[1]\big)\]
where \(\Omega^{1}_{X}\) is the _cotangent complex_ of \(X\), i.e., the derived Kahler differentials. The odd tangent bundle may be identified with the normal bundle of the constant loops \(X\subset\mathcal{L}X\). It is a derived vector bundle over \(X\), and thus equipped with a natural contracting \(\mathbb{G}_{m}\)-action.
For the linearization of \(S^{1}=B\mathbb{Z}\) itself, we may take its affinization \(B\mathbb{G}_{a}=\operatorname{Spec}\mathcal{O}(S^{1})\), the functor on connective \(k\)-algebras represented by functions \(\mathcal{O}(S^{1})\simeq C^{\bullet}(S^{1};k)\) on \(S^{1}\), i.e., \(k\)-valued cochains on \(S^{1}\).2 The group \(\mathbb{G}_{a}\) has a canonical contracting action of \(\mathbb{G}_{m}\) via group homomorphisms inducing a contracting \(\mathbb{G}_{m}\)-action on \(B\mathbb{G}_{a}\). In more down-to-earth terms, \(\mathcal{O}(S^{1})\simeq C^{\bullet}(S^{1};k)\) is formal as a dg algebra, thus has a canonical internal weight grading agreeing with the cohomological grading.
Footnote 2: Note that since \(\mathcal{O}(S^{1})\) is coconnective (and not connective), its “spectrum” is not an affine derived scheme but an _affine stack_ via Toën (or _coaffine stack_ via Lurie).
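Concretely, since \(C^{\bullet}(S^{1};k)\) is formal, this cochain algebra is as small as possible:
\[\mathcal{O}(S^{1})\;\simeq\;C^{\bullet}(S^{1};k)\;\simeq\;k[\eta]/(\eta^{2}),\qquad\deg\eta=1,\]
and assigning \(\eta\) internal weight \(1\) is one way to record the contracting \(\mathbb{G}_{m}\)-action on \(B\mathbb{G}_{a}=\operatorname{Spec}\mathcal{O}(S^{1})\) just described.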
We will also be interested in the _formal odd tangent bundle_\(\widehat{\mathbb{T}}_{X}[-1]\), the formal completion of \(\mathbb{T}_{X}[-1]\) along its zero section. The \(S^{1}\)-action on \(\mathcal{L}X\) induces \(S^{1}\)-actions on \(\mathbb{T}_{X}[-1]\) and \(\widehat{\mathbb{T}}_{X}[-1]\), and the latter factors through the affinization map \(S^{1}\to B\mathbb{G}_{a}\)[1, Prop. 3.2.3]. We may organize the actions on \(\widehat{\mathbb{T}}_{X}[-1]\) by a single action of the semidirect product \(B\mathbb{G}_{a}\rtimes\mathbb{G}_{m}\).
**Example 2.3**.: We continue with the examples from Example 2.1.
(1) For \(X\) a scheme, the Hochschild-Kostant-Rosenberg (or exponential) map defines an \(S^{1}\)-equivariant (factoring through \(B\mathbb{G}_{a}\)-equivariant) equivalence (see Proposition 4.4 of [1]):
\[\exp:\ \mathbb{T}_{X}[-1]\xrightarrow{\simeq}\mathcal{L}X.\]
In particular, when \(X\) is a scheme, \(\Omega^{1}_{X}[1]\) has negative Tor-amplitude, so \(X\subset\mathbb{T}_{X}[-1]\) is a nilthickening. Thus we have \(\mathcal{L}X\simeq\mathbb{T}_{X}[-1]\simeq\widehat{\mathbb{T}}_{X}[-1]\). So for \(X\) a scheme, the derived loop space \(\mathcal{L}X\) is no more complicated than its linearization.
(2) For \(X=BG\) a classifying stack, the odd tangent bundle \(\mathbb{T}_{X}[-1]=\mathfrak{g}/G\) is the adjoint-equivariant Lie algebra, and the formal odd tangent bundle \(\widehat{\mathbb{T}}_{X}[-1]=\widehat{\mathfrak{g}}/G\) is the adjoint-equivariant formal Lie algebra.
#### 2.1.3. Formal and unipotent loops
We saw above that when \(X\) is a scheme, the exponential map identified the loop space with the odd tangent bundle. When \(X\) is a stack, the exponential map is not even well-defined: for example, there is no map of algebraic stacks \(\exp:\mathfrak{g}/G\to G/G\).
To write down an exponential map we need to restrict to the _formal loop space_\(\widehat{\mathcal{L}}X\), the completion of \(\mathcal{L}X\) along constant loops \(X\subset\mathcal{L}X\). By Theorem 6.9 of [1], there is an \(S^{1}\)-equivariant equivalence between the _formal_ odd tangent bundle and _formal_ loop space:
\[\exp:\ \widehat{\mathbb{T}}_{X}[-1]\xrightarrow{\simeq}\widehat{\mathcal{L}}X.\]
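For instance (a minimal illustration of why the completion is needed), take \(X=B\mathbb{G}_{m}\), where the conjugation action is trivial: then \(\mathbb{T}_{X}[-1]=\mathbb{A}^{1}\times B\mathbb{G}_{m}\) while \(\mathcal{L}X=\mathbb{G}_{m}\times B\mathbb{G}_{m}\), and there is no algebraic exponential \(\mathbb{A}^{1}\to\mathbb{G}_{m}\). After completing at \(0\), respectively at \(1\), the usual exponential series does make sense in characteristic zero and gives
\[\exp:\ \widehat{\mathbb{T}}_{X}[-1]=\widehat{\mathbb{A}}^{1}_{0}\times B\mathbb{G}_{m}\ \xrightarrow{\ \simeq\ }\ \widehat{\mathcal{L}}X=(\widehat{\mathbb{G}_{m}})_{1}\times B\mathbb{G}_{m},\qquad t\longmapsto e^{t}.\]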
It is convenient to introduce another, intermediate, variant of the loop space \(\mathcal{L}X\), the _unipotent loop space_:
\[\mathcal{L}^{u}X:=\operatorname{Map}(B\mathbb{G}_{a},X)\]
Note that the \(B\mathbb{G}_{a}\)-action on itself equips \(\mathcal{L}^{u}X\) with a canonical \(B\mathbb{G}_{a}\)-action. The affinization map \(S^{1}\to B\mathbb{G}_{a}\) induces an \(S^{1}\)-equivariant map \(\mathcal{L}^{u}X\to\mathcal{L}X\); in our setting where \(X\) is an Artin 1-stack with affine diagonal, this map is a monomorphism (see Proposition 2.1.24 in [1]).
Altogether, we have a sequence of \(S^{1}\)-equivariant monomorphisms:
\[\widehat{\mathcal{L}}X\hookrightarrow\mathcal{L}^{u}X\hookrightarrow\mathcal{ L}X.\]
**Example 2.4**.: We continue with the examples from Example 2.1.
(1) If \(X\) is a scheme, then the inclusions above are all identities, i.e.
\[\widehat{\mathcal{L}}X=\mathcal{L}^{u}X=\mathcal{L}X\]
That is, the closed inclusion \(X\hookrightarrow\mathcal{L}X\) is a derived thickening.
(2) When \(X=BG\), the formal loop space is \(\widehat{\mathcal{L}}(BG)=\widehat{G}_{e}/G\), i.e. the adjoint-quotient of the formal group, and the unipotent loop space is \(\mathcal{L}^{u}(BG)=\widehat{G}_{\mathcal{U}}/G\), i.e. the adjoint-quotient of the formal completion along the unipotent cone.
(3) The map \(Y/G\to BG\) gives rise to a map \(\mathcal{L}(Y/G)\to G/G\). By Propositions 2.1.20 and 2.1.25 of [1], the formal loop space is the completion along the inverse image of \(\{e\}/G\), and the unipotent loop space is the completion along the inverse image of \(\mathcal{U}/G\).
We summarize the different loop spaces in the following picture:
(2.1)
\[\begin{array}{ll}
X\ \text{a stack with multiplicative automorphisms:} & \widehat{\mathcal{L}}X=\mathcal{L}^{u}X\subsetneq\mathcal{L}X\\[2pt]
X\ \text{a stack with additive automorphisms:} & \widehat{\mathcal{L}}X\subsetneq\mathcal{L}^{u}X=\mathcal{L}X\\[2pt]
X\ \text{a general stack:} & \widehat{\mathcal{L}}X\hookrightarrow\mathcal{L}^{u}X\hookrightarrow\mathcal{L}X
\end{array}\]
### Koszul dual description of equivariant sheaves
The main object we would like to study is the category of \(S^{1}\)-equivariant coherent sheaves on the loop space \(\mathcal{L}X\). We defer all technical discussion until Section A and aim here simply to state the main results to be invoked. In particular, we refer to Section A.1 for the precise definition of the category of (compactly renormalized) \(S^{1}\)-equivariant coherent sheaves \(\operatorname{QC}^{!}(\mathcal{L}X)^{\omega S^{1}}\) we will work with.
#### 2.2.1. Case of schemes
Recall from Section 2.1.2 that when \(X\) is a scheme, the Hochschild-Kostant-Rosenberg (or exponential) map defines an equivalence
\[\exp:\ \mathbb{T}_{X}[-1]\xrightarrow{\simeq}\mathcal{L}X.\]
This equivalence is compatible with the factorization of the \(S^{1}\)-action through the \(B\mathbb{G}_{a}\)-action. Thus we have an equivalence
\[\mathrm{QC}^{!}(\mathcal{L}X)^{\omega S^{1}}\xrightarrow{\simeq}\mathrm{QC}^{! }(\mathbb{T}_{X}[-1])^{\omega B\mathbb{G}_{a}}\]
Recall as well that the \(B\mathbb{G}_{a}\)-action on the right-hand side lifts to a \(B\mathbb{G}_{a}\rtimes\mathbb{G}_{m}\)-action, so we may likewise consider the graded enhancement \(\mathrm{QC}^{!}(\mathbb{T}_{X}[-1])^{\omega B\mathbb{G}_{a}\rtimes\mathbb{G}_{m}}\). Kapranov [10] proved the following version of Koszul duality, which was interpreted in terms of loop spaces in Corollary 5.2 of [11].
**Theorem 2.5** (Koszul duality for loop spaces of schemes).: _Let \(X\) be a smooth scheme, \(\mathcal{D}(X)\) the category of \(\,\mathcal{D}\)-modules on \(X\), and \(F\mathcal{D}(X)\) the category of filtered \(\,\mathcal{D}\)-modules on \(X\), i.e., modules for the Rees algebra attached to the order filtration on differential operators._
_Then, we have compatible equivalences:_
#### 2.2.2. Case of stacks
When \(X\) is a stack, Koszul duality alone is only sufficient to describe \(S^{1}\)-equivariant coherent sheaves on the formal loop space \(\widehat{\mathcal{L}}X\) or the unipotent loop space \(\mathcal{L}^{u}X\). (Recall when \(X\) is a scheme, the inclusions are equivalences \(\widehat{\mathcal{L}}X\simeq\mathcal{L}^{u}X\simeq\mathcal{L}X\).) The following Koszul duality is a theorem of [12]; see Section A.2.1 for the "renormalized" categories involved. In particular, the category
\[\widehat{\mathrm{QC}^{!}}(\widehat{\mathcal{L}}X)=\mathrm{Ind}(\widehat{ \mathrm{Coh}}(\widehat{\mathcal{L}}X))\]
is a suitably renormalized version of ind-coherent sheaves.
**Theorem 2.6** (Koszul duality for loop spaces of stacks).: _Let \(X\) be a smooth geometric stack, \(\widetilde{\mathcal{D}}(X)\) the suitably renormalized category of \(\,\mathcal{D}\)-modules on \(X\), and \(F\mathcal{D}(X)\) the suitably renormalized category of filtered \(\mathcal{D}\)-modules on \(X\)._
_We have compatible equivalences:_
_In particular, we have a Koszul duality (in the sense of Section A.1.3):_
\[\widehat{\mathrm{QC}^{!}}(\widehat{\mathcal{L}}X)^{\mathrm{Tate}}\underset{Kos} {\sim}\widetilde{\mathcal{D}}(X)\otimes\mathrm{QC}(\mathbb{G}_{m}).\]
Going further, we do not have to pass to formal loops \(\widehat{\mathcal{L}}X\), but can work more directly with unipotent loops \(\mathcal{L}^{u}X\). This is justified by the following theorem to appear in [1]; see Example A.14 for a worked example.
**Theorem 2.7**.: _Let \(X\) be a geometric stack. The pullback functor induces an equivalence_
\[\widehat{\operatorname{QC}^{!}}(\mathcal{L}^{u}X)^{\operatorname{Tate}\rtimes \mathbb{G}_{m}}\simeq\widehat{\operatorname{QC}^{!}}(\widehat{\mathcal{L}}X)^{ \operatorname{Tate}\rtimes\mathbb{G}_{m}}\]
_and hence we have an equivalence_
\[\widehat{\operatorname{QC}^{!}}(\mathcal{L}^{u}X)^{\operatorname{Tate}\rtimes\mathbb{G}_{m}}\simeq\widetilde{\mathcal{D}}(X).\]
### Jordan decomposition and equivariant localization
With the preceding theory for \(S^{1}\)-equivariant coherent sheaves on the unipotent loop space \(\mathcal{L}^{u}X\) in hand, we would like to go further and describe \(S^{1}\)-equivariant coherent sheaves on the entire loop space \(\mathcal{L}X\). Our approach will involve factoring loops into semisimple and unipotent parts and then applying the prior theory to the unipotent part as twisted by the semisimple part.
#### 2.3.1. Jordan decomposition of loops
In this section, we specialize to the case of global quotient stacks \(X/G\), where \(G\) is a reductive group (or more generally, an affine algebraic group with reductive neutral component). See Section 3 of [1] for an extended discussion.
Consider the "characteristic polynomial" map
\[\mu:\mathcal{L}(BG)=G/G\longrightarrow G//G\]
from the adjoint-quotient stack to the affine quotient scheme, i.e., to the variety parametrizing semisimple conjugacy classes. Note the pre-image of the class of the identity is precisely the unipotent cone \(\mathcal{U}_{G}\subset G\).
For any semisimple \(\alpha\in G\), set \([\alpha]:=\mu(\alpha)\in G//G\) and write \(\mathbb{O}_{\alpha}=G\cdot\{\alpha\}\simeq G/G(\alpha)\subset G\) for the conjugacy class of \(\alpha\), where \(G(\alpha)\subset G\) is the centralizer of \(\alpha\). Then we have the Jordan decomposition of group elements viewed as loops
\[\mu^{-1}([\alpha])\simeq\left\{su\in G\mid s\in\mathbb{O}_{\alpha},\ u\in\mathcal{U}_{G},\ su=us\right\}\!\big/G\;\simeq\;\big(\alpha\cdot\mathcal{U}_{G(\alpha)}\big)\big/G(\alpha)\]
Now for a scheme \(X\) with a \(G\)-action, by taking fibers of the natural maps
\[\mathcal{L}(X/G)\longrightarrow\mathcal{L}(pt/G)\simeq G/G\longrightarrow G//G\]
we can define variants of loop space. In particular, taking completions over points in \(G//G\) gives rise to generalizations of unipotent loop spaces, while taking completions over closed orbits of \(G/G\) gives rise to generalizations of formal loop spaces.
**Definition 2.8**.: Let \(G\) be a linear algebraic group with reductive neutral component acting on a prestack \(X\). Let \(\mathbb{O}\subset G\) denote a semisimple adjoint orbit, defining a closed substack \(\mathbb{O}/G\subset G/G\simeq\mathcal{L}(BG)\), and denote by \([\alpha]:=\mu(\mathbb{O})\in G//G\) its class in the affine quotient.
We define the following loop spaces:
1. The \(\alpha\)_-unipotent loop space_\(\mathcal{L}^{u}_{\alpha}(X/G)\) is the completion of \(\mathcal{L}(X/G)\) along the inverse image of \([\alpha]\in G//G\), or equivalently along the saturation3 of the orbit \(\mathbb{O}/G\). Footnote 3: I.e. the maximal closed substack of \(G/G\) containing \(\mathbb{O}\) as its unique closed orbit.
2. The \(\alpha\)_-formal loop space_\(\widehat{\mathcal{L}}_{\alpha}(X/G)\) is the completion of \(\mathcal{L}(X/G)\) along the inverse image of the orbit \(\mathbb{O}/G\).
3. The \(\alpha\)_-specialized loop space_\(\mathcal{L}^{\prime}_{\alpha}(X/G)=\mathcal{L}(X/G)\times_{\mathcal{L}(BG)}\mathbb{O}/G\) is the (derived) fiber of \(\mathcal{L}(X/G)\) over \(\mathbb{O}/G\).
Though the notation suggests otherwise, the above constructions only depend on the orbit \(\mathbb{O}\) (equivalently, \([\alpha]\in G//G\)) and not a choice of representative (or lift) \(\alpha\in\mathbb{O}\). If we make such a choice of representative, then we obtain an equivalence \(\mathbb{O}/G\simeq BG(\alpha)\) with the classifying stack of the centralizer of \(\alpha\), and we may define the following (which depends on \(\alpha\)):
* The _\(\alpha\)-twisted loop space4_\(\mathcal{L}_{\alpha}(X)\) is the (derived) fiber of \(\mathcal{L}(X/G)\) along \(\operatorname{Spec}k=\{\alpha\}\to BG(\alpha)\simeq\mathbb{O}/G\subset G/G\). Footnote 4: Also known as the derived fixed points for the self-map of \(X\) defined by the action of \(\alpha\) on \(X\). In particular, the \(\alpha\)-twisted loop space on \(X\) only requires this self-map as an input and not the entire \(G\)-action on \(X\).
The \(S^{1}\)-action on \(\mathcal{L}(X/G)\) restricts to \(S^{1}\)-equivariant inclusions
\[\mathcal{L}^{\prime}_{\alpha}(X/G)\subset\widehat{\mathcal{L}}_{\alpha}(X/G) \subset\mathcal{L}^{u}_{\alpha}(X/G)\subset\mathcal{L}(X/G).\]
The twisted loop space \(\mathcal{L}_{\alpha}(X)\) does not admit an \(S^{1}\)-action, but there is a map \(\mathcal{L}_{\alpha}(X)\to\mathcal{L}^{\prime}_{\alpha}(X/G)\).
#### 2.3.2. Sheaves via equivariant localization for loop spaces
We state here the equivariant localization theorem of [1] describing loops in the quotient stack \(X/G\) with given semisimple part \(\alpha\) in terms of unipotent loops on the quotient stack \(X(\alpha)/G(\alpha)\). Here \(X(\alpha)\) denotes the homotopy fixed-points \(X^{A}\) with respect to the closed subgroup \(A=\langle\alpha\rangle\subset G\) generated by \(\alpha\). We refer to Section A.2.2 below for a more detailed discussion.
Consider the natural map
\[\ell_{\alpha}:\mathcal{L}_{\alpha}(X(\alpha)/G(\alpha))\to\mathcal{L}_{\alpha} (X/G)\]
obtained by applying \(\mathcal{L}_{\alpha}\) to the natural map \(X(\alpha)/G(\alpha)\hookrightarrow X/G(\alpha)\to X/G\).
**Theorem 2.9** (Equivariant localization for derived loop spaces).: _Let \(G\) be a linear algebraic group with reductive neutral component acting on a locally Noetherian derived scheme \(X\), and \(\alpha\in G\) a semisimple element. Then, the unipotent \(\alpha\)-localization map_
\[\ell^{u}_{\alpha}:\mathcal{L}^{u}_{\alpha}(X(\alpha)/G(\alpha))\to\mathcal{L} ^{u}_{\alpha}(X/G)\]
_is an \(S^{1}\)-equivariant equivalence. The same is true for the corresponding localization maps on formal and specialized loops_
\[\hat{\ell}_{\alpha}:\widehat{\mathcal{L}}_{\alpha}(X(\alpha)/G(\alpha))\to \widehat{\mathcal{L}}_{\alpha}(X/G),\qquad\ell^{\prime}_{\alpha}:\mathcal{L} ^{\prime}_{\alpha}(X(\alpha)/G(\alpha))\to\mathcal{L}^{\prime}_{\alpha}(X/G).\]
Proof.: When \(X\) is smooth, the statement is Theorem A in [1]. The general case follows from the following: by the reduction in _loc_. _cit_. we may assume that \(\alpha\) is central, and let \(A=\langle\alpha\rangle\). We first claim that every derived scheme may locally be written as an iterated fiber product of smooth affine \(A\)-schemes. Since \(A\) has neutral component a multiplicative torus, every \(A\)-scheme has a Zariski cover by \(A\)-closed opens, so we may assume that \(X\) is affine. Since \(X\) is locally Noetherian, we may choose a \(A\)-equivariant semi-free resolution of \(\mathcal{O}(X)\), i.e. a finite-dimensional graded \(A\)-subrepresentation \(V_{0}^{\ast}\subset\mathcal{O}(X)\) which generates as a dg algebra, and likewise \(V_{1}^{\ast}\subset\mathcal{O}(V_{0})\) generating relations, \(V_{2}\subset\mathcal{O}(V_{0}\times V_{1})\) generating relations between relations, et cetera. These assemble into a diagram of smooth
affine \(A\)-schemes where the vertical arrows are zero-inclusions and the horizontal arrows are defined by the differentials in the semi-free resolution:
The limit of this diagram is \(X\), and since loop spaces and homotopy fixed points commute with limits we may deduce the theorem from the smooth case.
Finally, combining equivariant localization with Koszul duality, we have the following. As above, we refer to Section A.2.1 for details of the various renormalized categories, and Sections A.2.3 and A.2.4 for the notion of twisted \(S^{1}\)-actions and \(\alpha\)-trivial blocks.
**Theorem 2.10**.: _Let \(\alpha\in G\) be a semisimple element, and \(X/G\) a global quotient stack. We have compatible equivalences_
_functorial with respect to smooth pullback and proper pushforward, as well as graded versions._
_In particular, passing through Theorem 2.7 we have a Koszul duality:_
\[\widehat{\operatorname{QC}^{!}}(\mathcal{L}_{\alpha}^{u}(X/G))_{\alpha}^{\operatorname{Tate}}\simeq\widehat{\operatorname{QC}^{!}}(\widehat{\mathcal{L}}(X/G))_{\alpha}^{\operatorname{Tate}}\underset{\text{Kos}}{\sim}\widetilde{\mathcal{D}}(X(\alpha)/G(\alpha))_{\alpha}. \tag{2.2}\]
#### 2.3.3. Twisted Springer sheaves and graded lifts
We now consider the following situation, an example of the theory we have developed so far. Let \(\mu:\widetilde{X}/G\to X/G\) be a proper map of smooth stacks, and let \(\mathcal{S}=\mathcal{L}\mu_{*}\mathcal{O}_{\mathcal{L}(\widetilde{X}/G)}\in\operatorname{Coh}(\mathcal{L}(X/G))\). In Section 3 this will be the _coherent Springer sheaf_. Letting \(\hat{\ell}_{\alpha}\) denote the localization map, we let
\[\mathcal{S}(\alpha):=\hat{\ell}_{\alpha}^{!}\mathcal{S}\in\widehat{\operatorname{Coh}}(\widehat{\mathcal{L}}_{\alpha}(X(\alpha)/G(\alpha)))\subset\operatorname{QC}^{!}(\widehat{\mathcal{L}}_{\alpha}(X(\alpha)/G(\alpha)))\]
denote its \(!\)-restriction to \(\alpha\)-formal loops, which is equipped with
1. a canonical \(S^{1}\)-equivariant structure coming from the canonical \(S^{1}\)-equivariant structure on the dualizing sheaf \(\mathcal{O}_{\mathcal{L}(\widetilde{X}/G)}\simeq\omega_{\mathcal{L}(\widetilde {X}/G)}\) (see Section A.1),
2. a canonical \(\alpha\)-trivialization, since \(\alpha\) acts trivially on the structure sheaf, and therefore a factorization of the \(S^{1}\)-equivariant structure through a \(B\mathbb{G}_{a}\)-equivariant structure (see Section A.2.4),
3. a canonical graded lift \(\widetilde{\mathcal{S}}(\alpha)\) coming from the graded lift on \(\omega_{\widehat{\mathcal{L}}(\widetilde{X}(\alpha)/G(\alpha))}\), which comes from the identification of \(\widehat{\mathcal{L}}_{\alpha}(\widetilde{X}(\alpha)/G(\alpha))\) as an odd tangent bundle (see Section A.1.3).5
Koszul dually, we consider the induced map \(\mu^{\alpha}:\widetilde{X}(\alpha)/G(\alpha)\to X(\alpha)/G(\alpha)\). The object corresponding to the \(S^{1}\)-equivariant sheaf \(\widetilde{\mathcal{S}}(\alpha)\) is the filtered \(\mathcal{D}\)-module \(\widetilde{\mathbf{S}}(\alpha)\simeq\mu^{\alpha}_{\ast}\mathcal{O}_{ \widetilde{X}(\alpha)/G(\alpha)}\) where \(\mathcal{O}_{\widetilde{X}(\alpha)/G(\alpha)}\) is equipped with its canonical filtration. We denote the corresponding unfiltered \(\mathcal{D}\)-module by \(\mathbf{S}(\alpha)\). These objects have (derived) endomorphism algebras:
\[\mathcal{H}(\alpha)=\operatorname{End}_{\hat{\mathcal{L}}_{\alpha}(X/G)}( \mathcal{S}(\alpha))^{S^{1}},\qquad\mathbf{H}(\alpha)=\operatorname{End}_{X( \alpha)/G(\alpha)}(\mathbf{S}(\alpha))\]
and likewise \(\widetilde{\mathcal{H}}(\alpha),\widetilde{\mathbf{H}}(\alpha)\) for the corresponding graded lifts. We have the following immediate corollary of the Koszul duality equivalences.
**Corollary 2.11**: _View the sheaves \(\mathcal{S}(\alpha),\widetilde{\mathcal{S}}(\alpha)\) in the \(S^{1}\)-equivariant categories. We have an equivalence of graded algebras \(\widetilde{\mathcal{H}}(\alpha)^{\mathrm{sh}}\simeq\widetilde{\mathbf{H}}(\alpha)\), where \((-)^{\mathrm{sh}}\) denotes the shearing of Section A.1.3, and of \(\mathbb{G}_{m}\)-equivariant categories:_
\[\langle\widetilde{\mathcal{S}}(\alpha)\rangle^{\mathbb{G}_{m},\mathrm{sh}}\simeq\operatorname{Mod}^{\mathbb{G}_{m}}(\widetilde{\mathcal{H}}(\alpha))^{\mathrm{sh}}\simeq\operatorname{Mod}^{\mathbb{G}_{m}}(\widetilde{\mathbf{H}}(\alpha))\simeq\langle\widetilde{\mathbf{S}}(\alpha)\rangle^{\mathbb{G}_{m}}\]
_and therefore a Koszul duality:_
\[\langle\mathcal{S}(\alpha)\rangle\simeq\operatorname{Mod}(\mathcal{H}(\alpha ))\underset{Kos}{\sim}\operatorname{Mod}(\mathbf{H}(\alpha))\simeq\langle \mathbf{S}(\alpha)\rangle.\]
## 3. Nonarchimedean Local Langlands and Circle Actions
### Unipotent representations
Fix a non-archimedean local field \(F\) with residue field \(\mathbb{F}_{q}\), and a connected split reductive group \(G\) over \(F\) (though much of the discussion carries over to unramified groups). Lusztig [15] and Solleveld [14], [16] established a Langlands correspondence for the unipotent representations of such groups, as well as their pure inner forms. (Recall that a representation \(\pi\) of an inner form \(G^{\ast}\) of \(G\) is called unipotent if there exists a parahoric subgroup \(P\) of \(G^{\ast}\), and a unipotent representation \(\tau\) of the \(\mathbb{F}_{q}\)-points of the reductive quotient of \(P\), such that the restriction of \(\pi\) to \(P\) contains the representation \(\tau\).) The unipotent representations coming from a given \(G^{\ast}\), \(P\), and \(\tau\) are in bijection with the irreducible representations of the Hecke algebra \(\mathcal{H}_{G^{\ast},P,\tau}\), which is a so-called "affine Hecke algebra with unequal parameters".
Lusztig's construction yields a bijection of pairs \((G^{\ast},\pi)\), where \(G^{\ast}\) is a pure inner form of \(G\) and \(\pi\) is a unipotent representation, with the set of _unipotent extended Langlands parameters_ for \(G\); that is, the set of \(\check{G}\)-conjugacy classes of triples \((\sigma,n,\chi)\), where \(\sigma\) is a semisimple element in \(\check{G}\), \(n\in\mathcal{N}\) such that \(\sigma\cdot n=qn\), and \(\chi\) is an irreducible representation of the component group of the \(\check{G}\)-centralizer \(\check{G}^{\sigma,n}\) of \((\sigma,n)\).
If we fix a semisimple \(\sigma\) in \(\check{G}\), we can define the Vogan variety \(\mathcal{N}^{(\sigma,q)}\) to be the space of \(n\in\mathcal{N}\) such that \(\sigma\cdot n=qn\); then a triple \((\sigma,n,\chi)\) determines a local system on the orbit of \(n\) in \(\mathcal{N}^{(\sigma,q)}/\check{G}^{\sigma}\), and thus a perverse sheaf on \(\mathcal{N}^{(\sigma,q)}/\check{G}^{\sigma}\). Lusztig's argument relates such perverse sheaves to representations of Hecke algebras by identifying the completion of each Hecke algebra \(\mathcal{H}_{G^{\ast},P,\tau}\) at maximal ideals of its center with the derived endomorphisms of certain perverse sheaves on Vogan varieties \(\mathcal{N}^{(\sigma,q)}/\check{G}^{\sigma}\).
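For orientation, here is the smallest interesting case (a standard example, not spelled out in the text): \(\check{G}=GL_{2}\) and \(\sigma=\operatorname{diag}(q,1)\). Since \(\operatorname{Ad}(\sigma)\) scales the nilpotent matrix \(E_{12}\) by \(q\) and \(E_{21}\) by \(q^{-1}\), the Vogan variety is the line
\[\mathcal{N}^{(\sigma,q)}\;=\;\{n\in\mathcal{N}\mid\sigma\cdot n=qn\}\;=\;k\cdot E_{12}\;\simeq\;\mathbb{A}^{1},\]
on which the centralizer \(\check{G}^{\sigma}\) (the diagonal torus) acts with two orbits, \(\{0\}\) and its complement. These two orbits (with trivial local systems) account for the two composition factors of the corresponding reducible unramified principal series of \(GL_{2}(F)\): the closed orbit for the one-dimensional subquotient and the open orbit for the twist of the Steinberg representation.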
We make this construction explicit in the case of the principal block of \(G\); that is, the full subcategory of representations of \(G\) generated by their Iwahori fixed vectors. The irreducible such representations are precisely the irreducible representations of the affine Hecke algebra \(\mathcal{H}_{q}:=\mathcal{H}_{G,B,\operatorname{triv}}\). The maximal ideals of
the center of this Hecke algebra are in bijection with semisimple conjugacy classes in \(\check{G}\).
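In the minimal example \(G=PGL_{2}\), so \(\check{G}=SL_{2}\), this bijection is completely explicit: by Bernstein's description of the center,
\[Z(\mathcal{H}_{q})\;\simeq\;k[\check{H}]^{W}\;=\;k[z,z^{-1}]^{\,z\leftrightarrow z^{-1}}\;=\;k[z+z^{-1}]\;\simeq\;\mathcal{O}(\check{G}/\!\!/\check{G}),\]
so a maximal ideal of the center is just a point of \(\mathbb{A}^{1}=SL_{2}/\!\!/SL_{2}\), i.e. a semisimple conjugacy class in \(SL_{2}\) recorded by its trace \(z+z^{-1}\).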
Assuming \(q\) is not a root of unity, Lusztig's construction realizes the completion \(\mathcal{H}_{q}(\sigma)\) of the affine Hecke algebra at the maximal ideal corresponding to \(\sigma\in\check{G}\) as Koszul equivalent to the derived endomorphisms of a certain constructible sheaf on \(\mathcal{N}^{(\sigma,q)}/\check{G}^{\sigma}\).6 Concretely, we can define the \(\sigma\)-Springer sheaf \(\mathbf{S}_{q}(\sigma)\) to be the pushforward \(\mu^{(\sigma,q)}_{*}\mathbf{C}_{\widehat{\mathcal{N}}^{(\sigma,q)}}\) along the "\(\sigma\)-fixed" Springer resolution:
Footnote 6: In particular, the former is in cohomological degree \(0\), and the latter has generators in cohomological degree \(2\). See Section 3.4 and Definition A.10 for a discussion.
\[\mu^{(\sigma,q)}:\widehat{\mathcal{N}}^{(\sigma,q)}/\check{G}^{\sigma}\to \mathcal{N}^{(\sigma,q)}/\check{G}^{\sigma}.\]
The BBD decomposition theorem implies a direct sum decomposition
\[\mathbf{S}_{q}(\sigma):=\bigoplus_{(\mathcal{O},\mathcal{L})}E_{\mathcal{O}, \mathcal{L}}\otimes\mathbf{IC}(\mathcal{O},\mathcal{L})[d_{\mathcal{O}, \mathcal{L}}].\]
The intersection cohomology sheaves (extended from such local systems) which appear in the \(\sigma\)-Springer sheaf \(\mathbf{S}_{q}(\sigma)\) correspond to the principal series representations that appear in the given L-packet. Furthermore, we have that \(\mathrm{End}(\mathbf{S}_{q}(\sigma))\) is an algebra whose degree \(0\) part is the specialization \(\mathcal{H}_{q,[\sigma]}\) of the affine Hecke algebra at the central character given by \([\sigma]\in\check{G}/\!\!/\check{G}\), and thus \(\mathcal{H}_{q,[\sigma]}\) acts on simple summands of \(\mathbf{S}_{q}(\sigma)\). The Deligne-Langlands correspondence is given by this decomposition, i.e. \(E_{\mathcal{O},\mathcal{L}}\) are the Iwahori-invariants of the \(G(F)\)-representation corresponding to the Langlands parameter given by the pair \((\mathcal{O},\mathcal{L})\).
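For orientation, the shape of such a decomposition in the most classical, untwisted setting (no \((\sigma,q)\)-twist; \(\check{G}=SL_{2}\)) is the following standard computation. The Springer resolution is \(\mu:T^{*}\mathbb{P}^{1}\to\mathcal{N}\), an isomorphism over the regular orbit with fiber \(\mathbb{P}^{1}\) over \(0\), and the decomposition theorem gives
\[\mu_{*}\,\mathbf{C}_{T^{*}\mathbb{P}^{1}}[2]\;\simeq\;\mathbf{IC}(\mathcal{N})\ \oplus\ \mathbf{IC}(\{0\}),\]
one summand for each orbit (both with trivial local system); the two one-dimensional multiplicity spaces \(E_{\mathcal{O},\mathcal{L}}\) realize the two irreducible representations of \(W=S_{2}\) under the classical Springer correspondence.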
### The stack of unipotent Langlands parameters
An "algebraic" or "families" perspective on the unramified principal series was developed in [1]. Namely we consider the stack of _unipotent Langlands parameters_ which may be defined in the following equivalent ways. We fix \(q\) to be a prime power, in particular a nonzero integer, morally the order of the residue field of \(F\).
(1) Explicitly, it is the stack
\[\mathbb{L}^{u}_{F,G}=\mathbb{L}^{u}_{q,G}:=\{n\in\mathcal{N}_{\check{G}},g\in \check{G}\mid\mathrm{Ad}(g)\cdot n=qn\}/\check{G}\]
Note that we have dropped the condition on Deligne-Langlands parameters that the image of Frobenius \(g\) be semisimple (a minimal example, for \(G=GL_{1}\), is written out after description (3) below).
(2) It is the \(q\)-twisted loop space7
Footnote 7: The first identification uses the fact that \(q\) is not a root of unity.
\[\mathcal{L}_{q}(\widehat{\mathcal{N}}_{\check{G}}/\check{G})=\mathcal{L}_{q} \big{(}\check{\mathfrak{g}}/\check{G}\big{)}=\mathcal{L}\big{(}\check{ \mathfrak{g}}/(\check{G}\times\mathbb{G}_{m})\big{)}\times_{\mathcal{L}(B \mathbb{G}_{m})}\{q\}.\]
Via the discussion in Definition 2.8, there is no \(S^{1}\)-rotation on the \(q\)-twisted loop space. However, there is an \(S^{1}\)-action coming from the action on \(\widehat{\mathcal{N}}_{\check{G}}/\check{G}=\mathcal{L}^{u}(B\check{G})\), which we do not consider. Equivalently, we may write \(\mathcal{L}_{q}(\mathcal{N}_{\check{G}}/\check{G})=\mathcal{L}_{q}(\check{U} /\check{G})\), where \(q\) acts by exponentiation.
(3) It is a substack of the stack of \(\check{G}\)-local systems \(\mathcal{L}oc_{\check{G}}(T_{q})\) on the \(q\)-twisted topological torus \(T_{q}\) where we require the monodromy around the non-twisted loop to be unipotent; here \(T_{q}\) is the torus obtained by gluing the two boundaries of the cylinder \(S^{1}\times[0,1]\) along the degree \(q\) map. The space \(T_{q}\) has an \(S^{1}\)-action in the meridian direction (by "speeding up the loop") corresponding to the tame
monodromy (which we do not consider), but not in the longitudinal (Frobenius) direction. Equivalently, we can define \(T_{q}^{u}\) as the gluing of \(B\mathbb{G}_{a}\times[0,1]\) along the scaling by \(q\) map, so that \(\mathbb{L}_{q,G}^{u}=\mathcal{L}oc_{\check{G}}(T_{q}^{u})\) (here, \(q\) is not required to be an integer).
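As a trivial but instructive instance of description (1) (our own sanity check), take \(G=GL_{1}\), so \(\check{G}=\mathbb{G}_{m}\) and \(\mathcal{N}_{\check{G}}=\{0\}\): then
\[\mathbb{L}^{u}_{q,GL_{1}}\;=\;\{n=0,\ g\in\mathbb{G}_{m}\}/\mathbb{G}_{m}\;=\;\mathbb{G}_{m}\times B\mathbb{G}_{m}.\]
Its \(k\)-points are the possible images of Frobenius, matching the unramified characters of \(F^{\times}\) via local class field theory, while the residual \(B\mathbb{G}_{m}\) records the automorphisms of each parameter; already in this case coherent sheaves on the stack see strictly more than the set of parameters.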
There is a natural map \(\mu:\mathbb{L}_{q,B}^{u}\to\mathbb{L}_{q,G}^{u}\) and we define the _coherent Springer sheaf_ by
\[\mathcal{S}_{q}:=\mu_{*}\mathcal{O}_{\mathbb{L}_{q,B}^{u}}\in\mathrm{Coh}( \mathbb{L}_{q,G}^{u}).\]
We have the following main theorem of [1].
**Theorem 3.1**.: _The affine Hecke algebra is naturally isomorphic to the \(\mathrm{Ext}\) algebra of the coherent Springer sheaf \(\mathcal{S}_{q}\), which vanishes in non-zero cohomological degrees. Therefore we have a full embedding_
_from the unramified principal series of \(G(F)\) to ind-coherent sheaves on the stack of unipotent Langlands parameters._
It is thus a natural question to relate our families, or coherent, or algebraic realization of unipotent principal series representations to the specialized, or constructible, or topological realization at parameters given by \(\sigma\)-Springer theory described above.
In the case \(G=GL_{n}\), the Bushnell-Kutzko theory of types, combined with the local Langlands theorem of Harris-Taylor and Henniart, allows us to reduce the entire smooth representation theory of \(G\) to the unramified principal series. As a result we deduce that the entire category of smooth representations of \(GL_{n}\) embeds into ind-coherent sheaves on the stack of all Langlands parameters \(\mathbb{L}_{F,GL_{n}}\), as constructed in [1, 2]:
**Theorem 3.2** ([1]).: _There is a full embedding_
\[D(GL_{n}(F))\hookrightarrow\mathrm{QC}^{!}(\mathbb{L}_{F,GL_{n}})\]
_from the derived category of smooth representations of \(GL_{n}(F)\) into ind-coherent sheaves on the stack of Langlands parameters. The embedding is characterized by taking irreducible cuspidals to skyscrapers at the corresponding Langlands parameters and compatibility with parabolic induction._
### Loop spaces and cyclic deformation
An intriguing feature of the stack \(\mathbb{L}_{q}^{u}\) of unipotent Langlands parameters is its proximity to the derived loop space of the equivariant nilpotent cone,
\[\mathcal{L}(\check{\mathcal{N}}/\check{G}\times\mathbb{G}_{m})\simeq\{n\in \check{\mathcal{N}},g\in\check{G},q\in\mathbb{G}_{m}\ :\ g\cdot n=qn\}/\check{G}\times\mathbb{G}_{m},\]
and Theorem 3.1 has a variant defined over the entire loop space and recovering the affine Hecke algebra with \(q\in\mathbb{G}_{m}\) as a parameter. More significantly, while the stack \(\mathbb{L}_{q}^{u}\) - the fiber over \(\{q\}\to\mathbb{G}_{m}/\mathbb{G}_{m}\) - does not inherit the circle action, the stack of _graded unipotent Langlands parameters_
\[\mathbb{L}_{\underline{q}}^{u}:=\mathbb{L}_{q}^{u}/\mathbb{G}_{m}\]
- the fiber over \(\{q\}/\mathbb{G}_{m}\in\mathbb{G}_{m}/\mathbb{G}_{m}\)
- does. Moreover, Theorem 3.1 carries over unchanged over \(\mathbb{L}_{\underline{q}}^{u}\), and in fact has an \(S^{1}\)-equivariant enhancement in which the
coherent Springer block deforms _trivially_ over the equivariant parameter \(k[u]=H^{*}(BS^{1})\).
**Theorem 3.3** ([Bchn22]).: _We have a full embedding_
\[\mathcal{H}_{q}[u]\text{-}\mathrm{mod}\simeq\langle\mathcal{S}_{q}\rangle\subset\mathrm{QC}^{!}(\mathbb{L}^{u}_{\underline{q}})^{\omega S^{1}}\]
_from the \(k[u]\)-base change of the unramified principal series of \(G(F)\) to (renormalized) cyclic sheaves on the stack of graded unipotent Langlands parameters._
### Relation to constructible sheaves and graded Hecke algebras
The unipotent loop space construction of Section 2.1 gives us a link between this uniform coherent description of the representation theory of \(G\) and the perverse description for fixed infinitesimal characters. Indeed, if one fixes a semisimple element \(\sigma\in\check{G}\), one can consider the formal completion \(\mathbb{L}^{u}_{\underline{q},[\sigma]}\) of \(\mathbb{L}^{u}_{\underline{q}}\) along the locus of pairs \((\sigma^{\prime},n)\) for which the semisimple part of \(\sigma^{\prime}\) lies in the conjugacy class \([\sigma]\) of \(\sigma\). One can then identify this formal completion with the \(q\)-specialized unipotent loop space \(\mathcal{L}^{u}_{\underline{q}}(\check{\mathcal{N}}^{\alpha}/\check{G}^{\alpha})\) of the Vogan variety attached to the pair \(\alpha:=(\sigma,q)\).
Theorem 2.6 thus provides a link between the (\(S^{1}\)-equivariant) coherent sheaves on the stack of unipotent Langlands parameters and \(\check{G}^{\alpha}\)-equivariant8 \(\mathcal{D}\)-modules on \(\check{\mathcal{N}}^{\alpha}/\check{G}^{\alpha}\); this equivalence carries the \(!\)-restriction to \(\mathbb{L}^{u}_{\underline{q},[\sigma]}\) of the coherent Springer sheaf (henceforth denoted \(\mathcal{S}(\alpha)\)) to the \(\mathcal{D}\)-module that corresponds to the \(\alpha\)-Springer sheaf \(\mathbf{S}(\alpha)\). We have the following direct application of Corollary 2.11 to this setting.
Footnote 8: Since we specialized at \(q\in\mathbb{G}_{m}\) rather than completed, the resulting \(\mathcal{D}\)-modules are \(\mathbb{G}_{m}\)-constructible rather than \(\mathbb{G}_{m}\)-equivariant. However, \(\mathbb{G}_{m}\)-constructibility is automatic given \(\check{G}^{\alpha}\)-equivariance; see Remark 3.3.13 in [2].
**Theorem 3.4**.: _Let \(\mathbf{H}(\alpha)=\mathrm{End}_{\check{\mathcal{N}}^{\alpha}/\check{G}^{ \alpha}}(\mathbf{S}(\alpha))\) and \(\mathcal{H}(\alpha)\) denote the completion of the affine Hecke algebra at \(\alpha\). We have a Koszul duality:_
\[\widehat{\mathrm{QC}^{!}}(\mathbb{L}^{u}_{\underline{q},[\sigma]})^{\omega S^{1}}\supset\langle\mathcal{S}(\alpha)\rangle\simeq\mathrm{Mod}(\mathcal{H}(\alpha))\underset{\text{Kos}}{\sim}\mathrm{Mod}(\mathbf{H}(\alpha))\simeq\langle\mathbf{S}(\alpha)\rangle\subset\mathcal{D}(\check{\mathcal{N}}^{\alpha}/\check{G}^{\alpha}).\]
_Working 2-periodically, we have an identification of the full subcategories_
\[\widehat{\mathrm{QC}^{!}}(\mathbb{L}^{u}_{\underline{q},[\sigma]})^{\mathrm{Tate}}\supset\mathrm{Mod}_{k[u,u^{-1}]}(\mathcal{H}(\alpha)^{per})\subset\mathcal{D}(\check{\mathcal{N}}^{\alpha}/\check{G}^{\alpha})^{per}.\]
The algebras \(\mathbf{H}(\alpha)\) are shearings of completions of Lusztig's graded Hecke algebras [11, 12]. We note that the equivariantization with respect to graded lifts, shearing, and de-equivariantization that appears in Koszul duality (as discussed in A.1.3) is compatible with the corresponding gradings in statements in [2, Sol19], and that one can avoid such complications by working 2-periodically.
### Periodic cyclic sheaves and pure inner forms
#### 3.5.1. The case \(G=GL_{n}\)
In the case of \(GL_{n}\), the unramified principal series exhausts all the unipotent representations (and \(GL_{n}\) has only one pure inner form). Also in the case of \(GL_{n}\), Springer theory simplifies considerably: the Springer sheaf generates all equivariant \(\mathcal{D}\)-modules on the nilpotent cone, i.e. the only local systems that appear in the decomposition of the Springer sheaf are trivial, and all such local systems appear. This gives a derived equivalence between the category of \(\mathcal{D}\)-modules \(\mathcal{D}(\mathcal{N}_{GL_{n}}/GL_{n})\) and modules for the graded Hecke algebra. Moreover
the same is true with arbitrary central character, i.e., for the \((\sigma,q)\)-variants of the nilpotent cone. In other words, the image of the coherent Springer sheaf generates the categories of periodic cyclic sheaves when completed at arbitrary parameters.
In [1] we show the following:
**Theorem 3.5**.: _For \(G=GL_{n}\), the embedding_
\[D(GL_{n})\otimes_{k}k[u]\subset\mathrm{QC}^{!}(\mathbb{L}_{F,GL_{n}})^{\omega S ^{1}}\]
_becomes an equivalence after inverting \(u\),_
\[D(GL_{n})\otimes_{k}k[u,u^{-1}]\simeq\mathrm{QC}^{!}(\mathbb{L}_{F,GL_{n}})^{ \mathrm{Tate}}.\]
As with Theorem 3.2, standard techniques reduce the assertion to the corresponding statement for the Iwahori block: namely, under the deformation from all coherent sheaves to periodic cyclic sheaves only the coherent Springer block survives. There we establish that the Springer resolution and Hecke algebras enjoy strong enough finiteness properties that we can deduce the generation property of the Springer sheaf
\[\langle\mathcal{S}_{q}\rangle\simeq\mathrm{QC}^{!}(\mathbb{L}_{\underline{q }}^{u})^{\mathrm{Tate}}\]
from the corresponding generation assertions one completion at a time (a sort of "fracture theorem").
#### 3.5.2. The case of general \(G\)
For a general reductive group, there are more perverse sheaves on the nilpotent cone and its \((\sigma,q)\)-fixed loci, coming from _cuspidal local systems_ on Levi subgroups of \(\check{G}\), as classified by Lusztig (see e.g. [11, 12]). On the other hand there are also more unipotent representations of \(G\) and of its pure inner forms than unramified principal series. As described in Section 3.1, the two are matched by the unipotent local Langlands correspondence proved by Lusztig [11] for \(G\) adjoint, simple and unramified and extended by Solleveld [13, 14] to all connected groups.
One expects (following variants of conjectures of [10, 1, 12], see forthcoming work [1] of Hemo and Zhu) that the entire category of coherent sheaves on \(\mathbb{L}_{q}^{u}\) parametrizes unipotent representations of the family of stabilizer groups of \(G\)-isocrystals.
Inside of this large category one might try to identify the _pure_ subcategory, parametrizing representations of the family of pure inner forms of \(G\) itself. For \(G\) semisimple these coincide with the groups (extended pure inner forms) associated to _basic_ isocrystals, which are those isocrystals whose stabilizers are actually inner forms of \(G\). For \(G\) general (for example \(G=GL_{n}\)) pure inner forms correspond only to a subset of basic isocrystals.
We propose in [1] that the cyclic deformation interpolates between all isocrystals and pure inner forms: i.e., the category of periodic cyclic sheaves on \(\mathbb{L}_{q}^{u}\) is identified with the category of all unipotent representations of pure inner forms of \(G\).
**Conjecture 3.6**.: There is a full embedding
\[\bigoplus_{\eta}D_{f.g.}(G_{\eta})^{u}\otimes_{k}k[u]\subset\mathrm{Coh}(\mathbb{L}_{\underline{q}}^{u})^{S^{1}}\]
from the trivial \(u\)-deformation of the sum of unipotent representation categories of pure inner forms \(G_{\eta}\) of \(G\) into cyclic sheaves on graded unipotent Langlands parameters. Moreover this embedding becomes an equivalence after inverting \(u\),
\[\bigoplus_{\eta}D(G_{\eta})^{u}\otimes_{k}k[u,u^{-1}]\simeq\mathrm{QC}^{!}(\mathbb{L}_{\underline{q}}^{u})^{\mathrm{Tate}}.\]
In other words, under the deformation from all coherent sheaves to periodic cyclic sheaves, we expect that only the representations of pure inner forms survive, and moreover that the deformation is trivial on this subcategory.
**Remark 3.7**.: The embedding of \(S^{1}\)-equivariant sheaves in the above conjecture is not expected to be an equivalence, which can be seen even when \(G=T\) is a torus in Example A.13. That is, the \(S^{1}\)-equivariant category has torsion which is killed by inverting \(u\in C^{\bullet}(BS^{1};k)\).
**Problem 3.8**.: Give an automorphic characterization of the full subcategory of \(S^{1}\)-invariant sheaves
\[\mathrm{QC}^{!}(\mathbb{L}_{\underline{q}}^{u})^{\omega S^{1}}\otimes_{k[u]}k \subset\mathrm{QC}^{!}(\mathbb{L}_{\underline{q}}^{u}).\]
### Whence circle actions?
It is tempting to propose a cyclic mechanism to relate coherent and constructible forms of the local Langlands correspondence in general, extending the results discussed in the unipotent nonarchimedean setting and paralleling those to be discussed in the archimedean setting. To do so, at the very least, would require a circle action on stacks of Langlands parameters (thus its category of coherent sheaves).
In the non-archimedean case the stack of unipotent arithmetic Langlands parameters \(\mathbb{L}_{q}^{u}\) (representations of the Weil-Deligne group of \(F\)) is realized by taking the derived fixed points of a non-trivial automorphism (corresponding to Frobenius on the automorphic side) on the stack of unipotent geometric Langlands parameters \(\mathcal{N}_{\check{G}}/\check{G}\) (Weil-Deligne representations of the inertia subgroup), which does not carry a circle action. We are able to define a circle action by instead taking an \(S^{1}\)-invariant substack of the derived loop space, i.e., the derived fixed points of the identity map, of \(\mathcal{N}_{\check{G}}/(\check{G}\times\mathbb{G}_{m})\), which does carry such an action. On the level of categories, the trace of the identity operator (Hochschild homology) carries a circle action, while the trace of Frobenius does not.
Let us re-examine the origin of the circle action on the stack of unipotent Langlands parameters, which we first identified from the explicit form of the equations. The moduli space \(\mathbb{L}_{q}^{u}\) is "polynomial in \(q\)", in the sense that it is the fiber over \(q\in\mathbb{G}_{m}\) of a natural algebraic family over \(\mathbb{G}_{m}\). When we work \(\mathbb{G}_{m}\)-equivariantly, the total space of this family becomes a loop space - i.e., we recover not \(\mathbb{L}_{q}^{u}\) itself but its quotient \(\mathbb{L}_{\underline{q}}^{u}\) by \(\mathbb{G}_{m}\), by imposing the \(S^{1}\)-invariant equation "Frobenius acts by \(q\)" on a loop space. From the perspective of realizing representations of the affine Hecke algebra, this graded version serves equally well, as the coherent Springer sheaf and all of its endomorphisms are \(\mathbb{G}_{m}\)-invariant.
#### 3.6.1. Graded unipotent Langlands parameters
Another way to express the origin of the graded lift in the setting of unipotent Langlands parameters is through the theory of categorical traces, as in [1, 2, 18, 19]. Namely the entire category of ind-coherent sheaves on \(\mathbb{L}_{q}^{u}\) arises by [10] as the categorical trace of Frobenius acting on the spectral form of the affine Hecke category (the monoidal category of coherent sheaves on the Steinberg stack). Taking \(K=\overline{\mathbb{F}}_{q}((t))\), Bezrukavnikov's tamely ramified local geometric Langlands correspondence [1, 1] gives a monoidal equivalence
\[\mathcal{D}(I\backslash G_{F}/I)\simeq\operatorname{QC}^{!}(\widehat{\mathcal{ N}}_{\tilde{G}}\times_{\tilde{\mathfrak{g}}}\widehat{\mathcal{N}}_{\tilde{G}})\]
between the automorphic and spectral affine Hecke categories, intertwining the pullback by geometric Frobenius with the pushforward by scaling by \(q\), whence the trace of Frobenius on the two monoidal categories is identified as well. On the automorphic side Hemo and Zhu [19] relate this trace to unipotent representations of pure inner forms of \(G(K)\) (among the more general groups associated to isocrystals).
However Bezrukavnikov's equivalence has an expected graded lift (announced in [19]). On the spectral side this simply involves incorporating \(\mathbb{G}_{m}\)-equivariance on the Steinberg stack, whence the trace of Frobenius becomes \(\operatorname{QC}^{!}(\mathbb{L}_{\underline{q}}^{u})\). On the automorphic side one obtains the "mixed" affine Hecke category [1], where we replace perverse sheaves by "Tate" Weil sheaves on the affine flag variety, or more conceptually the graded category of \(\ell\)-adic sheaves as defined in great generality by Ho and Li [19].
Thus one can consider the trace of the identity on the graded affine Hecke category, calculated automorphically and spectrally, as providing a mechanism to extend the results of [19] to a proof of Conjecture 3.6.
#### 3.6.2. Beyond the unipotent setting
Our experience in the unipotent setting suggests we should look for \(S^{1}\)-actions on categories of \(\mathbb{G}_{m}\)-equivariant sheaves on stacks of Langlands parameters. Indeed, in the case of \(GL_{n}\), we obtained such an action by a standard reduction to the Iwahori block. In general, stacks of Weil-Deligne representations carry natural \(\mathbb{G}_{m}\)-actions rescaling the nilpotent endomorphism, and one can construct \(S^{1}\)-actions on the quotient by reduction to the unipotent cases. We summarize our hopes in the following broad list of problems:
**Problem 3.9**.: Relate the coherent and constructible non-archimedean local Langlands correspondences as follows:
* Identify a natural source for \(S^{1}\)-actions on the \(\mathbb{G}_{m}\)-quotients of stacks of Weil-Deligne representations.
* Construct an \(S^{1}\)-action on the graded form (in the sense of [19]) of \(\ell\)-adic sheaves on stacks of \(G\)-isocrystals; see Remark 4.6.8 in [20].
* Compare the full subcategory of \(S^{1}\)-equivariantizable sheaves on the automorphic side to the subcategory corresponding to representations of pure inner forms.
* Show that the cyclic deformation of the coherent local Langlands correspondence on this subcategory specializes, for fixed infinitesimal character, to (a categorical form of) Vogan's constructible local Langlands correspondence.
**Remark 3.10**.: It may be tempting to forget \(\mathbb{G}_{m}\)-equivariance and introduce an \(S^{1}\)-action on the trace of an automorphism in general by imposing \(\mathbb{Z}\)-equivariance with respect to only that automorphism. Namely, let \(X\) be a stack with an automorphism \(\phi\); we may form the stack \(X/\phi^{\mathbb{Z}}\), which is a stack over \(B\mathbb{Z}=S^{1}\). Then, its loop space \(\mathcal{L}(X/\phi^{\mathbb{Z}})\) lives over \(\mathcal{L}(B\mathbb{Z})\simeq\mathbb{Z}\times S^{1}\). Taking the fiber over \(\{1\}\times S^{1}\) (put another way, the substack of \(\operatorname{Map}(S^{1},X/\phi^{\mathbb{Z}})\) whose composition with \(X/\phi^{\mathbb{Z}}\to B\mathbb{Z}=S^{1}\) is a degree 1 map), we evidently obtain a space with an \(S^{1}\)-action. Unfortunately, the resulting \(S^{1}\)-action is free, so the resulting categories are entirely \(u\)-torsion and are thus killed by the \(S^{1}\)-localization. Roughly speaking, the difference between \(\mathbb{Z}\)-equivariance and \(\mathbb{G}_{m}\)-equivariance can be seen via the failure of the functor \(\operatorname{Rep}(\mathbb{G}_{m})\to\operatorname{Rep}(\mathbb{Z})\) (i.e. pullback along the map \(\mathbb{Z}\to\mathbb{G}_{m}\) which sends 1 to our chosen automorphism) to be fully faithful; the latter has nontrivial \(\operatorname{Ext}^{1}\) while the former does not.
## 4. From Archimedean Local Langlands to Twistor Geometric Langlands
In this section we discuss how to realize the basic paradigm of this paper, relating coherent and constructible forms of the local Langlands correspondence via equivariant localization for circle actions on derived loop spaces, in the archimedean setting, following [1, 2, 3]. To do so we must first describe these two forms of the correspondence. The constructible form we take is a variant on Soergel's conjecture, while the coherent form is given by the tamely ramified geometric Langlands correspondence on the twistor line \(\widetilde{\mathbb{P}}^{1}_{\mathbb{R}}\), which can be viewed as an archimedean counterpart to Fargues' conjecture.
We begin with the more familiar constructible archimedean local Langlands correspondence (for fixed infinitesimal character), which is a theorem of Adams-Barbasch-Vogan (ABV) on the level of Grothendieck groups and a conjecture of Soergel on the level of derived categories. In Section 4.1 we describe the automorphic side, realizing the relevant categories of representations of pure inner forms as categories of equivariant constructible sheaves on flag varieties using Kashiwara-Schmid's variant of Beilinson-Bernstein localization. In Section 4.2 we present (following [1]) a simple stacky description of the spectral side, the ABV geometric parameter spaces [1] (see Mason-Brown's article [1, 2] in these proceedings). The comparison of the two, Soergel's conjecture, is described in Section 4.3.
We then proceed to present a "families" version of archimedean local Langlands, in which we allow the infinitesimal character to vary. In Section 4.4 we describe the automorphic (representation theory) side, and in Section 4.5 the spectral side. Namely, we introduce a stack \(\mathbb{L}^{\eta}\) of Langlands parameters which is the archimedean counterpart of the stacks encountered in the nonarchimedean setting. This stack has an elementary and explicit description reminiscent of the stack of unipotent Langlands parameters, but can also be realized as a stack of local systems on the twistor line \(\widetilde{\mathbb{P}}^{1}_{\mathbb{R}}\) with tame ramification at infinity, which provides it with a natural circle action as a twisted loop space. We explain how this stack smoothly interpolates between the ABV parameter spaces for varying infinitesimal character, by a
form of Jordan decomposition of loops. We then formulate a "families version" of Soergel's conjecture, identifying smooth representations of pure inner forms (with arbitrary infinitesimal character) with cyclic sheaves on \(\mathbb{L}^{\eta}\) (i.e., the Tate construction on \(S^{1}\)-equivariant coherent sheaves). This conjecture is closely parallel to the nonarchimedean picture described in the previous chapter.
Finally in Section 4.6 we explain the automorphic counterpart to the full category of coherent sheaves on \(\mathbb{L}^{\eta}\). We formulate the tamely ramified geometric Langlands conjecture for \(\widetilde{\mathbb{P}}^{1}_{\mathbb{R}}\) (following [1, 16]) and explain how it recovers Soergel's conjecture as its periodic cyclic deformation. The underlying geometric mechanism on the automorphic side is the realization of the equivariant flag varieties appearing as the automorphic side of Soergel's conjecture (via the Kashiwara-Schmid description of representations) as the semistable locus (and \(S^{1}\)-fixed points) of the stack of parabolic bundles on \(\widetilde{\mathbb{P}}^{1}_{\mathbb{R}}\).
### Archimedean Local Langlands: Automorphic side
Let \((G,\theta)\) be a complex reductive group equipped with a quasisplit real form. The real form \(\theta\) gives rise to a collection \(\Theta\) of pure inner forms, which arise geometrically in the following way [1, 16]: consider the Galois-fixed points \(BG^{\Gamma}\) of \(BG\) where \(\Gamma\) acts by the conjugation \(\theta\): we have a decomposition
\[BG^{\Gamma}\simeq\coprod_{\tau\in\Theta}BG_{\tau}\]
where \(\Theta\) is the set of equivalence classes of pure inner forms of \(\theta\) and \(G_{\tau}\) is the corresponding real form (which may appear with multiplicities).
The local Langlands correspondence parametrizes representations for the groups \(G_{\tau}\) as \(\tau\) varies over \(\Theta\). Thus an ultimate goal of the real local Langlands program is to describe the entire dg category
\[\mathcal{HC}_{\Theta}=\bigoplus_{\tau\in\Theta}\mathcal{HC}_{\tau}\]
of Harish-Chandra modules for real groups in the pure inner class \(\Theta\) in Langlands dual terms. For each \([\lambda]\in\mathfrak{h}^{*}/W\), we write \(\mathcal{HC}_{\tau,[\lambda]}\) for the dg category of Harish-Chandra modules for the real form \(G_{\tau}\) with pro-completed generalized infinitesimal character \([\lambda]\), and
\[\mathcal{HC}_{\Theta,[\lambda]}=\bigoplus_{\tau\in\Theta}\mathcal{HC}_{\tau,[ \lambda]}.\]
We now assume that \(\lambda\) is regular (see Remark 4.3 for comments on the singular case), and let \(K_{\tau}\) denote the complexification of a maximal compact subgroup of \(G_{\tau}\). Then we can apply Beilinson-Bernstein localization to describe the category \(\mathcal{HC}_{\tau,[\lambda]}\) of Harish-Chandra \((\mathfrak{g},K_{\tau})\)-modules for the real groups \(G_{\tau}\) in terms of \(\lambda\)-twisted \(\mathcal{D}\)-modules on the flag variety \(G/B\), equivariant for the corresponding complex symmetric subgroup \(K_{\tau}\):
\[\mathcal{HC}_{\tau,[\lambda]}\simeq\mathcal{D}_{\lambda}(K_{\tau}\backslash G /B).\]
Applying the Riemann-Hilbert correspondence, this de Rham realization has a Betti counterpart as \(\alpha=\exp(2\pi i\lambda)\)-twisted constructible sheaves on \(K_{\tau}\backslash G/B\) (see [10]).
Kashiwara and Schmid [10, 11] introduced another identification of derived categories of representations with equivariant derived categories on flag varieties, which is more directly related to admissible representations of the groups
(for example by natural globalization functors):
\[\mathcal{HC}_{\tau,[\lambda]}\simeq\mathcal{S}hv_{\lambda}(G_{\tau}\backslash G/B).\]
In this realization we replace \(K_{\tau}\)-equivariance by \(G_{\tau}\)-equivariance, and the identification of the two realizations is provided by the Matsuki correspondence for sheaves [10].
We can describe all of these categories for varying \(\tau\) simultaneously using homotopy fixed points: we have a natural identification
\[\coprod_{\tau\in\Theta}G_{\tau}\backslash G/B\simeq(B\backslash G/B)^{\Gamma}\]
where \(\Gamma\) acts on \(B\backslash G/B\simeq G\backslash(G/B\times G/B)\) by switching the factors composed with the real form \(\theta\). In other words, representations of the entire pure inner class \(\Theta\) are naturally realized as
\[\mathcal{HC}_{\Theta,[\lambda]}\simeq\mathcal{S}hv_{\lambda}((B\backslash G/ B)^{\Gamma}).\]
### Archimedean Local Langlands: Spectral side
Let \((\check{G},\eta)\) be the Langlands dual group to \((G,\theta)\) with its dual algebraic involution. We introduce the geometric \(L\)-group \(G^{L}\) associated to \(G\) and the conjugation \(\theta\) via the Galois-equivariant Satake equivalence [22, 23]. It is an extension \(\check{G}\to G^{L}\to\Gamma\) of the Galois group \(\Gamma=\operatorname{Gal}(\mathbb{C}/\mathbb{R})\) by \(\check{G}\) (though not necessarily a semi-direct product). To avoid subtleties with the definition of the \(L\)-group we will assume \(G\) is of adjoint type.
On the spectral side, the involution \(\eta\) likewise gives rise to a collection of involutions in the inner class of \(\eta\) parameterized by the set \(\Sigma(\eta):=\{\sigma\in\check{G}\mid\sigma\eta(\sigma)=e\}/\check{G}\), and likewise a decomposition
\[B\check{G}^{\Gamma}\simeq\operatorname{Hom}(\Gamma,\check{G}\rtimes\Gamma)/( \check{G}\rtimes\Gamma)\simeq\{\sigma\in\check{G}\mid\sigma\eta(\sigma)=e\}/ \check{G}\simeq\coprod_{\sigma\in\Sigma(\eta)}BK_{\sigma}\]
where \(K_{\sigma}:=\check{G}^{\iota}\subset\check{G}\) is the corresponding symmetric subgroup for the involution \(\iota(g)=\tilde{\sigma}\eta(g)\tilde{\sigma}^{-1}\) attached to a representative \([\tilde{\sigma}]=\sigma\in\Sigma(\eta)\).
Let \(X(G^{L})\) denote the ABV space of geometric parameters. Since the ABV local Langlands correspondence only concerns _equivariant_ sheaves on \(X(G^{L})\), it is natural to consider instead the quotient stack \(\mathcal{X}(G^{L}):=X(G^{L})/\check{G}\). The stack \(\mathcal{X}(G^{L})\) is a disjoint union of stacks \(\mathcal{X}(\mathbb{O}_{\lambda},G^{L})\) over semisimple orbits \(\mathbb{O}_{\lambda}=\check{G}\cdot\lambda\subset\check{\mathfrak{g}}\). We write \(e(\mathbb{O}_{\lambda})=\exp(2\pi i\mathbb{O}_{\lambda})=\check{G}\cdot\alpha\subset\check{G}\) with \(\alpha=\exp(2\pi i\lambda)\), and \(\check{P}(\lambda)\subset\check{G}(\alpha)\) the associated parabolic in the centralizer. We write the symmetric subgroup \(K_{\sigma}(\alpha):=\check{G}(\alpha)^{\sigma}\) for
\[\sigma\in\Sigma(\eta,\alpha):=\{\sigma\in\check{G}\mid\sigma\eta(\sigma)= \alpha\}/\check{G}(\alpha). \tag{4.1}\]
The description of the \(\check{G}\)-orbits [22] combined with simple observations about homotopy fixed points gives rise to the following succinct description of the equivariant ABV spaces from [22]:
**Proposition 4.1**.: _The ABV stack \(\mathcal{X}(\mathbb{O}_{\lambda},G^{L})\) for fixed \(\lambda\) is identified with the disjoint union_
\[\coprod_{\sigma\in\Sigma(\eta,\alpha)}K_{\sigma}(\alpha)\backslash\check{G}( \alpha)/\check{P}(\lambda)\]
_which in turn is identified with the fixed point stack_
\[(\check{P}(\lambda)\backslash\check{G}(\alpha)/\check{P}(\lambda))^{\Gamma}\]
_where \(\Gamma\) is acting by exchanging the factors composed with \(\eta\)._
### Soergel's conjecture
We will now assume that \(\mathbb{O}\) is a _regular_ semisimple orbit, i.e., that we are parametrizing representations with regular infinitesimal character; see Remark 4.3 for discussion of singular characters. In particular the associated parabolic \(P(\lambda)\) in the centralizer \(\check{G}(\alpha)\) of \(\alpha\) is a Borel \(\check{B}(\alpha)\). In this case the ABV parametrization is succinctly expressed as describing the Grothendieck group of the category of sheaves on \((\check{B}(\alpha)\backslash\check{G}(\alpha)/\check{B}(\alpha))^{\Gamma}\).
**Conjecture 4.2** (Soergel's conjecture, [10] formulation).: There is a Koszul duality
\[\mathcal{S}hv_{\alpha}((B\backslash G/B)^{\Gamma})\underset{Kos}{\sim} \mathcal{S}hv((\check{B}(\alpha)\backslash\check{G}(\alpha)/\check{B}(\alpha)) ^{\Gamma})\]
between the category of Harish-Chandra modules for the pure inner class \(\Theta\) with regular infinitesimal character \(\lambda\) and exponential \(\alpha=\exp(2\pi i\lambda)\), and the category of sheaves on the ABV geometric parameter stacks.
By _Koszul duality_\(\mathcal{C}\underset{Kos}{\sim}\mathcal{D}\) we mean that the categories \(\mathcal{C}\) and \(\mathcal{D}\) admit graded lifts \(\mathcal{C}_{gr},\mathcal{D}_{gr}\) (also known as "mixed versions"), and that we have an equivalence of graded categories
\[\mathcal{C}_{gr}\simeq\mathcal{D}_{gr}^{\Rsh},\]
where \((-)^{\Rsh}\) denotes the Tate shearing of Definition A.10. Moreover, both sides carry natural actions of Koszul dual
monoidal categories identified here, and the equivalence is expected to respect this structure.
**Remark 4.3** (Singular infinitesimal character).: For \(\lambda\) singular, we need to modify the statement of Conjecture 4.2 as follows. On the spectral side, we replace the Borel \(\check{B}(\alpha)\) by the parabolic \(\check{P}(\lambda)\) determined by \(\lambda\) as in [10]. On the automorphic side, the category of Harish-Chandra modules for \(\lambda\) singular is a quotient of the corresponding category of equivariant sheaves on the flag variety, described e.g. in [11]. Equivalently, there is a natural projector (an idempotent monad) acting on the category of sheaves whose modules are identified with representations. On the level of Hecke categories (or in the case of a complex group) these matching modifications are the parabolic-singular Koszul duality of [1, 2].
### Representations of real groups in families
We now recast the automorphic side of Soergel's conjecture, i.e., the category of Harish-Chandra modules \(\mathcal{HC}_{\Theta,[\lambda]}\) for real forms \(G_{\tau}\) in the pure inner class of \(\theta\) with regular infinitesimal character \(\lambda\). As we described, this is realized by Kashiwara-Schmid localization [11, 12] as \(\alpha\)-twisted sheaves (for \(\alpha=\exp(2\pi i\lambda)\)) on
\[\coprod_{\tau\in\Theta}G_{\tau}\backslash G/B\simeq(B\backslash G/B)^{\Gamma}.\]
In order to incorporate variation with \(\alpha\) we first reformulate \(\alpha\)-twisted sheaves on \(G/B\) as sheaves on the torus bundle \(G/N\to G/B\) which are locally constant with generalized monodromy \(\alpha\) along the fibers. To let \(\alpha\) vary we simply drop the monodromicity condition and allow arbitrary sheaves which are locally constant along the fibers. This local constancy can be reformulated as not allowing semisimple directions in the singular support of our sheaves - i.e., we are considering \(\mathcal{S}hv_{\mathcal{N}}(G/N)\), sheaves on \(G/N\) with nilpotent singular support.
Thus a families version of the automorphic side of Conjecture 4.2 is provided by a direct sum over the pure inner class \(\tau\in\Theta\) of categories of nilpotent sheaves \(\mathcal{S}hv_{\mathcal{N}}(G_{\tau}\backslash G/N)\).
#### 4.4.1. From sheaves to representations
It is natural to wonder how the categories \(\mathcal{S}hv_{\mathcal{N}}(G_{\tau}\backslash G/N)\) relate to our original motivation, namely, representations of the real groups \(G_{\tau}\). For this we need a form of Kashiwara-Schmid localization with varying central character.
In the de Rham setting of \(\mathcal{D}\)-modules, the article [1] introduced a version of Beilinson-Bernstein localization for varying infinitesimal characters, replacing twisted \(\mathcal{D}\)-modules \(\mathcal{D}_{\alpha}(G/B)\) on \(G/B\) by weakly \(H\)-equivariant \(\mathcal{D}\)-modules \(\mathcal{D}_{H}(G/N)\) on \(G/N\). The result was an identification
\[U\mathfrak{g}\text{-mod}\simeq\mathcal{D}_{H}(G/N)^{\mathbb{D}}\]
of \(U\mathfrak{g}\)-modules with modules for the Weyl-Demazure monad \(\mathbb{D}\), an explicit algebra acting on \(\mathcal{D}_{H}(G/N)\) enforcing Weyl-group invariance - i.e., accounting for the descent data from \(\mathcal{D}\)-modules, which depend on \(\lambda\in\mathfrak{h}^{*}\), to representations, which depend on \([\lambda]\in\mathfrak{h}^{*}/W\). Specializing to singular infinitesimal character this recovers the quotient from \(\mathcal{D}\)-modules to representations discussed in Remark 4.3. If we introduce equivariance for symmetric subgroups, we obtain a similar description
\[\mathcal{HC}_{\Theta}\simeq\bigoplus_{\tau\in\Theta}\mathcal{D}_{H}(K_{\tau }\backslash G/N)^{\mathbb{D}}\]
for the entire category of Harish-Chandra modules for the pure inner class \(\Theta\).
The Betti category \(\mathcal{S}hv_{\mathcal{N}}(G/N)\) differs from its de Rham counterpart \(\mathcal{D}_{H}(G/N)\) by discarding the choice of logarithm \(\lambda\in\mathfrak{h}^{*}\) of the monodromy \(\alpha\in\tilde{H}\) - this accounts for the difference between local systems and flat connections on a torus
\[\mathcal{L}oc(H)\simeq\mathrm{QC}(\check{H})\quad\quad\text{vs.}\quad\quad \mathcal{C}onn(H)\simeq\mathrm{QC}(\mathfrak{h}^{*}/X_{\bullet}(\check{H})).\]
However this distinction can be removed by keeping track of an extra lattice grading, so that in particular we can recover \(U\mathfrak{g}\)-mod as modules over an "affine Weyl" monad \(\mathbb{D}_{\mathrm{aff}}\) on \(\mathcal{S}hv_{\mathcal{N}}(G/N)\). Likewise we can recover
\[\mathcal{H}\mathcal{C}_{\Theta}\simeq\bigoplus_{\tau\in\Theta}\mathcal{S}hv_{ \mathcal{N}}(G_{\tau}\backslash G/N)^{\mathbb{D}_{\mathrm{aff}}},\]
providing a direct link between the automorphic side of Conjecture 4.5 and representations of real groups.
### Twistor Geometric Langlands: Spectral side
A natural families version of the spectral side of Soergel's conjecture comes from considering Langlands parameters on the twistor line. We introduced the _twistor Langlands parameter stack_\(\mathbb{L}^{\eta}\) in the following equivalent ways.
(1) Letting \(\mathcal{B}:=\check{G}/\check{B}\) denote the flag variety, it is the stack
\[\mathbb{L}^{\eta}:=\{(g,\check{B}^{\prime})\in\check{G}\times\mathcal{B}\mid g \eta(g)\in\check{B}^{\prime}\}/\check{G}\]
where \(\check{G}\) acts by \(\eta\)-twisted conjugation, i.e. \(h\cdot(g,\check{B}^{\prime})=(hg\eta(h^{-1}),h\check{B}^{\prime}h^{-1})\). In other words, \(\mathbb{L}^{\eta}\) is the fiber product
of the square map on the second component of \(G^{L}\) with the Grothendieck-Springer resolution of \(\check{G}/\check{G}\). There is a monodromy map to the universal Cartan
\[\chi:\mathbb{L}^{\eta}\to\check{B}/[\check{B},\check{B}]=\check{H},\qquad g \mapsto g\eta(g)\pmod{[\check{B},\check{B}]}.\]
(2) We have a description in terms of loop spaces (see Theorem 4.6 of [1])
\[\mathbb{L}^{\eta}\simeq\mathcal{L}((\check{B}\backslash\check{G}/\check{B})^ {\Gamma})\simeq\mathcal{L}(\check{B}\backslash\check{G}/\check{B})^{\Gamma}\]
where \(\eta\in\Gamma=\mathbb{Z}/2\mathbb{Z}\) acts by \(\eta\) composed with swapping the two factors, and \(\mathbb{L}^{\eta}\) inherits an \(S^{1}\)-action from this description. The monodromy map is encoded by the \(S^{1}\)-equivariant map
\[\chi:\mathcal{L}(\check{B}\backslash\check{G}/\check{B})^{\Gamma}\to\mathcal{ L}(B\check{B}\times B\check{B})^{\Gamma}\simeq\check{B}/\check{B}\to\check{B} //\check{B}\simeq\check{H}.\]
We also have a description as the \(\eta\)-twisted loop space
\[\mathcal{L}((\check{B}\backslash\check{G}/\check{B})^{\Gamma})\simeq\mathcal{ L}_{\eta}((\mathcal{B}\times\mathcal{B})/\check{G})=\mathcal{L}((\mathcal{B} \times\mathcal{B})/G^{L})\times_{\mathcal{L}(B\Gamma)}\{\eta\}\]
though this presentation does not a priori inherit an \(S^{1}\)-action from the loop space \(\mathcal{L}((\mathcal{B}\times\mathcal{B})/G^{L})\) (see Definition 2.8).10
Footnote 10: If we impose \(\Gamma\)-equivariance, i.e. take the specialized loop space \(\mathcal{L}^{\prime}_{\eta}\), there is a circle action, but this action is “half” the degree of the action we consider. The \(\Gamma\)-equivariance allows for a well-defined notion of half-degree.
(3) We also recall (see e.g. [1, 3.2]) that \(\mathbb{L}^{\eta}\) may be identified with \(\eta\)-twisted parabolic \(\check{G}\)-local systems on the twistor line \(\widetilde{\mathbb{P}}^{1}_{\mathbb{R}}\), i.e. the real form defined
by \(\mathbb{P}^{1}_{\mathbb{C}}\) modulo the \(\Gamma\)-action by the antipodal map \(z\mapsto-1/\bar{z}\). Namely, we consider \(G^{L}\)-local systems on \(\widetilde{\mathbb{P}}^{1}_{\mathbb{R}}\backslash\infty\) whose induced \(\Gamma\)-local system is identified with the orientation double cover, and which moreover are equipped with a Borel containing the monodromy. Equivalently, these are \(\check{G}\)-local systems on \(\mathbb{P}^{1}_{\mathbb{C}}\backslash\{0,\infty\}\) with invariant flags at the two poles, equipped with a \(\Gamma\)-equivariant structure respecting the flags. This description of \(\mathbb{L}^{\eta}\) makes evident an action of \(S^{1}\), coming from the geometric action of \(U(1)\) as symmetries of \((\widetilde{\mathbb{P}}^{1}_{\mathbb{R}},\infty)\) (equivalently, rotation of \(\mathbb{P}^{1}_{\mathbb{C}}\) along the axis through the poles, which commutes with the antipodal map) - in fact the stack of parabolic local systems on \((\mathbb{P}^{1}_{\mathbb{C}},0,\infty)\), the group version of the Steinberg stack, is naturally identified as the loop space \(\mathcal{L}(\check{B}\backslash\check{G}/\check{B})\). The monodromy map is given exactly by the monodromy around \(\infty\).
For an element \(\alpha\in\check{H}\) of the universal Cartan, we define the monodromic twistor parameter space \(\mathbb{L}^{\eta}_{\alpha}\) to be the formal completion of \(\mathbb{L}^{\eta}\) along the fiber over \(\alpha\). A key observation of [10] is an identification of \(\mathbb{L}^{\eta}_{\alpha}\) as the unipotent loop space of the corresponding ABV space (recall Proposition 4.1):11
Footnote 11: Recall from (4.1) the meaning of \(\Sigma(\eta,\alpha)\).
\[\mathbb{L}^{\eta}_{\alpha}\simeq\coprod_{\sigma\in\Sigma(\eta,\alpha)}\mathcal{ L}^{u}(K_{\sigma}(\alpha)\backslash\check{G}(\alpha)/\check{B}(\alpha))\simeq \mathcal{L}^{u}((\check{B}(\alpha)\backslash\check{G}(\alpha)/\check{B}( \alpha))^{\Gamma})\]
This identification may be viewed as a special case of equivariant localization for the group \(\check{B}\times\check{B}\), discussed in Example A.2.5, i.e. it is obtained from the loop space \(\mathcal{L}((\check{B}\backslash\check{G}/\check{B})^{\Gamma})\) by completing at \(\alpha\in\check{B}/\check{B}=\check{H}\). In particular, \(\mathbb{L}^{\eta}_{\alpha}\) comes equipped with a \(\mathbb{G}_{m}\)-action contracting it to the ABV space \(\mathcal{X}_{\alpha}\). This identification respects circle actions up to an explicit central twist (see Section A.2.3), and as a result the categories of cyclic sheaves are identified. Moreover, applying Theorem 2.7 identifying Tate sheaves on formal and unipotent loops, as a result of the contracting \(\mathbb{G}_{m}\)-action, one can identify cyclic sheaves on \(\mathbb{L}^{\eta}_{\alpha}\) directly with filtered \(\mathcal{D}\)-modules (and, by Riemann-Hilbert on finite orbit stacks, sheaves) on the ABV spaces, i.e., with the spectral side of Soergel's conjecture:12
Footnote 12: See Section A.1 for a discussion of the renormalized Tate construction.
**Theorem 4.4**.: _There is a \(k[u,u^{-1}]\)-linear equivalence of categories_
\[\mathrm{QC}^{!}(\mathbb{L}^{\eta}_{\alpha})^{\mathrm{Tate}}\simeq\mathcal{S}hv ((\check{B}(\alpha)\backslash\check{G}(\alpha)/\check{B}(\alpha))^{\Gamma}) \otimes_{k}k[u,u^{-1}]\]
_between periodic cyclic sheaves on the monodromic twistor Langlands parameter space and the category of sheaves on the \(\alpha\)-ABV parameter spaces (base changed to \(k[u,u^{-1}]\))._
Thus we have a families version of the spectral side of Soergel's Conjecture to compare with the automorphic counterpart given by nilpotent sheaves on \(G/N\). We encode this expectation in the following conjecture (postponing for the moment natural compatibilities with Hecke actions):
**Conjecture 4.5** (Families Soergel Conjecture).: There is an equivalence of categories
\[\mathcal{S}hv_{\mathcal{N}}((N\backslash G/N)^{\Gamma})\otimes_{k}k[u,u^{-1}]\simeq\mathrm{QC}^{!}(\mathbb{L}^{\eta})^{\mathrm{Tate}}.\]
### Twistor Geometric Langlands: Automorphic side
The stack \((B\backslash G/B)^{\Gamma}\) appearing on the automorphic side of Conjecture 4.2 (or its \(N\)-version appearing in Conjecture 4.5) has a natural geometric interpretation in terms of the twistor line, discovered in [1].
**Definition 4.6**.: The topological stack of real \(G\)-bundles on \(\widetilde{\mathbb{P}}^{1}_{\mathbb{R}}\)
\[\operatorname{Bun}_{G,\theta}(\widetilde{\mathbb{P}}^{1}_{\mathbb{R}};\infty):=\operatorname{Bun}_{G}(\mathbb{P}^{1};0,\infty)^{\Gamma}\]
is the fixed point stack of \(\Gamma\) acting on (the Betti realization of) the stack of \(G\)-bundles on \(\mathbb{P}^{1}\) equipped with decorated flags (i.e., \(N\)-reductions) at \(0,\infty\) by composition of the antipodal map and the involution \(\theta\) on \(G\).
The Betti form of the geometric Langlands correspondence [1] seeks to describe the categories of nilpotent sheaves on stacks of \(G\)-bundles \(\mathcal{S}hv_{\mathcal{N}}(\operatorname{Bun}_{G})\) in terms of algebraic geometry of stacks of local systems. In the case of parabolic bundles on the twistor line, we have already identified the corresponding space of local systems with \(\mathbb{L}^{\eta}\), so that we have the following:
**Conjecture 4.7** (Twistor Geometric Langlands [1, 1]).: There is an equivalence
\[\mathcal{S}hv_{\mathcal{N}}(\operatorname{Bun}_{G,\theta}(\widetilde{\mathbb{ P}}^{1}_{\mathbb{R}},\infty))\simeq\operatorname{QC}^{!}(\mathbb{L}^{\eta})\]
intertwining natural affine Hecke symmetries.
**Remark 4.8**.: There are two variants ("standard" and "renormalized") of the large categories of sheaves on stacks, corresponding to the two Koszul dual variants of sheaves on \(BG\) (modules for chains on \(G\) vs. modules for cochains on \(BG\)). These correspond to imposing or not imposing nilpotent singular support of coherent sheaves on the spectral side.
The open locus in \(\operatorname{Bun}_{G,\theta}(\widetilde{\mathbb{P}}^{1}_{\mathbb{R}};\infty)\) where the underlying \(G\)-bundle on \(\mathbb{P}^{1}\) is trivial is identified with the fixed point stack \((N\backslash G/N)^{\Gamma}\) of the Galois group on the stack \(N\backslash G/N\), which parametrizes pairs of decorated flags on the trivial bundle. Thus the category \(\mathcal{S}hv_{\mathcal{N}}((N\backslash G/N)^{\Gamma})\) and its variants appearing in Soergel's conjecture are identified with sheaves on an open locus in \(\operatorname{Bun}_{G,\theta}(\widetilde{\mathbb{P}}^{1}_{\mathbb{R}};\infty)\), and thus fit in a semi-orthogonal decomposition of the category. The circle action respects this subcategory and is trivialized on it; indeed we have the following description of the periodic cyclic form of the twistor automorphic category, which follows from the equivariant localization theorem of [1] applied to Hom spaces of \(S^{1}\)-equivariant sheaves on \(\operatorname{Bun}_{G,\theta}(\widetilde{\mathbb{P}}^{1}_{\mathbb{R}},\infty)\):
**Proposition 4.9**.: _[_1_]_ _The periodic cyclic category of automorphic sheaves is identified as_
\[\mathcal{S}hv_{\mathcal{N}}(\operatorname{Bun}_{G,\theta}(\widetilde{\mathbb{ P}}^{1}_{\mathbb{R}},\infty))^{\operatorname{Tate}}\simeq\mathcal{S}hv_{ \mathcal{N}}((N\backslash G/N)^{\Gamma})\otimes_{k}k[u,u^{-1}].\]
_Thus the families Soergel conjecture, Conjecture 4.5, is identified with the periodic cyclic form of twistor geometric Langlands, Conjecture 4.7._
In other words, the \(u\)-deformation of the twistor geometric Langlands conjecture (the coherent local Langlands correspondence over \(\mathbb{R}\)) picks out only the subcategory associated to representations of pure inner forms of \(G\), and on this subcategory produces the constructible form of the categorical local Langlands correspondence.
**Remark 4.10**.: For non-semisimple groups the trivial bundle (or \(S^{1}\)-fixed) locus - whose sheaves participate in the cyclic deformation to representations of pure inner forms - can be strictly smaller than the semistable locus, whose sheaves index representations of groups associated to basic isocrystals.
## Appendix A Foundations
In this section we establish technical foundations for the representation theoretic applications discussed in previous sections. We will begin with a general set-up: let \(X\) be a smooth Artin stack with affine diagonal over a field \(k\) of characteristic zero.
### Circle and \(B\mathbb{G}_{a}\)-actions
We make a digression on categorical \(G\)-actions for the non-affine group stacks \(G=S^{1},B\mathbb{G}_{a},B\mathbb{G}_{a}\rtimes\mathbb{G}_{m}\), since they may be alien to a reader accustomed to working with actions by affine algebraic groups. Extensive discussion on this subject may be found in Section 6 of [11]; we also refer the reader to Section 3.1 of [12] for the case \(G=B\mathbb{G}_{a}\). We wish to draw attention to the following two phenomena which are not present when \(G\) is an affine algebraic group.
**Lack of (de)-equivariantization correspondence:** Since \(BG=BS^{1}\) is not 1-affine, not all objects in a QC\((G)\)-module category \(\mathbf{C}\) can be generated by invariant ones, i.e. the natural functor
\[\mathbf{C}^{\mathrm{QC}(G)}\otimes_{\mathrm{QC}(BS^{1})}\mathbf{Vect}\hookrightarrow\mathbf{C}\]
is fully faithful (Proposition A.6) but no longer an equivalence. On the other hand, when \(G=B\mathbb{G}_{a}\) (and similarly for \(G=B\mathbb{G}_{a}\rtimes\mathbb{G}_{m}\)), by Theorem 2.5.7 of [13] we have that \(BG=B^{2}\mathbb{G}_{a}\) is 1-affine, thus the functor is an equivalence:
\[\mathbf{C}^{B\mathbb{G}_{a}}\otimes_{\mathrm{QC}(B^{2}\mathbb{G}_{a})}\mathbf{Vect}\xrightarrow{\ \simeq\ }\mathbf{C}.\]
This is discussed in Section A.1.1.
**Renormalized invariants:** For \(G=S^{1},B\mathbb{G}_{a}\), and \(B\mathbb{G}_{a}\rtimes\mathbb{G}_{m}\), the stacks \(BG\) are not compactly generated by perfect objects, i.e. QC\((BG)\neq\mathrm{Ind}(\mathrm{Perf}(BG))\), and the operations of ind-completion and \(G\)-invariants do not commute. We will explain why we prefer to take the \(G\)-invariants of small categories and then ind-complete (which we call the _category of compactly renormalized (weak) \(G\)-invariants_), rather than the other way around in Section A.1.2.
#### A.1.1. Linearization and affinization of circle actions
Let \(G\) be an affine algebraic group acting on a scheme \(X\). There are two module categories for different monoidal categories one can attach to this set-up.
1. The category QC\((G)\) is monoidal under pushforward along group multiplication \(m:G\times G\to G\), and acts on QC\((X)\) via pushforward along the action map \(a:G\times X\to X\). Alternatively (and more naturally), we may view QC\((G)\) as a comonoidal category under pullback, and consider comodule categories.
2. The category QC\((BG)\) is monoidal under tensor product, and acts on QC\((X/G)\) via pullback and tensoring.
Furthermore, these two categories are related via the invariants and coinvariants constructions:
\[\operatorname{QC}(X)^{\operatorname{QC}(G)}\simeq\operatorname{QC}(X/G),\qquad \operatorname{QC}(X/G)\otimes_{\operatorname{QC}(BG)}\operatorname{\mathbf{Vect}}_ {k}\simeq\operatorname{QC}(X).\]
These operations are sometimes referred to as _equivariantization_ and _de-equivariantization_ respectively.
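As a toy illustration of these two operations, take \(G=\mathbb{G}_{m}\) acting on \(X=\mathbb{A}^{1}\) by scaling (writing \(\operatorname{Mod}^{\mathbb{Z}\text{-gr}}(k[x])\) for \(\mathbb{Z}\)-graded \(k[x]\)-modules, a notation we introduce only for this example):

\[\operatorname{QC}(\mathbb{A}^{1})^{\operatorname{QC}(\mathbb{G}_{m})}\simeq\operatorname{QC}(\mathbb{A}^{1}/\mathbb{G}_{m})\simeq\operatorname{Mod}^{\mathbb{Z}\text{-gr}}(k[x]),\qquad\operatorname{QC}(\mathbb{A}^{1}/\mathbb{G}_{m})\otimes_{\operatorname{QC}(B\mathbb{G}_{m})}\operatorname{\mathbf{Vect}}_{k}\simeq\operatorname{Mod}(k[x]),\]

so equivariantization records a \(\mathbb{Z}\)-grading and de-equivariantization forgets it; for affine algebraic \(G\) these operations are mutually inverse, which is exactly what fails for \(G=S^{1}\) below.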
In fact, these functors can be made sense of in a general context, where they arise as the adjoint invariants and reconstruction functors from Section 10.2 of [1]
\[\mathbf{Comod}(\operatorname{QC}(G))\ \rightleftarrows\ \mathbf{Mod}(\operatorname{QC}(BG)).\]
One can pass from the less familiar comodules to modules by the canonical equivalence \(\mathbf{Comod}(\operatorname{QC}(G))\simeq\mathbf{Mod}(\operatorname{QC}(G)^{\vee})\); if \(\operatorname{QC}(G)\) is self-dual, then in addition we have \(\mathbf{Mod}(\operatorname{QC}(G)^{\vee})\simeq\mathbf{Mod}(\operatorname{QC}(G))\).13 We are interested in conditions under which these adjoints are in fact equivalences. Following the discussion in _loc. cit._, this occurs when \(BG\) is _1-affine_, and by Proposition 10.4.4 of [1], \(BG\) is 1-affine when \(G\) is an affine algebraic group. This is by no means the only case; by Theorem 2.5.7 of _op. cit._ we see that \(BG\) is 1-affine (and self-dual) when \(G=B\mathbb{G}_{a},B\mathbb{G}_{a}\rtimes\mathbb{G}_{m}\).
Footnote 13: In many situations, \(\operatorname{QC}(G)\) is self-dual, e.g. by D.1.2 of [1] when \(\operatorname{QC}(G)\) is rigid for any monoidal structure, which is satisfied by any perfect stack by Proposition 3.4.2 of [1].
As mentioned, \(B^{2}\mathbb{G}_{a}\) is 1-affine, and we will now see that \(BS^{1}\) is not. We are interested in the monoidal category \(\operatorname{QC}(S^{1})\), where the monoidal structure is given by pushforward along multiplication \(m:S^{1}\times S^{1}\to S^{1}\) and the unit is the skyscraper at the identity. We recall the following standard calculation.
**Proposition A.1**.: _The category \(\operatorname{QC}(S^{1})\) is equivalent to the (derived) category of chain complex valued local systems on \(S^{1}\). Under Cartier duality we have monoidal equivalences and identifications_
\[\operatorname{QC}(S^{1})\xrightarrow{\simeq}\operatorname{Mod}(k\mathbb{Z}) \xrightarrow{\simeq}\operatorname{QC}(\mathbb{G}_{m})\]
_where \(p:\operatorname{Spec}k\to B\mathbb{Z}=S^{1}\) and \(q:\mathbb{G}_{m}\to\operatorname{Spec}k\), and where we take for monoidal structure the tensor product on \(\operatorname{QC}(S^{1})\) and \(\operatorname{QC}(\mathbb{G}_{m})\) and the convolution product on \(\operatorname{Mod}(k\mathbb{Z})\).14_
Footnote 14: This is not important for us, but note that \(\operatorname{QC}(S^{1})^{\omega}\supsetneq\operatorname{Perf}(S^{1})\), so \(S^{1}\) is not perfect.
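Concretely (a standard unwinding), a chain-complex-valued local system on \(S^{1}\) is a complex together with an invertible monodromy operator, i.e. a module over the group algebra

\[k\mathbb{Z}\simeq k[t,t^{-1}]=\mathcal{O}(\mathbb{G}_{m}),\]

which is how the identifications \(\operatorname{QC}(S^{1})\simeq\operatorname{Mod}(k\mathbb{Z})\simeq\operatorname{QC}(\mathbb{G}_{m})\) of Proposition A.1 arise.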
Remark A.2.: To add to the confusion, there is another possible notion of what one might mean by an \(S^{1}\)-action on a category. Namely, any topological group \(G\) may be made internal to \(\infty\)-categories, and thus one may formulate in entirely abstract terms what it means for \(G\) to act on an \(\infty\)-category.15 In the \(k\)-linear setting these notions are equivalent. In the case where \(G=S^{1}\), such an action is
given by a map \(\mathbb{Z}\to HH^{\bullet}(\mathbf{C})\) to the Hochschild cohomology complex, which is equivalent to a \(\mathcal{O}(\mathbb{G}_{m})\)-linear structure on \(\mathbf{C}\). We refer the reader to Section 6.1 of [11] for details.
We now make calculations on the equivariantized side, beginning with the \(1\)-affine \(BG=B^{2}\mathbb{G}_{a}\). We choose a generator \(u\in\mathcal{O}(B^{2}\mathbb{G}_{a})\) of cohomological degree \(2\), thus identifying once and for all \(k[u]\simeq\mathcal{O}(B^{2}\mathbb{G}_{a})\), and let \(\mathbb{B}=\{0\}\times_{\mathbb{A}^{1}}\{0\}\) denote the derived self-intersection of the origin in \(\mathbb{A}^{1}\).
**Proposition A.3**.: _The stack \(B^{2}\mathbb{G}_{a}\) is \(1\)-affine and self-dual, thus the equivariantization and de-equivariantization functors are equivalences. Furthermore, letting \(p:\operatorname{Spec}k\to B^{2}\mathbb{G}_{a}\), we have monoidal identifications_
\[\operatorname{QC}(B^{2}\mathbb{G}_{a})\simeq\operatorname{QC}(\mathbb{B})\simeq\operatorname{Mod}_{u\text{-}\mathrm{tors}}(k[u])\]
_for the tensor monoidal structure on \(\operatorname{QC}(B^{2}\mathbb{G}_{a})\), the convolution structure on \(\operatorname{QC}(\mathbb{B})\), and the \(!\)-tensor product on \(\operatorname{Mod}_{u\text{-}\mathrm{tors}}(k[u])\), the full subcategory of locally \(u\)-torsion modules. In particular, \(\operatorname{QC}(B^{2}\mathbb{G}_{a})^{\omega}\subsetneq\operatorname{Perf}(B^{2}\mathbb{G}_{a})\), and \(B^{2}\mathbb{G}_{a}\) is not perfect.16 Similar statements hold for the group \(G=B\mathbb{G}_{a}\rtimes\mathbb{G}_{m}\)._
Footnote 16: E.g. the module \(k[u,u^{-1}]/uk[u]\in\operatorname{Mod}_{u}(\operatorname{Sym}^{\bullet}V^{ \bullet}[-2])\) is not compact but is perfect.
Proof.: Let us, for the reader's sake, give a sense of how the calculation of \(\operatorname{QC}(B^{2}\mathbb{G}_{a})\) goes (following [10]). First, by the usual descent arguments, one has that \(\operatorname{QC}(B^{2}\mathbb{G}_{a})\) is equivalent to comodules for the coalgebra \(\mathcal{O}(B\mathbb{G}_{a})\simeq\operatorname{Ext}_{\mathbb{G}_{a}}^{\bullet} (k,k)\simeq C^{\bullet}(S^{1};k)\), with comultiplication given by pullback along group multiplication. By dualizing, this is equivalent to modules for \(C_{\bullet}(S^{1};k)\). By Koszul duality, this is equivalent to locally nilpotent modules for \(C^{\bullet}(BS^{1};k)\) (with multiplication by cup product), i.e. the augmentation module \(k\in\operatorname{Mod}(C^{\bullet}(BS^{1};k))\) is a compact generator, and \(\operatorname{End}_{C^{\bullet}(BS^{1};k)}(k,k)\simeq C_{\bullet}(S^{1};k)\).
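For orientation, the (co)chain algebras appearing in this calculation are explicitly (a standard computation, not part of the argument above)

\[C^{\bullet}(BS^{1};k)\simeq k[u],\ |u|=2,\qquad C_{\bullet}(S^{1};k)\simeq\Lambda_{k}[\lambda],\ |\lambda|=-1,\]

and Koszul duality exchanges them:

\[\operatorname{End}_{k[u]}(k)\simeq\Lambda_{k}[\lambda],\qquad\operatorname{End}_{\Lambda_{k}[\lambda]}(k)\simeq k[u].\]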
The case of \(BG=BS^{1}\) is similar (see Lemma 3.10 of [1]), and in fact there is no difference between the categories of quasi-coherent sheaves on \(B^{2}\mathbb{G}_{a}\) and \(BS^{1}\). In particular, \(B^{2}\mathbb{G}_{a}\) is the \(1\)-affinization of \(BS^{1}\).
**Proposition A.4**.: _The affinization map \(a:BS^{1}\to B^{2}\mathbb{G}_{a}\) induces a monoidal equivalence_
\[a^{\ast}:\ \operatorname{QC}(B^{2}\mathbb{G}_{a})\xrightarrow{\simeq}\operatorname{ QC}(BS^{1}).\]
Passing through Cartier duality, we have equivalences of monoidal categories on the de-equivariantized side
\[\operatorname{QC}(B\mathbb{G}_{a})\simeq\operatorname{QC}(\widehat{\mathbb{ G}}_{a}),\hskip 28.452756pt\operatorname{QC}(S^{1})=\operatorname{QC}(B \mathbb{Z})\simeq\operatorname{QC}(B\mathbb{G}_{m}).\]
These are evidently different; letting \(a:S^{1}\to B\mathbb{G}_{a}\) denote the affinization map and \(\iota:\widehat{\mathbb{G}}_{a}\hookrightarrow\mathbb{G}_{m}\), we have an identification of the adjoint pair \((a^{\ast},a_{\ast})\) with the adjoint pair \((\iota_{\ast},\iota^{!})\) under Cartier duality. Since the corresponding categories on the equivariantized side are equivalent by Proposition A.4, we may conclude that \(BS^{1}\) is not \(1\)-affine and that equivariantization and de-equivariantization are not inverse equivalences.
**Example A.5** (\(BS^{1}\) is not \(1\)-affine).: Consider the regular QC(\(S^{1}\))-comodule category QC(\(S^{1}\)). There is a tautological identification of its invariants with the augmentation QC(\(BS^{1}\))-module category \(\mathbf{Vect}_{k}\), and by Proposition A.4 we have \(\mathbf{Vect}_{k}\otimes_{\mathrm{QC}(BS^{1})}\mathbf{Vect}_{k}\simeq\mathrm{ QC}(B\mathbb{G}_{a})\), thus the functor
\[\mathrm{QC}(S^{1})^{\mathrm{QC}(S^{1})}\underset{\mathrm{QC}(BS^{1})}{ \otimes}\mathbf{Vect}_{k}\simeq\mathbf{Vect}_{k}\underset{\mathrm{QC}(BS^{1})}{ \otimes}\mathbf{Vect}_{k}\simeq\mathrm{QC}(B\mathbb{G}_{a})\hookrightarrow \mathrm{QC}(S^{1})\]
is fully faithful and induced by pullback along the affinization map \(a:S^{1}\to B\mathbb{G}_{a}\), with essential image the full subcategory of locally nilpotent \(k\mathbb{Z}\)-modules. In particular, this functor is not an equivalence, so \(BS^{1}\) is not \(1\)-affine.
The full faithfulness of the functor in the above examples holds in greater generality; we call the essential image of the functor in the next proposition the full subcategory of \(S^{1}\)_-invariant_ (or \(S^{1}\)-equivariantizable) _objects_. We note that this proposition has a renormalized counterpart in Proposition A.9.
**Proposition A.6**.: _Let \(\mathbf{C}\) be a QC(\(S^{1}\))-comodule category. Then,_
\[\mathbf{C}^{\mathrm{QC}(S^{1})}\otimes_{\mathrm{QC}(BS^{1})}\mathbf{Vect} \hookrightarrow\mathbf{C}\]
_is fully faithful._
Proof.: The case of the right regular representation \(\mathbf{C}=\mathrm{QC}(S^{1})\) was established in Example A.5. In general, tensor the above case with \(\mathbf{C}\):
\[\mathbf{C}\otimes_{\mathrm{QC}(S^{1})}\mathrm{QC}(S^{1})^{S^{1}}\otimes_{ \mathrm{QC}(BS^{1})}\mathbf{Vect}\rightarrow\mathbf{C}.\]
The operation \(\mathbf{C}\otimes_{\mathrm{QC}(S^{1})}-\) can be expressed as a limit by passing to right adjoints, thus commutes with the \(S^{1}\)-invariants operation and preserves full faithfulness.
#### A.1.2. Renormalization of circle and \(B\mathbb{G}_{a}\)-actions
We now discuss the need to renormalize the invariants operation (as raised in [11]). We summarize the discussion of this subsection as follows.
1. A QC(\(S^{1}\))-module category \(\mathbf{C}\) equivariantizes to a cyclic deformation \(\mathbf{C}^{S^{1}}\) over the \(2\)-shifted formal affine line \(\widehat{\mathbb{A}}^{1}[2]\), whose special fiber is the full subcategory of \(\mathbf{C}\) generated by \(S^{1}\)-invariant objects.
2. A QC(\(B\mathbb{G}_{a}\))-module category \(\mathbf{C}\) equivariantizes to a cyclic deformation \(\mathbf{C}^{B\mathbb{G}_{a}}\) over the \(2\)-shifted formal affine line \(\widehat{\mathbb{A}}^{1}[2]\), whose special fiber recovers \(\mathbf{C}\).
3. A QC(\(B\mathbb{G}_{a}\rtimes\mathbb{G}_{m}\))-module category \(\mathbf{C}\) equivariantizes to a cyclic deformation \(\mathbf{C}^{B\mathbb{G}_{a}\rtimes\mathbb{G}_{m}}\) over the graded formal affine line \(\widehat{\mathbb{A}}^{1}/\mathbb{G}_{m}\), whose special fiber recovers \(\mathbf{C}\).
The generic fiber in all the cases above vanishes. The goal of renormalization is to replace the formal affine line with the usual affine line, thus allowing for a meaningful generic fiber.
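In coordinates (a minimal unwinding, with \(u\) the degree-\(2\) equivariant parameter as above), such a deformation is in particular a \(k[u]\)-linear category \(\mathbf{D}\), with special and generic fibers

\[\mathbf{D}\otimes_{\operatorname{Mod}(k[u])}\operatorname{Mod}(k)\qquad\text{and}\qquad\mathbf{D}\otimes_{\operatorname{Mod}(k[u])}\operatorname{Mod}(k[u,u^{-1}]);\]

over the formal affine line every object is locally \(u\)-torsion, so the generic fiber vanishes, whereas after renormalization the generic fiber is the Tate construction of Definition A.8.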
We begin with some generalities. Let \(G\) be a group prestack acting on a prestack \(X\). Then, there is a tautological \(\mathrm{Perf}(G)\)-coaction on \(\mathrm{Perf}(X)\), as well as a QC(\(G\))-coaction on QC(\(X\)). Furthermore, we have
\[\mathrm{Perf}(X)^{\mathrm{Perf}(G)}\simeq\mathrm{Perf}(X/G)\subset\mathrm{QC} (X/G)\simeq\mathrm{QC}(X)^{\mathrm{QC}(G)}.\]
If \(X/G\) is a perfect stack in the sense of [1], i.e. \(\operatorname{QC}(X/G)\) is compactly generated by \(\operatorname{Perf}(X/G)\), then taking \(G\)-invariants of small categories and then ind-completing is the same as taking \(G\)-invariants of large categories. By Corollary 3.22 of _op. cit._ this occurs, for example, in characteristic zero when \(X\) is quasi-projective and \(G\) is an affine algebraic group. However, our groups \(G=S^{1},B\mathbb{G}_{a}\) take us outside of this setting, since \(BG\) is not perfect as we have seen in Example A.5. Since the stacks \(BS^{1}\) and \(B^{2}\mathbb{G}_{a}\) are not perfect, the following equivariantizations give different answers:
1. taking \(\operatorname{QC}(G)\)-invariants of large categories, as we have done in the previous section, and
2. taking \(\operatorname{Perf}(G)\)-invariants of small categories, and then ind-completing to obtain a large category.
We now take the second approach and define the following renormalized equivariantization.
**Definition A.7**.: Let \(G\) be a group object in prestacks. We will denote the \(\operatorname{QC}(G)\)-invariants of a comodule category by \(\mathbf{C}^{G}\). Suppose that \(\mathbf{C}=\operatorname{Ind}(\mathbf{C}_{0})\) is a compactly generated category such that \(\mathbf{C}_{0}\) has a \(\operatorname{Perf}(G)\)-coaction. We define the _category of compactly-renormalized \(G\)-invariants_ to be the \(\operatorname{IndPerf}(BG)\)-module category
\[\mathbf{C}^{\omega G}:=\operatorname{Ind}(\mathbf{C}_{0}^{\operatorname{Perf}(G )}).\]
There is a tautological functor \(\mathbf{C}^{\omega G}\to\mathbf{C}^{G}\) induced by the functor \(\mathbf{C}_{0}^{\operatorname{Perf}(G)}\to\mathbf{C}^{\operatorname{QC}(G)}\).
Then, the compactly renormalized equivariantization in our cases of interest \(G=S^{1},B\mathbb{G}_{a},B\mathbb{G}_{a}\rtimes\mathbb{G}_{m}\) takes on the following forms.
1. An \(\operatorname{IndPerf}(S^{1})\)-module category \(\mathbf{C}\) compactly equivariantizes to a cyclic deformation \(\mathbf{C}^{\omega S^{1}}\) over the 2-shifted affine line \(\mathbb{A}^{1}[2]\).
2. An \(\operatorname{IndPerf}(B\mathbb{G}_{a})=\operatorname{QC}(B\mathbb{G}_{a})\)-module category \(\mathbf{C}\) compactly equivariantizes to a cyclic deformation \(\mathbf{C}^{\omega B\mathbb{G}_{a}}\) over the 2-shifted affine line \(\mathbb{A}^{1}[2]\).
3. An \(\operatorname{IndPerf}(B\mathbb{G}_{a}\rtimes\mathbb{G}_{m})=\operatorname{QC}(B\mathbb{G}_{a}\rtimes\mathbb{G}_{m})\)-module category \(\mathbf{C}\) compactly equivariantizes to a cyclic deformation \(\mathbf{C}^{\omega B\mathbb{G}_{a}\rtimes\mathbb{G}_{m}}\) over the graded affine line \(\mathbb{A}^{1}/\mathbb{G}_{m}\).
In light of this calculation we can make the following definition.
**Definition A.8**.: Let \(\mathbf{C}=\operatorname{Ind}(\mathbf{C}_{0})\) be a compactly generated category with a \(\operatorname{QC}(S^{1})\)-action restricting to compact objects. We define the _Tate construction_ to be the 2-periodic category
\[\mathbf{C}^{\operatorname{Tate}}:=\mathbf{C}^{\omega S^{1}}\otimes_{ \operatorname{Mod}(k[u])}\operatorname{Mod}(k[u,u^{-1}]).\]
One can make a similar definition for a category with a \(\operatorname{QC}(B\mathbb{G}_{a})\)-action. If the category has a \(\operatorname{QC}(B\mathbb{G}_{a}\rtimes\mathbb{G}_{m})\)-action, we may define the _graded Tate construction_
\[\mathbf{C}^{\operatorname{Tate}\rtimes\mathbb{G}_{m}}:=\mathbf{C}^{\omega B \mathbb{G}_{a}\rtimes\mathbb{G}_{m}}\otimes_{\operatorname{QC}(\mathbb{A}^{1} /\mathbb{G}_{m})}\operatorname{QC}(\mathbb{G}_{m}/\mathbb{G}_{m})\]
which is a (non-periodic) \(k\)-linear category.
In [1] we show that the compactly renormalized invariant category is compatible with the usual invariants at the \(u=0\) fiber.
**Proposition A.9**.: _Let \(\mathbf{C}_{0}\) be a small stable \(k\)-linear \(\infty\)-category with a \(\operatorname{Perf}(S^{1})\)-coaction, and let \(\mathbf{C}=\operatorname{Ind}(\mathbf{C}_{0})\). The canonical functor_
\[\mathbf{C}^{\omega S^{1}}\otimes_{k[u]}\operatorname{\mathbf{Vect}}_{k} \xrightarrow{\ \simeq\ }\mathbf{C}^{S^{1}}\otimes_{k[u]}\operatorname{\mathbf{Vect}}_{k}\]
_is an equivalence. In particular, by Proposition A.6 the functor_
\[\mathbf{C}^{\omega S^{1}}\otimes_{k[u]}\operatorname{\mathbf{Vect}} \xrightarrow{\ \ }\mathbf{C}\]
_is fully faithful. Similar statements hold for the compactly renormalized \(B\mathbb{G}_{a}\) and \(B\mathbb{G}_{a}\rtimes\mathbb{G}_{m}\)-invariants as well._
#### a.1.3. Koszul duality and graded lifts
As we have seen in Definition A.8, the Tate construction gives rise to a 2-periodic category, while the graded Tate construction provides a lift to a non-periodic category. These phenomena are typical of Koszul duality [1, 13], and we now formalize these notions.
**Definition A.10**.: Let \(\mathbf{C}\) be a \(k\)-linear category; a _graded lift_ of \(\mathbf{C}\) is a QC\((B\mathbb{G}_{m})\)-module category \(\mathbf{C}_{gr}\) and an equivalence \(\mathbf{C}\simeq\mathbf{C}_{gr}\otimes_{\operatorname{QC}(B\mathbb{G}_{m})} \operatorname{\mathbf{Vect}}_{k}\). The category QC\((B\mathbb{G}_{m})\) of \(\mathbb{Z}\)-graded vector spaces has an automorphism by _Tate shearing_:
\[M^{\shortrightarrow}:=\bigoplus_{n\in\mathbb{Z}}M_{n}[-2n],\qquad\quad M^{\shortleftarrow}:=\bigoplus_{n\in\mathbb{Z}}M_{n}[2n].\]
For any QC\((B\mathbb{G}_{m})\)-module category \(\mathbf{C}\) we denote by \(\mathbf{C}^{\shortrightarrow}\) (resp. \(\mathbf{C}^{\shortleftarrow}\)) the QC\((B\mathbb{G}_{m})\)-module category where QC\((B\mathbb{G}_{m})\) now acts through the Tate shearing (resp. unshearing). For any QC\((\mathbb{G}_{m})\)-module category \(\mathbf{C}\), we denote by \(\mathbf{C}^{\shortrightarrow}\) (resp. \(\mathbf{C}^{\shortleftarrow}\)) the category obtained by equivariantizing, shearing (resp. unshearing), and de-equivariantizing. Finally, we say two categories are _Koszul equivalent_
\[\mathbf{C}\simeq_{Kos}\mathbf{D}\]
if they admit graded lifts \(\mathbf{C}_{gr},\mathbf{D}_{gr}\) and an equivalence of categories \(\mathbf{C}_{gr}\simeq\mathbf{D}_{gr}^{\shortrightarrow}\).
It is sometimes convenient to work with a coarser notion of Koszul equivalence, induced by the refined version above by passing to 2-periodic categories.
**Definition A.11**.: Let \(\mathbf{C}\) be a \(k\)-linear category; the _2-periodicization_ of \(\mathbf{C}\) is the 2-periodic category
\[\mathbf{C}^{per}:=\mathbf{C}\otimes_{\operatorname{Mod}(k)}\operatorname{Mod }(k[u,u^{-1}])\]
where \(|u|=2\). There is a canonical functor \((-)^{per}:\mathbf{C}\to\mathbf{C}^{per}\) such that \(\operatorname{Hom}_{\mathbf{C}^{per}}(X^{per},Y^{per})\) is the 2-periodicization of the chain complex \(\operatorname{Hom}_{\mathbf{C}}(X,Y)\).
Since the Tate shearing is the identity in 2-periodic categories, it becomes clear that a Koszul equivalence induces an equivalence on 2-periodic categories
\[\mathbf{C}^{per}\simeq\mathbf{D}^{per}.\]
**Example A.12**.: The 2-periodicization operation can introduce many new objects into a category. For example, take the category \(\tilde{\mathcal{D}}(BT)\) of ind-coherent \(\mathcal{D}\)-modules on a torus, which is equivalent to the category \(\operatorname{Mod}(H^{\bullet}(BT;k))\) of modules for the cohomology of \(BT\), in turn equivalent to \(\operatorname{Mod}(\operatorname{Sym}^{\bullet}\mathfrak{t}^{*}[-2])\). Roughly speaking, this category only has a skyscraper object at 0. However, the 2-periodicization is equivalent to the category \(\operatorname{Mod}((\operatorname{Sym}^{\bullet}\mathfrak{t}^{*}[-2])[u,u^{-1}])\), which has skyscraper objects for every point \(t\in\mathfrak{t}\) by the evaluation maps \(u^{-1}\mathrm{ev}_{t}:\mathfrak{t}^{*}[-2]u^{-1}\simeq\mathfrak{t}^{*}\to k\).
We also observe that the standard graded lift of this category gives \(\mathfrak{t}^{*}\) weight \(-1\), which the Tate shearing moves from degree \(2\) to degree \(0\); by contrast, the Rees parameter \(t\) has weight \(1\), which the Tate shearing turns into the degree \(2\)\(S^{1}\)-deformation parameter \(u\).
#### a.1.4. Examples
We present the following very computable examples of \(S^{1}\)-equivariant categories. Namely, let \(G\) be an abelian affine algebraic (possibly formal) group with Cartier dual \(\widehat{G}\). Then, passing through Cartier duality, we have identifications
\[\operatorname{Coh}(\mathcal{L}(BG))=\operatorname{Coh}(G\times BG)\simeq \operatorname{Coh}(G\times\widehat{G}).\]
Furthermore, Cartier duality provides a monoidal equivalence
\[(\operatorname{QC}(B\mathbb{Z}),\circ)\simeq(\operatorname{QC}(\mathbb{G}_{m} ),\otimes)\]
i.e. an \(S^{1}\)-action on a category may be described as a \(\operatorname{QC}(\mathbb{G}_{m})\)-linear structure. In this setting, both arise geometrically, i.e. come from pullback along the canonical pairing
\[G\times\widehat{G}\to\mathbb{G}_{m},\qquad\quad(g,\chi)\mapsto\chi(g)\]
Applying the discussion of [11, SS3], we have
\[\operatorname{Coh}(\mathcal{L}(BG))^{S^{1}}\simeq\operatorname{Coh}((G\times \widehat{G})\times_{\mathbb{G}_{m}}\{1\}),\qquad\quad\operatorname{Coh}( \mathcal{L}(BG))^{\operatorname{Tate}}\simeq\operatorname{MF}(G\times\widehat{ G},f)\]
where \(\operatorname{MF}\) denotes the \(2\)-periodic category of matrix factorizations. We work out the above in two specific examples.
**Example A.13**.: Let us take \(X=BT\) where \(T\) is an algebraic torus; then \(\mathcal{L}(BT)\simeq T\times BT\). We have an identification of the category
\[\operatorname{Coh}(\mathcal{L}(BT))\simeq\operatorname{Coh}(T\times BT) \simeq\operatorname{Coh}(T\times X^{\bullet}(T))\simeq\bigoplus_{\lambda\in X ^{\bullet}(T)}\operatorname{Coh}(T),\]
i.e. a decomposition into isotypic components for \(\operatorname{Rep}(T)\). The \(S^{1}\)-action is given (see Remark A.2) by multiplication by \(t^{\lambda}\) at \((t,\lambda)\in T\times X^{\bullet}(T)\), i.e. multiplication by the monomial \(t^{\lambda}\in\mathcal{O}(T)\) on \(T\times\{\lambda\}\). We now compute the \(S^{1}\)-equivariant, Tate localization, and \(S^{1}\)-invariant objects (see Proposition A.6). Let \(T_{\lambda}\) denote the derived zero locus of the polynomial \(1-t^{\lambda}\). Explicitly,
\[T_{\lambda}=(\mathcal{O}(T)[\epsilon],d(\epsilon)=1-t^{\lambda})\simeq\begin{cases}\mathcal{O}(T)[\epsilon]&\lambda=1,\\ \mathcal{O}(T)/(t^{\lambda}-1)&\lambda\neq 1.\end{cases}\]
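For concreteness, the case distinction can be read off from the two-term Koszul complex \(\mathcal{O}(T)\cdot\epsilon\xrightarrow{1-t^{\lambda}}\mathcal{O}(T)\) underlying the displayed presentation (a spelled-out step, using only that \(\mathcal{O}(T)\) is an integral domain):
\[H^{-1}(T_{\lambda})=\ker\bigl(1-t^{\lambda}\bigr)=0,\qquad H^{0}(T_{\lambda})=\mathcal{O}(T)/(t^{\lambda}-1)\qquad(\lambda\neq 1),\]
since \(1-t^{\lambda}\) is then a nonzero element, hence a nonzerodivisor, of \(\mathcal{O}(T)\); for \(\lambda=1\) the differential vanishes and the complex is the free graded algebra \(\mathcal{O}(T)[\epsilon]\).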
By Remark A.2 the \(S^{1}\)-invariants for an \(S^{1}\)-action given by automorphism \(\alpha\) may be described as imposing the equation \(1-\alpha\) in a derived way. Thus, we have
\[\operatorname{Coh}(\mathcal{L}(BT))^{S^{1}}\simeq\bigoplus_{\lambda\in X^{ \bullet}(T)}\operatorname{Coh}(T_{\lambda}).\]
Koszul dually [11, Con. 3.1.5], we may view \(\operatorname{Coh}(\mathcal{L}(BT))^{S^{1}}\) as a \(k[u]\)-module category where \(|u|=2\) acts by cohomological operators. The derived zero locus \(T_{\lambda}\) is smooth when \(\lambda\neq 1\), so any coherent sheaf has a finite resolution and \(u\) acts by torsion on \(\operatorname{Coh}(T_{\lambda})\), i.e. \(\operatorname{Coh}(T_{\lambda})\otimes_{k[u]}k(u)\simeq 0\). When \(\lambda=1\), \(u\) acts freely, and we have (see also [11, Prop. 3.4.1]):
\[\operatorname{Coh}(\mathcal{L}(BT))^{\operatorname{Tate}}\simeq\operatorname{ Coh}(T)\otimes\operatorname{Mod}(k[u,u^{-1}]).\]
In particular, we have
\[\operatorname{Coh}(\widehat{\mathcal{L}}(BT))^{\operatorname{Tate}}=\operatorname{ Coh}(\mathcal{L}^{u}(BT))^{\operatorname{Tate}}\subsetneq\operatorname{Coh}(\mathcal{L}(BT))^{\operatorname{Tate}}\]
where the first equality arises since \(T\) has no unipotent elements, i.e. \(\mathcal{L}^{u}(BT)=\widehat{\mathcal{L}}(BT)\). Specializing at \(u=0\) we have instead [11, Cor. 3.2.4]
\[\operatorname{Coh}(\mathcal{L}(BT))^{S^{1}}\otimes_{k[u]}k\simeq\bigoplus_{ \lambda\in X^{\bullet}(T)}\operatorname{Coh}_{T_{\lambda}}(T).\]
In particular, not every object of \(\operatorname{Coh}(\mathcal{L}(BT))\) is \(S^{1}\)-equivariantizable.
**Example A.14**.: We consider a somewhat orthogonal example. Consider \(X=B\mathbb{G}_{a}\). Then, we have via Cartier duality
\[\operatorname{Coh}(\mathcal{L}(B\mathbb{G}_{a}))\simeq\operatorname{Coh}( \mathbb{G}_{a}\times B\mathbb{G}_{a})\simeq\operatorname{Coh}(\mathbb{G}_{a} \times\widehat{\mathbb{G}}_{a}).\]
Fix coordinates \(x,y\) on the two \(\mathbb{G}_{a}\) respectively; we view \(x\) as coming from the action of \(\mathcal{O}(\mathbb{G}_{a})\), and \(y\) as the action of \(1\in\operatorname{Lie}(\mathbb{G}_{a})\), i.e. the action map on \(V\in\operatorname{Coh}(B\mathbb{G}_{a})\) is \(t\mapsto\exp(ty)\). Then, the \(S^{1}\)-action is given by the automorphism of the identity \(\exp(xy)\). Taking \(S^{1}\)-invariants imposes the equation \(\exp(xy)=1\), i.e. \(xy=0\) (since \(y\) is in an infinitesimal neighborhood of \(0\), we can ignore other logarithms of \(1\)), in a derived way. We have
\[\operatorname{Coh}(\mathcal{L}(B\mathbb{G}_{a}))^{S^{1}}\simeq\operatorname{ Mod}_{\operatorname{f.g.},y\operatorname{-nil}}(k[x,y]/xy).\]
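The identification of the equation \(\exp(xy)=1\) with \(xy=0\) used above can be spelled out:
\[\exp(xy)-1=xy\left(1+\frac{xy}{2!}+\frac{(xy)^{2}}{3!}+\cdots\right),\]
and since \(y\), hence \(xy\), is topologically nilpotent, the bracketed factor is a unit, so the ideal \((\exp(xy)-1)\) coincides with \((xy)\).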
The Tate localization is the category of matrix factorizations on \(k[x,y]\) for the equation \(xy=0\) (where \(y\) acts nilpotently). We have equivalences
\[\operatorname{Coh}(\widehat{\mathcal{L}}(B\mathbb{G}_{a}))^{\operatorname{ Tate}}\simeq\operatorname{Coh}(\mathcal{L}^{u}(B\mathbb{G}_{a}))^{\operatorname{ Tate}}=\operatorname{Coh}(\mathcal{L}(B\mathbb{G}_{a}))^{\operatorname{ Tate}}\simeq\operatorname{Mod}(k(u))\]
where the first isomorphism arises since the equation \(xy=0\) has singular locus at the origin (thus completing at \(x=0\) doesn't change the category of matrix factorizations), and the second arises geometrically (i.e. \(\mathcal{L}^{u}(B\mathbb{G}_{a})=\mathcal{L}(B\mathbb{G}_{a})\) since \(\mathbb{G}_{a}\) consists only of unipotent elements). The rightmost identification of the categories is a calculation, see e.g. [12, Sec. 5.3]. Alternatively, one can pass through Koszul duality (Theorem 2.6), where \(\operatorname{Coh}(\widehat{\mathcal{L}}(B\mathbb{G}_{a}))^{\operatorname{ Tate}}\simeq\mathcal{D}(B\mathbb{G}_{a})\otimes\operatorname{Mod}(k(u))\simeq \operatorname{Mod}(k(u))\). All objects of \(\operatorname{Coh}(\mathcal{L}(B\mathbb{G}_{a}))\) are \(S^{1}\)-invariant, i.e.
\[\operatorname{Coh}(\mathcal{L}(B\mathbb{G}_{a}))^{S^{1}}\otimes_{k[u]}k= \operatorname{Coh}(\mathcal{L}(B\mathbb{G}_{a})).\]
Since all loops are unipotent, the \(S^{1}\)-action factors through a \(B\mathbb{G}_{a}\)-action, which is \(1\)-affine. This example is easily generalized to the case where \(X=BV\), where \(V\) is a commutative additive group.
### Koszul duality and equivariant localization
We now turn our attention to various technical details that arise in the discussion of Koszul duality and equivariant localization.
#### a.2.1. Renormalized categories
The category \(\mathcal{D}(X)\) of \(\mathcal{D}\)-modules on a smooth stack \(X\) is defined via descent. Let \(F\mathcal{D}(X)\) denote the category of filtered \(\mathcal{D}\)-modules on a stack \(X\), also defined via descent. We let \(F\mathcal{D}_{c}(X)\) denote the full subcategory of coherent \(\mathcal{D}\)-modules on \(X\) (with good filtrations). Note that these are not the compact objects in \(F\mathcal{D}(X)\) in general; we let \(F\breve{\mathcal{D}}(X)=\operatorname{Ind}(F\mathcal{D}_{c}(X))\) denote the category of _ind-coherent filtered \(D\)-modules_, i.e. we renormalize the category with respect to coherent \(D\)-modules.
We require a similar renormalization for the odd tangent bundle; see Section 2 of [1] for a discussion. For any closed substack \(Z\subset X\) we define the category \(\widehat{\operatorname{Coh}}(\widehat{X}_{Z})\) (sometimes denoted \(\widehat{\operatorname{Coh}}_{Z}(X)\)) of _ind-continuous coherent sheaves_ to be the full subcategory consisting of objects \(\mathcal{F}\in\operatorname{QC}^{!}(\widehat{X}_{Z})\simeq\operatorname{QC}^{! }_{Z}(X)\) such that \(\mathcal{F}\) is \(t\)-bounded and almost \(!\)-perfect, i.e. such that for any closed substack \(i:Z^{\prime}\hookrightarrow X\) set-theoretically supported on \(Z\subset X\), the \(!\)-restriction \(i^{!}\mathcal{F}\) has coherent cohomology. This category has the following properties (see [1]).
1. It contains the usual category of coherent sheaves supported along \(Z\subset X\), i.e. \(\operatorname{Coh}_{Z}(X)\subset\widehat{\operatorname{Coh}}_{Z}(X)\). In particular, we have \(\operatorname{Coh}(\widehat{\mathcal{L}}X)=\operatorname{Coh}_{X}(\mathcal{ L}X)\subset\widehat{\operatorname{Coh}}(\widehat{\mathcal{L}}X)\).
2. Letting \(\widehat{i}:\widehat{X}_{Z}\hookrightarrow X\) be the inclusion of the formal completion, the functor \(\widehat{i}^{!}:\operatorname{QC}^{!}(X)\to\operatorname{QC}^{!}(\widehat{X}_ {Z})\) takes \(\operatorname{Coh}(X)\) to \(\widehat{\operatorname{Coh}}(\widehat{X}_{Z})\). In particular, taking \(\widehat{z}:\widehat{\mathcal{L}}X\to\mathcal{L}X\) to be the inclusion of formal loops, the functor \(\widehat{z}^{!}:\operatorname{QC}^{!}(\mathcal{L}X)\to\operatorname{QC}^{!}( \widehat{\mathcal{L}}X)\) restricts to a functor \(\operatorname{Coh}(\mathcal{L}X)\to\widehat{\operatorname{Coh}}(\widehat{ \mathcal{L}}X)\).
3. If \(Z\subset X\) is defined by a nilpotent ideal, then \(\operatorname{Coh}(\widehat{X}_{Z})=\widehat{\operatorname{Coh}}(\widehat{X}_ {Z})=\operatorname{Coh}(X)\). In particular, if \(X\) is a scheme, then \(\operatorname{Coh}(\widehat{\mathcal{L}}X)=\widehat{\operatorname{Coh}}( \widehat{\mathcal{L}}X)=\operatorname{Coh}(\mathcal{L}X)\).
4. If \(Z\subset X\) is a derived local complete intersection, then \(\mathcal{F}\in\widehat{\operatorname{Coh}}(\widehat{X}_{Z})\) if and only if its \(!\)-restriction to \(Z\) is in \(\operatorname{Coh}(Z)\). In particular, if \(p:U\to X\) is a smooth atlas, we have that \(\mathcal{F}\in\widehat{\operatorname{Coh}}(\widehat{\mathcal{L}}X)\) if and only if \(\widehat{\mathcal{L}}p^{!}\mathcal{F}\in\operatorname{Coh}(\mathcal{L}U)\).
We denote its ind-completion by \(\widehat{\operatorname{QC}}^{!}(\widehat{X}_{Z}):=\operatorname{Ind}( \widehat{\operatorname{Coh}}(\widehat{X}_{Z}))\). This renormalization is necessary to have a good restriction functor to completions at parameters. In a general setting, if \(F:\mathbf{C}\to\mathbf{D}\) is a colimit-preserving functor which preserves compact objects (i.e. a left adjoint with a colimit-preserving right adjoint) between compactly generated categories, then for \(X\in\mathbf{C}\) compact we have a commuting diagram:
Commutativity follows by checking the tautological commutativity of right adjoints, while compactness of \(X\) guarantees that the left adjoint to \(\operatorname{Hom}(X,-)\) is fully faithful17 (and similarly for \(F(X)\)). Unfortunately, the \(!\)-restriction on ind-coherent sheaves is a right adjoint and does not preserve compact objects. On the other hand, under renormalization \(\widehat{\ell}_{\alpha}:\operatorname{QC}^{!}(\mathcal{L}(X/G))\to\widehat{\operatorname{QC}}^{!}(\widehat{\mathcal{L}}_{\alpha}(X(\alpha)/G(\alpha)))\) becomes a _left adjoint_, and preserves compact objects by construction. In particular, we have a commuting square:
Footnote 17: I.e. the unit of the adjunction \(M\to\operatorname{Hom}(X,X\otimes_{\operatorname{End}(X)}M)\) is an equivalence when \(\operatorname{Hom}(X,-)\) commutes with colimits.
#### a.2.2. Homotopy fixed points
Our equivariant localization statement in Theorem 2.9 uses a notion of homotopy fixed points; we give a brief overview of this notion and refer the reader to the appendix of [1] for details.
**Definition A.15**.: Let \(G\) be a group prestack acting on a prestack \(X\). We define the _homotopy fixed points_\(X^{G}\) by the (derived) fiber product
In other words, \(X^{G}\) is the prestack of sections of the map \(X/G\to BG\).
We discuss examples of this construction, since it has a different flavor for different inputs \(G\).
1. When \(G\) is linearly reductive and \(X\) a locally Noetherian Artin stack, then the cotangent complex of \(X^{G}\) has the same lower bound on Tor amplitude (i.e. "level of singularity") as the cotangent complex of \(X\) by Corollary A.35 of [1]. In particular, if \(X\) is a smooth scheme, then \(X^{G}\subset X\) is a smooth closed subscheme by Proposition A.23 of _op. cit._ and thus \(X^{G}\) is the classical \(G\)-fixed points of \(X\). This observation, combined with the observation that \((-)^{hG}\) commutes with Zariski localization and fiber products, gives a method for computing homotopy fixed points for derived schemes with given presentations.
2. If \(G\) is not reductive, \(X^{G}\) may have derived structure. For example, taking \(G=\mathbb{G}_{a}\) acting trivially on \(X\), we have \(X^{G}=\mathcal{L}^{u}X\) is the unipotent loop space, which is just the usual loop space \(\mathcal{L}X\) when \(X\) is a scheme.
3. Assuming that \(G\) is reductive, but without the assumption that \(X\) is smooth, it is possible for \(X^{G}\) to have nontrivial derived structure. For example, the nilpotent cone \(\mathcal{N}=\mathfrak{g}\times_{\mathfrak{g}/G}\{0\}\) is a non-derived complete intersection. However, taking \(G\)-invariants we have \(\mathcal{N}^{G}=\mathfrak{z}(\mathfrak{g})\times_{\mathfrak{g}/G}\{0\}\), which has derived structure unless \(\mathfrak{g}\) is commutative, i.e. \(G\) is a torus. When \(G\) is a torus, one can argue that \(X^{G}\) is classical if \(X\) is quasi-smooth and classical.18 Footnote 18: One may adapt the argument in Theorem 2.9 to reduce to the case where \(X\) is a fiber product of smooth \(G\)-schemes, which one may then reduce to the case of a derived intersection of smooth \(G\)-schemes, and then argue via tangent spaces.
4. When \(G\) is a topological group acting trivially on \(X\), the homotopy fixed points \(X^{G}=\operatorname{Map}(BG,X)=\mathcal{L}oc_{G}(X)\) is the derived moduli stack of \(G\)-local systems on \(X\). This stack often has nontrivial derived structure, even when \(X\) does not.
5. When \(G=\mathbb{Z}\) acts on \(X\) via an automorphism \(\phi:X\to X\), then we obtain the derived fixed points \(\mathcal{L}_{\phi}X\) of \(\phi\) on \(X\). In particular, even when \(X\) is a smooth scheme this will be derived unless the fixed points have dimension \(0\).
6. More generally, if \(G\) is a discrete cyclic (thus commutative) group with generator \(\eta\in G\), then there is a canonical equivalence \(X^{G}\simeq\mathcal{L}_{\eta}(X/G)\), e.g. by
observing that the outer and left squares are Cartesian in the diagram: \[\begin{CD}X^{G}@>{}>{}>\operatorname{Map}(BG,X/G)@>{}>{}>\mathcal{L}(X/G)\\ @V{}V{}V@V{}V{}V\\ \{e\}@>{}>{}>\operatorname{Hom}(BG,BG)@>{}>{}>\mathcal{L}(BG)\end{CD}\] and noting that the bottom composition is exactly the map \(\{\eta\}\to G/G\).
7. Suppose \(G=\mathbb{Z}/n\mathbb{Z}\) acts on a scheme \(X\) trivially. Then, \(X^{G}\) is the relative kernel (over \(X\)) of the map \(\mathbb{T}_{X}[-1]\to\mathbb{T}_{X}[-1]\) induced by the degree \(n\) map on \(S^{1}\), i.e. multiplication by \(n\), which is an isomorphism if \(n\) is invertible in \(k\). Note that \(G=\mathbb{Z}/n\mathbb{Z}\) is linearly reductive over \(k\) if and only if \(n\) is invertible in \(k\).
#### a.2.3. Central shifting and twisting
The localization maps in Theorem 2.9 reduce the study of the loop space \(\mathcal{L}(X/G)\) over \([z]\in G//G\) to the study of \(\mathcal{L}(X(\alpha)/G(\alpha))\) over \([\alpha]\in G(\alpha)//G(\alpha)\). The advantage of the latter is that \(\alpha\in G(\alpha)\) is central and acts on \(X(\alpha)\) trivially; thus we can perform a central shifting that translates \(\alpha\) to the identity. This shifting is _not_\(S^{1}\)-equivariant for the loop rotations.19 Our goal in this section is to describe this shifting and a twisted \(S^{1}\)-action for which it is equivariant.
Footnote 19: For example, the loop rotation on \(\mathcal{L}(BG)=G/G\) acts over \(g\in G/G\) via the automorphism which conjugates by \(g\), and this is not preserved by central shifting.
We begin by describing the shifting. The localization maps constructed in Theorem 2.9 factor into two steps:
\[\mathcal{L}(X(\alpha)/G(\alpha))\longrightarrow\mathcal{L}(X/G(\alpha)) \longrightarrow\mathcal{L}(X/G)\]
with the first map exhibiting the localization, and the second map a simple etale base change in a neighborhood of \(\alpha\in G//G\). We will assume we have already performed this base change, focusing exclusively on the first map, and so may assume that \(\alpha\in Z(G)\) is a central element. We need to introduce the notion of \(\alpha\)-trivializations.
**Definition A.16**.: Let \(G\) be a linear algebraic group with reductive neutral component acting on a prestack \(X\), let \(H\subset G\) be a normal subgroup, and denote \(G^{\prime}=G/H\). An _\(H\)-trivialization_ of the \(G\) action on \(X\) is a \(G^{\prime}\)-action on \(X\) and an equivalence
\[X/G\simeq X/G^{\prime}\times_{BG^{\prime}}BG.\]
When \(H=\langle\alpha\rangle\) for \(\alpha\in Z(G)\) central, we call an \(H\)-trivialization an _\(\alpha\)-trivialization_.
**Proposition A.17**.: _Let \(G\) act on a derived scheme \(X\), and let \(A\subset G\) be a central subgroup. The homotopy fixed points \(X^{A}\) comes equipped with a canonical \(A\)-trivialization._
Proof.: First, note that letting \(i:BA\to BG\) denote the map defined by the inclusion of \(A\) into \(G\), we have a canonical equivalence
\[X^{A}\simeq\{i\}\times_{\operatorname{Map}(BA,BG)}\operatorname{Map}(BA,X/G).\]
On the right, since \(A\) is central and both are reductive, the inclusion \(\{i\}\hookrightarrow\operatorname{Map}(BA,BG)\simeq\operatorname{Hom}_{grp}(A,G)/G\) lifts to a map \(\{i\}/G\hookrightarrow\operatorname{Map}(BA,BG)\). Since
\(A\) acts trivially on \(\operatorname{Map}(BA,BG)\), this map also descends to a map \(\{i\}/G^{\prime}\to\operatorname{Hom}_{grp}(A,G)/G\). This data furnishes \(X^{A}\) with the desired structure.
Again letting \(\alpha\in G\) be central, and taking \(A=\langle\alpha\rangle\), suppose that \(X\) is a derived scheme equipped with an \(A\)-trivialization. We now define a _shift by \(\alpha\)_ map on \(\mathcal{L}(X/G)\)
\[sh_{\alpha}:\mathcal{L}(X/G)\to\mathcal{L}(X/G)\]
compatible with the multiplication by \(\alpha\) map \(\mu_{\alpha}:\mathcal{L}(BG)=G/G\to\mathcal{L}(BG)=G/G\) as follows. The canonical \(\alpha\)-trivialization \(X^{\alpha}/G\simeq X^{\alpha}/G^{\prime}\times_{BG^{\prime}}BG\) induces a canonical equivalence \(\mathcal{L}(X^{\alpha}/G)\simeq\mathcal{L}(X^{\alpha}/G^{\prime})\times_{G^{\prime}/G^{\prime}}G/G\), and the automorphism is given on the right-hand side by the multiplication by \(\alpha\) map on \(G/G\) and the identity elsewhere. In the setting of Theorem 2.9, we have a composition which we call the (unipotent) _\(\alpha\)-shifted localization map_
The shifting map on the left is _not_ \(S^{1}\)-equivariant for the loop rotation; we introduce the twisting of the \(S^{1}\)-action now. For \(\alpha\in Z(G)\), the multiplication by \(\alpha\) defines a map of groups \(\mathbb{Z}\times G\to G\), which gives rise to an action map \(S^{1}\times BG\to BG\). If \(G\) acts on an \(\alpha\)-trivialized scheme \(X\), then this \(S^{1}\)-action gives rise to an \(S^{1}\)-action on \(X/G\simeq X/G^{\prime}\times_{BG^{\prime}}BG\), where we take \(G^{\prime}=G/\langle\alpha\rangle\) and the trivial \(S^{1}\)-action on \(X/G^{\prime}\) and \(BG^{\prime}\). We call the resulting \(S^{1}\)-action the _\(\alpha\)-twisting \(S^{1}\)-action \(\sigma_{\alpha}\)_ on \(X/G\). Since the \(S^{1}\)-action \(\mathcal{L}(\sigma_{\alpha})\) and the loop rotation \(\rho\) commute, we define the _\(\alpha\)-shifted loop rotation_, denoted \(\rho(\alpha)\) or \(S^{1}(\alpha)\), to be the diagonal of the \(S^{1}\times S^{1}\)-action \(\rho\times\mathcal{L}(\sigma_{\alpha})\). The shifted localization map is equivariant with respect to this twisting.
**Corollary A.18**.: _The shifted localization map defines an equivalence_
\[s\ell_{\alpha}^{u}:\mathcal{L}^{u}(X(\alpha)/G(\alpha))\xrightarrow{\simeq} \mathcal{L}_{\alpha}^{u}(X/G)\]
_which is \(S^{1}\)-equivariant with respect to \(\rho(\alpha)\) on the source and \(\rho\) on the target, and likewise for the shifts of the completed and specialized localization maps \(s\ell_{\alpha}^{\wedge}\) and \(s\ell_{\alpha}^{\prime}\)._
#### a.2.4. Trivial blocks for twisted actions
In order to apply the Koszul duality discussed in Section 2.2, which involves \(S^{1}\)-equivariance for the untwisted action, we are interested in identifying a subcategory of sheaves on derived loop spaces over semisimple parameter \(\alpha\) on which the \(\alpha\)-twisting is trivial. This is useful since the \(\rho\) circle action on unipotent loop spaces factors through an action of \(B\mathbb{G}_{a}\), but the twisted \(\rho(\alpha)\) action does not (since it has nontrivial semisimple part). This problem is an obstacle to applying Koszul duality to obtain an identification of \(\operatorname{Coh}(\widehat{\mathcal{L}}_{\alpha}(X/G))^{S^{1}}\) with some kind of category of \(\mathcal{D}\)-modules as is done in Section 2.3.2.
To define this subcategory, we give a categorical interpretation of the geometric \(\alpha\)-twisting \(S^{1}\)-action \(\sigma(\alpha)\) discussed above.
**Definition A.19**.: Let \(G\) be an affine algebraic group, \(A\subset Z(G)\) a central subgroup with \(G^{\prime}=G/A\), and \(\mathbf{C}\) a QC\((BG)\)-module category. An _\(A\)-trivialization_
of \(\mathbf{C}\) consists of the data of a \(\operatorname{QC}(BG^{\prime})\)-module category \(\mathbf{C}^{\prime}\) along with an identification \(\mathbf{C}\simeq\mathbf{C}^{\prime}\otimes_{\operatorname{QC}(BG^{\prime})} \operatorname{QC}(BG)\). We define the subcategory of \(A\)_-trivial objects_\(\mathbf{C}_{A}\subset\mathbf{C}\) to be the essential image of the natural functor \(\mathbf{C}^{\prime}\to\mathbf{C}\). When \(A=\langle\alpha\rangle\) for \(\alpha\in Z(G)\), we write \(\mathbf{C}_{\alpha}\).
Furthermore if \(G\) is reductive, then the \(\alpha\)-trivial objects form a summand (not just a subcategory) of \(\mathbf{C}\). Since \(G\) is reductive, the center is semisimple and letting \(X^{\bullet}(A)\) denote the group of characters of \(A\), we have a decomposition \(\operatorname{QC}(BG)=\operatorname{Rep}(G)=\bigoplus_{\chi\in X^{\bullet}(A)} \operatorname{Rep}(G)_{\chi}\), with the trivial isotype corresponding to \(\operatorname{Rep}(G^{\prime})\). Thus an \(\alpha\)-trivialization of \(\mathbf{C}\) defines a direct sum decomposition
\[\mathbf{C}\simeq\bigoplus_{\chi\in X^{\bullet}(A)}\mathbf{C}_{\chi}\simeq \bigoplus_{\chi\in X^{\bullet}(A)}\mathbf{C}^{\prime}\otimes_{\operatorname{ Rep}(G^{\prime})}\operatorname{Rep}(G)_{\chi}.\]
We will consider the following \(\alpha\)-trivialized categories which arise in nature.
1. If \(X/G\) is a global quotient stack, then an \(\alpha\)-trivialization of the stack gives rise to an \(\alpha\)-trivialization of the categories \(\operatorname{QC}(X/G)\), \(\operatorname{QC}^{!}(X/G)\), et cetera.
2. If \(\alpha\in G\) is central, then there is a canonical \(\alpha\)-trivialization of \(\mathcal{L}(BG)=G/G\).
3. Combining the two above, we have an \(\alpha\)-trivialization on \(\mathcal{L}(X/G)\).
**Example A.20**.: We return to Example A.13. Recall that \(T_{\lambda}=\ker(\lambda)\) is the (derived) kernel of \(\lambda\in X^{\bullet}(T)\), and fix a twist \(\alpha\in T\). We have descriptions
\[\operatorname{Coh}(\mathcal{L}_{\alpha}(BT))^{S^{1}}=\bigoplus_{ \lambda\in X^{\bullet}(T)}\operatorname{Coh}(T_{\lambda}),\] \[\operatorname{Coh}(\mathcal{L}(BT))^{S^{1}(\alpha)}=\bigoplus_{ \lambda\in X^{\bullet}(T)}\operatorname{Coh}(\alpha^{-1}T_{\lambda})\]
and the \(\alpha\)-trivial blocks correspond to those where \(\alpha\in T_{\lambda}\). In particular, applying the Tate construction, we have that the inclusion of the \(\alpha\)-trivial block induces an equivalence
\[\operatorname{Coh}(\mathcal{L}(BT))^{\operatorname{Tate}}\simeq\operatorname{ Coh}(\mathcal{L}(BT))^{\operatorname{Tate}(\alpha)}\]
for all twists \(\alpha\).
#### a.2.5. Localization for non-reductive groups
The equivariant localization in Theorem 2.9 can be extended to non-reductive groups \(K\) as follows: given a quotient stack \(X/K\), we can consider instead the quotient stack \((X\times^{K}G)/G\), and do equivariant localization over \(G//G\). In general, the map \(K//K\to G//G\) is neither injective nor surjective (e.g. when \(K=U\) is a unipotent subgroup).
We pay special attention to the case where \(K=B\subset G\) is a Borel subgroup of \(G\). In this case, we have the diagram
For given \(\alpha\in H\) and \([\alpha]=\nu(\alpha)\in H//W\), we have a \(W_{G(\alpha)}\)-equivariant identification of the \(\alpha\)-fixed points of \(G/B\) with \(\nu^{-1}([\alpha])\times G(\alpha)/B(\alpha)\), where here \(G(\alpha)\) is the centralizer of \(\alpha\) and \(B(\alpha)\subset G(\alpha)\) is the corresponding Borel subgroup. Thus,
\[\mathcal{L}_{[\alpha]}(BB)=\mathcal{L}_{[\alpha]}(G\backslash G/B)=\mathcal{L} _{[\alpha]}(G(\alpha)\backslash(G/B)^{\alpha})=\coprod_{\nu^{-1}([\alpha])} \mathcal{L}_{[\alpha]}(G(\alpha)\backslash G(\alpha)/B(\alpha)).\]
We define \(\mathcal{L}_{\alpha}(BB)\) to be the connected component corresponding to \(\alpha\in\nu^{-1}([\alpha])\), and then may define for \(\alpha\in B//B=H\)
\[\mathcal{L}_{\alpha}(X/B):=\mathcal{L}_{[\alpha]}((X\times^{B}G)/G)\times_{ \mathcal{L}_{[\alpha]}(BB)}\mathcal{L}_{\alpha}(BB).\]
For details, see Example 1.0.6 of [10].
|
2309.15020 | GWSpace: a multi-mission science data simulator for space-based
gravitational wave detection | Space-based gravitational wave detectors such as TianQin, LISA, and TaiJi
have the potential to outperform themselves through joint observation. To
achieve this, it is desirable to practice joint data analysis in advance on
simulated data that encodes the intrinsic correlation among the signals found
in different detectors that operate simultaneously. In this paper, we introduce
\texttt{GWSpace}, a package that can simulate the joint detection data from
TianQin, LISA, and TaiJi. The software is not a groundbreaking work that starts
from scratch. Rather, we use as many open-source resources as possible,
tailoring them to the needs of simulating the multi-mission science data and
putting everything into a ready-to-go and easy-to-use package. We shall
describe the main components, the construction, and a few examples of
application of the package. A common coordinate system, namely the Solar System
Barycenter (SSB) coordinate system, is utilized to calculate spacecraft orbits
for all three missions. The paper also provides a brief derivation of the
detection process and outlines the general waveform of sources detectable by
these detectors. | En-Kun Li, Han Wang, Hong-Yu Chen, Huimin Fan, Ya-Nan Li, Zhi-Yuan Li, Zheng-Cheng Liang, Xiang-Yu Lyu, Tian-Xiao Wang, Zheng Wu, Chang-Qing Ye, Xue-Ting Zhang, Yiming Hu, Jianwei Mei | 2023-09-26T15:40:53Z | http://arxiv.org/abs/2309.15020v1 | # GWSpace: a multi-mission science data simulator for space-based gravitational wave detection
###### Abstract
Space-based gravitational wave detectors such as TianQin, LISA, and TaiJi have the potential to outperform themselves through joint observation. To achieve this, it is desirable to practice joint data analysis in advance on simulated data that encodes the intrinsic correlation among the signals found in different detectors that operate simultaneously. In this paper, we introduce GWSpace, a package that can simulate the joint detection data from TianQin, LISA, and TaiJi. The software is not a groundbreaking work that starts from scratch. Rather, we use as many open-source resources as possible, tailoring them to the needs of simulating the multi-mission science data and putting everything into a ready-to-go and easy-to-use package. We shall describe the main components, the construction, and a few examples of application of the package. A common coordinate system, namely the Solar System Barycenter (SSB) coordinate system, is utilized to calculate spacecraft orbits for all three missions. The paper also provides a brief derivation of the detection process and outlines the general waveform of sources detectable by these detectors.
###### Contents
* I Introduction
* II Coordinate systems
* III Detectors
* A. TianQin: geocentric orbit
* B. LISA and TaiJi: heliocentric orbit
* IV Detector response
* A. The general waveform and mode decomposition
* B. Single arm response in time domain
* C. Single arm response in frequency domain
* D. Response for the mildly chirping signals
* V Time Delay Interferometry
* A. General Time Delay Interferometry (TDI) combination
* B. Instrument noise
* VI Waveform
* A. Galaxy Compact Binary
* B. Black Hole Binary
* C. Extreme Mass Ratio Inspirals
* D. Stochastic Gravitational Waves Background
* VII Example data-set
* VIII Summary
## I Introduction
Several space-based gravitational wave (GW) detectors, including TianQin [1], the Laser Interferometer Space Antenna (LISA) [2; 3], and TaiJi [4; 5], are aiming for launch around the mid-2030s. These detectors will for the first time open the unexplored milli-Hertz (mHz) band of the GW spectrum. Complementing the current ground-based GW detectors (GBDs) [6], space-based GW detectors (SBDs) enjoy a plethora of new types of sources, including the Galaxy Compact Binary (GCB) [7; 8], the Massive Black Hole Binary (MBHB) [9], the Stellar-mass Black Hole Binary (SBHB) [10; 11], the Extreme Mass Ratio Inspirals (EMRI) [12; 13], the Stochastic Gravitational Wave Background (SGWB) [14; 15], and others [16; 17; 18; 19; 20; 21].
Unlike GBDs, which mainly capture GW events in their short-lived merger phases, SBDs detect GW events mostly during their long-lasting inspiral phases, resulting in complex data sets with signals overlapping in time and frequency. Consequently, this poses significant challenges to data analysis [22; 23; 11; 24]. Several mock data challenges, such as the mock LISA data challenge (MLDC) [22; 25; 26], which has now been replaced by the LISA data challenge (LDC) [23], and the TaiJi data challenge (TDC) [24], have therefore been set up to help develop the tools needed for space-based GW data analysis.
It is possible that more than one of the detectors, TianQin, LISA and TaiJi, will be observing concurrently during the mid-2030s, enabling a network approach to detect some GW signals. These detectors can then observe the same GW signals from different locations in the solar system, effectively forming a virtual detector with a much larger size [27], leading to significant improvements in sky localization accuracy [28; 29; 30; 31; 32], allowing for the discovery of more sources and a deeper understanding of physics [33; 34]. More examples showing how joint detection can improve over individual detectors can be found in [7; 10; 12; 29; 32; 35]. A comprehensive study of how the joint detection with TianQin and LISA can improve over each detector can be found in [36]. What's more, the difficulties faced by space-based GW data analysis are partially due to parameter degeneracy [37], and it has been shown that joint detection can also be helpful here by breaking some of the degeneracies [38]. So it is important to seriously consider the possibility of doing joint data analysis from different SBD combinations.
There are challenges to doing data analysis for joint observation with more than one detector. For example, due to the significant differences in arm lengths and orbits, approximations and optimized algorithms developed for geocentric and heliocentric orbits cannot be directly applied interchangeably. What's more, variations in the separations among the detectors will affect the correlation of the signals, and this requires comprehensive consideration in the calculation of the likelihood and covariance matrices. To facilitate the study of problems involved in the analysis of joint observational data, we introduce in this paper GWSpace, which is a package that can simulate the joint detection data from all three SBDs mentioned above.
Although the MLDC, LDC and TDC have already provided simulated data for individual detectors like LISA and TaiJi, there are new problems to be solved when one wants to simulate data for all three detectors operating together. For example, due to the shorter arm-length of TianQin, its sensitivity frequency band is more shifted toward the higher frequency end, ranging from \(10^{-4}\) to 1 Hz [1], as compared to about \([2\times 10^{-5},10^{-1}]\) Hz for LISA and TaiJi [3; 5]. Because of this, the response model derived using the low-frequency limit method [39] that works for LISA and TaiJi is not always valid for TianQin. Therefore, it is necessary to consider the full-frequency response models to accurately describe the behaviour of all the detectors across the entire frequency spectrum [40; 41]. Another issue is that one needs to study the response of the three SBDs by using the same coordinate system to correctly reveal the correlation among them. The solar system barycenter (SSB) coordinate system is identified as the most straightforward choice for this purpose.
The paper is structured as follows. Section II specifies the coordinate systems used in this paper. Section III specifies the orbits of the three SBDs involved, namely TianQin, LISA and TaiJi. Sections IV, V and VI detail the response, TDI combinations and source waveforms used in GWSpace. Some example data-sets are described in Section VII. A short summary is given in Section VIII.
## II Coordinate systems
Two basic coordinate systems will be used in this paper: the astronomical ecliptic coordinate system used to describe the detector, hence called the detector frame, and the coordinate system adapted to the description of gravitational wave (GW) sources, hence called the source frame.
The detector frame, as illustrated in Fig. 1 (Left), is defined with the origin at the solar system barycenter (SSB). In this frame, the \(z\)-axis is oriented perpendicular to the ecliptic and points towards the north, while the \(x\)-axis points towards the March equinox. The \(y\)-axis is obtained as \(\mathbf{y}=\mathbf{z}\times\mathbf{x}\,\). The direction to a GW source is indicated with the unit vector \(\hat{n}=\hat{n}(\lambda,\beta)\,\), where \(\lambda\) and \(\beta\) are the celestial longitude and celestial latitude of the source, respectively. To describe the polarization of GWs propagating along \(\hat{k}=-\hat{n}\,\), two additional auxiliary unit vectors are introduced 1
Footnote 1: [https://lisa-ldc.lal.in2p3.fr/static/data/pdf/LDC-manual-002.pdf](https://lisa-ldc.lal.in2p3.fr/static/data/pdf/LDC-manual-002.pdf)
\[\hat{u}=\frac{\hat{n}\times\hat{z}}{|\hat{n}\times\hat{z}|}=\frac{\hat{z}\times \hat{k}}{|\hat{z}\times\hat{k}|}\,,\quad\hat{v}=\hat{u}\times\hat{n}=\hat{k} \times\hat{u}\,, \tag{1}\]
so that the trio, \((\hat{u},\hat{v},\hat{k})\,\), forms a right-handed orthogonal basis. Then, one can obtain that
\[\hat{u}=[\sin\lambda,\ -\cos\lambda,\ 0]\,, \tag{2}\]
\[\hat{v}=[-\sin\beta\cos\lambda,\ -\sin\beta\sin\lambda,\ \cos\beta]\,, \tag{3}\]
\[\hat{k}=-\hat{n}=[-\cos\beta\cos\lambda,\ -\sin\lambda\cos\beta,\ -\sin\beta]\,. \tag{4}\]
The source frame is illustrated in Fig. 1 (Right). Exactly how the origin and the axes, \((\hat{x}_{S},\hat{y}_{S},\hat{z}_{S})\), are chosen for the source dynamics will be determined on a case-by-case basis. The direction to the GW detector is indicated with the unit vector \(\hat{k}=\hat{k}(\theta_{S},\phi_{S})\,\). To describe the polarization of GWs propagating along \(\hat{k}\,\), two auxiliary unit vectors are also introduced,
\[\hat{q}=\frac{\hat{z}\times\hat{k}}{|\hat{z}\times\hat{k}|}\,,\quad\hat{p}= \hat{q}\times\hat{k}\,, \tag{5}\]
so that the trio, \((\hat{p},\hat{q},\hat{k})\,\), forms a right-handed orthogonal basis. Then, one can obtain that
\[\hat{p}=\left[\cos\iota\cos\varphi,\ \sin\varphi\cos\iota,\ -\sin\iota\right], \tag{6}\]
\[\hat{q}=\left[-\sin\varphi,\ \cos\varphi,\ 0\right], \tag{7}\]
\[\hat{k}=\left[\sin\iota\cos\varphi,\ \sin\iota\sin\varphi,\ \cos\iota\right]. \tag{8}\]
Since \(\hat{n}=-\hat{k}\,\), the planes spanned by \((\hat{u},\hat{v})\) and \((\hat{p},\hat{q})\) are parallel to each other (see Fig. 2). As a result,
\[\hat{p}= \cos\psi\,\hat{u}+\sin\psi\,\hat{v}\,,\quad\hat{q}=-\sin\psi\, \hat{u}+\cos\psi\,\hat{v}\,,\] \[\hat{u}= \cos\psi\,\hat{p}-\sin\psi\,\hat{q}\,,\quad\hat{v}=\sin\psi\,\hat{ p}+\cos\psi\,\hat{q}\,. \tag{9}\]
Thus, the polarization angle can be computed as
\[\psi=\arctan_{2}\left[\hat{p}\cdot\hat{u},\hat{p}\cdot\hat{v}\right]. \tag{10}\]
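As a small numerical illustration of Eqs. (2)-(4), (9) and (10), the following minimal NumPy sketch (our own illustrative code with hypothetical names, not part of the GWSpace package) builds the SSB basis \((\hat{u},\hat{v},\hat{k})\), rotates \(\hat{u}\) by a chosen \(\psi\) as in Eq. (9), and recovers \(\psi\). Note that NumPy's `arctan2(y, x)` takes the sine-like argument first, so the call below reads `arctan2(p @ v, p @ u)`, consistent with \(\hat{p}\cdot\hat{u}=\cos\psi\) and \(\hat{p}\cdot\hat{v}=\sin\psi\).

```python
import numpy as np

def ssb_basis(lam, beta):
    """SSB-frame unit vectors (u, v, k) of Eqs. (2)-(4) for ecliptic longitude/latitude (lam, beta)."""
    u = np.array([np.sin(lam), -np.cos(lam), 0.0])
    v = np.array([-np.sin(beta) * np.cos(lam), -np.sin(beta) * np.sin(lam), np.cos(beta)])
    k = np.array([-np.cos(beta) * np.cos(lam), -np.sin(lam) * np.cos(beta), -np.sin(beta)])
    return u, v, k

lam, beta, psi_in = 2.1, -0.08, 0.6          # example sky position and polarization angle (arbitrary values)
u, v, k = ssb_basis(lam, beta)
assert np.allclose(np.cross(u, v), k)        # (u, v, k) is right-handed, cf. Eq. (1)

p = np.cos(psi_in) * u + np.sin(psi_in) * v  # Eq. (9)
psi_out = np.arctan2(p @ v, p @ u)           # Eq. (10)
assert np.isclose(psi_out, psi_in)
```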
Using the basis vectors, one can define the polarization tensors in the source frame and SSB frame. In the source frame
\[e^{+}_{ij}=(\hat{p}\otimes\hat{p}-\hat{q}\otimes\hat{q})_{ij},\quad e^{\times}_{ ij}=(\hat{p}\otimes\hat{q}+\hat{q}\otimes\hat{p})_{ij}. \tag{11}\]
Similarly, in the SSB frame
\[\epsilon^{+}_{ij}=(\hat{u}\otimes\hat{u}-\hat{v}\otimes\hat{v})_{ij},\quad \epsilon^{\times}_{ij}=(\hat{u}\otimes\hat{v}+\hat{v}\otimes\hat{u})_{ij}. \tag{12}\]
After some calculation, the relation between the polarization tensors can be rewritten as
\[e^{+}= \epsilon^{+}\cos 2\psi+\epsilon^{\times}\sin 2\psi, \tag{13}\] \[e^{\times}= -\epsilon^{+}\sin 2\psi+\epsilon^{\times}\cos 2\psi. \tag{14}\]
In the source frame, the GW strain in a transverse-traceless gauge takes the form
\[h^{TT}_{ij}=e^{+}_{ij}h_{+}+e^{\times}_{ij}h_{\times}, \tag{15}\]
where \(h_{+,\times}\) are the plus and cross mode of GW. The corresponding representation of the strain in the SSB frame is
\[h^{TT}_{ij}=h_{+}(\epsilon^{+}\cos 2\psi+\epsilon^{\times}\sin 2\psi)_{ij}+h_{ \times}(\epsilon^{\times}\cos 2\psi-\epsilon^{+}\sin 2\psi)_{ij}. \tag{16}\]
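The polarization tensors of Eq. (12) and the strain of Eq. (16) can be assembled in the same way; the sketch below (again our own illustrative code, not the GWSpace implementation) also checks that the resulting \(h^{TT}_{ij}\) is traceless and transverse to \(\hat{k}\).

```python
import numpy as np

def polarization_tensors(u, v):
    """SSB-frame polarization tensors of Eq. (12)."""
    eps_plus = np.outer(u, u) - np.outer(v, v)
    eps_cross = np.outer(u, v) + np.outer(v, u)
    return eps_plus, eps_cross

def strain_tensor(h_plus, h_cross, psi, eps_plus, eps_cross):
    """3x3 strain tensor h^TT_ij of Eq. (16)."""
    e_plus = eps_plus * np.cos(2 * psi) + eps_cross * np.sin(2 * psi)    # Eq. (13)
    e_cross = eps_cross * np.cos(2 * psi) - eps_plus * np.sin(2 * psi)   # Eq. (14)
    return h_plus * e_plus + h_cross * e_cross

lam, beta = 2.1, -0.08                       # arbitrary sky position
u = np.array([np.sin(lam), -np.cos(lam), 0.0])
v = np.array([-np.sin(beta) * np.cos(lam), -np.sin(beta) * np.sin(lam), np.cos(beta)])
k = np.array([-np.cos(beta) * np.cos(lam), -np.sin(lam) * np.cos(beta), -np.sin(beta)])

ep, ec = polarization_tensors(u, v)
h = strain_tensor(1.0e-21, 5.0e-22, 0.6, ep, ec)
assert np.isclose(np.trace(h), 0.0)          # traceless
assert np.allclose(h @ k, 0.0)               # transverse to the propagation direction
```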
## III Detectors
The three SBDs, TianQin, LISA, and TaiJi, all consist of three identical spacecraft that form a nearly equilateral triangle. The main difference is in their orbits: the TianQin spacecraft are placed on nearly identical, nearly circular geocentric orbits with a radius of about \(10^{5}\) km [1]. The detector plane of TianQin is directed towards the calibration source RX J0806.3+1527. In contrast, LISA and TaiJi are placed on Earth-like heliocentric orbits with a semi-major axis of about 1 astronomical unit (AU) from the Sun [3; 5], and their detector plane rotates around in a yearly cycle. The arm length of TianQin is about \(1.7\times 10^{5}\) km, while those of LISA and TaiJi are about \(2.5\times 10^{6}\) km and \(3\times 10^{6}\) km, respectively. The centre of LISA is approximately 20 degrees behind the Earth, while that of TaiJi is approximately 20 degrees ahead of the Earth [27] (see Fig. 3). By selecting a geocentric orbit, TianQin is able to transmit data back to Earth in nearly real-time, making it better adapted to multi-messenger astronomy [42].
In the following subsections, we utilize the Keplerian orbit to approximate the motion of the spacecraft in the SSB.
### TianQin: geocentric orbit
In Fig. 4, we present a schematic of the spacecraft orbits for TianQin. The \(x\)-axis is defined as the direction from the Sun to the September equinox, while the \(z\)-axis represents the angular momentum direction of the Earth. For a detailed derivation of the Keplerian orbit of TianQin, please refer to Ref. [43].
The following presents a simplified (not fully realistic) description of the orbit, focusing on the motion of the Earth's centre, i.e. the guiding centre, in the SSB frame:
\[X(t)= R\left[\cos(\alpha-\beta)-e(1+\sin^{2}(\alpha-\beta))-\frac{3}{2}e^{2} \cos(\alpha-\beta)\sin^{2}(\alpha-\beta)\right], \tag{17}\]
\[Y(t)= R\left[\sin(\alpha-\beta)+e\sin(\alpha-\beta)\cos(\alpha-\beta)+ \frac{1}{2}e^{2}\sin(\alpha-\beta)(1-3\sin^{2}(\alpha-\beta))\right], \tag{18}\] \[Z(t)= 0, \tag{19}\]
where \(\alpha=2\pi f_{m}t+\kappa_{0}\), \(f_{m}=1/(\text{one sidereal year})=3.14\times 10^{-8}\) Hz is the orbit modulation frequency, \(\kappa_{0}\) is the mean ecliptic longitude measured from the vernal equinox (or September equinox) at \(t=0\), and \(\beta\) denotes the angle measured from the vernal equinox to the perihelion.
The guiding centre of the TianQin constellation follows the Earth-centre orbit of Eqs. (17)-(19). In the SSB frame, the TianQin constellation keeps a fixed orientation towards the direction of J0806 (\(\{\lambda_{s},\beta_{s}\}=\{120.5^{\circ},-4.7^{\circ}\}\), as shown in Fig. 5). Introducing a coordinate-system rotation and disregarding eccentricity, the orbits of the TianQin spacecraft about the guiding centre can be written as [43]
\[x_{n}=\frac{L}{\sqrt{3}}\left[\sin\beta_{s}\cos\lambda_{s}\sin(\alpha_{n}-\beta^{\prime})+\sin\lambda_{s}\cos(\alpha_{n}-\beta^{\prime})\right], \tag{20}\]
\[y_{n}=\frac{L}{\sqrt{3}}\left[\sin\beta_{s}\sin\lambda_{s}\sin(\alpha_{n}-\beta^{\prime})-\cos\lambda_{s}\cos(\alpha_{n}-\beta^{\prime})\right], \tag{21}\]
\[z_{n}=-\frac{L}{\sqrt{3}}\cos\beta_{s}\sin(\alpha_{n}-\beta^{\prime}), \tag{22}\]
where \(L=\sqrt{3}\,R_{tq}\) is the arm length between two spacecraft, \(\alpha_{n}(t)=2\pi f_{sc}t+\kappa_{n}\), \(\kappa_{n}=\frac{2}{3}\pi(n-1)+\lambda\), \(\lambda\) is the initial orbit phase of the first (\(n=1\)) spacecraft measured from the \(\tilde{x}\) axis, \(f_{sc}=\frac{1}{2\pi}\sqrt{GM_{Earth}/R_{tq}^{3}}\simeq 1/(3.65\,\text{day})\) is the modulation frequency due to the rotation of the detector around the guiding centre, and \(\beta^{\prime}\) is the angle measured from the \(\tilde{x}\) axis to the perigee of the first spacecraft orbit. Here, we assume the
Figure 4: Schematic of the TianQin spacecraft orbits in the SSB coordinate system.
Figure 3: Schematic of the spacecraft’s orbit in the SSB coordinate system.
three spacecraft are in circular orbits around the Earth, thus \(\beta^{\prime}\) will be some arbitrary number (one can just set it as 0).
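Eqs. (17)-(22) translate directly into code. The sketch below is our own illustrative transcription (not the GWSpace implementation); the Earth eccentricity value and the addition of the guiding-centre orbit to the constellation offsets are our assumptions for the purpose of the example.

```python
import numpy as np

AU = 1.495978707e11             # m
R_TQ = 1.0e8                    # m, TianQin orbital radius around the Earth (1e5 km)
F_M = 3.14e-8                   # Hz, orbital modulation frequency quoted in the text
F_SC = 1.0 / (3.65 * 86400.0)   # Hz, ~3.65-day constellation rotation
ECC_EARTH = 0.0167              # assumed Earth orbital eccentricity (illustrative)
LAM_S, BETA_S = np.radians(120.5), np.radians(-4.7)   # direction of RX J0806.3+1527

def guiding_center(t, kappa0=0.0, beta_peri=0.0, R=AU, e=ECC_EARTH):
    """Guiding-centre (Earth-centre) orbit of Eqs. (17)-(19)."""
    a = 2 * np.pi * F_M * t + kappa0 - beta_peri
    X = R * (np.cos(a) - e * (1 + np.sin(a) ** 2) - 1.5 * e ** 2 * np.cos(a) * np.sin(a) ** 2)
    Y = R * (np.sin(a) + e * np.sin(a) * np.cos(a) + 0.5 * e ** 2 * np.sin(a) * (1 - 3 * np.sin(a) ** 2))
    return np.stack([X, Y, np.zeros_like(a)], axis=-1)

def tianqin_offset(t, n, lam0=0.0, beta_prime=0.0):
    """Offset of spacecraft n = 1, 2, 3 from the guiding centre, Eqs. (20)-(22)."""
    L = np.sqrt(3.0) * R_TQ
    a = 2 * np.pi * F_SC * t + 2 * np.pi * (n - 1) / 3 + lam0 - beta_prime
    x = L / np.sqrt(3) * (np.sin(BETA_S) * np.cos(LAM_S) * np.sin(a) + np.sin(LAM_S) * np.cos(a))
    y = L / np.sqrt(3) * (np.sin(BETA_S) * np.sin(LAM_S) * np.sin(a) - np.cos(LAM_S) * np.cos(a))
    z = -L / np.sqrt(3) * np.cos(BETA_S) * np.sin(a)
    return np.stack([x, y, z], axis=-1)

t = np.linspace(0.0, 30 * 86400.0, 1000)              # 30 days of samples
r1 = guiding_center(t) + tianqin_offset(t, 1)          # SSB position of spacecraft 1 (our reading)
r2 = guiding_center(t) + tianqin_offset(t, 2)
arm12 = np.linalg.norm(r1 - r2, axis=-1)               # stays at L = sqrt(3) * R_TQ ~ 1.7e8 m
```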
### LISA and TaiJi: heliocentric orbit
According to the description in Ref. [44], when considering a constellation of spacecraft in individual Keplerian orbits with an inclination of \(\iota=\sqrt{e}\), the coordinates of each spacecraft can be elegantly expressed in the following form (this expression has been expanded up to the second order of eccentricity) [44]
\[x_{n}=a\cos(\alpha^{\prime\prime})+ae\left(\sin\alpha^{\prime\prime}\cos\alpha^{\prime\prime}\sin\beta^{\prime}_{n}-(1+\sin^{2}\alpha^{\prime\prime})\cos\beta^{\prime}_{n}\right)+\frac{1}{8}ae^{2}\left(3\cos(3\alpha^{\prime\prime}-2\beta^{\prime}_{n})-10\cos\beta^{\prime}_{n}-5\cos(\alpha^{\prime\prime}-2\beta^{\prime}_{n})\right), \tag{23}\]
\[y_{n}=a\sin(\alpha^{\prime\prime})+ae\left(\sin\alpha^{\prime\prime}\cos\alpha^{\prime\prime}\cos\beta^{\prime}_{n}-(1+\cos^{2}\alpha^{\prime\prime})\sin\beta^{\prime}_{n}\right)+\frac{1}{8}ae^{2}\left(3\sin(3\alpha^{\prime\prime}-2\beta^{\prime}_{n})-10\sin\alpha^{\prime\prime}+5\sin(\alpha^{\prime\prime}-2\beta^{\prime}_{n})\right), \tag{24}\]
\[z_{n}=-\sqrt{3}ae\cos(\alpha^{\prime\prime}-\beta^{\prime}_{n})+\sqrt{3}ae^{2}\left[1+\sin^{2}(\alpha^{\prime\prime}-\beta^{\prime}_{n})\right]. \tag{25}\]
Here \(a=R_{LISA,TJ}=1\) AU is the radial distance of the guiding centre from the Sun for LISA and TaiJi, and \(\alpha^{\prime\prime}=\alpha-\beta\mp 20^{\circ}\) for LISA and TaiJi, respectively, where \(\alpha\) and \(\beta\) are the same as for the Earth orbit in Eqs. (17)-(19). Moreover, \(\beta^{\prime}_{n}=\frac{2\pi}{3}(n-1)+\lambda^{\prime}\), where \(\lambda^{\prime}\) is the initial orientation of the constellation, and \(e\simeq L_{LISA,TJ}/(2a\sqrt{3})\) represents the orbital eccentricity, with \(L_{LISA}=2.5\times 10^{6}\) km and \(L_{TJ}=3\times 10^{6}\) km the arm lengths between two spacecraft of LISA and TaiJi, respectively.
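Eqs. (23)-(25) can be transcribed in the same way; the following sketch (ours, a direct transcription of the displayed formulas with illustrative parameter choices) returns heliocentric positions for either LISA or TaiJi and can be used to reproduce the relative angles discussed next.

```python
import numpy as np

AU = 1.495978707e11   # m
F_M = 3.14e-8         # Hz, same orbital modulation frequency as above

def helio_spacecraft(t, n, arm_length, offset_deg, kappa0=0.0, beta_peri=0.0, lam_prime=0.0):
    """Spacecraft n = 1, 2, 3 position from Eqs. (23)-(25).

    offset_deg = -20 for LISA (trailing the Earth) and +20 for TaiJi (leading the Earth)."""
    a0 = AU
    e = arm_length / (2 * a0 * np.sqrt(3.0))
    alpha = 2 * np.pi * F_M * t + kappa0 - beta_peri + np.radians(offset_deg)
    b = 2 * np.pi * (n - 1) / 3 + lam_prime
    x = (a0 * np.cos(alpha)
         + a0 * e * (np.sin(alpha) * np.cos(alpha) * np.sin(b) - (1 + np.sin(alpha) ** 2) * np.cos(b))
         + a0 * e ** 2 / 8 * (3 * np.cos(3 * alpha - 2 * b) - 10 * np.cos(b) - 5 * np.cos(alpha - 2 * b)))
    y = (a0 * np.sin(alpha)
         + a0 * e * (np.sin(alpha) * np.cos(alpha) * np.cos(b) - (1 + np.cos(alpha) ** 2) * np.sin(b))
         + a0 * e ** 2 / 8 * (3 * np.sin(3 * alpha - 2 * b) - 10 * np.sin(alpha) + 5 * np.sin(alpha - 2 * b)))
    z = (-np.sqrt(3.0) * a0 * e * np.cos(alpha - b)
         + np.sqrt(3.0) * a0 * e ** 2 * (1 + np.sin(alpha - b) ** 2))
    return np.stack([x, y, z], axis=-1)

t = np.linspace(0.0, 365.25 * 86400.0, 2000)
lisa1 = helio_spacecraft(t, 1, 2.5e9, offset_deg=-20.0)    # LISA arm length 2.5e6 km
taiji1 = helio_spacecraft(t, 1, 3.0e9, offset_deg=+20.0)   # TaiJi arm length 3e6 km
cosang = np.sum(lisa1 * taiji1, axis=-1) / (np.linalg.norm(lisa1, axis=-1) * np.linalg.norm(taiji1, axis=-1))
angle_deg = np.degrees(np.arccos(cosang))                  # stays close to 40 degrees (cf. Fig. 6)
```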
While the spacecraft orbits of LISA and TaiJi lie close to the ecliptic plane, each constellation's guiding centre follows a nearly circular trajectory. In the GWSpace code, the perihelion angle of the three spacecraft of LISA and TaiJi is set to be the same as that of the Earth, while TianQin's guiding centre coincides with the Earth's, so the relative angles between the detectors change over time. Figure 6 illustrates the relative angles between the different detectors. It can be observed that the angle between LISA or TaiJi and the Earth varies between \(18^{\circ}\) and \(22^{\circ}\), while the angle between LISA and TaiJi is approximately \(40^{\circ}\), with a slight variation of around \(2.4\times 10^{-3}\). These findings are consistent with the proposed orbits described in Refs. [3; 5].
Figure 5: Schematic of the detector coordinate system \(\{\tilde{x},\tilde{y},\tilde{z}\}\) and the geocentric-ecliptic coordinate system \(\{x,y,z\}\). The \(\tilde{x}\) axis points to the descending node, and the \(\tilde{z}\) axis points to J0806.
## IV Detector response
In a vacuum, propagating GWs induce a time-varying strain in the fabric of space-time. This strain can alter the proper distance between freely falling masses, providing a means to gather information about the GWs. One approach is to measure the variation in light travel time or optical path length between two test masses [45]. As a GW passes through, these separated masses will experience relative acceleration or tilting. Consequently, a GW detector is employed to monitor the separation between the test masses. There are two commonly used methods to monitor the distance between two objects: radar ranging or similar techniques, and measuring the Doppler shift in a signal transmitted from one object to the other [45]. However, a question arises regarding whether the GW affects the electromagnetic waves used for measuring distances [45]. In the following sections, we will provide a brief overview of how a GW detector responds to GW signals.
### The general waveform and mode decomposition
Assume a universe consisting solely of vacuum and a GW. Since GWs are very weak, the metric of the spacetime perturbed by a GW can be described as
\[ds^{2}=-c^{2}dt^{2}+\left[\delta_{ij}+h_{ij}(t)\right]dx^{i}dx^{j}, \tag{26}\]
where \(h_{ij}\) is the tensor perturbation; it is directly related to the GW itself, carrying information about its amplitude, frequency, and polarization. By analyzing the changes in the metric caused by the GW, we can extract valuable information about the GW signal. In the TT coordinate system (with coordinates \(x^{0}=ct,x^{1}=x,x^{2}=y,x^{3}=z\)), a weak GW can be described as a weak plane wave travelling in the \(+z\) direction. The line element describing the metric of spacetime in this scenario is given by
\[ds^{2}=-c^{2}dt^{2}+\left(1+h_{+}\left(t-\frac{z}{c}\right)\right)dx^{2}+ \left(1-h_{+}\left(t-\frac{z}{c}\right)\right)dy^{2}+2h_{\times}\left(t-\frac {z}{c}\right)dxdy+dz^{2}. \tag{27}\]
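As a quick worked illustration of how Eq. (27) encodes a measurable effect (this step is ours, for orientation only): for a photon travelling along the \(x\)-axis between two test masses at rest with coordinate separation \(L\), setting \(ds^{2}=0\) with \(dy=dz=0\) gives
\[c\,dt=\sqrt{1+h_{+}}\,dx\approx\left(1+\tfrac{1}{2}h_{+}\right)dx\quad\Longrightarrow\quad T_{\rm one\text{-}way}\approx\frac{L}{c}\left(1+\tfrac{1}{2}h_{+}\right)\]
for a slowly varying \(h_{+}\), i.e. the fractional change in the light travel time directly measures \(h_{+}/2\).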
_General waveform._ The GW can be approximated as an arbitrary plane wave with wave vector \(\vec{k}\) and a tensorial 'amplitude', thus
\[\mathbf{h}(t,\mathbf{r})=\mathbf{h}_{0}e^{\mathrm{i}(2\pi ft-\vec{k}\cdot\mathbf{r}/c)}=\mathbf{h}_{0}e^{\mathrm{i}2\pi f(t-\hat{k}\cdot\mathbf{r}/c)}=h(t-\hat{k}\cdot\mathbf{r}/c)=h(\xi), \tag{28}\]
where \(\hat{k}=\frac{\vec{k}}{|\vec{k}|}=\frac{\vec{k}}{2\pi f}\) is the propagation direction of the GW, \(\mathbf{r}\) is an arbitrary position, and \(\xi=t-\hat{k}\cdot\mathbf{r}/c\) is a surface of constant phase.
There is relative motion between the source frame and the detector frame. In the detector frame, the SSB is moving relative to the cosmic microwave background (CMB) with a peculiar velocity \(v\approx 370\) km/s (\(v/c\approx 0.0012\)), along the direction \(\lambda\approx 172^{\circ}\), \(\beta\approx-11^{\circ}\) [46]. In the source frame, the velocities of the sources can be introduced as model parameters.
Figure 6: The relative angle between different detectors. Here, for improved visual clarity, the angle between LISA and TaiJi has been adjusted by subtracting 20 degrees.
_Mode decomposition._ In the source frame, the gravitational wave can be further decomposed using spin-weighted spherical harmonics [47] \({}_{-2}Y_{\ell m}(\iota,\varphi)\) as
\[h_{+}-\mathrm{i}h_{\times}=\sum_{\ell\geq 2}\sum_{m=-\ell}^{\ell}{}_{-2}Y_{\ell m }(\iota,\varphi)h_{\ell m}. \tag{29}\]
where \(\{\iota,\varphi\}\) represent the inclination and phase describing the orientation of emission. The primary harmonic is \(h_{22}\), while the others are called higher harmonics or higher modes. And each mode can be described as
\[h_{\ell m}=A_{\ell m}e^{-\mathrm{i}\Phi_{\ell m}}. \tag{30}\]
Based on this decomposition, we obtain
\[h_{+}= \frac{1}{2}\sum_{\ell,m}\left({}_{-2}Y_{\ell m}(\iota,\varphi)h_{ \ell m}+{}_{-2}Y^{*}_{\ell m}(\iota,\varphi)h^{*}_{\ell m}\right), \tag{31}\] \[h_{\times}= \frac{\mathrm{i}}{2}\sum_{\ell,m}\left({}_{-2}Y_{\ell m}(\iota, \varphi)h_{\ell m}-{}_{-2}Y^{*}_{\ell m}(\iota,\varphi)h^{*}_{\ell m}\right). \tag{32}\]
In particular, for non-precessing binary systems with a fixed orbital plane, there exists an exact symmetry relation between the modes
\[h_{\ell,-m}=(-1)^{\ell}h^{*}_{\ell m}. \tag{33}\]
With this symmetry, one has
\[h_{+,\times}=\sum_{\ell,m}K^{+,\times}_{\ell m}h_{\ell m}, \tag{34}\]
where
\[K^{+}_{\ell m}=\frac{1}{2}\left({}_{-2}Y_{\ell m}+(-1)^{\ell}{}_{-2}Y^{*}_{ \ell,-m}\right),\qquad K^{\times}_{\ell m}=\frac{\mathrm{i}}{2}\left({}_{-2}Y_ {\ell m}-(-1)^{\ell}{}_{-2}Y^{*}_{\ell,-m}\right). \tag{35}\]
It is convenient to introduce mode-by-mode polarization matrices
\[P_{\ell m}=e_{+}K^{+}_{\ell m}+e_{\times}K^{\times}_{\ell m}, \tag{36}\]
so that the GW signal in matrix form will be
\[\mathbf{h}^{TT}=\sum_{\ell,m}P_{\ell m}h_{\ell m}. \tag{37}\]
In the SSB frame, one can write
\[P_{+}+\mathrm{i}P_{\times}=e^{-\mathrm{i}2\psi}(\epsilon_{+}+\mathrm{i} \epsilon_{\times}). \tag{38}\]
With the above equations, \(P_{\ell m}\) will be
\[P_{\ell m}(\iota,\varphi,\psi)=\frac{1}{2}{}_{-2}Y_{\ell m}(\iota,\varphi)e^{-\mathrm{i}2\psi}(\epsilon_{+}+\mathrm{i}\epsilon_{\times})+\frac{1}{2}(-1)^{\ell}{}_{-2}Y^{*}_{\ell,-m}(\iota,\varphi)e^{+\mathrm{i}2\psi}(\epsilon_{+}-\mathrm{i}\epsilon_{\times}). \tag{39}\]
In this way, we can explicitly factor out all dependencies on the extrinsic parameters \((\iota,\varphi,\psi)\).
Suppose that the GW has only the main mode, i.e., the 22 mode; then \(h_{22}=A_{22}e^{-\mathrm{i}\Phi_{22}}\) and \(h_{2,-2}=h^{*}_{22}=A_{22}e^{\mathrm{i}\Phi_{22}}\). The expressions of the spin-weighted spherical harmonics for the modes \(\{2,\pm 2\}\) are
\[{}_{-2}Y_{22}(\iota,\varphi)=\frac{1}{2}\sqrt{\frac{5}{\pi}}\cos^{4}\frac{ \iota}{2}e^{\mathrm{i}2\varphi},\qquad{}_{-2}Y_{2,-2}(\iota,\varphi)=\frac{1}{ 2}\sqrt{\frac{5}{\pi}}\sin^{4}\frac{\iota}{2}e^{-\mathrm{i}2\varphi}, \tag{40}\]
and
\[K^{+}_{22}=\frac{1}{2}\big{(}{}_{-2}Y_{22}(\iota,\varphi)+{}_{-2}Y^{*}_{2,-2}(\iota,\varphi)\big{)}=\frac{1}{4}\sqrt{\frac{5}{\pi}}\left(\cos^{4}\frac{\iota}{2}+\sin^{4}\frac{\iota}{2}\right)e^{\mathrm{i}2\varphi}=\frac{1}{4}\sqrt{\frac{5}{\pi}}\frac{1+\cos^{2}\iota}{2}e^{\mathrm{i}2\varphi}, \tag{41}\]
\[K^{\times}_{22}=\frac{\mathrm{i}}{2}\big{(}{}_{-2}Y_{22}(\iota,\varphi)-{}_{-2}Y^{*}_{2,-2}(\iota,\varphi)\big{)}=\frac{\mathrm{i}}{4}\sqrt{\frac{5}{\pi}}\left(\cos^{4}\frac{\iota}{2}-\sin^{4}\frac{\iota}{2}\right)e^{\mathrm{i}2\varphi}=\frac{\mathrm{i}}{4}\sqrt{\frac{5}{\pi}}\cos\iota\,e^{\mathrm{i}2\varphi}.\]
and so
\[K^{+}_{2,-2}=(K^{+}_{22})^{*},\qquad K^{\times}_{2,-2}=(K^{\times}_{22})^{*}. \tag{42}\]
Thus, one has
\[h_{+}=K^{+}_{22}h_{22}+K^{+}_{2,-2}h_{2,-2}=A_{22}\sqrt{\frac{5}{4\pi}}\frac{1+\cos^{2}\iota}{2}\cos(\Phi_{22}-2\varphi), \tag{43}\]
\[h_{\times}=K^{\times}_{22}h_{22}+K^{\times}_{2,-2}h_{2,-2}=A_{22}\sqrt{\frac{5}{4\pi}}\cos\iota\sin(\Phi_{22}-2\varphi). \tag{44}\]
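These closed forms are easy to cross-check numerically against the mode sums of Eqs. (31)-(32); the short script below (our own check code with arbitrary illustrative values, not part of GWSpace) does exactly that.

```python
import numpy as np

A22, Phi22, iota, phi = 1.0e-21, 1.3, 0.7, 0.4   # arbitrary illustrative values

# Spin-weighted harmonics of Eq. (40)
pref = 0.5 * np.sqrt(5.0 / np.pi)
Y22 = pref * np.cos(iota / 2) ** 4 * np.exp(2j * phi)
Y2m2 = pref * np.sin(iota / 2) ** 4 * np.exp(-2j * phi)

# Modes, Eqs. (30) and (33)
h22 = A22 * np.exp(-1j * Phi22)
h2m2 = np.conj(h22)

# Eqs. (31)-(32) restricted to (l, m) = (2, +/-2): h_+ = Re[sum Y h], h_x = -Im[sum Y h]
s = Y22 * h22 + Y2m2 * h2m2
hp_sum, hx_sum = s.real, -s.imag

# Closed forms of Eqs. (43)-(44)
amp = A22 * np.sqrt(5.0 / (4 * np.pi))
hp_closed = amp * (1 + np.cos(iota) ** 2) / 2 * np.cos(Phi22 - 2 * phi)
hx_closed = amp * np.cos(iota) * np.sin(Phi22 - 2 * phi)

assert np.isclose(hp_sum, hp_closed) and np.isclose(hx_sum, hx_closed)
```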
For non-precessing systems, Eq. (33) translates to
\[\tilde{h}_{\ell,-m}(f)=(-1)^{\ell}\tilde{h}_{\ell m}(-f)^{*} \tag{45}\]
in the Fourier domain. For a given mode of the GW waveform, one has \(h_{\ell m}\propto\exp[-\mathrm{i}m\phi_{orbit}]\), where \(\phi_{orbit}\) is the orbital phase of the GW system, which always satisfies \(\dot{\phi}_{orbit}>0\). Thus, for non-precessing systems, or in the co-precessing frame for a binary with misaligned spins, the following approximation is often applied:
\[\tilde{h}_{\ell m}(f)\simeq 0\quad\text{ for }m>0,\quad f>0,\] \[\tilde{h}_{\ell m}(f)\simeq 0\quad\text{ for }m<0,\quad f<0, \tag{46}\] \[\tilde{h}_{\ell 0}(f)\simeq 0.\]
In this way, for the positive frequencies \(f>0\), \(\tilde{h}_{+,\times}=\sum_{\ell}\sum_{m<0}K^{+,\times}_{\ell,m}\tilde{h}_{\ell m}\).
**Eccentric mode decomposition.** Eccentric waveforms also generate harmonics, which act similarly to higher modes but are described by the mean orbital frequency. Under the stationary phase approximation (SPA), there is a relationship between the mean orbital frequency \(F\) and the Fourier frequency \(f\) for different eccentric harmonics:
\[f=j\cdot F(t_{0}). \tag{47}\]
Here we use the index \(j\) to distinguish eccentric harmonics from spin-weighted spherical harmonics above. \(t_{0}\) is the time which gives the stationary point of \(F\). The dominant eccentric harmonic is \(j=2\).
With \((\ell,m)=(2,2)\), a frequency domain eccentric waveform can be written as
\[\tilde{h}_{+,\times}=\sum_{j=1}^{10}\bar{\mathcal{A}}_{j}\xi^{+,\times}_{j}e^ {-\mathrm{i}\Psi_{j}}. \tag{48}\]
Here
\[\xi^{+,\times}_{j}=C^{(j)}_{+,\times}+\mathrm{i}S^{(j)}_{+,\times}, \tag{49}\]
which is a function of \((\iota,\varphi)\) and the eccentricity \(e(F)\). When \(e=0\),
\[\xi^{+}_{j=2} =C^{(2)}_{+}+\mathrm{i}S^{(2)}_{+}=4\cdot\frac{1+\cos^{2}\iota}{ 2}e^{\mathrm{i}\cdot 2\varphi}, \tag{50}\] \[\xi^{\times}_{j=2} =C^{(2)}_{\times}+\mathrm{i}S^{(2)}_{\times}=4\cdot(-\cos\iota) \,e^{\mathrm{i}\cdot 2\varphi},\] \[\xi^{+,\times}_{j\neq 2} =0,\]
which reduce to the coefficients of the dominant mode \((\ell,m)=(2,2)\)[48]. But for a non-zero eccentricity, one cannot explicitly write \(P_{\ell m}\) as shown in Eq. (39), and should directly use \(P_{+},P_{\times}\) in Eq. (38).
### Single arm response in time domain
The effect of GWs on matter can be described as a tidal deformation. To detect the GW, one method is to measure the distance changes between two spatially separated free-falling test masses. Suppose that a photon travels from test mass 1 (\(S_{s}\)) to test mass 2 (\(S_{r}\)) along the direction \(\hat{n}_{l}\), as shown in Fig. 7. It follows a null geodesic, i.e., \(ds^{2}=0\). Thus, the metric reads
\[cdt=\sqrt{(\delta_{ij}+h_{ij}(\xi))dx^{i}dx^{j}}, \tag{51}\]
where
\[\xi(l)=t(l)-\hat{k}\cdot\mathbf{r}(l)/c=t_{s}+l/c-\hat{k}\cdot\left[\mathbf{r}_{s}( t_{s})+\hat{n}(t_{s})\,l\right]/c, \tag{52}\]
\(\mathbf{r}_{s}\) is the position of \(S_{s}\), \(\mathbf{r}(l)\) is the position of photon at time \(t\), \(l=\sqrt{\sum_{i}(x^{i}-x_{s}^{i})^{2}}=\left|\mathbf{r}(l)-\mathbf{r}_{s}\right|\), \(\hat{n}=\frac{\mathbf{r}(l)-\mathbf{r}_{s}}{l}\), and
\[\frac{d\xi}{dl/c}=1-\hat{k}\cdot\hat{n}_{l}(t_{s}),\quad\frac{dx^{i}}{dl}=\hat {n}^{i},\quad\hat{n}^{i}\hat{n}_{i}=1\quad\Rightarrow\quad\frac{dx^{i}/c}{d\xi }=\frac{dx^{i}}{dl}\frac{dl/c}{d\xi}=\frac{\hat{n}^{i}}{1-\hat{k}\cdot\hat{n}} \tag{53}\]
With the above derivation, Eq. (51) can be rewritten as
\[\begin{split} dt=&\sqrt{(\delta_{ij}+h_{ij}(\xi)) \frac{dx^{i}/c}{d\xi}\frac{dx^{j}/c}{d\xi}}d\xi=\sqrt{1+h_{ij}\hat{n}^{i}\hat{ n}^{j}}\frac{d\xi}{1-\hat{k}\cdot\hat{n}}\\ \approx&\left(1+\frac{1}{2}h_{ij}\hat{n}^{i}\hat{n} ^{j}+\mathcal{O}(h^{2})\right)\frac{d\xi}{1-\hat{k}\cdot\hat{n}}.\end{split} \tag{54}\]
Then, from \(S_{s}\) to \(S_{r}\), the duration of the proper time will be
\[\begin{split}\int_{t_{s}}^{t_{r}}dt=&\int_{0}^{L_{ l}}\sqrt{1+h_{ij}\hat{n}^{i}\hat{n}^{j}}\frac{d\xi}{1-\hat{k}\cdot\hat{n}} \approx\int_{0}^{L_{l}}\left(1+\frac{1}{2}\hat{n}_{l}^{T}\cdot\mathbf{h}\cdot \hat{n}_{l}+\mathcal{O}(h^{2})\right)\frac{d\xi}{1-\hat{k}\cdot\hat{n}_{l}}\\ =&\int_{0}^{L_{l}}dl/c+\int_{0}^{L_{l}}\frac{\hat{n }_{l}^{T}\cdot\mathbf{h}\cdot\hat{n}_{l}}{2(1-\hat{k}\cdot\hat{n}_{l})}d\xi, \end{split} \tag{55}\]
where \(L_{l}\) is the length between \(S_{s}\) and \(S_{r}\), \(\hat{n}_{l}\) is the unit vector of the photon propagation.
Here, suppose that the positions of \(S_{s}\) and \(S_{r}\) do not change, or change very little, while the photon travels from \(S_{s}\) to \(S_{r}\), which means \(\hat{n}_{l}(t_{s})\approx\hat{n}_{l}(t_{r})=\hat{n}_{l}\). Then, for simplicity, the integral in Eq. (55) can be rewritten as
\[t_{r}=t_{s}+L_{l}/c+\frac{1}{2(1-\hat{k}\cdot\hat{n}_{l})}\hat{n}_{l}^{T}\cdot \left(\int_{\xi_{s}}^{\xi_{r}}\mathbf{h}(\xi)d\xi\right)\cdot\hat{n}_{l}. \tag{56}\]
From this equation, one can directly get the path length fluctuations due to the GW
\[\delta l_{sr}(t)=\frac{c}{2(1-\hat{k}\cdot\hat{n}_{l})}\hat{n}_{l}^{T}\cdot \left(\int_{\xi_{s}}^{\xi_{r}}\mathbf{h}(\xi)d\xi\right)\cdot\hat{n}_{l}. \tag{57}\]
Suppose the frequency of the photon is unchanged while it travels from \(S_{s}\) to \(S_{r}\). Then the total phase change of the photon will be \(\phi_{\rm tot}=2\pi\nu_{0}(t_{r}-t_{s})\). If there were no GW, the phase change would be \(\phi_{\rm ori}=2\pi\nu_{0}L/c\). So, with the help of Eq. (56), the phase fluctuation measured under the GW will be
\[\Delta\phi(t)=\phi_{\rm tot}-\phi_{\rm ori}=2\pi\nu_{0}\delta l_{sr}(t)/c. \tag{58}\]
Figure 7: A radio signal sent from the point \(S_{s}\) travels along the arm \(L_{l}\) in the direction of \(\hat{n}_{l}\) towards the receiver at \(S_{r}\). The coordinate origin is denoted by \(O\), while points \(S_{s}\) and \(S_{r}\) are located at \(\mathbf{r}_{s}\) and \(\mathbf{r}_{r}\), respectively.
To see how the time of reception changes with respect to the time of emission, one can differentiate the above equation with respect to \(t_{s}\):
\[\begin{split}\frac{dt_{r}}{dt_{s}}=& 1+\frac{1}{2(1-\hat{k}\cdot\hat{n}_{l})} \hat{n}_{l}^{T}\cdot\left(\int_{\xi_{0}}^{\xi_{L}}\frac{d\mathbf{h}(\xi)}{d \xi}\frac{d\xi}{dt_{s}}d\xi\right)\cdot\hat{n}_{l}\\ =& 1+\frac{1}{2(1-\hat{k}\cdot\hat{n}_{l})}\hat{n}_{l}^{T} \cdot\left[\mathbf{h}(\xi_{r})-\mathbf{h}(\xi_{s})\right]\cdot\hat{n}_{l}.\end{split} \tag{59}\]
Here, we have used the assumption that the motion of \(S_{s}\) and \(S_{r}\) is much slower than the laser beam propagation, i.e., \(d\mathbf{r}_{s}/dt_{s}\approx 0\) and \(d\hat{n}_{l}/dt_{s}\approx 0\), so \(d\xi/dt_{s}=1\).
The interferometers used to detect GWs do not emit single photons but continuous lasers with frequency \(\nu(t)\). If the phase change of the photon at \(S_{s}\) and \(S_{r}\) is the same, we have \(d\phi/(2\pi)=\nu_{s}dt_{s}=\nu_{r}dt_{r}\). Then one can get the dimensionless fractional frequency deviation \(y^{GW}(t)\) as
\[\begin{split} y^{GW}_{sr}(t_{r})=&\frac{\nu_{r}- \nu_{s}}{\nu_{s}}=\frac{\nu_{r}}{\nu_{s}}-1=\frac{d\phi/dt_{r}}{d\phi/dt_{s}}- 1\\ =&\frac{1}{1+\frac{1}{2(1-\hat{k}\cdot\hat{n}_{l}) }\hat{n}_{l}^{T}\cdot\left[\mathbf{h}(\xi_{r})-\mathbf{h}(\xi_{s})\right] \cdot\hat{n}_{l}}-1\\ \approx&\frac{1}{2(1-\hat{k}\cdot\hat{n}_{l})}\hat{ n}_{l}^{T}\cdot\left[\mathbf{h}(\xi_{s})-\mathbf{h}(\xi_{r})\right]\cdot\hat{n}_{l} +\mathcal{O}(h^{2}).\end{split} \tag{60}\]
Hence
\[\xi_{s}= t_{s}-\hat{k}\cdot\mathbf{r}_{s}(t_{s})/c\approx t_{r}-L_{l}/c-\hat{k}\cdot[\mathbf{r}_{s}(t_{r}-L_{l}/c)]/c\] \[\approx t_{r}-L_{l}/c-\hat{k}\cdot[\mathbf{r}_{s}(t_{r})-\partial_{t} \mathbf{r}_{s}(t_{r})L_{l}/c]/c\] \[\approx t_{r}-L_{l}/c-\hat{k}\cdot\mathbf{r}_{s}(t_{r})/c, \tag{61}\] \[\xi_{r}= t_{r}-\hat{k}\cdot\mathbf{r}_{r}(t_{r})/c. \tag{62}\]
In the third line, we have assumed that \(\partial_{t}\mathbf{r}_{s}\ll c\). Finally, one has the relative frequency deviation at the time of \(t=t_{r}\) as [49; 50; 51]
\[y^{GW}_{slr}(t)=\frac{1}{2(1-\hat{k}\cdot\hat{n}_{l})}\,\hat{n}_{l}^{T}\cdot \left[\mathbf{h}(t-L_{l}/c-\hat{k}\cdot\mathbf{r}_{s}/c)-\mathbf{h}(t-\hat{k }\cdot\mathbf{r}_{r}/c)\right]\cdot\hat{n}_{l}. \tag{63}\]
When the photon is reflected from \(S_{r}\) back to \(S_{s}\), we have
\[y^{GW}_{rls}(t)=\frac{1}{2(1+\hat{k}\cdot\hat{n}_{l})}\,\hat{n}_{l}^{T}\cdot \left[\mathbf{h}(t-L_{l}/c-\hat{k}\cdot\mathbf{r}_{r}/c)-\mathbf{h}(t-\hat{k }\cdot\mathbf{r}_{s}/c)\right]\cdot\hat{n}_{l}. \tag{64}\]
Considering that the GW is described in SSB coordinates, one can redefine some quantities as
\[y^{GW}_{slr}(t)=\frac{1}{2(1-\hat{k}\cdot\hat{n}_{l})}\left[H(t-L/c-\hat{k} \cdot\mathbf{r}_{s}/c)-H(t-\hat{k}\cdot\mathbf{r}_{r}/c)\right], \tag{65}\]
where
\[\begin{split} H(t)=n_{l}^{i}h_{ij}(t)n_{l}^{j}=& n_{l}^{i}(h_{+}\epsilon_{ij}^{+}+h_{\times}\epsilon_{ij}^{\times})n_{l}^{j}=n_{l} ^{i}\left[h_{+}(u_{i}u_{j}-v_{i}v_{j})+h_{\times}(u_{i}v_{j}+v_{i}u_{j})\right] n_{l}^{j}\\ =& h_{+}(n_{l}^{i}u_{i}u_{j}n_{l}^{j}-n_{l}^{i}v_{i}v_{j} n_{l}^{j})+h_{\times}(n_{l}^{i}u_{i}v_{j}n_{l}^{j}+n_{l}^{i}v_{i}u_{j}n_{l}^{j})\\ =& h_{+}\left[(\hat{n}_{l}\cdot\hat{u})^{2}-(\hat{n} _{l}\cdot\hat{v})^{2}\right]+h_{\times}\cdot 2(\hat{n}_{l}\cdot\hat{u})(\hat{n}_{l} \cdot\hat{v})\\ =& h_{+}\zeta_{l}^{+}+h_{\times}\zeta_{l}^{\times}, \end{split} \tag{66}\]
and
\[\zeta_{l}^{+}= \hat{n}_{l}\cdot\epsilon^{+}\cdot\hat{n}_{l}=(\hat{n}_{l}\cdot \hat{u})^{2}-(\hat{n}_{l}\cdot\hat{v})^{2}, \tag{67}\] \[\zeta_{l}^{\times}= \hat{n}_{l}\cdot\epsilon^{\times}\cdot\hat{n}_{l}=2(\hat{n}_{l} \cdot\hat{u})(\hat{n}_{l}\cdot\hat{v}). \tag{68}\]
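A minimal numerical sketch of Eqs. (63) and (65)-(68) (assuming slowly varying spacecraft positions; all names are ours and independent of the GWSpace implementation) could look as follows:

```python
import numpy as np

def pattern_functions(n_l, u, v):
    """Link pattern factors zeta^+ and zeta^x of Eqs. (67)-(68)."""
    zeta_plus = np.dot(n_l, u) ** 2 - np.dot(n_l, v) ** 2
    zeta_cross = 2.0 * np.dot(n_l, u) * np.dot(n_l, v)
    return zeta_plus, zeta_cross

def one_way_response(h_plus, h_cross, t, k_hat, n_l, u, v, r_s, r_r, L, c=299792458.0):
    """One-way fractional frequency deviation y^GW_slr(t), Eqs. (63), (65)-(66).

    h_plus, h_cross : callables returning the SSB-frame polarisations at a given time
    k_hat : GW propagation direction; n_l : unit vector from sender to receiver
    r_s, r_r : sender and receiver positions, assumed constant during light travel
    """
    zp, zc = pattern_functions(n_l, u, v)
    H = lambda tau: h_plus(tau) * zp + h_cross(tau) * zc   # Eq. (66)
    xi_s = t - L / c - np.dot(k_hat, r_s) / c              # emission-side retarded time
    xi_r = t - np.dot(k_hat, r_r) / c                      # reception-side retarded time
    return (H(xi_s) - H(xi_r)) / (2.0 * (1.0 - np.dot(k_hat, n_l)))
```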
For the two-way response, one can get [52]
\[\begin{split} y_{sls}^{GW}(t)&=\frac{\nu}{\nu_{0}}-1= \frac{\nu}{\nu^{\prime}}\frac{\nu^{\prime}}{\nu_{0}}-1=\big{[}y_{slr}^{GW}(t-L_ {l}/c)+1\big{]}\big{[}y_{rls}^{GW}(t)+1\big{]}-1\\ &\approx y_{slr}^{GW}(t-L_{l}/c)+y_{rls}^{GW}(t)+\mathcal{O}(h^{2})\\ &=\frac{1}{2}\bigg{\{}(1+\hat{k}\cdot\hat{n})\big{[}\Psi_{l}(t-2 L_{l}/c)-\Psi_{l}(t-L_{l}/c)\big{]}+(1-\hat{k}\cdot\hat{n})\big{[}\Psi_{l}(t-L_{l}/c)- \Psi_{l}(t)\big{]}\bigg{\}}\\ &=\frac{1+\hat{k}\cdot\hat{n}}{2}\Psi_{l}(t-2L_{l}/c)-\hat{k} \cdot\hat{n}\,\Psi_{l}(t-L_{l}/c)-\frac{1-\hat{k}\cdot\hat{n}}{2}\Psi_{l}(t), \end{split} \tag{69}\]
where (using \(\hat{n}=\hat{n}(t)\) for convenience)
\[\Psi_{l}(t^{\prime})=\frac{\hat{n}_{l}^{T}\cdot\mathbf{h}(t^{\prime}-\hat{k} \cdot\mathbf{r}(t^{\prime})/c)\cdot\hat{n}_{l}}{1-(\hat{k}\cdot\hat{n}_{l})^{ 2}}. \tag{70}\]
However, one should note that the above derivation is based on the assumption that the positions of the spacecraft change very little while the photon travels from \(S_{s}\) to \(S_{r}\).
### Single arm response in frequency domain
Adopting the Fourier transform, the GW in the frequency domain will be [53]
\[h_{0}(t,\vec{x})=h(t-d(t))=\int dfe^{\mathrm{i}2\pi f(t-d(t))}\tilde{h}(f)\ \text{or}\ h(\xi)=\int dfe^{\mathrm{i}2\pi f\xi}\tilde{h}(f). \tag{71}\]
With the Fourier transform, the path length fluctuations could be rewritten as
\[\begin{split}\delta l_{sr}(t)=&\frac{c}{2(1-\hat{k} \cdot\hat{n}_{l})}\hat{n}_{l}^{T}\cdot\left(\int_{\xi_{s}}^{\xi_{r}}d\xi\int df \,e^{\mathrm{i}2\pi f\xi}\,\tilde{\mathbf{h}}(f)\right)\cdot\hat{n}_{l}\\ =&\frac{c}{2(1-\hat{k}\cdot\hat{n}_{l})}\hat{n}_{l} ^{T}\cdot\left(\int df\,\left(e^{\mathrm{i}2\pi f\xi_{r}}-e^{\mathrm{i}2\pi f \xi_{s}}\right)\frac{\tilde{\mathbf{h}}(f)}{\mathrm{i}2\pi f}\right)\cdot\hat{ n}_{l}\\ =& L_{l}\,\hat{n}_{l}^{T}\cdot\left(\int df\,e^{ \mathrm{i}2\pi ft}\mathcal{T}_{sr}(\hat{k},f,t)\tilde{\mathbf{h}}(f)\right) \cdot\hat{n}_{l}\end{split} \tag{72}\]
where \(\mathcal{T}_{sr}(\hat{k},f,t)\) is the transfer function [53]
\[\begin{split}\mathcal{T}_{sr}(f,t)=&\frac{c/L}{2(1- \hat{k}\cdot\hat{n}_{l})}\,\frac{1}{\mathrm{i}2\pi f}\left(e^{-\mathrm{i}2 \pi f\hat{k}\cdot\mathbf{r}_{r}/c}-e^{-\mathrm{i}2\pi f(L_{l}+\hat{k}\cdot \mathbf{r}_{s})/c}\right)\\ =&\frac{c/L}{2(1-\hat{k}\cdot\hat{n}_{l})}\,\frac{1} {\mathrm{i}2\pi f}\left(e^{\mathrm{i}\pi f[L-\hat{k}\cdot(\mathbf{r}_{r}- \mathbf{r}_{s})]/c}-e^{-\mathrm{i}\pi f[L_{l}-\hat{k}\cdot(\mathbf{r}_{r}- \mathbf{r}_{s})]/c}\right)e^{-\mathrm{i}\pi f[L+\hat{k}\cdot(\mathbf{r}_{r}+ \mathbf{r}_{s})]/c}\\ =&\frac{c/L}{2(1-\hat{k}\cdot\hat{n})}\,\frac{ \mathrm{i}2\sin\left(\pi fL/c(1-\hat{k}\cdot\hat{n})\right)}{\mathrm{i}2\pi f }e^{-\mathrm{i}\pi f[L+\hat{k}\cdot(\mathbf{r}_{r}+\mathbf{r}_{s})]/c}\\ =&\frac{1}{2}\mathrm{sinc}\left(\pi fL/c(1-\hat{k} \cdot\hat{n})\right)\exp\left\{-\mathrm{i}\pi f[L+\hat{k}\cdot(\mathbf{r}_{r}+ \mathbf{r}_{s})]/c\right\}.\end{split} \tag{73}\]
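For reference, Eq. (73) maps to a few lines of Python; note that numpy's `sinc` includes the factor of \(\pi\), so the argument is written accordingly (a sketch with our own naming, not the GWSpace API):

```python
import numpy as np

def single_arm_transfer(f, k_hat, n_l, r_s, r_r, L, c=299792458.0):
    """Single-arm transfer function T_sr(f) of Eq. (73)."""
    kn = np.dot(k_hat, n_l)
    # np.sinc(x) = sin(pi x)/(pi x), so this equals sinc(pi f L (1 - k.n)/c) in Eq. (73)
    envelope = 0.5 * np.sinc(f * L / c * (1.0 - kn))
    phase = np.exp(-1j * np.pi * f * (L + np.dot(k_hat, r_r + r_s)) / c)
    return envelope * phase
```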
Finally, one can define the one-arm detector tensor as
\[\mathbf{D}(\hat{k},f,t)=\frac{1}{2}\hat{n}(t)\otimes\hat{n}(t)\,\mathcal{T}( \hat{k},f,t), \tag{74}\]
and the path length fluctuation in the frequency domain will be
\[\frac{\delta\tilde{l}_{sr}(f)}{L_{l}}=\mathbf{D}(\hat{k},f,t)\,{:}\,\tilde{ \mathbf{h}}(f), \tag{75}\]
where \((\hat{n}\otimes\hat{n})_{ij}=\hat{n}_{i}\hat{n}_{j}\), \(\mathbf{A}{:}\mathbf{B}=A_{ij}B_{ij}\). Similarly, the phase fluctuation in the frequency domain will be
\[\Delta\tilde{\phi}(f)=\frac{2\pi\nu_{0}}{c}\delta\tilde{l}_{sr}(f). \tag{76}\]
On the other hand, we can derive the relative frequency deviation in the frequency domain directly through the Fourier transform. The Fourier transform of the relative frequency deviation is
\[\tilde{y}_{slr}^{GW}(f,t)= \int dt\,e^{-\mathrm{i}2\pi ft}y_{slr}^{GW}(t)\] \[= \frac{1}{2(1-\hat{k}\cdot\hat{n}_{l})}\,\hat{n}_{l}^{T}\cdot\left[ \int dte^{-\mathrm{i}2\pi ft}\left[\mathbf{h}(t-L_{l}/c-\hat{k}\cdot\mathbf{r }_{s}/c)-\mathbf{h}(t-\hat{k}\cdot\mathbf{r}_{r}/c)\right]\right]\cdot\hat{n}_{l}\] \[= \frac{1}{2(1-\hat{k}\cdot\hat{n}_{l})}\,\hat{n}_{l}^{T}\cdot\int dte ^{-\mathrm{i}2\pi ft}\left[\int df^{\prime}e^{\mathrm{i}2\pi f^{\prime}(t-L_{l }/c-\hat{k}\cdot\mathbf{r}_{s}/c)}\mathbf{h}(f^{\prime})-\int df^{\prime\prime }e^{\mathrm{i}2\pi f^{\prime\prime}(t-\hat{k}\cdot\mathbf{r}_{r}/c)}\mathbf{h} (f^{\prime\prime})\right]\cdot\hat{n}_{l}\] \[= \frac{1}{2(1-\hat{k}\cdot\hat{n}_{l})}\,\hat{n}_{l}^{T}\cdot\int dt \int df^{\prime}e^{-\mathrm{i}2\pi(f-f^{\prime})t}\tilde{\mathbf{h}}(f^{ \prime})\left[e^{-\mathrm{i}2\pi f^{\prime}(L_{l}+\hat{k}\cdot\mathbf{r}_{s})/ c}-e^{-\mathrm{i}2\pi f^{\prime}\cdot\hat{k}\cdot\mathbf{r}_{r}/c}\right] \cdot\hat{n}_{l}\] \[= \frac{1}{2(1-\hat{k}\cdot\hat{n}_{l})}\,\left[e^{-\mathrm{i}\pi f [L_{l}+\hat{k}\cdot(\mathbf{r}_{s}-\mathbf{r}_{r})]/c}-e^{-\mathrm{i}\pi f( \hat{k}\cdot\mathbf{r}_{r}-L_{l}-\hat{k}\cdot\mathbf{r}_{s})/c}\right]e^{- \mathrm{i}\pi f[L_{l}+\hat{k}\cdot(\mathbf{r}_{s}+\mathbf{r}_{r})]/c}\,\hat{n} _{l}^{T}\cdot\tilde{\mathbf{h}}(f)\cdot\hat{n}_{l}\] \[= -\frac{\mathrm{i}\sin\left[\pi fL_{l}/c(1-\hat{k}\cdot\hat{n}_{l} )\right]}{(1-\hat{k}\cdot\hat{n}_{l})}\,e^{-\mathrm{i}\pi f[L_{l}+\hat{k}\cdot (\mathbf{r}_{s}+\mathbf{r}_{r})]/c}\,\hat{n}_{l}^{T}\cdot\tilde{\mathbf{h}}(f )\cdot\hat{n}_{l}\] \[= -\frac{\mathrm{i}\pi fL_{l}}{c}\mathrm{sinc}\left[\pi fL_{l}/c(1- \hat{k}\cdot\hat{n}_{l})\right]\,e^{-\mathrm{i}\pi f[L_{l}+\hat{k}\cdot( \mathbf{r}_{s}+\mathbf{r}_{r})]/c}\,\hat{n}_{l}^{T}\cdot\tilde{\mathbf{h}}(f) \cdot\hat{n}_{l}\] \[= -\frac{\mathrm{i}2\pi fL_{l}}{c}\mathcal{T}_{sr}(f,t)\,(h_{+}\zeta _{l}^{+}+h_{\times}\zeta_{l}^{\times}). \tag{77}\]
Here \(\hat{n}_{l}\cdot\tilde{\mathbf{h}}(f)\cdot\hat{n}_{l}=\hat{n}_{l}\cdot(\tilde{ h}_{+}\epsilon^{+}+\tilde{h}_{\times}\epsilon^{\times})\cdot\hat{n}_{l}= \tilde{h}_{+}\zeta_{l}^{+}+\tilde{h}_{\times}\zeta_{l}^{\times}\). If the GW tensor can be decomposed as \(\tilde{\mathbf{h}}=\mathbf{P}(f)\tilde{h}(f)\), where \(\mathbf{P}=e^{+}+e^{\times}\), then the transfer function for the relative frequency deviation can be written as [40; 41]
\[G_{slr}^{GW}(f,t)=-\frac{\mathrm{i}\pi fL_{l}}{c}\mathrm{sinc}\left[\pi fL_{l} /c(1-\hat{k}\cdot\hat{n}_{l})\right]\,e^{-\mathrm{i}\pi f[L+\hat{k}\cdot( \mathbf{r}_{s}+\mathbf{r}_{r})]/c}\,\hat{n}_{l}^{T}\cdot\mathbf{P}(f)\cdot\hat{ n}_{l}. \tag{78}\]
For multiple modes, the transfer function \(G_{slr}^{\ell m}(f,t)\) has the same form as the above equation, where only \(\mathbf{P}\) needs to be replaced by \(P_{\ell m}\), i.e.,
\[G_{slr}^{\ell m}(f,t)=-\frac{\mathrm{i}\pi fL_{l}}{c}\mathrm{sinc}\left[\pi fL_ {l}/c(1-\hat{k}\cdot\hat{n}_{l})\right]\,e^{-\mathrm{i}\pi f[L+\hat{k}\cdot( \mathbf{r}_{s}+\mathbf{r}_{r})]/c}\,\hat{n}_{l}^{T}\cdot P_{\ell m}\cdot\hat{ n}_{l}. \tag{79}\]
With the help of \(\zeta_{l}^{+}\) and \(\zeta_{l}^{\times}\), the factor \(\hat{n}_{l}\cdot P_{\ell m}\cdot\hat{n}_{l}\) will be
\[\hat{n}_{l}\cdot P_{\ell m}\cdot\hat{n}_{l}= \frac{1}{2}{}_{-2}Y_{\ell m}(\iota,\varphi)e^{-\mathrm{i}2\psi}( \zeta_{l}^{+}+\mathrm{i}\zeta_{l}^{\times})+\frac{1}{2}(-1)^{\ell}{}_{-2}Y_{ \ell,-m}^{*}(\iota,\varphi)e^{+\mathrm{i}2\psi}(\zeta_{l}^{+}-\mathrm{i}\zeta_{l}^{ \times}). \tag{80}\]
One should note that in the previous equations, when higher modes are included, the time-frequency relationship must be taken into account. With the help of the stationary phase approximation, the time-frequency relationship will be
\[t_{f}^{\ell m}=-\frac{1}{2\pi}\frac{d\Psi_{\ell m}}{df}, \tag{81}\]
for different modes.
As for GWs with eccentricity, we cannot simply calculate the response function using the formulae above, even if the signal only has the dominant spin-weighted spherical harmonic \((\ell,m)=(2,2)\). Different eccentric harmonics also have different time-frequency correspondences, so we need to write [11]
\[t_{f}^{j}=\frac{1}{2\pi}\frac{d\Psi_{j}}{df}. \tag{82}\]
Then we decompose \(\tilde{\mathbf{h}}\) into eccentric harmonics \(\tilde{\mathbf{h}}_{j}\), i.e.
\[\tilde{\mathbf{h}} =\sum_{j}\tilde{\mathbf{h}}_{j}, \tag{83}\] \[\tilde{\mathbf{h}}_{j} =P_{+}\tilde{h}_{j}^{+}+P_{\times}\tilde{h}_{j}^{\times},\]
and rewrite Eq. (77)-(79):
\[\tilde{y}_{slr}=\sum_{j}\mathcal{T}^{j}_{slr}(f):\tilde{\mathbf{h}}_{j}, \tag{84}\] \[\mathcal{T}^{j}_{slr}(f)=G_{slr}\left(f,t^{j}_{f}\right),\] (85) \[G_{slr}(f,t)=-\frac{\mathrm{i}\pi fL_{l}}{c}\mathrm{sinc }\left[\pi fL_{l}/c(1-\hat{k}\cdot\hat{n}_{l})\right]\;e^{-\mathrm{i}\pi f[L+ \hat{k}\cdot(\mathbf{r}_{s}+\mathbf{r}_{r})]/c}\,\hat{n}^{T}_{l}\otimes\hat{n} _{l}. \tag{86}\]
### Response for the mildly chirping signals
For mildly chirping binary sources, the response can be evaluated without the Fourier integral: one can assume that the phase of the GW is approximated as [54]
\[\Phi(\xi)=2\pi f_{0}\xi+\pi\dot{f}_{0}\xi^{2}+\varphi_{0}, \tag{87}\]
where \(f_{0},\dot{f}_{0}\) and \(\varphi_{0}\) are the initial frequency, frequency deviation and phase, respectively. Thus, the instantaneous frequency can be given as
\[\frac{1}{2\pi}\frac{\partial\Phi(\xi)}{\partial t}=\frac{1}{2\pi}\frac{ \partial\Phi(\xi)}{\partial\xi}\frac{\partial\xi}{\partial t}=\left(f_{0}+ \dot{f}_{0}\xi\right)\left(1-\hat{k}\cdot\frac{\partial\mathbf{r}(t)}{ \partial t}\right). \tag{88}\]
According to the equation, we may assume a fixed frequency at \(\xi_{0}\) as
\[f_{s}=f_{0}+\dot{f}_{0}\xi_{0}, \tag{89}\]
and the index \(s\) denotes the dependency of the approximated frequency on the time of emission \(\xi_{0}\). Here, we assume that the frequency of the GW changes very little, i.e., \(\dot{f}_{0}(\xi_{L}-\xi_{0})\ll f_{0}\). Then
\[\Phi(\xi)\approx\int dt\,2\pi f_{s}\left(1-\hat{k}\cdot\frac{\partial\mathbf{r }(t)}{\partial t}\right)=\int d\xi\,2\pi f_{s}=2\pi(f_{0}+\dot{f}_{0}\xi_{0}) \xi+C, \tag{90}\]
where \(C\) is some integration constant. Meanwhile, the amplitude of the wave also changes little. Then the plane wave can be described as
\[h(\xi)=A(\xi)e^{\mathrm{i}2\pi f_{s}\xi}\approx A(\xi_{0})e^{\mathrm{i}2\pi f _{s}\xi_{0}}e^{\mathrm{i}2\pi f_{s}(\xi-\xi_{0})}=h(\xi_{0})e^{\mathrm{i}2\pi f _{s}(\xi-\xi_{0})}. \tag{91}\]
In this way, the integration of the GW tensor fluctuation will be [54]
\[\int_{\xi_{0}}^{\xi_{L}}\mathbf{h}(\xi)d\xi= \mathbf{P}\int h(\xi_{0})e^{\mathrm{i}2\pi f_{s}(\xi-\xi_{0})}d \xi=\mathbf{P}\frac{1}{\mathrm{i}2\pi f_{s}}\left(h(\xi_{L})-h(\xi_{0}) \right)=\mathbf{P}\frac{1}{\mathrm{i}2\pi f_{s}}h(\xi_{0})\left(e^{\mathrm{i} 2\pi f_{s}(\xi_{L}-\xi_{0})}-1\right)\] \[= \mathbf{P}\frac{\sin\left[\pi f_{s}(\xi_{L}-\xi_{0})\right]}{\pi f _{s}}e^{\mathrm{i}\pi f_{s}(\xi_{L}-\xi_{0})}h(\xi_{0})\] \[= \mathbf{P}\frac{\sin\left[\pi f_{s}(\xi_{L}-\xi_{0})\right]}{\pi f _{s}}e^{\mathrm{i}\pi f_{s}(\xi_{L}+\xi_{0})}A(\xi_{0})\] \[= \mathbf{P}\frac{(1-\hat{k}\cdot\hat{n})L}{c}\mathrm{sinc}\left[ \frac{\pi f_{s}L}{c}\left(1-\hat{k}\cdot\hat{n}\right)\right]e^{-\mathrm{i} \pi f_{s}[L+\hat{k}\cdot(\mathbf{r}_{r}+\mathbf{r}_{s})]/c}A(\xi_{0})e^{\mathrm{ i}2\pi f_{s}t_{r}}\] \[= 2\,\mathbf{P}\frac{(1-\hat{k}\cdot\hat{n})L}{c}\mathcal{T}_{sr}( \hat{k},f_{s},t_{r})A(\xi_{0})e^{\mathrm{i}2\pi f_{s}t_{r}}, \tag{92}\]
where \(\mathbf{P}\) is the unit tensor matrix of GW. Here we have used
\[\xi_{L}-\xi_{0}= (t_{r}-\hat{k}\cdot\mathbf{r}_{r}/c)-(t_{s}-\hat{k}\cdot\mathbf{r}_ {s}/c)=(1-\hat{k}\cdot\hat{n})L/c, \tag{93}\] \[\xi_{L}+\xi_{0}= (t_{r}-\hat{k}\cdot\mathbf{r}_{r}/c)+(t_{s}-\hat{k}\cdot\mathbf{ r}_{s}/c)\approx 2t_{r}-L/c-\hat{k}\cdot(\mathbf{r}_{s}+\mathbf{r}_{r})/c.\]
If the amplitude of GW is some constant, then the path length variation defined in Eq. (57) will be
\[\frac{\delta l_{sr}}{L}(t)\approx\mathcal{T}_{sr}(\hat{k},f_{s},t)\,\hat{n} \cdot\mathbf{h}(t)\cdot\hat{n}. \tag{94}\]
And according to Eq. (60), one can find that
\[\frac{\delta\nu}{\nu_{0}}=-\frac{\mathrm{i}2\pi f_{s}L}{c}\frac{\delta l}{L}. \tag{95}\]
This is similar to the response in the frequency domain, and one should note that the above formula is valid only when the GW is a mildly chirping or monochromatic signal.
## V Time delay interferometry
The signal transmitted from spacecraft \(s\) that is received at spacecraft \(r\) at time \(t_{r}\) has its phase compared to the local reference to give the output of the phase change \(\Phi_{sr}(t_{r})\)[54]. The phase difference has contributions from the laser phase noise \(C(t)\), optical path length variations, shot noise \(n^{s}(t)\) and acceleration noise \(\mathbf{n}^{a}(t)\)[54]
\[\Phi_{slr}(t_{r})=C_{s}(t_{s})-C_{r}(t_{r})+2\pi\nu_{0}\left(\delta l_{l}(t_{s})+\Delta l_{l}(t_{s})\right)+n^{s}_{sr}(t_{r})-\hat{n}_{l}(t_{s})\cdot\left[\mathbf{n}^{a}_{sr}(t_{r})-\mathbf{n}^{a}_{rs}(t_{s})\right], \tag{96}\]
where \(t_{s}\) is given implicitly by \(t_{s}=t_{r}-\ell_{sr}(t_{s})\) and \(\nu_{0}\) is the laser frequency. The optical path length variation caused by gravitational waves is \(\delta l_{l}(t_{s})\), and that caused by orbital effects is \(\Delta l_{l}(t_{s})\). From Eq. (96), one can see that space-based GW detection suffers from laser phase noise, which can be alleviated through the TDI technique. TDI involves heterodyne interferometry with unequal arm lengths and independent phase-difference readouts [55]. By essentially constructing a virtually equal-arm interferometer, the laser phase noise cancels out exactly.
### General TDI combination
Before introducing TDI, let us first introduce some definitions. In Fig. 8, the satellite numbers are defined clockwise. A laser path in the counterclockwise direction (\(\hat{n}_{i}\)) is taken as positive and denoted \(L_{i}\), while a path in the clockwise direction (\(-\hat{n}_{i}\)) is negative and denoted \(L^{\prime}_{i}\). The arm length \(|L_{i}|\) is defined as the distance between the two satellites facing satellite \(i\), where \(i=1,2,3\).
As shown in Fig. 8, let \(\vec{X}_{i}\) be the position of the \(i\)-th spacecraft and \(l_{ij}\) the distance between the \(i\)-th and \(j\)-th spacecraft; then
\[L_{1}=\vec{u}_{32}\,l_{32}=\vec{X}_{2}-\vec{X}_{3}\quad L_{2}= \vec{u}_{13}\,l_{13}=\vec{X}_{3}-\vec{X}_{1}\qquad L_{3}=\vec{u}_{21}\,l_{21}= \vec{X}_{1}-\vec{X}_{2} \tag{97}\] \[\hat{n}_{1}=\vec{u}_{32}=\frac{\vec{X}_{2}-\vec{X}_{3}}{|\vec{X}_ {2}-\vec{X}_{3}|}\qquad\hat{n}_{2}=\vec{u}_{13}=\frac{\vec{X}_{3}-\vec{X}_{1} }{|\vec{X}_{3}-\vec{X}_{1}|}\qquad\hat{n}_{3}=\vec{u}_{21}=\frac{\vec{X}_{1}- \vec{X}_{2}}{|\vec{X}_{1}-\vec{X}_{2}|} \tag{98}\]
Let \(s_{1}\) be the time-dependent phase-change signal received by spacecraft 1, sent from spacecraft 2 and propagating along the link \(L_{3}\). One can also denote it as \(s_{231}\). Similarly, let \(s^{\prime}_{1}\) be the signal received by spacecraft 1, sent from spacecraft 3 and propagating along \(L^{\prime}_{2}\), also denoted \(s_{321}\). As shown in Fig. 8, there are six independent laser links.
The first generation TDI combination does not consider the rotation and flexing of the spacecraft constellation, which is only valid for a static constellation, i.e.,
\[L_{i}(t)=L_{i}=\text{const},\qquad L_{i}=L_{i^{\prime}}. \tag{99}\]
Figure 8: Illustration of detector constellation. Three satellites are marked as 1, 2, and 3. Laser paths are marked as \(L_{i}\) and \(L^{\prime}_{i}\), where \(L^{\prime}_{i}\) represents the direction opposite to \(L_{i}\). The direction of unit vector \(\hat{n}_{i}\) is the same as that of \(L_{i}\).
This means that all the arm lengths remain constant as time evolves, and the time duration of photon propagation along the arm is independent of the direction of photons. The 1.5 or modified TDI generation is valid for a rigid but rotating spacecraft constellation, i.e.,
\[L_{i}(t)=L_{i}=\text{const},\qquad L_{i}\neq L_{i^{\prime}}. \tag{100}\]
The propagation direction of photons should be considered. The second generation TDI combination is applied to consider a rotating and flexing constellation, i.e.,
\[L_{i}(t)=L_{i}+\dot{L}_{i}\,t,\qquad L_{i}\neq L_{i^{\prime}}. \tag{101}\]
The arm lengths change linearly in time, at rates \(\dot{L}_{i}\). Here, define the time delay operator as \(\mathcal{D}_{i}\), where
\[\mathcal{D}_{i}x(t) \equiv x(t-L_{i}/c), \tag{102}\] \[\mathcal{D}_{i}\mathcal{D}_{j}x(t) = \mathcal{D}_{ij}\,x(t) \equiv x(t-L_{i}/c-L_{j}/c). \tag{103}\]
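On sampled data, the delay operator of Eqs. (102)-(103) is simply an interpolation of the time series; the sketch below (our own illustration) uses linear interpolation for brevity, whereas real pipelines typically use higher-order fractional-delay filters:

```python
import numpy as np

def delay(x, t, delay_seconds):
    """Apply the delay operator D of Eq. (102) to a series x sampled at times t."""
    return np.interp(t - delay_seconds, t, x, left=0.0, right=0.0)

# Nested delays compose additively, Eq. (103): D_i D_j x(t) = x(t - L_i/c - L_j/c)
t = np.linspace(0.0, 1000.0, 10001)
x = np.sin(2.0 * np.pi * 0.01 * t)
x_ij = delay(delay(x, t, 8.3), t, 8.3)   # two nominal ~8.3 s one-way delays
```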
Then one can define the 1.5 generation unequal arm Michelson-like combination as (see Fig. 9) [50]
\[X_{1.5}= y_{32^{\prime}1}+\mathcal{D}_{2^{\prime}}\left[y_{123}+\mathcal{D}_{ 2}\left(y_{231}+\mathcal{D}_{3}y_{13^{\prime}2}\right)\right]-y_{231}- \mathcal{D}_{3}\left[y_{13^{\prime}2}+\mathcal{D}_{3^{\prime}}\left(y_{32^{ \prime}1}+\mathcal{D}_{2^{\prime}}y_{123}\right)\right] \tag{104}\] \[= y_{32^{\prime}1}+\mathcal{D}_{2^{\prime}}y_{123}+\mathcal{D}_{2 ^{\prime}2}y_{231}+\mathcal{D}_{2^{\prime}23}y_{13^{\prime}2}-y_{231}- \mathcal{D}_{3}y_{13^{\prime}2}-\mathcal{D}_{33^{\prime}}y_{32^{\prime}1}-\mathcal{D}_{33^{\prime}2^{\prime }}y_{123}.\]
For generation 2.0, one has [50]
\[X_{2.0}= y_{32^{\prime}1}+\mathcal{D}_{2^{\prime}}y_{123}+\mathcal{D}_{2^{ \prime}2}y_{231}+\mathcal{D}_{2^{\prime}23}y_{13^{\prime}2} \tag{105}\] \[+\mathcal{D}_{2^{\prime}233^{\prime}}y_{231}+\mathcal{D}_{2^{ \prime}233^{\prime}3}y_{13^{\prime}2}+\mathcal{D}_{2^{\prime}233^{\prime}33^{\prime}}y_{32^{\prime}1}+\mathcal{D}_{2^{\prime}233^{\prime}33^{ \prime}2^{\prime}}y_{123}\] \[-y_{231}-\mathcal{D}_{3}y_{13^{\prime}2}-\mathcal{D}_{33^{ \prime}}y_{32^{\prime}1}-\mathcal{D}_{33^{\prime}2^{\prime}}y_{123}\] \[-\mathcal{D}_{33^{\prime}2^{\prime}2}y_{32^{\prime}1}-\mathcal{D}_{33^ {\prime}2^{\prime}22^{\prime}}y_{123}-\mathcal{D}_{33^{\prime}2^{\prime}22^{\prime}2}y_{231}-\mathcal{D}_{33^{\prime}2^{\prime}22^{\prime}23}y_{13 ^{\prime}2}.\]
The \(Y\) and \(Z\) channels can be generated by cyclic permutation of indices: \(1\to 2\to 3\to 1\).
Suppose that all the arm lengths are equal, i.e., \(L_{i}=L\). Then, in the time domain, the first-generation TDI Michelson-like \(X\) channel will be
\[X=[y_{32^{\prime}1}+\mathcal{D}y_{123}]+\mathcal{D}^{2}[y_{231}+\mathcal{D}y_{ 13^{\prime}2}]-[y_{231}+\mathcal{D}y_{13^{\prime}2}]-\mathcal{D}^{2}[y_{32^{ \prime}1}+\mathcal{D}y_{123}], \tag{106}\]
where \(\mathcal{D}=\mathcal{D}_{i}\) and \(\mathcal{D}^{2}=\mathcal{D}\mathcal{D}\). For brevity, let \(y_{slr,nL}=y_{sr}(t-nL)\); its Fourier transform is \(\tilde{y}_{slr,nL}=\mathcal{D}^{n}\tilde{y}_{sr}\), where \(\mathcal{D}\) now denotes the corresponding delay phase factor. One can then easily obtain the frequency-domain TDI channel as
\[\tilde{X}= [\tilde{y}_{31}+\mathcal{D}\tilde{y}_{13}]+\mathcal{D}^{2}[\tilde{y }_{21}+\mathcal{D}\tilde{y}_{12}]-[\tilde{y}_{21}+\mathcal{D}\tilde{y}_{12}]- \mathcal{D}^{2}[\tilde{y}_{31}+\mathcal{D}\tilde{y}_{13}]\] \[= (1-\mathcal{D}^{2})\left[\tilde{y}_{31}+\mathcal{D}\tilde{y}_{13} -\tilde{y}_{21}-\mathcal{D}\tilde{y}_{12}\right], \tag{107}\]
However, since different channels use the same links, the instrumental noises in different channels may be correlated with each other. Considering that all the satellites are identical, one can obtain "optimal" combinations by taking linear combinations of \(X\), \(Y\), and \(Z\)[56]:
\[A=\frac{1}{\sqrt{2}}(Z-X), \tag{108}\]
Figure 9: Michelson-like TDI-X channel of first generation TDI.
\[E = \frac{1}{\sqrt{6}}(X-2Y+Z), \tag{109}\] \[T = \frac{1}{\sqrt{3}}(X+Y+Z). \tag{110}\]
In the \(A\), \(E\), and \(T\) channels, the instrumental noise is orthogonal, and consequently, the noise correlation matrix of these three combinations is diagonal [56]. Combining the above equations, one can obtain
\[\tilde{A}= \frac{1}{\sqrt{2}}(1-\mathcal{D}^{2})\left(\tilde{y}_{23}+ \mathcal{D}\tilde{y}_{32}-\tilde{y}_{13}-\mathcal{D}\tilde{y}_{31}-\tilde{y }_{31}-\mathcal{D}\tilde{y}_{13}+\tilde{y}_{21}+\mathcal{D} \tilde{y}_{12}\right)\] \[= \frac{1}{\sqrt{2}}(\mathcal{D}^{2}-1)\big{[}(1+\mathcal{D})( \tilde{y}_{31}+\tilde{y}_{13})-\tilde{y}_{23}-\mathcal{D}\tilde{y}_{32}- \tilde{y}_{21}-\mathcal{D}\tilde{y}_{12}\big{]} \tag{111}\] \[\tilde{E}= \frac{1}{\sqrt{6}}(\mathcal{D}^{2}-1)\big{[}(1-\mathcal{D})( \tilde{y}_{13}-\tilde{y}_{31})+(1+2\mathcal{D})(\tilde{y}_{21}-\tilde{y}_{23} )+(2+\mathcal{D})(\tilde{y}_{12}-\tilde{y}_{32})\big{]},\] (112) \[\tilde{T}= \frac{1}{\sqrt{3}}(\mathcal{D}^{2}-1)(1-\mathcal{D})\big{(} \tilde{y}_{13}-\tilde{y}_{31}+\tilde{y}_{21}-\tilde{y}_{12}+\tilde{y}_{32}- \tilde{y}_{23}\big{)}. \tag{113}\]
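Assuming equal arms, so that the delay acts as the phase factor \(\mathcal{D}=e^{-\mathrm{i}2\pi fL/c}\) in the frequency domain, Eqs. (107)-(110) can be assembled directly from the single-link spectra. The sketch below uses our own conventions (a dictionary keyed by sender-receiver pairs) and is not the GWSpace interface:

```python
import numpy as np

def tdi_first_generation(y, f, L, c=299792458.0):
    """Equal-arm first-generation TDI X, Y, Z and the A, E, T combinations."""
    D = np.exp(-2j * np.pi * f * L / c)   # frequency-domain delay operator

    def michelson(r, a, b):
        # Eq. (107): receiver r, with partner spacecraft a and b
        return (1.0 - D ** 2) * (y[(a, r)] + D * y[(r, a)]
                                 - y[(b, r)] - D * y[(r, b)])

    X = michelson(1, 3, 2)
    Y = michelson(2, 1, 3)   # cyclic permutation 1 -> 2 -> 3 -> 1
    Z = michelson(3, 2, 1)
    A = (Z - X) / np.sqrt(2.0)               # Eq. (108)
    E = (X - 2.0 * Y + Z) / np.sqrt(6.0)     # Eq. (109)
    T = (X + Y + Z) / np.sqrt(3.0)           # Eq. (110)
    return {"X": X, "Y": Y, "Z": Z, "A": A, "E": E, "T": T}
```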
### Instrument noise
We will focus on the case that the instrumental noise \(n(t)\) is assumed to be Gaussian stationary with a zero mean. Thus, the ensemble average of the Fourier components of the noise \(n(f)\) can be written in the following form
\[\langle\tilde{n}(f)\tilde{n}^{*}(f^{\prime})\rangle=\frac{1}{2} \delta(f-f^{\prime})S_{n}(f), \tag{114}\]
where \({}^{*}\) denotes complex conjugate, and \(S_{n}(f)\) is the single-sided noise power spectral density (PSD)2.
Figure 10: Space-time map of TDI 2.0 for the Michelson-like X channel.
For TianQin, the design requirement for the acceleration noise is \(\sqrt{S_{a}}=10^{-15}\mathrm{m\,s^{-2}Hz^{-1/2}}\) and the displacement noise is \(\sqrt{S_{x}}=1\mathrm{pm\,Hz^{-1/2}}\)[1]. For LISA, as reported in Ref. [57], the displacement noise is \(\sqrt{S_{x}}=15\mathrm{\,pm\,Hz^{-1/2}}\) and the acceleration noise is \(\sqrt{S_{a}}=3\times 10^{-15}\mathrm{m\,s^{-2}Hz^{-1/2}}\). For TaiJi, the design goal for the displacement noise is \(\sqrt{S_{x}}=8\mathrm{\,pm\,Hz^{-1/2}}\) and for the acceleration noise is \(\sqrt{S_{a}}=3\times 10^{-15}\mathrm{m\,s^{-2}Hz^{-1/2}}\) at \(1\) mHz [29].
As discussed at the beginning of section V, when the laser noise is cancelled, the total noise can be described by two components. One is the displacement or position noise, which dominates at high frequencies. The other is the acceleration noise, which dominates at low frequencies. Note that the noise parameters defined in the previous paragraph should be converted to the same dimension, such as the dimension of length (here, using the LISA noise as an example)
\[\sqrt{S_{\delta l}^{\textit{qms}}}(f)= \sqrt{S_{x}}\sqrt{1+\left(\frac{2\mathrm{mHz}}{f}\right)^{4}}, \tag{115}\] \[\sqrt{S_{\delta l}^{\textit{acc}}}(f)= \frac{\sqrt{S_{a}}}{(2\pi f)^{2}}\sqrt{1+\left(\frac{0.4\mathrm{ mHz}}{f}\right)^{2}}\sqrt{1+\left(\frac{f}{8\mathrm{mHz}}\right)}. \tag{116}\]
and in the dimension of the relative frequency, it will be
\[\sqrt{S_{\delta\nu/\nu}^{\textit{qms}}}= \sqrt{S_{x}}\frac{2\pi f}{c}\sqrt{1+\left(\frac{2\mathrm{mHz}}{f }\right)^{4}}, \tag{117}\] \[\sqrt{S_{\delta\nu/\nu}^{\textit{acc}}}(f)= \frac{\sqrt{S_{a}}}{2\pi fc}\sqrt{1+\left(\frac{0.4\mathrm{mHz}} {f}\right)^{2}}\sqrt{1+\left(\frac{f}{8\mathrm{mHz}}\right)}. \tag{118}\]
For different detectors, the differences are the overall amplitudes and the frequency-dependent correction factors. For TianQin, the corresponding noise parameters will be [1]
\[\sqrt{S_{\delta l}^{\textit{qms}}}(f)=\sqrt{S_{x}},\qquad\sqrt{S_{ \delta l}^{\textit{acc}}}(f)=\frac{\sqrt{S_{a}}}{(2\pi f)^{2}}\sqrt{1+\frac{0. 1\mathrm{mHz}}{f}}, \tag{119}\] \[\sqrt{S_{\delta\nu/\nu}^{\textit{qms}}}=\sqrt{S_{x}}\frac{2\pi f} {c},\quad\sqrt{S_{\delta\nu/\nu}^{\textit{acc}}}(f)=\frac{\sqrt{S_{a}}}{2\pi fc }\sqrt{1+\frac{0.1\mathrm{mHz}}{f}}. \tag{120}\]
With the above definitions and the assumption that the noise parameters of all the instruments are the same, the noise PSDs of the TDI 1.0 \(A,E,T\) channels will be [50]
\[\begin{split} S_{n}^{A,E}(f)=& 8\sin^{2}\left(2\pi fL/c \right)\left\{4[1+\cos\left(2\pi fL/c\right)+\cos^{2}\left(2\pi fL/c\right)]S^ {\textit{acc}}+[2+\cos\left(2\pi fL/c\right)]S^{\textit{qms}}\right\},\\ S_{n}^{T}(f)=& 32\sin^{2}\left(2\pi fL/c\right)\sin^{2} \left(\pi fL/c\right)\left[4\sin^{2}\left(\pi fL/c\right)S^{\textit{acc} }+S^{\textit{qms}}\right].\end{split} \tag{121}\]
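As an illustration, the \(A/E\) expression in Eq. (121) is straightforward to evaluate once the component PSDs of Eqs. (117)-(118) have been converted to fractional-frequency units (a sketch with our own function name; the component PSDs are passed in rather than hard-coded):

```python
import numpy as np

def tdi1_ae_noise_psd(f, L, s_acc, s_qms, c=299792458.0):
    """Noise PSD of the first-generation TDI A and E channels, Eq. (121).

    s_acc, s_qms : acceleration and displacement noise PSDs in
    fractional-frequency units, e.g. the squares of Eqs. (117)-(118).
    """
    x = 2.0 * np.pi * f * L / c
    return 8.0 * np.sin(x) ** 2 * (
        4.0 * (1.0 + np.cos(x) + np.cos(x) ** 2) * s_acc
        + (2.0 + np.cos(x)) * s_qms
    )
```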
In Fig. 11, we show the noise PSD curves of the first-generation TDI \(A\) channel for LISA, TaiJi, and TianQin.
## VI Waveform
In order to extract information from the detector data, one should model the entire detection process. With the basic definitions in section IV.1, one can build a model for general GW signals. To identify the type of GW source and extract more information about the GW system, an accurate waveform is needed. In this section, we review the waveforms we use for each type of source.
### Galaxy Compact Binary
In the mHz frequency band, GW events are mainly composed of white dwarf binaries (WDBs) in the Milky Way (with the number \(\sim\mathcal{O}(10^{8})\)) [58], which are expected to be the most numerous GW sources for SBD. These GCBs are expected to exhibit relatively little frequency evolution. Thus, the GW strain emitted from a GCB can be safely approximated as (in the source frame) [51]
\[h_{+}(t)= A_{+}\cos\Phi(t)=h_{0}\frac{1+\cos^{2}\iota}{2}\cos\Phi(t), \tag{122}\]
\[h_{\times}(t)= A_{\times}\sin\Phi(t)=h_{0}\cos\iota\sin\Phi(t), \tag{123}\] \[h_{0}= \frac{4(G\mathcal{M}_{c})^{5/3}}{c^{4}D_{L}}(\pi f)^{2/3},\] (124) \[\Phi(t)= 2\pi ft+\pi\dot{f}t^{2}+\frac{\pi}{3}\ddot{f}t^{3}+\phi_{0}, \tag{125}\]
where \(\iota\) is the inclination angle of the quadrupole rotation axis with respect to the line of sight (here the direction is from the source to the Sun), \(\mathcal{M}_{c}=(m_{1}m_{2})^{3/5}/(m_{1}+m_{2})^{1/5}\) is the chirp mass of the system (\(m_{1}\) and \(m_{2}\) are the individual masses of the components of the binary), \(D_{L}\) is the luminosity distance to the source, \(\phi_{0}\) is the initial phase at the start of the observation, \(f\), \(\dot{f}\) and \(\ddot{f}\) are the frequency of the source and its first and second derivatives with respect to time, and \(\ddot{f}=\frac{11}{3}\frac{\dot{f}^{2}}{f}\).
Considering the motion of the detectors moving around the Sun, a Doppler modulation of the phase of the waveform should be taken into account, i.e.,
\[\Phi(t) \rightarrow\Phi(t)+\Phi_{D}(t), \tag{126}\] \[\Phi_{D}(t) =2\pi(f+\dot{f}t)\,\frac{R}{c}\cos\beta\cos(2\pi f_{m}t-\lambda), \tag{127}\]
where \(\Phi_{D}(t)\) is the Doppler modulation, \(f_{m}=1/\)year is the modulation frequency, \(\beta\) and \(\lambda\) are the latitude and the longitude of the source in ecliptic coordinates, and \(R\)=1 AU is the semi-major axis of the orbit of the guiding centre of the satellite constellation.
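A compact sketch of Eqs. (122)-(127) is given below (our own illustration, not the GWSpace API): the amplitude \(h_{0}\) of Eq. (124) is passed in directly, and \(R/c\approx 499\) s and \(f_{m}=1/\mathrm{yr}\) are used as nominal default values.

```python
import numpy as np

def gcb_polarisations(t, h0, f, fdot, phi0, iota, beta, lam,
                      r_over_c=499.005, f_mod=1.0 / 3.15576e7):
    """Quasi-monochromatic GCB polarisations with Doppler modulation, Eqs. (122)-(127)."""
    fddot = (11.0 / 3.0) * fdot ** 2 / f
    phase = (2.0 * np.pi * f * t + np.pi * fdot * t ** 2
             + (np.pi / 3.0) * fddot * t ** 3 + phi0)
    # Doppler modulation from the constellation's heliocentric motion, Eq. (127)
    phase += (2.0 * np.pi * (f + fdot * t) * r_over_c
              * np.cos(beta) * np.cos(2.0 * np.pi * f_mod * t - lam))
    h_plus = h0 * 0.5 * (1.0 + np.cos(iota) ** 2) * np.cos(phase)
    h_cross = h0 * np.cos(iota) * np.sin(phase)
    return h_plus, h_cross
```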
### Black Hole Binary
**General phenomenological waveform.** For a black hole binary (BHB) system, one can describe its waveform in the time domain, or in the frequency domain with the help of the stationary phase approximation. Here, we consider the frequency-domain IMRPhenomD waveform, which assumes aligned spins, so only two spin parameters are needed [59; 60]. In this frame, a BHB system can be characterized by four intrinsic parameters: masses (\(m_{1},m_{2}\)) and dimensionless spins (\(\chi_{1},\chi_{2}\)); and seven extrinsic parameters: luminosity distance \(D_{L}\), inclination angle \(\iota\), polarization angle \(\psi\), coalescence time and phase (\(t_{c},\phi_{c}\)) and the ecliptic longitude and ecliptic latitude (\(\lambda,\beta\)) in the SSB frame. In the IMRPhenomD waveform model, the plus and cross polarizations will be
\[\begin{split}\tilde{h}_{+}(f)=&\frac{\mathcal{M}_{c} ^{5/6}}{\pi^{2/3}D_{L}}\frac{1+\cos^{2}\iota}{2}f^{-7/6}\exp(\mathrm{i}\Psi(f )),\\ \tilde{h}_{\times}(f)=&-\mathrm{i}\frac{M_{c}^{5/6} }{\pi^{2/3}D_{L}}\cos\iota\,f^{-7/6}\exp(\mathrm{i}\Psi(f)).\end{split} \tag{128}\]
More details about the phase \(\Psi(f)\) can be seen in Khan et al. [59].
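The amplitude structure of Eq. (128) can be sketched as follows, working in geometric units (\(G=c=1\), so the chirp mass and distance are both expressed in seconds) and treating the IMRPhenomD phase of Khan et al. [59] as a user-supplied black box (function names are ours):

```python
import numpy as np

def phenomd_polarisations(f, chirp_mass_sec, dl_sec, iota, phase_fn):
    """Leading-order frequency-domain polarisations of Eq. (128).

    phase_fn(f) must return the IMRPhenomD phase Psi(f); it is not reproduced here.
    """
    amp = chirp_mass_sec ** (5.0 / 6.0) / (np.pi ** (2.0 / 3.0) * dl_sec) * f ** (-7.0 / 6.0)
    psi = phase_fn(f)
    h_plus = amp * 0.5 * (1.0 + np.cos(iota) ** 2) * np.exp(1j * psi)
    h_cross = -1j * amp * np.cos(iota) * np.exp(1j * psi)
    return h_plus, h_cross
```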
Figure 11: Noise power spectra density (PSD) of TDI \(A\) channel for LISA, TaiJi, and TianQin (four year data).
**Eccentric waveform.** GW emission causes a circularization effect, which makes binaries almost non-eccentric by the time they reach the GBD frequency band. But when the binaries are in the SBD frequency band, the eccentricity should be taken into account. Many eccentric waveform models have been developed to date [61]. Here we use EccentricFD, which is a frequency-domain third post-Newtonian (3PN) waveform valid up to an initial eccentricity \(e_{0}\) of 0.4 [48; 62], and has been included in LALSuite [63]. This analytic model only contains the inspiral of a binary; however, it is sufficient for SBHBs, as they are likely to merge outside the sensitive frequency band of SBDs.
Note that BHB systems can be divided into MBHB and SBHB systems according to their masses and origins. The heavier BHB systems have lower frequency bands. Though their origins and characteristics are different, their waveform formulas are similar. When analysing the data, it is important to note the range of parameter values and the applicability of the waveform.
### Extreme Mass Ratio Inspirals
To expedite the generation of EMRI signals, we utilize the FastEMRIWaveform (FEW) package3. FEW is optimized to generate gravitational wave signals efficiently with GPU acceleration [64]. A reduced-order-model technique is employed, reducing the number of harmonic modes needed by approximately 40 times and thereby significantly cutting down the time needed to generate the waveform for each source [64]. For example, \(l\in[2,10]\), \(m\in[0,l]\) and \(n\in[-30,30]\), which totals 3843 modes, can be reduced to \(\sim 10^{2}\) modes. The fully relativistic FEW model is limited to eccentric orbits in the Schwarzschild spacetime.
Footnote 3: [https://github.com/BlackHolePerturbationToolkit/FastEMRIWaveforms](https://github.com/BlackHolePerturbationToolkit/FastEMRIWaveforms)
In specific, the time domain dimensionless strain of an EMRI source \(h(t)\) can be given by
\[h(t)=\frac{\mu}{d_{L}}\sum_{lmkn}A_{lmkn}(t)S_{lmkn}(t,\theta)e^{im\phi}e^{-i \Phi_{mkn}(t)}, \tag{129}\]
where \(t\) is the time of arrival of the gravitational wave at the solar system barycenter, \(\theta\) is the source-frame polar viewing angle, \(\phi\) is the source-frame azimuthal viewing angle, \(d_{L}\) is the luminosity distance, and \(\{l,m,k,n\}\) are the indices describing the frequency-domain harmonic mode decomposition. The indices \(l,m,k\), and \(n\) label the orbital angular momentum, azimuthal, polar, and radial modes, respectively. \(\Phi_{mkn}=m\Phi_{\psi}+k\Phi_{\theta}+n\Phi_{r}\) is the summation of decomposed phases for each given mode. The amplitude \(A_{lmkn}\) is related to the amplitude \(Z^{\infty}_{lmkn}\) of the Teukolsky mode amplitude far from the source. It is given by \(A_{lmkn}=-2Z^{\infty}_{lmkn}/\omega^{2}_{mkn}\), where \(\omega_{mkn}=m\Omega_{\varphi}+k\Omega_{\theta}+n\Omega_{r}\) is the frequency of the mode, and \(\Omega_{r,\theta,\phi}\) describe the frequencies of a Kerr geodesic orbit.
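In practice the amplitudes, harmonics and phases are supplied by a Teukolsky-based code such as FEW, whose internal API we do not reproduce here; given such precomputed mode data, the sum in Eq. (129) is a simple accumulation (a sketch with our own naming):

```python
import numpy as np

def emri_strain(t, mu_over_dl, phi, modes):
    """Assemble the EMRI strain of Eq. (129) from precomputed mode data.

    modes : iterable of (A_lmkn, S_lmkn, m, Phi_mkn) tuples, where the amplitude,
    the harmonic evaluated at the polar viewing angle, and the phase are arrays
    sampled on the same time grid t, and m is the azimuthal index.
    """
    h = np.zeros(np.shape(t), dtype=complex)
    for amp, swsh, m, phase in modes:
        h += amp * swsh * np.exp(1j * m * phi) * np.exp(-1j * phase)
    return mu_over_dl * h
```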
### Stochastic Gravitational Waves Background
In addition to the aforementioned primary distinguishable GW sources, there is another important type of GW source that could potentially be detected by SBDs, known as the SGWB. The SGWB is composed of a huge number of independent and unresolved GW sources [53]. These stochastic signals are effectively another source of noise in GW detectors. An SGWB can be written as a superposition of plane waves with frequencies \(f\) coming from different directions \(\hat{k}\) on the sky
\[h_{ij}(t,\mathbf{x})=\sum_{P}\int_{-\infty}^{+\infty}df\int_{S^{2}}d\Omega_{ \tilde{k}}\tilde{h}_{P}(f,\tilde{k})e^{P}_{ij}(\tilde{k})e^{\mathrm{i}2\pi f[ t-\hat{k}\cdot\mathbf{x}(t)/c]}, \tag{130}\]
where \(P=\{+,\times\}\) denotes polarization. As a stochastic source, one can treat the complex amplitude \(\tilde{h}_{P}(f,\hat{k})\) as some random variable with zero mean value. Supposing the SGWB is stationary, Gaussian, isotropic, and unpolarized, the ensemble average of the two random amplitudes \(\tilde{h}_{P}(f,k)\) can be defined as [65; 53]
\[\langle\tilde{h}_{P}(f,\tilde{k})\tilde{h}^{*}_{P^{\prime}}(f^{\prime},\hat{ k}^{\prime})\rangle=\delta(f-f^{\prime})\frac{\delta^{2}(\hat{k},\hat{k}^{\prime})}{4 \pi}\delta_{PP^{\prime}}\frac{1}{2}S_{h}(f). \tag{131}\]
The function \(S_{h}(f)\) is the one-sided PSD of SGWB.
Note here that \(\delta^{2}(\hat{k},\hat{k}^{\prime})\) is a Dirac delta over the two-sphere, and it implies that the SGWB is independent of \(\hat{k}\). However, it is expected that the SBDs will detect millions of WDBs in the Milky Way and the nearby universe [16], and the superposition of millions of unresolved WDBs will contribute to an SGWB [66] (often referred to as a foreground due to its strength). Furthermore, due to our off-centre location in the Milky Way, this SGWB is anisotropic. Of course, there may exist other anisotropic SGWBs as well [20]. In this case, the PSD of the anisotropic SGWB will depend on the frequency and direction as \(\mathcal{P}(f,\hat{k})\). If we assume that the directional dependence of the SGWB is frequency independent, the PSD can be factorized as [67]
\[\mathcal{P}(f,\hat{k})=H(f)\mathcal{P}_{h}(\hat{k}) \tag{132}\]
where the PSD of the SGWB is given by \(H(f)\), and \(\mathcal{P}_{h}(\hat{k})\) describes the angular distribution of the signal.
## VII Example data-set
In order to simulate the joint observation of GW signals, it is necessary to have precise knowledge of the relative positions of the three detectors. The relative positions of the guiding centers of the detectors can be determined by the initial phase parameters \(\alpha-\beta\) or \(\kappa_{0}\) and \(\alpha^{\prime\prime}\) as defined in Eqs. (17)-(19) and Eqs. (23)-(25). Additionally, the relative positions of the spacecraft within different detectors can be determined by the initial phases of the spacecraft (here, the initial phase parameters \(\lambda\) and \(\lambda^{\prime}\)). Once the detectors are launched, the relative phases and positions are fixed. However, when simulating data for testing purposes, the initial phase parameters can be set to arbitrary values.
MBHBs are primary sources for SBDs, and their full inspiral-merger-ringdown signal can be detected in the mHz band. In Fig. 12, we show an MBHB event detected by TianQin, LISA and TaiJi, together with the corresponding noise PSDs. From the figure, it can be seen that their longer arms give LISA and TaiJi an advantage in terms of the response intensity to signals, but at the same time also result in higher low-frequency noise levels.
The masses of SBHB systems are relatively light compared to MBHBs, which leads to these systems predominantly producing signals in higher frequency ranges. In Fig. 13, we can observe the performance of an SBHB signal across different detectors. Interestingly, when eccentricity is taken into account, the response waveform becomes considerably more intricate compared to the case where eccentricity is disregarded. This increased complexity in the waveform poses significant challenges for data processing and analysis. Furthermore, the figure demonstrates that the intersection point between the curve of the response signal from TianQin and the noise PSD is noticeably higher in frequency than in the case of LISA or TaiJi. This observation suggests that TianQin exhibits certain advantages in high-frequency detection.
Figure 12: TDI-A channel response to an MBHB signal and the corresponding noise PSDs for different detectors. The masses of the binary system are \((3.5\times 10^{6},2.1\times 10^{5})\)\(M_{\odot}\), the spins are \((0.2,0.1)\), the luminosity distance is \(10^{3}\) Mpc, the position is \((\lambda,\beta)=(0.4,1.2)\), and \(\iota=0.3,t_{c}=0\). The total observation time is three months. Here the IMRPhenomD waveform is applied. The initial phases of TianQin’s and LISA’s first spacecraft are set to 0 for this figure.
In the low-frequency region of Fig. 12, the post-response signal of TianQin shows oscillations. Likewise, in Fig. 13, the response signals from all three detectors demonstrate oscillatory behaviour. These oscillations arise as a result of the orbital motion of the detectors.
## VIII Summary
Around 2035, one may see more than one SBD operating simultaneously, with potential candidates including TianQin, LISA and TaiJi. Apart from the prospect of greater scientific return from joint observation compared with single detectors [36], there are also challenges in doing data analysis for joint observation. In order to facilitate the study of problems involved in joint data analysis, we have introduced GWSpace in this paper, a package that can simulate the joint detection data from three SBDs: TianQin, LISA and TaiJi.
GWSpace uses the SSB as the common coordinate system for all detectors. It can simulate data for GCB, BHB, EMRI, SGWB, and simple burst signals. It supports injecting time-domain waveform functions and obtaining observed data through time-domain responses. For frequency-domain waveforms, it supports the frequency-domain responses of the regular 22 mode, higher harmonic modes, and BHB waveforms with eccentricity. It includes the time-domain and frequency-domain responses of the first-generation TDI combinations, and the corresponding TDI noise. We have also given a few example data sets generated with the package. The package is open source and is freely available for download. To clearly define all the notations and to eliminate possible misunderstanding, we have presented a detailed description of the coordinate system, the detector orbits, the detector responses, the TDI combinations, the instrumental noise models, and the waveforms for each source in this paper.
As the first work in this direction, GWSpace can be further improved in many ways. For example, we have only implemented the first-generation TDI so far, while second-generation combinations are usually required, at least for LISA and TaiJi. Moreover, a more robust response is needed for sources with complex waveforms, such as BHB systems with eccentricity [11]. The package still relies on a very idealistic assumption about the noise: the noises from all satellites in a detector are identical, while in reality no two spacecraft can be exactly the same [15].
One can improve on the last point by implementing more sophisticated noise models for each detector, but the most precise noise models will have to come from the people responsible for each detector. We are hopeful that this may happen one day, and then GWSpace can serve as the starting point for a serious multi-mission data challenge for space-based GW detection.
Figure 13: TDI-A channel response to an SBHB signal and the corresponding noise PSDs for different detectors. The masses of the binary system are \((35.6,30.6)\)\(M_{\odot}\), the luminosity distance is 100 Mpc, the position is \((\lambda,\beta)=(4.7,-1.5)\), and \(\iota=0.3,t_{c}=0\). The total observation time is three months. Here we have used the EccentricFD waveform. The initial phases of TianQin’s and LISA’s first spacecraft are set to 0 for this figure.
###### Acknowledgements.
This work has been supported in part by the Guangdong Major Project of Basic and Applied Basic Research (Grant No. 2019B030302001), and the Natural Science Foundation of China (Grants No. 12173104 and No. 12261131504). Several figures were created using excalidraw4 (Figs. 1, 2, 3, 10).
Footnote 4: [https://excalidraw.com/](https://excalidraw.com/)
|
2309.15746 | Faster Relative Entropy Coding with Greedy Rejection Coding | Relative entropy coding (REC) algorithms encode a sample from a target
distribution $Q$ using a proposal distribution $P$ using as few bits as
possible. Unlike entropy coding, REC does not assume discrete distributions or
require quantisation. As such, it can be naturally integrated into
communication pipelines such as learnt compression and differentially private
federated learning. Unfortunately, despite their practical benefits, REC
algorithms have not seen widespread application, due to their prohibitively
slow runtimes or restrictive assumptions. In this paper, we make progress
towards addressing these issues. We introduce Greedy Rejection Coding (GRC),
which generalises the rejection based-algorithm of Harsha et al. (2007) to
arbitrary probability spaces and partitioning schemes. We first show that GRC
terminates almost surely and returns unbiased samples from $Q$, after which we
focus on two of its variants: GRCS and GRCD. We show that for continuous $Q$
and $P$ over $\mathbb{R}$ with unimodal density ratio $dQ/dP$, the expected
runtime of GRCS is upper bounded by $\beta D_{KL}[Q || P] + O(1)$ where $\beta
\approx 4.82$, and its expected codelength is optimal. This makes GRCS the
first REC algorithm with guaranteed optimal runtime for this class of
distributions, up to the multiplicative constant $\beta$. This significantly
improves upon the previous state-of-the-art method, A* coding (Flamich et al.,
2022). Under the same assumptions, we experimentally observe and conjecture
that the expected runtime and codelength of GRCD are upper bounded by $D_{KL}[Q
|| P] + O(1)$. Finally, we evaluate GRC in a variational autoencoder-based
compression pipeline on MNIST, and show that a modified ELBO and an
index-compression method can further improve compression efficiency. | Gergely Flamich, Stratis Markou, Jose Miguel Hernandez Lobato | 2023-09-27T16:01:05Z | http://arxiv.org/abs/2309.15746v1 | # Faster Relative Entropy Coding with Greedy Rejection Coding
###### Abstract
Relative entropy coding (REC) algorithms encode a sample from a target distribution \(Q\) using a proposal distribution \(P\) using as few bits as possible. Unlike entropy coding, REC does not assume discrete distributions or require quantisation. As such, it can be naturally integrated into communication pipelines such as learnt compression and differentially private federated learning. Unfortunately, despite their practical benefits, REC algorithms have not seen widespread application, due to their prohibitively slow runtimes or restrictive assumptions. In this paper, we make progress towards addressing these issues. We introduce Greedy Rejection Coding (GRC), which generalises the rejection based-algorithm of Harsha et al. (2007) to arbitrary probability spaces and partitioning schemes. We first show that GRC terminates almost surely and returns unbiased samples from \(Q\), after which we focus on two of its variants: GRCS and GRCD. We show that for continuous \(Q\) and \(P\) over \(\mathbb{R}\) with unimodal density ratio \(dQ/dP\), the expected runtime of GRCS is upper bounded by \(\beta D_{\mathrm{KL}}[Q\|P]+\mathcal{O}(1)\) where \(\beta\approx 4.82\), and its expected codelength is optimal. This makes GRCS the first REC algorithm with guaranteed optimal runtime for this class of distributions, up to the multiplicative constant \(\beta\). This significantly improves upon the previous state-of-the-art method, A* coding (Flamich et al., 2022). Under the same assumptions, we experimentally observe and conjecture that the expected runtime and codelength of GRCD are upper bounded by \(D_{\mathrm{KL}}[Q\|P]+\mathcal{O}(1)\). Finally, we evaluate GRC in a variational autoencoder-based compression pipeline on MNIST, and show that a modified ELBO and an index-compression method can further improve compression efficiency.
## 1 Introduction and motivation
Over the past decade, the development of excellent deep generative models (DGMs) such as variational autoencoders (VAEs; Vahdat and Kautz, 2020; Child, 2020), normalising flows (Kingma et al., 2016) and diffusion models (Ho et al., 2020) demonstrated great promise in leveraging machine learning (ML) for data compression. Many recent learnt compression approaches have significantly outperformed the best classical hand-crafted codecs across a range of domains including, for example, lossless and lossy compression of images and video (Zhang et al., 2020, 2022).
**Transform coding.** Most learnt compression algorithms are _transform coding_ methods: they first map a datum to a latent variable using a learnt transform, and encode it using entropy coding (Balle et al., 2020). Entropy coding assumes discrete variables while the latent variables in DGMs are typically continuous, so most transform coding methods quantize the latent variable prior to entropy coding. Unfortunately, quantization is a non-differentiable operation. Thus, state-of-the-art DGMs trained with gradient-based optimisation must resort to some continuous approximation to quantisation during training and switch to hard quantisation for compression. Previous works have
argued that using quantisation within learnt compression is restrictive or otherwise harmful, and that a method which naturally interfaces with continuous latent variables is needed (Havasi et al., 2018; Flamich et al., 2020; Theis and Agustsson, 2021; Flamich et al., 2022).
**Relative entropy coding.** In this paper, we study _relative entropy coding_ (REC; Havasi et al., 2018; Flamich et al., 2020), an alternative to quantization and entropy coding. A REC algorithm uses a proposal distribution \(P\), and a public source of randomness \(S\), to produce a random code which represents a _single sample_ from a target distribution \(Q\). Thus REC does not assume discrete distributions and interfaces naturally with continuous variables. Remarkably, REC has fundamental advantages over quantization in lossy compression with realism constraints (Theis and Agustsson, 2022). More generally, it finds application across a range of settings including, for example, differentially private compression for federated learning (Shah et al., 2022).
**Limitations of existing REC algorithms.** While algorithms for solving REC problems already exist, most of them suffer from limitations that render them impractical. These limitations fall into three categories: prohibitively long runtimes, overly restrictive assumptions, or additional coding overheads. In this work, we study and make progress towards addressing these limitations.
**General-purpose REC algorithms.** On the one hand, some REC algorithms make very mild assumptions and are therefore applicable in a wide range of REC problems (Harsha et al., 2007; Li and El Gamal, 2018). Unfortunately, these algorithms have prohibitively long runtimes. This is perhaps unsurprising in light of a result by Agustsson and Theis (2020), who showed that without additional assumptions on \(Q\) and \(P\), the worst-case expected runtime of any general-purpose REC algorithm scales as \(2^{D_{\mathrm{KL}}[Q\|P]}\), which is impractically slow. There are also REC algorithms which accept a desired runtime as a user-specified parameter, at the expense of introducing bias in their samples (Havasi et al., 2018; Theis and Yosri, 2022). Unfortunately, in order to reduce this bias to acceptable levels, these algorithms require runtimes of an order of \(2^{D_{\mathrm{KL}}[Q\|P]}\), and are therefore also impractical.
**Faster algorithms with additional assumptions.** On the other hand, there exist algorithms which make additional assumptions in order to achieve faster runtimes. For example, dithered quantisation (Ziv, 1985; Agustsson and Theis, 2020) achieves an expected runtime of \(D_{\mathrm{KL}}[Q\|P]\), which is optimal since any REC algorithm has an expected runtime of at least \(D_{\mathrm{KL}}[Q\|P]\). However, it requires both \(Q\) and \(P\) to be uniform distributions, which limits its applicability. Recently, Flamich et al. (2022) introduced \(\mathrm{A}^{*}\) coding, an algorithm based on \(\mathrm{A}^{*}\) sampling (Maddison et al., 2014) which, under assumptions satisfied in practice, achieves an expected runtime of \(D_{\infty}[Q\|P]\). Unfortunately, this runtime is sub-optimal and is not always practically fast, since \(D_{\infty}[Q\|P]\) can be arbitrarily large for fixed \(D_{\mathrm{KL}}[Q\|P]\). Further, as discussed in Flamich et al. (2022) this runtime also comes at a cost of an additional, substantial, overhead in codelength, which limits the applicability of \(\mathrm{A}^{*}\) coding.
**Our contributions.** In this work, we address some of these limitations. First, we propose _greedy rejection coding_ (GRC), a REC algorithm based on rejection sampling. Then, inspired by A* coding (Flamich et al., 2022), we develop GRCS and GRCD, two variants of GRC that partition the sample space to dramatically speed up termination. Figure 1 illustrates the relations between GRC and its variants with existing algorithms. We analyze the correctness and the runtime of these algorithms and, in particular, prove that GRCS has an optimal codelength and order-optimal runtime on a wide class of one-dimensional problems. In more detail, our contributions are:
* We introduce Greedy Rejection Coding (GRC), which generalises the algorithm of Harsha et al. (2007) to arbitrary probability spaces and partitioning schemes. We prove that under mild conditions, GRC terminates almost surely and returns an unbiased sample from \(Q\).
Figure 1: An illustration of the relations between the variants of GRC, introduced in this work, and the variants of \(\mathrm{A}^{*}\) coding. Algorithms in purple are introduced in this work. The algorithms of Harsha et al. (2007) and Li and El Gamal (2018) are equivalent to GRCG and Global \(\mathrm{A}^{*}\) coding respectively.
* We introduce GRCS and GRCD, two variants of GRC for continuous distributions over \(\mathbb{R}\), which adaptively partition the sample space to dramatically improve their convergence, inspired by AS\({}^{*}\) and AD\({}^{*}\) coding (Flamich et al., 2022), respectively.
* We prove that whenever \(dQ/dP\) is unimodal, the expected runtime and codelength of GRCS are \(\mathcal{O}(D_{\mathrm{KL}}[Q\|P])\). This significantly improves upon the \(\mathcal{O}(D_{\infty}[Q\|P])\) runtime of AS\({}^{*}\) coding, which is always larger than that of GRCS. This runtime is order-optimal, while making far milder assumptions than, for example, dithered quantization.
* We provide clear experimental evidence for and conjecture that whenever \(dQ/dP\) is unimodal, the expected runtime and codelength of GRCD are \(D_{\mathrm{KL}}[Q\|P]\). This also significantly improves over the \(D_{\infty}[Q\|P]\) empirically observed runtime of AD\({}^{*}\) coding.
* We implement a compression pipeline with VAEs, using GRC to compress MNIST images. We propose a modified ELBO objective and show that this, together with a practical method for compressing the indices returned by GRC further improve compression efficiency.
## 2 Background and related work
**Relative entropy coding.** First, we define REC algorithms. Definition 1 is stricter than the one given by Flamich et al. (2022), as it has a stronger condition on the expected codelength of the algorithm. In this paper, all logarithms are base 2, and all divergences are measured in bits.
**Definition 1** (REC algorithm).: _Let \((\mathcal{X},\Sigma)\) be a measurable space, let \(\mathcal{R}\) be a set of pairs of distributions \((Q,P)\) over \((\mathcal{X},\Sigma)\) such that \(D_{\mathrm{KL}}[Q\|P]<\infty\) and \(\mathcal{P}\) be the set of all distributions \(P\) such that \((Q,P)\in\mathcal{R}\) for some distribution \(Q\). Let \(S=(S_{1},S_{2},\dots)\) be a publicly available sequence of independent and fair coin tosses, with corresponding probability space \((\mathcal{S},\mathcal{F},\mathbb{P})\) and let \(\mathcal{C}=\{0,1\}^{*}\) be the set of all finite binary sequences. A REC algorithm is a pair of functions \(\mathsf{enc}:\mathcal{R}\times\mathcal{S}\to\mathcal{C}\) and \(\mathsf{dec}:\mathcal{C}\times\mathcal{P}\times\mathcal{S}\to\mathcal{X}\), such that for each \((Q,P)\in\mathcal{R}\), the outputs of the encoder \(C=\mathsf{enc}(Q,P,S)\) and the decoder \(X=\mathsf{dec}(P,C,S)\) satisfy_
\[X\sim Q\quad\text{and}\quad\mathbb{E}_{S}[|C|]=D_{\mathrm{KL}}[Q\|P]+\mathcal{ O}(\log D_{\mathrm{KL}}[Q\|P]), \tag{1}\]
_where \(|C|\) is the length of the string \(C\). We call \(\mathsf{enc}\) the encoder and \(\mathsf{dec}\) the decoder._
In practice, \(S\) is implemented with a pseudo-random number generator (PRNG) with a public seed. In the remainder of this section, we discuss relevant REC algorithms, building up to GRC in section 3.
**Existing REC algorithms.** While there are many REC algorithms already, they suffer from various issues limiting their applicability in practice. Our proposed algorithm, Greedy Rejection Coding (GRC), is based on and generalises the rejection-based algorithm of Harsha et al. (2007), by drawing inspiration from A\({}^{*}\) coding (Flamich et al., 2022). Specifically, A\({}^{*}\) coding can be viewed as a generalisation of an algorithm due to Li & El Gamal (2018). The former generalises the latter by introducing a partitioning scheme to speed up termination. In an analogous fashion, GRC generalises Harsha et al. (2007) by also introducing partitioning schemes, to speed up termination and achieve optimal runtimes.
**REC with rejection sampling.** Harsha et al. (2007) introduced a REC algorithm based on rejection sampling, which we generalise and extend in this work. While this algorithm was originally presented for discrete \(Q\) and \(P\), we will show that it can be generalised to arbitrary probability spaces. In this section, we present this generalised version and in section 3 we further extend it to arbitrary partitioning schemes (see definition 5). The generalisation to arbitrary probability spaces relies on the Radon-Nikodym derivative \(dQ/dP\), which is guaranteed to exist since \(Q\ll P\) by definition 1. When \(Q\) and \(P\) both have densities, \(dQ/dP\) coincides with the density ratio.
Figure 2: Example run of Harsha et al. (2007), for a pair of continuous \(Q\) and \(P\) over \([0,1]\). The green and red regions correspond to acceptance and rejection regions at each step. Here the algorithm rejects the first two samples and accepts the third one, terminating at the third step.

At each step, the algorithm draws a sample from \(P\) and performs an accept-reject step, as illustrated in fig. 2. If it rejects the sample, it rules out the part of \(Q\) corresponding to the acceptance region, adjusts the proposal to account for the removed mass, and repeats until acceptance. More formally, define \(T_{0}\) to be the zero-measure on \((\mathcal{X},\Sigma)\), and recursively for \(d\in\mathbb{N}\), set:
\[T_{d+1}(S)\stackrel{\text{def}}{=}T_{d}(S)+A_{d+1}(S),\qquad A_{d+1}(S)\stackrel{\text{def}}{=}\int_{S}\alpha_{d+1}(x)\,dP(x),\tag{2}\]
\[t_{d}(x)\stackrel{\text{def}}{=}\frac{dT_{d}}{dP}(x),\qquad\alpha_{d+1}(x)\stackrel{\text{def}}{=}\min\left\{\frac{dQ}{dP}(x)-t_{d}(x),\,1-T_{d}(\mathcal{X})\right\},\tag{3}\]
\[X_{d}\sim P,\quad U_{d}\sim\text{Uniform}(0,1),\qquad\beta_{d+1}(x)\stackrel{\text{def}}{=}\frac{\alpha_{d+1}(x)}{1-T_{d}(\mathcal{X})},\tag{4}\]
for all \(x\in\mathcal{X},S\in\Sigma\). The algorithm terminates at the first occurrence of \(U_{d}\leq\beta_{d+1}(X_{d})\). The \(T_{d}\) measure corresponds to the mass that has been ruled off up to and including the \(d^{\text{th}}\) rejection: \(T_{1}(\mathcal{X}),T_{2}(\mathcal{X})\) and \(T_{3}(\mathcal{X})\) are the sums of the blue and green masses in the left, middle and right plots of fig. 2 respectively. The \(A_{d}\) measure corresponds to the acceptance mass at the \(d^{\text{th}}\) step: \(A_{1}(\mathcal{X}),A_{2}(\mathcal{X})\) and \(A_{3}(\mathcal{X})\) are the masses of the green regions in the left, middle and right plots of fig. 2 respectively. Lastly, \(t_{d},\alpha_{d}\) are the Radon-Nikodym derivatives i.e., roughly speaking, the densities, of \(T_{d},A_{d}\) with respect to \(P\), and \(\beta_{d+1}(X_{d})\) is the probability of accepting the sample \(X_{d}\). Here, the encoder enc amounts to keeping count of the number of rejections that occur up to the first acceptance, setting \(C\) equal to this count and returning \(X\) and \(C\). The decoder dec amounts to drawing \(C+1\) samples from \(P\), using the same seed as the encoder, and returning the last of these samples. While this algorithm is elegantly simple and achieves optimal codelengths, Flamich & Theis (2023) showed its expected runtime is \(2^{D_{\infty}[Q\|P]}\), where \(D_{\infty}[Q\|P]=\sup_{x\in\mathcal{X}}\log(dQ/dP)(x)\) is the Renyi \(\infty\)-divergence. Unfortunately, this is prohibitively slow in most practical cases.
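To make eqs. (2) to (4) concrete, the following is a minimal sketch of this sampler for the special case of discrete \(Q\) and \(P\) given as probability vectors; the function name and the NumPy-based implementation are ours and are not part of the original presentation.

```python
import numpy as np

def harsha_rejection_sampler(q, p, rng):
    """Sketch of the sampler in eqs. (2)-(4) for discrete q, p given as NumPy
    probability vectors over a finite alphabet (assumes p > 0, so dQ/dP = q/p)."""
    t = np.zeros_like(q)   # t_d = dT_d/dP, density of the ruled-out mass w.r.t. p
    T_total = 0.0          # T_d(X), total mass ruled out so far
    d = 0                  # number of rejections so far
    while True:
        alpha = np.minimum(q / p - t, 1.0 - T_total)   # alpha_{d+1}
        beta = alpha / (1.0 - T_total)                 # acceptance probability beta_{d+1}
        x = rng.choice(len(p), p=p)                    # X_d ~ P
        u = rng.uniform()                              # U_d ~ Uniform(0, 1)
        if u <= beta[x]:
            return x, d                                # accepted sample and rejection count
        t = t + alpha                                  # t_{d+1} = t_d + alpha_{d+1}
        T_total += float(np.sum(alpha * p))            # T_{d+1}(X) = T_d(X) + A_{d+1}(X)
        d += 1

# Example: harsha_rejection_sampler(np.array([0.7, 0.2, 0.1]),
#                                   np.full(3, 1/3), np.random.default_rng(0))
```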
**REC with Poisson & Gumbel processes.** Li & El Gamal (2018) introduced a REC algorithm based on Poisson processes, referred to as Poisson Functional Representation (PFR). PFR assumes that \(dQ/dP\) is bounded above, and relies on the fact that (Kingman, 1992), if \(T_{n}\) are the ordered arrival times of a homogeneous Poisson process on \(\mathbb{R}^{+}\) and \(X_{n}\sim P\), then
\[N\stackrel{{\text{def}}}{{=}}\operatorname*{arg\,min}_{n\in \mathbb{N}}\left\{T_{n}\frac{dP}{dQ}(X_{n})\right\}\implies X_{N}\sim Q, \tag{5}\]
Therefore, PFR casts the REC problem into an optimisation, or search, problem, which can be solved in finite time almost surely. The PFR encoder draws pairs of samples \(T_{n},X_{n}\), until it solves the search problem in eq. (5), and returns \(X=X_{N},C=N-1\). The decoder can recover \(X_{N}\) from \((P,C,S)\), by drawing \(N\) samples from \(P\), using the same random seed, and keeping the last sample. While, like the algorithm of Harsha et al. (2007), PFR is elegantly simple and achieves optimal codelengths, its expected runtime is also \(2^{D_{\infty}[Q\|P]}\)(Maddison, 2016).
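As an illustration, below is a small sketch of the PFR search in eq. (5) for discrete \(Q\) and \(P\); the stopping rule uses the bound \(M=\max dQ/dP\) (assumed finite, as above) so that the search can terminate once no later arrival can improve the objective. The function name and the discrete setting are assumptions of this sketch.

```python
import numpy as np

def pfr_encoder(q, p, rng):
    """Sketch of the search in eq. (5) for discrete q, p (probability vectors, p > 0)."""
    M = np.max(q / p)                          # upper bound on dQ/dP
    t = 0.0                                    # current arrival time T_n of the Poisson process
    best_val, best_x, best_n = np.inf, None, 0
    n = 0
    while True:
        n += 1
        t += rng.exponential(1.0)              # inter-arrival times of a unit-rate Poisson process
        if t / M >= best_val:                  # no arrival from now on can improve the objective
            return best_x, best_n - 1          # X_N and the code C = N - 1
        x = rng.choice(len(p), p=p)            # X_n ~ P
        val = t * p[x] / q[x]                  # T_n * (dP/dQ)(X_n)
        if val < best_val:
            best_val, best_x, best_n = val, x, n
```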
**Fast REC requires additional assumptions.** These algorithms' slow runtimes are perhaps unsurprising considering Agustsson & Theis's result, which shows under the computational hardness assumption \(\mathrm{RP}\neq\mathrm{NP}\) that without making additional assumptions on \(Q\) and \(P\), there is no REC algorithm whose expected runtime scales _polynomially_ in \(D_{\mathrm{KL}}[Q\|P]\). Therefore, in order to achieve faster runtimes, a REC algorithm must make additional assumptions on \(Q\) and \(P\).
**A\({}^{*}\) coding.** To this end, Flamich et al. (2022) proposed: (1) a set of appropriate assumptions which are satisfied by many deep latent variable models in practice and (2) a REC algorithm, referred to as A\({}^{*}\) coding, which leverages these assumptions to achieve a substantial speed-up over existing methods. In particular, A\({}^{*}\) coding generalizes PFR by introducing a partitioning scheme, which splits the sample space \(\mathcal{X}\) in nested partitioning subsets, to speed up the solution of eq. (5). Drawing inspiration from this, our proposed algorithm generalises eqs. (2) to (4) in an analogous manner (see fig. 1), introducing partitioning processes (definition 2) to speed up the algorithm's termination.
**Definition 2** (Partitioning process).: _A partitioning process is a process \(Z:\mathbb{N}^{+}\to\Sigma\) such that_
\[Z_{1}=\mathcal{X},\ \ Z_{2n}\cap Z_{2n+1}=\emptyset,\ \ Z_{2n}\cup Z_{2n+1}=Z_{n}. \tag{6}\]
In other words, a partitioning process \(Z\) is a process indexed by the heap indices of an infinite binary tree, where the root node is \(\mathcal{X}\) and any two children nodes \(Z_{2n},Z_{2n+1}\) partition their parent node \(Z_{n}\). In section 3 we present specific choices of partitioning processes which dramatically speed up GRC.
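To make the heap-index bookkeeping of eq. (6) concrete, the sketch below materialises the first few levels of a partitioning process over \([0,1]\). The midpoint split used here is just one illustrative choice; the helper names are ours.

```python
from typing import Dict, Tuple

Interval = Tuple[float, float]

def midpoint_children(z: Interval) -> Tuple[Interval, Interval]:
    """Split a parent interval into two children at its midpoint (one possible choice)."""
    left, right = z
    mid = (left + right) / 2.0
    return (left, mid), (mid, right)

def expand_partitioning_process(depth: int) -> Dict[int, Interval]:
    """Materialise the first `depth` levels of a partitioning process over X = [0, 1],
    indexed by heap indices, so that Z_1 = X and Z_{2n}, Z_{2n+1} partition Z_n (eq. (6))."""
    Z = {1: (0.0, 1.0)}
    for n in range(1, 2 ** (depth - 1)):
        Z[2 * n], Z[2 * n + 1] = midpoint_children(Z[n])
    return Z
```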
**Greedy Poisson Rejection Sampling.** Contemporary to our work, Flamich (2023) introduces a rejection sampler based on Poisson processes, which can be used as a REC algorithm referred to as Greedy Poisson Rejection Sampling (GPRS). Similar to GRC and A* coding, GPRS partitions the sample space to speed up the convergence to the accepted sample. Furthermore, a variant of GPRS also achieves order-optimal runtime for one-dimensional distribution pairs with a unimodal density ratio. However, the construction of their method is significantly different from ours, relying entirely on Poisson processes. Moreover, GPRS requires numerically solving a certain ODE, while our method does not, making it potentially more favourable in practice. We believe establishing a closer connection between GPRS and GRC is a promising future research direction.
## 3 Greedy Rejection Coding
**Generalising Harsha et al. (2007).** In this section we introduce Greedy Rejection Coding (GRC; definition 5), which generalises the algorithm of Harsha et al. (2007) in two ways. First, GRC can be used with distributions over arbitrary probability spaces. Therefore, it is applicable to arbitrary REC problems, including REC with continuous distributions. Second, similar to A* coding, GRC can be combined with arbitrary partitioning processes, allowing it to achieve optimal runtimes given additional assumptions on the REC problem, and an appropriate choice of partitioning process.
### Algorithm definition
**Overview.** Before specifying GRC, we summarise its operation with an accompanying illustration. On a high level, GRC interleaves accept-reject steps with partitioning steps, where the latter are determined by a partitioning process. Specifically, consider the example in figs. 3(d) to 3(f), where \(Q\) and \(P\) are distributions over \(\mathcal{X}=[0,1]\), and \(Z\) is the partitioning process defined by
\[Z_{n}=[L,R]\implies Z_{2n}=[L,M),Z_{2n+1}=[M,R],\text{ where }M=(L+R)/2. \tag{7}\]
In each step \(d=1,2,\dots\), GRC maintains a heap index \(I_{d}\) of an infinite binary tree, and an active subset \(S_{d}=Z_{I_{d}}\subseteq\mathcal{X}\) of the sample space, initialised as \(I_{0}=1\) and \(S_{1}=Z_{1}=\mathcal{X}\) respectively.
**Accept-reject step.** In each step, GRC draws a sample from the restriction of \(P\) to \(S_{d}\), namely \(P|_{S_{d}}/P(S_{d})\), and either accepts or rejects it. If the sample is accepted, the algorithm terminates. Otherwise, GRC performs a partitioning step as shown in fig. 3(d).
Figure 3: Illustrations of the two variants of GRC considered in this work. (a) to (c) show GRC with the _sample-splitting_ partitioning process (GRCS). (d) to (f) show GRC with the dyadic partition process (GRCD). GRC interleaves accept-reject steps with partitioning steps. In the former, it draws a sample and either accepts or rejects it. In the latter, it partitions the sample space and randomly chooses one of the partitions, ruling out large parts of the sample space and speeding up termination.

**Partitioning step.** In each partitioning step, GRC partitions \(S_{d}=Z_{I_{d}}\) into \(Z_{2I_{d}}\) and \(Z_{2I_{d}+1}\), as specified by the partitioning process \(Z\). It then samples a Bernoulli random variable \(b_{d}\), whose outcomes have probabilities proportional to the mass of \(Q\) which has not been accounted for, up to and including step \(d\), within the partitions \(Z_{2I_{d}}\) and \(Z_{2I_{d}+1}\) respectively. In fig. 3(e), these two masses correspond to the purple and orange areas, and the algorithm has sampled \(b_{d}=1\). Last, GRC updates the heap index to \(I_{d+1}=2I_{d}+b_{d}\) and the active subset to \(S_{d+1}=Z_{I_{d+1}}\). GRC proceeds by interleaving accept-reject and partitioning steps until an acceptance occurs.
**Algorithm specification.** The aforementioned algorithm can be formalised in terms of probability measures over arbitrary spaces and arbitrary partitioning processes. Above, algorithms 1 and 2 describe Harsha et al.'s rejection sampler and our generalisation of it, respectively. For the sake of keeping the exposition lightweight, we defer the formal measure-theoretic definition of GRC to the appendix (see definition 5 in appendix A.1), and refer to algorithm 2 as a working definition here.
**Comparison to Harsha et al.** While algorithms 1 and 2 are similar, they differ in two notable ways. First, rather than drawing a sample from \(P\), GRC draws a sample from the restriction of \(P\) to an active subset \(S_{d}=Z_{d}\subseteq\mathcal{X}\), namely \(P|_{S_{d}}/P(S_{d})\). Second, GRC updates its active subset \(S_{d}=Z_{d}\) at each step, setting it to one of the children of \(Z_{d}\), namely either \(Z_{2d}\) or \(Z_{2d+1}\), by drawing \(b_{d}\sim\text{Bernoulli}\), and setting \(Z_{2d+b_{d}}\). This partitioning mechanism, which does not appear in algorithm 1, yields a different variant of GRC for each choice of partitioning process \(Z\). In fact, as shown in Proposition 1 below, algorithm 1 is a special case of GRC with \(S_{d}=\mathcal{X}\) for all \(d\). See appendix A.2 for the proof.
**Proposition 1** (Harsha et al. (2007) is a special case of GRC).: _Let \(Z\) be the global partitioning process over \(\Sigma\), defined as_
\[Z_{1}=\mathcal{X},\ \ Z_{2n}=Z_{n},\ \ Z_{2n+1}=\emptyset,\ \ \text{for all}\ \ n=1,2,\ldots. \tag{8}\]
_Harsha et al. (2007) is equivalent to GRC using this \(Z\) and setting \(C=D^{*}\) instead of \(C=I_{D^{*}}\). We refer to this algorithm as Global GRC, or **GRCG** for short._
**Partitioning processes and additional assumptions.** While Proposition 1 shows that Harsha et al.'s algorithm is equivalent to GRC with a particular choice of \(Z\), a range of other choices of \(Z\) is possible, and this is where we can leverage additional structure. In particular, we show that when \(Q\) and \(P\) are continuous distributions over \(\mathbb{R}\) with a unimodal density ratio \(dQ/dP\), we can dramatically speed up GRC with an appropriate choice of \(Z\). Specifically, we will consider the sample-splitting and dyadic partitioning processes from Flamich et al. (2022), given in Definitions 3 and 4.
**Definition 3** (Sample-splitting partitioning process).: _Let \(\mathcal{X}=\mathbb{R}\cup\{-\infty,\infty\}\) and \(P\) a continuous distribution. The sample-splitting partitioning process is defined as_
\[Z_{n}=[a,b],a,b\in\mathcal{X}\ \Longrightarrow\ Z_{2n}=[a,X_{n}],\ \ Z_{2n+1}=[X_{n},b],\ \text{where}\ X_{n}\sim P|_{Z_{n}}/P(Z_{n}).\]
In other words, in the sample-splitting process, \(Z_{n}\) are intervals of \(\mathbb{R}\), each of which is partitioned into sub-intervals \(Z_{2n}\) and \(Z_{2n+1}\) by splitting at the sample \(X_{n}\) drawn from \(P|_{Z_{n}}/P(Z_{n})\). We refer to GRC with the sample-splitting partitioning process as **GRCS**.
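A sketch of a single sample-splitting step is given below, assuming the proposal is available as a SciPy frozen distribution exposing `cdf` and `ppf` methods; this interface choice is ours.

```python
import numpy as np
from scipy import stats

def sample_splitting_children(a, b, proposal, rng):
    """One step of the sample-splitting process (Definition 3): draw
    X ~ P restricted to [a, b] by inverse-CDF sampling, then split [a, b] at X."""
    u = rng.uniform(proposal.cdf(a), proposal.cdf(b))   # uniform on [F(a), F(b)]
    x = proposal.ppf(u)                                 # X ~ P|_{[a,b]} / P([a,b])
    return (a, x), (x, b)

# Example: split the whole real line under a standard Gaussian proposal.
rng = np.random.default_rng(0)
children = sample_splitting_children(-np.inf, np.inf, stats.norm(), rng)
```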
**Definition 4** (Dyadic partitioning process).: _Let \(\mathcal{X}=\mathbb{R}\cup\{-\infty,\infty\}\) and \(P\) a continuous distribution. The dyadic partitioning process is defined as_
\[Z_{n}=[a,b],a,b\in\mathcal{X}\ \Longrightarrow\ Z_{2n}=[a,c],\ \ Z_{2n+1}=[c,b],\ \text{such that}\ P(Z_{2n})=P(Z_{2n+1}).\]
Similar to the sample-splitting process, in the dyadic process \(Z_{n}\) are intervals of \(\mathbb{R}\). However, in the dyadic process, \(Z_{n}\) is partitioned into sub-intervals \(Z_{2n}\) and \(Z_{2n+1}\) such that \(P(Z_{2n})=P(Z_{2n+1})\). We refer to GRC with the dyadic partitioning process as **GRCD**.
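Similarly, a single dyadic split can be computed from the proposal's CDF and quantile function; the sketch below reuses the SciPy-style `proposal` interface assumed above.

```python
def dyadic_children(a, b, proposal):
    """One step of the dyadic process (Definition 4): split [a, b] at the point c
    with P([a, c]) = P([c, b]), i.e. the conditional median of the proposal on [a, b]."""
    c = proposal.ppf(0.5 * (proposal.cdf(a) + proposal.cdf(b)))
    return (a, c), (c, b)
```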
**GRC with a tunable codelength.** Flamich et al. presented a depth-limited variant of AD\({}^{*}\) coding, DAD\({}^{*}\) coding, in which the codelength \(|C|\) can be provided as a tunable input to the algorithm. Fixed-codelength REC algorithms are typically approximate because they introduce bias in their samples, but are nevertheless useful in certain contexts, such as for coding a group of random variables with the same fixed codelength. GRCD can be similarly modified to accept \(|C|\) as an input, by limiting the maximum steps of the algorithm by \(D_{\max}\) (see algorithm 2). Setting \(D_{\max}=\infty\) in algorithm 2 corresponds to exact GRC, while setting \(D_{\max}<\infty\) corresponds to depth-limited GRC.
### Theoretical results
**Correctness of GRC.** In theorem 1 we show that GRC terminates almost surely and produces unbiased samples from \(Q\), given interchangeable mild assumptions on \(Q,P\) and \(Z\). Assumption 1 is the most general, since it holds for any \(Q\) and \(P\) over arbitrary probability spaces, and can be used to apply GRC to arbitrary coding settings.
**Assumption 1**.: _GRC has a finite ratio mode if \(dQ/dP(x)<M\) for all \(x\in\mathcal{X}\), for some \(M\in\mathbb{R}\)._
Assumption 1 holds for GRCG, GRCS and GRCD, so long as \(dQ/dP\) is bounded. While this assumption is very general, in some cases we may want to consider \(Q,P\) with unbounded \(dQ/dP\). To this end, we show that it can be replaced by alternative assumptions, such as assumptions 2 and 3.
**Assumption 2**.: _GRC is single-branch if for each \(d\), \(b_{d}=0\) or \(b_{d}=1\) almost surely._
GRC with the global partitioning process (eq. 8) satisfies assumption 2. In addition, if \(Q\) and \(P\) are distributions over \(\mathbb{R}\) and \(dQ/dP\) is unimodal, GRCS also satisfies assumption 2.
**Assumption 3**.: _Suppose \(\mathcal{X}\subseteq\mathbb{R}^{N}\). GRC has nicely shrinking \(Z\) if, almost surely, the following holds. For each \(x\in\mathcal{X}\) which is in a nested sequence of partitions \(x\in Z_{1}\supseteq\cdots\supseteq Z_{k_{d}}\supseteq\ldots\) with \(P(Z_{k_{d}})\to 0\), there exist \(\gamma,r_{1},r_{2},...\in\mathbb{R}_{>0}\) such that_
\[r_{d}\to 0,\ Z_{k_{d}}\subseteq B_{r_{d}}(x)\text{ and }P(Z_{k_{d}}) \geq\gamma P(B_{r_{d}}(x)). \tag{9}\]
If \(Q\) and \(P\) are distributions over \(\mathbb{R}\), GRCD satisfies assumption 3. Theorem 1 shows that if any of the above assumptions hold, then GRC terminates almost surely and yields unbiased samples from \(Q\). We provide the proof of the theorem in appendix B.
**Theorem 1** (Correctness of GRC).: _Suppose \(Q,P\) and \(Z\) satisfy any one of assumptions 1 to 3. Then, algorithm 2 terminates with probability \(1\), and its returned sample \(X\) has law \(X\sim Q\)._
**Expected runtime and codelength of GRCS.** Now we turn to the expected runtime and codelength of GRCS. Theorem 2 shows that the expected codelength of GRCS is optimal, while Theorem 3 establishes that its runtime is order-optimal. We present the proofs of the theorems in appendix C.
**Theorem 2** (GRCS codelength).: _Let \(Q\) and \(P\) be continuous distributions over \(\mathbb{R}\) such that \(Q\ll P\) and with unimodal \(dQ/dP\). Let \(Z\) be the sample-splitting process, and \(X\) its returned sample. Then,_
\[\mathbb{H}[X|Z]\leq D_{\mathrm{KL}}[Q\|P]+2\log{(D_{\mathrm{KL}}[Q\|P]+1)}+ \mathcal{O}(1). \tag{10}\]
**Theorem 3** (GRCS runtime).: _Let \(Q\) and \(P\) be continuous distributions over \(\mathbb{R}\) such that \(Q\ll P\) and with unimodal \(dQ/dP\). Let \(Z\) be the sample-splitting process and \(D\) the number of steps the algorithm takes before accepting a sample. Then, for \(\beta=2/\log(4/3)\approx 4.82\) we have_
\[\mathbb{E}[D]\leq\beta\ D_{\mathrm{KL}}[Q\|P]+\mathcal{O}(1) \tag{11}\]
**Improving the codelength of GRCD.** In Theorem 2 we state the bound for the REC setting, where we make no further assumptions on \(Q\) and \(P\). However, we can improve the bound if we consider the _reverse channel coding_ (RCC) setting (Theis & Yosri, 2022). In RCC, we have a pair of correlated random variables \(X,Y\sim P_{X,Y}\). During one round of communication, the encoder receives \(Y\sim P_{Y}\) and needs to encode a sample \(X\sim P_{X|Y}\) from the posterior using \(P_{X}\) as the proposal distribution. Thus, RCC can be thought of as the average-case version of REC, where the encoder sets \(Q\gets P_{X|Y}\) and \(P\gets P_{X}\). In this case, when the conditions of Theorem 2 hold for every \((P_{X|Y},P_{X})\) pair, in appendix C we show that the bound can be improved to \(\mathbb{I}[X;Y]+2\log(\mathbb{I}[X;Y]+1)+\mathcal{O}(1)\), where \(\mathbb{I}[X;Y]=\mathbb{E}_{Y\sim P_{Y}}\left[D_{\mathrm{KL}}[P_{X|Y}\|P_{X}]\right]\) is the mutual information between \(X\) and \(Y\).
**GRCS runtime is order-optimal.** Theorem 3 substantially improves upon the runtime of A\({}^{*}\) coding, which is the current fastest REC algorithm with similar assumptions. In particular, AS\({}^{*}\) coding has \(\mathcal{O}(D_{\infty}[Q\|P])\) expected runtime, which can be arbitrarily larger than that of GRCS. Remarkably, the runtime of GRCS is optimal up to the multiplicative factor \(\beta\). This term arises from the fact that the sample-splitting process may occasionally rule out a small part of the sample space at a given step.
## 4 Experiments
We conducted two sets of experiments: one on controlled synthetic REC problems to check the predictions of our theorems numerically, and another using VAEs trained on MNIST to study how the performance of GRC-based compression pipelines can be improved in practice. We conducted all our experiments under fair and reproducible conditions and make our source code public.2
Footnote 2: Source code to be published with the camera-ready version: [https://github.com/source-code](https://github.com/source-code).
### Synthetic Experiments
**Synthetic REC experiments.** First, we compare GRCS and GRCD, against AS\({}^{*}\) and AD\({}^{*}\) coding, on a range of synthetic REC problems. We systematically vary distribution parameters to adjust the difficulty of the REC problems. Figure 4 shows the results of our synthetic experiments.
**Partitioning processes improve the runtime of GRC.** First, we observe that, assuming that \(dQ/dP\) is unimodal, introducing an appropriate partitioning process such as the sample-splitting or the dyadic process, dramatically speeds up GRC. In particular, fig. 4 shows that increasing the infinity divergence \(D_{\infty}[Q\|P]\) (for a fixed \(D_{\mathrm{KL}}[Q\|P]\)) does not affect the runtimes of GRCS and GRCD, which remain constant and small. This is a remarkable speed-up over the exponential expected runtime of GRCG.
**GRC is faster than A\({}^{*}\) coding.** Further, we observe that GRC significantly improves upon the runtime of A* coding, which is the fastest previously known algorithm with similar assumptions. In particular, Figure 4 shows that increasing the infinity divergence \(D_{\infty}[Q\|P]\), while keeping the KL divergence \(D_{\mathrm{KL}}[Q\|P]\) fixed, increases the runtime of both AS\({}^{*}\) and AD\({}^{*}\) coding, while the runtimes of GRCS and GRCD remain constant. More generally, for a fixed KL divergence, the infinity divergence can be arbitrarily large or even infinite. In such cases, A\({}^{*}\) coding would be impractically slow or even inapplicable, while GRCS and GRCD remain practically fast.
**GRCD improves on GRCS.** In our experiments, we observe that the performance of GRCD (green in fig. 4) matches that of GRCS (blue in fig. 4) in terms of runtime and codelength. While in our experiments GRCD does not yield an improvement over GRCS, we note the following behaviour. The sample-splitting process may occasionally rule out only a small part of the space, which can slow down convergence. In particular, in appendix C we show that on average, the sample-splitting process rules out \(1/2\) of the active sample space in the best case at each step, and \(3/4\) in the worst case. By contrast, the dyadic process always rules out \(1/2\) of the sample space, potentially speeding up termination. We conjecture that GRCD achieves an optimal expected runtime with \(\beta=1\).
### Compression with Variational Autoencoders
**Compressing images with VAEs and REC.** One of the most promising applications of REC is in learnt compression. Here, we implement a proof-of-concept lossless neural compression pipeline using a VAE with a factorized Gaussian posterior on MNIST and take the architecture used by Townsend et al. (2018). To compress an image \(Y\), we encode a latent sample \(X\) from the VAE posterior \(q(X\mid Y)\) by applying GRCD dimensionwise after which we encode the image \(Y\) with entropy coding using the VAE's conditional likelihood \(p(Y\mid X)\) as the coding distribution. Unfortunately, in addition to the \(D_{\mathrm{KL}}[q(X_{d}\mid Y)\|p(X_{d})]\) bits coding cost for latent dimension \(d\), this incurs an overhead of \(\log(D_{\mathrm{KL}}[q(X_{d}\mid Y)\|p(X_{d})]+1)+\mathcal{O}(1)\) bits, analogously to how a symbol code, like Huffman coding, incurs a constant overhead per symbol (MacKay, 2003). However, since \(\log(1+x)\approx x\) when \(x\approx 0\), the logarithmic overhead of GRC can become significant compared to the KL divergence. Hence, we now investigate two approaches to mitigate this issue.
Figure 4: Comparison between GRC and A\({}^{*}\) coding on synthetic REC problems with Gaussian \(Q\) and \(P\). _Left:_ we fix \(D_{\mathrm{KL}}[Q\|P]=3\) and vary \(D_{\infty}[Q\|P]\), measuring the number of steps taken by each algorithm. _Right:_ we fix \(D_{\infty}[Q\|P]=D_{\mathrm{KL}}[Q\|P]+2\) and vary \(D_{\mathrm{KL}}[Q\|P]\), plotting the codelengths produced by each algorithm. Reported codelengths do not include additional logarithmic overhead terms. Results are averaged over \(4\times 10^{3}\) different random seeds for each datapoint. We have included error-bars in both plots but these are too small to see compared to the plot scales.
**Modified ELBO for REC.** A principled approach to optimizing our neural compression pipeline is to minimize its expected codelength. For bits-back methods (Townsend et al., 2018, 2019), the negative ELBO indeed expresses their expected codelength, but in REC's case, it does not take into account the additional dimensionwise logarithmic overhead we discussed above. Thus, we propose to minimize a modified negative ELBO to account for this (assuming that we have \(D\) latent dimensions):
\[\underbrace{\mathbb{E}_{X\sim q(X|Y)}[-\log p(Y|X)]+D_{\text{KL}}[q(X|Y)\|p(X )]}_{\text{Regular ELBO}}+\sum_{d=1}^{D}\underbrace{\log\left(D_{\text{KL}}[q( X_{d}|Y)\|p(X_{d})]+1\right)}_{\text{Logarithmic overhead per dimension}}. \tag{12}\]
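A minimal sketch of this objective is given below, assuming the reconstruction term and the per-dimension KLs have already been computed (in bits) as PyTorch tensors; the function and argument names are ours.

```python
import torch

def modified_negative_elbo(recon_nll_bits, kl_per_dim_bits):
    """Sketch of eq. (12).
    recon_nll_bits  : E_{q(X|Y)}[-log2 p(Y|X)]             (scalar tensor, in bits)
    kl_per_dim_bits : D_KL[q(X_d|Y) || p(X_d)] for each d  (1-D tensor, in bits)"""
    overhead = torch.log2(kl_per_dim_bits + 1.0).sum()     # per-dimension logarithmic overhead
    return recon_nll_bits + kl_per_dim_bits.sum() + overhead
```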
**Coding the latent indices.** As the final step during the encoding process, we need a prefix code to encode the heap indices \(I_{d}\) returned by GRCD for each \(d\). Without any further information, the best we can do is use Elias \(\delta\) coding (Elias, 1975), which, assuming our conjecture on the expected runtime of GRCD holds, yields an expected codelength of \(\mathbb{I}[Y;X]+2\log(\mathbb{I}[Y;X]+1)+\mathcal{O}(1)\). However, we can improve this if we can estimate \(\mathbb{E}[\log I_{d}]\) for each \(d\): it can be shown that the maximum entropy distribution of a positive integer-valued random variable under a constraint on the expectation of its logarithm is \(\zeta(n|\lambda)\propto n^{-\lambda}\), with \(\lambda^{-1}=\mathbb{E}[\log I_{d}]+1\). In this case, entropy coding \(I_{d}\) using this \(\zeta\) distribution improves the expected codelength to \(\mathbb{I}[Y;X]+\log(\mathbb{I}[Y;X]+1)+\mathcal{O}(1)\).
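For illustration, the ideal codelength of a heap index \(n\) under a \(\zeta(n|\lambda)\propto n^{-\lambda}\) distribution can be computed as in the sketch below, for a normalisable exponent \(\lambda>1\) (which in practice would be fitted per dimension); the helper name is ours.

```python
import numpy as np
from scipy.special import zeta

def zeta_codelength_bits(n, lam):
    """Ideal codelength -log2 zeta(n | lam) of a heap index n >= 1, for an exponent lam > 1."""
    return lam * np.log2(n) + np.log2(zeta(lam))
```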
**Experimental results.** We trained our VAE with \(L\in\{20,50,100\}\) latent dimensions optimized using the negative ELBO and its modified version in Equation (12), and experimented with encoding the heap indices of GRCD with both \(\delta\) and \(\zeta\) coding. We report the results of our experiments in Table 1, on the MNIST test set, in bits per pixel. In addition to the total coding cost, we report the negative ELBO per pixel, which is the fundamental lower bound on the compression efficiency of REC with each VAE. Finally, we report the logarithmic overhead due to \(\delta\) coding. We find that both the modified ELBO and \(\zeta\) coding prove beneficial, especially as the dimensionality of the latent space increases. This is expected, since the overhead is most significant for latent dimensions with small KLs, which becomes more likely as the dimension of the latent space grows. The improvements yielded by each of the two methods are significant, with \(\zeta\) coding leading to a consistent \(1-7\%\) gain compared to \(\delta\) coding and the modified objective resulting in up to \(2\%\) gain in coding performance.
## 5 Conclusion and Future Work
**Summary.** In this work, we introduced Greedy Rejection Coding (GRC), a REC algorithm which generalises the rejection algorithm of Harsha et al. to arbitrary probability spaces and partitioning processes. We proved the correctness of our algorithm under mild assumptions, and introduced GRCS and GRCD, two variants of GRC. We showed that the runtimes of GRCS and GRCD significantly improve upon the runtime of A\({}^{*}\) coding, which can be arbitrarily larger. We evaluated our algorithms empirically, verifying our theory and conducted a proof-of-concept learnt compression experiment on MNIST using VAEs. We demonstrated that a principled modification to the ELBO and entropy coding GRCD's indices using a \(\zeta\) distribution can further improve compression efficiency.
**Limitations and Further work.** One limitation of GRC is that, unlike A\({}^{*}\) coding, it requires us to be able to evaluate the CDF of \(Q\). While in some settings this CDF may be intractable, this assumption is satisfied by most latent variable generative models, and is not restrictive in practice. However, one practical limitation of GRCS and GRCD, as well as AS\({}^{*}\) and AD\({}^{*}\), is that they assume target-proposal pairs over \(\mathbb{R}\). For multivariate distributions, we can decompose them into univariate conditionals and apply GRC dimensionwise, however this incurs an additional coding overhead per dimension, resulting in a non-negligible cost. Thus, an important direction is to investigate whether fast REC algorithms for multivariate distributions can be devised, to circumvent this challenge.
\begin{table}
\begin{tabular}{c c c c c c} \hline \hline Training & \multirow{2}{*}{\# latent} & Total BPP & Total BPP & Neg. ELBO & Overhead BPP \\ objective & & with \(\zeta\) coding & with \(\delta\) coding & per pixel & with \(\delta\) coding \\ \hline \multirow{3}{*}{ELBO} & 20 & \(1.472\pm 0.004\) & \(1.482\pm 0.004\) & \(1.391\pm 0.004\) & \(0.091\pm 0.000\) \\ & 50 & \(1.511\pm 0.003\) & \(1.530\pm 0.003\) & \(1.357\pm 0.003\) & \(0.172\pm 0.000\) \\ & 100 & \(1.523\pm 0.003\) & \(1.600\pm 0.003\) & \(1.362\pm 0.003\) & \(0.238\pm 0.000\) \\ \hline \multirow{3}{*}{Modified ELBO} & 20 & \(1.470\pm 0.004\) & \(1.478\pm 0.004\) & \(1.393\pm 0.004\) & \(0.085\pm 0.000\) \\ & 50 & \(1.484\pm 0.003\) & \(1.514\pm 0.003\) & \(1.373\pm 0.003\) & \(0.141\pm 0.000\) \\ \cline{1-1} & 100 & \(1.485\pm 0.003\) & \(1.579\pm 0.003\) & \(1.373\pm 0.003\) & \(0.205\pm 0.000\) \\ \hline \hline \end{tabular}
\end{table}
Table 1: Lossless compression performance comparison on the MNIST test set of a small VAE with different latent space sizes, optimized using either the ELBO or the modified ELBO in eq. (12). We report the bits per pixel (BPP) attained using different coding methods, averaged over the 10,000 test images, along with the standard error, using GRCD. See section 4.2 for further details. |
2309.05555 | Unraveling Managerial Tangents in Firm Disclosure: Concealing Issues or
Being Exposed? | Earnings calls influence stock prices and are traditionally analyzed using
sentiment and linguistic traces. Our research introduces a "Topic-Switching
Index," a novel metric quantified through the transformer model FinBERT, to
measure managerial evasion during Q$\&$A sessions in earnings calls. We find a
negative correlation between this index and subsequent stock prices, indicating
that investors penalize managerial evasiveness. This study is the first to
quantify such evasive tactics, adding a new dimension to how earnings calls are
understood and suggesting that topic shifting is an overlooked but significant
factor. We also show the predictability of the index under three different
classifier models and it stands out in all circumstances. | Xuan Zhou, Yushen Huang | 2023-09-11T15:44:02Z | http://arxiv.org/abs/2309.05555v1 | # Unraveling Managerial Tangents in Firm Disclosure:
###### Abstract
Earnings calls influence stock prices and are traditionally analyzed using sentiment and linguistic traces. Our research introduces a "Topic-Switching Index," a novel metric quantified through the transformer model FinBERT, to measure managerial evasion during Q&A sessions in earnings calls. We find a negative correlation between this index and subsequent stock prices, indicating that investors penalize managerial evasiveness. This study is the first to quantify such evasive tactics, adding a new dimension to how earnings calls are understood and suggesting that topic shifting is an overlooked but significant factor. We also show the predictability of the index under three different classifier models, and it stands out in all circumstances.
## 1 Introduction
An earnings call is a conference call between the management team of a public company and analysts to discuss the company's financial condition after a given reporting period, usually a quarter or a year. In each earnings call, the representative of the management team, most of the time the CEO, first presents the company's current financial achievements and then describes the plan for the upcoming quarter. Next, each attending analyst can ask one question with a follow-up question, which ought to be answered by the manager no matter how awkward they may be. The earnings call is therefore an important event: analysts can ask the manager critical questions, and the manager's responses can reveal information that is not fully covered in the financial reports. In fact, giving no answer to such a question is itself a form of reply.
From the conference, investors can gather new information and adjust their investment decisions accordingly. As shown in Figure 1, the stock price of Chipotle Mexican Grill exhibits three large jumps on three respective earnings call dates. This motivates research on how earnings calls affect the movement of a company's stock price, and on examining earnings call transcripts for clues. Moreover, [1] demonstrate that discussion periods are relatively more informative than presentation periods, so our research is mainly focused on analyzing the Q&A session of the earnings call transcripts. Those transcripts are provided for free by the host companies as well as on some third-party websites; in our paper, we collected the texts from the website Seeking Alpha.
However, there are two main concerns when using earnings call transcripts to forecast stock price movements. 1. Which factors in the earnings call should we consider when forecasting the tendency? 2. How can we quantify those factors?
In the literature, transcripts affect the stock price mainly in two ways: through their sentiment or through their linguistic traces. A substantial body of work has shown that the sentiment contained in transcripts has a significant impact on stock price movements [2, 3, 4, 5]. Additionally, the sentiment from managers and from analysts does not carry the same weight in investors' decision-making process. In [2], it is found that intraday prices react significantly to analyst tone, but not to management tone, for the full duration of the discussion, and this effect strengthens when the analyst's tone is relatively negative. To investors, managers' unwillingness to answer questions directly also signals potentially bad performance of the company [6].
Figure 1: Stock Price movement of Chipotle Mexican Grill
The linguistic complexity of a manager's speech itself can also indicate that the manager is trying to evade a question, which is later reflected in the stock price; [7, 8, 9, 10, 11, 12, 13] discuss the influence of linguistic complexity in finance. Nevertheless, it is then pointed out in [14] that linguistic complexity commingles two latent components--obfuscation and information--that are related to information asymmetry in opposite directions. By subtracting the complexity of analysts' scripts from that of managers', that paper proposes a novel approach to offset the language complexity required to understand a given industry, for example its terminology. [15] quantify a factor capturing whether the manager answers the analysts' question: by detecting the presence of key phrases in the response, which is defined to have three specific forms, they classify a managerial response to a question as a non-answer using regular expressions. They also show that 11 percent of managers do not answer the analyst's question. [16] show that a lack of spontaneity is negatively associated with the market reaction to the call, using a measure of the adherence of transcripts to prepared scripts.
In conclusion, most previous studies employing natural language processing techniques on earnings call transcripts have focused on emotional reactions or the linguistic manner of responding, both of which have proven to be reliable indicators of market expectations. However, studies based on sentiment analysis have a common tendency to overlook the potential significance of managers' responses. Managers' scripts, although much longer than analysts', have proven to be less useful in such research. [17] find that tone is significantly associated with manager-specific factors such as early career experiences and involvement in charitable organizations, so managers can make themselves sound optimistic even when the facts do not support it. While more useful in sentiment analysis, the scripts from analysts are usually very short and brief, as analysts are only allowed one question and at most one follow-up, let alone comments on the managers' answers. Sentiment detected from analysts' transcripts alone can therefore be less accurate and hard to differentiate quantitatively. On the other hand, research focused on linguistic analysis of managers' answers can also be biased by managers' own wording behavior, as they can be trained to use less complex language.
In contrast, our research highlights the importance of revealing the additional information hidden in managers' answers. While managers can manipulate the sentiment they express through training and preparation in advance of a conference, and by adding content that sounds positive, such as good aspects of the company's performance, topic shifting remains a challenging behavior to disguise. When managers encounter questions for which they cannot provide positive or satisfying answers, their best option is often to evade the question altogether. Our study quantifies this topic-shifting behavior and examines the market's response to it.
Our main assumption is that, from the way managers answer questions, investors form their own views on whether the company really is as described during the earnings call or whether the managers are deliberately hiding potential issues. Those suspicions are later reflected in the stock market. We assume that, during an earnings call, if the topics contained in the managers' responses in the Q&A session do not align with, or significantly deviate from, the themes present in the analysts' questions, this indicates a deliberate topic-diversion tactic. Such diversions can be interpreted as strategies adopted by managers to circumvent questions that may be difficult to address, or questions that might not yield affirmative reactions from analysts and investors. Furthermore, topic shifting allows managers to portray a more positive and upbeat sentiment in their transcripts. The effect of sentiment is widely researched, and some third-party websites provide sentiment scores of earnings call transcripts to subscribing investors, so we have good reason to hypothesize that managers will try to sound at least as positive as possible. Since they must face the questions and sound confident at the same time, it is common for a manager to evade a tough question, for instance by spending more time adding details that are not necessarily related to the question itself. Conversely, when the topics of questions and answers align well, we believe the manager is confident in responding in a positive and constructive manner, suggesting that the company is in good shape and does not have suspicious issues, or that the current problems faced by the company are well under control.
We further hypothesize that investors will detect the behavior of managers switching topics to dodge questions, and this will be reflected in stock price movement later on. More specifically, the act of topic shifting during an earnings call Q&A session will have a negative impact on the stock price of the host company, while candid responses can have a positive impact.
In our research, which is the first in this area to put forth this hypothesis, we design a Topic-Switching Index. We use a transformer model, Bidirectional Encoder Representations from Transformers (BERT) [18], to quantify this topic-switching factor; such models perform better than the traditional methods used so far in the sentiment analysis literature [19, 20]. The metric quantifies the degree of alignment between analysts' questions and managers' responses. According to our hypothesis, it has a negative correlation with stock prices after the earnings call, which is shown to be true in this study, and our model based on this feature outperforms the latest model in the literature so far. Our method also differs from previous literature on managers' attempts to evade questions in firm disclosure: previous work only identifies cases where managers do not answer questions at all or exhibit linguistic traces of evasion, while ours also covers circumstances where managers do not fully answer questions or answer them in an indirect way.
Our research also contributes methodologically in that we separate every analyst's questions from the others'. Unlike previous literature, which quantifies the features of interest over all the analysts and/or all the managers in one earnings call as a whole, we pair each analyst with the answers they receive in each call and score each pair, and then calculate the average score for that conference. This yields a more accurate score and is also clearer and more straightforward when checking the reliability of our Topic-Switching Index under different contexts. There are circumstances in which the first analyst asks an easy question and the manager replies with confidence, but the second analyst then raises a tough one and the manager answers using the same material covered in the first answer. If we do not separate the analysts from each other, it is hard to obtain a credible score, as the manager's answers do match the topics in the overall question pool. By pairing every analyst with the answers they receive, whether or not they ask a follow-up question, we assign the analysts equal weight within one earnings call and avoid the bias of putting too much weight on questions with long answers.
Our findings challenge the argument made by [2] that analysts are the participants on earnings calls whose comments move stock prices during the discussion, showing that managers and their manner of answering are also valuable indicators detected by the market. Our study uncovers an additional layer of information embedded in managers' responses, shedding light on their communication strategies and potential attempts to evade certain questions that they are supposed to answer directly. At the same time, we do not overlook the role played by the analysts in the earnings call, but include the information in their transcripts as well. By comparing the similarity of questions and answers in pairs within the whole context, we extract as much information as possible from both parties. Moreover, our research makes a significant contribution to this area by quantifying topic-shifting behavior and investigating its impact on stock prices. While prior studies have explored linguistic cues and sentiment in earnings calls, our study is the first to extract a topic-shifting feature and show that managers do try to avoid answering tough questions by adding less related content to their answers in order to sound good and redirect the topic. By evaluating the relationship between detected instances of topic shifting and subsequent stock price movements, we also provide empirical evidence on how market participants perceive and respond to this hidden information released by managers. This contribution expands the understanding of the information dynamics within earnings call transcripts and offers valuable insights to analysts and institutional investors, enabling them to forecast better and form financial strategies accordingly around these fiscal communication events.
The remainder of this paper is structured as follows: Section 2 presents our methodology for detecting topic shifting by managers and outlines the measures employed to assess market
reactions. Section 3 provides an overview of the data collection and preprocessing methods, including the selection of earnings call transcripts and the application of NLP techniques. Section 4 discusses the empirical results and their implications. Finally, Section 5 summarizes the findings, discusses limitations, and suggests avenues for future research.
## 2 Methodology
In this section, we explain our methodology for calculating the Topic-Switching Index and how to use it to predict the tendency of the stock price.
### Notation
We start by introducing the notation for the stock price and the related definitions. These definitions and notations follow [21].
Let \(\mathbb{C}=\{C_{1},C_{2},\cdots,C_{m}\}\) be the set of \(m\) companies and \(S_{d}^{c}\) be the daily high stock price of company \(c\) on day \(d\). We also denote the earnings call transcripts of the company over all periods as the set \(\mathbb{T}^{c}=\{T_{d_{1}}^{c},T_{d_{2}}^{c},\cdots,T_{d_{t}}^{c}\}\). Next, let us define the tendency of the stock price movement by checking whether, if the earnings call happens on day \(d\), the stock price on day \(d+1\) is greater or smaller than the stock price on day \(d-1\).
**Definition 1**: _Value Based Label Function(VBL). We define the label function \(y(T_{d}^{c})\in\{-1,1\}\) for a transcript T\({}_{d}\) of a company on the day \(d\) as follows:_
\[y(T_{d}^{c})=\left\{\begin{array}{ll}1&\mbox{If }S_{d+1}^{c}\geq S_{d-1}^{c}\\ -1&\mbox{If }S_{d+1}^{c}<S_{d-1}^{c}\end{array}\right.\]
As mentioned above, **Definition** 1 corresponds to whether the stock price on the day post the call increases compared with one day before; however, in reality, whether the investment makes a profit depends on the current risk-free rate. Hence the tendency is positive only if the stock price increase rate is larger or equal to a threshold \(\tau\). Next let us define the relative stock price movement as the following:
**Definition 2**: _Relative Value Based Label Function (RVBL). We define the label function \(y_{r}(T_{d}^{c})\in\{-1,1\}\) for a transcript T\({}_{d}\) of a company on the day \(d\) as follows, where \(\tau\) is the risk-free rate on day \(d\):_
\[y_{r}(T_{d}^{c})=\left\{\begin{array}{ll}1&\mbox{If }\frac{S_{d+1}^{c}-S_{d-1}^{c}}{S_{d-1}^{c}}\geq\tau\\ -1&\mbox{If }\frac{S_{d+1}^{c}-S_{d-1}^{c}}{S_{d-1}^{c}}<\tau\end{array}\right.\]
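A small sketch of how the labels in Definitions 1 and 2 can be computed from the stock prices on the days surrounding the call is given below; setting \(\tau=0\) recovers Definition 1, and the function name is ours.

```python
def relative_label(price_day_before, price_day_after, tau=0.0):
    """Labels of Definitions 1 and 2: +1 if the relative move from day d-1 to day d+1
    is at least tau, and -1 otherwise.  tau = 0 recovers Definition 1."""
    return 1 if (price_day_after - price_day_before) / price_day_before >= tau else -1
```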
### BERT and Transformer Based Model
Both the Topic-Switching feature and the benchmark feature are based on BERT [18], which is a Transformer model [22].
#### 2.2.1 Transformer Model
The Transformer model is based on self-attention, which relates different positions of a sequence to one another and converts the sequence into a vector that includes that information. More specifically, the Transformer model has an encoder stage and a decoder stage [23, 24, 25], and the result then passes through a linear layer and a softmax function, which gives the final output. We dig into the details of the model after introducing some definitions:
Definition 3: Scaled Dot-Product Attention: Given an input sequence of embeddings \(X\in\mathbb{R}^{n\times d}\), the self-attention mechanism computes a weighted sum of these embeddings, allowing each element in the sequence to focus on different parts of the sequence. The self-attention mechanism is defined as the following:
1. **Compute Queries, Keys, and Values:** \[Q =XW^{Q}\] \[K =XW^{K}\] \[V =XW^{V}\] where \(W^{Q}\), \(W^{K}\), and \(W^{V}\) are learned weight matrices for queries, keys, and values respectively.
2. **Calculate Attention Scores:** \[S=\frac{QK^{T}}{\sqrt{d_{k}}}\] where \(d_{k}\) is the dimension of the key vectors.
3. **Apply Softmax to Scores:** \[A=\text{softmax}(S)\]
4. **Compute the Attention Output** (the weighted sum of the values): \[\text{Attention}(Q,K,V)=AV\]
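A compact NumPy sketch of Definition 3 is given below, with the projection matrices passed in explicitly; the shapes and names are ours.

```python
import numpy as np

def scaled_dot_product_attention(X, Wq, Wk, Wv):
    """Definition 3 for a sequence of n embeddings X of shape (n, d)."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv          # queries, keys, values
    d_k = K.shape[-1]
    S = Q @ K.T / np.sqrt(d_k)                # scaled attention scores
    A = np.exp(S - S.max(axis=-1, keepdims=True))
    A = A / A.sum(axis=-1, keepdims=True)     # row-wise softmax
    return A @ V                              # weighted sum of the values
```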
The Transformer model does not use scaled dot-product attention directly. Instead, it projects the input embeddings into \(h\) different subspaces (heads), performs scaled dot-product attention in each head in parallel, and then concatenates the results and applies a learned linear transformation. More specifically, it is defined as the following:
Definition 4: Multi Head Attention Given an input sequence of embeddings \(X\), the multi-head attention mechanism can be described as follows:
1. **Linear Projections:** For each head \(i\) from 1 to \(n\): \[Q_{i} =XW_{i}^{Q}\] \[K_{i} =XW_{i}^{K}\] \[V_{i} =XW_{i}^{V}\] where \(W_{i}^{Q}\), \(W_{i}^{K}\), and \(W_{i}^{V}\) are weight matrices specific to the \(i\)-th attention head.
2. **Compute Scaled Dot-Product Attention for each head:** \[S_{i} =\frac{Q_{i}K_{i}^{T}}{\sqrt{d_{k}}}\] \[A_{i} =\text{softmax}(S_{i})\] \[H_{i} =A_{i}V_{i}\]
3. **Concatenate and Linearly Transform:** \[\text{MultiHead}(Q,K,V)=\text{Concat}(H_{1},H_{2},\dots,H_{n})W^{O}\]
_where \(W^{O}\) is a learned weight matrix to produce the final output of the multi-head attention._
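Building on the previous sketch, multi-head attention (Definition 4) can be written as follows; the list-of-heads representation is an assumption of this sketch, and it reuses `scaled_dot_product_attention` defined above.

```python
import numpy as np

def multi_head_attention(X, heads, Wo):
    """Definition 4: `heads` is a list of (Wq, Wk, Wv) weight triples, one per head;
    Wo is the output projection W^O."""
    H = [scaled_dot_product_attention(X, Wq, Wk, Wv) for (Wq, Wk, Wv) in heads]
    return np.concatenate(H, axis=-1) @ Wo    # concatenate the heads, then project
```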
Finally, we also need to use the definition of the Position-wise Feed-Forward Networks.
**Definition 5**.: _For a given position \(i\) in the sequence, Position-wise Feed-Forward Networks is defined as the following_
1. _First Linear Transformation:_ \[\text{FFN}_{in}(x_{i})=x_{i}W_{1}+b_{1}\] _where_ \(W_{1}\) _is a weight matrix and_ \(b_{1}\) _is a bias vector._
2. _Activation Function:_ \[\text{FFN}_{\text{ReLU}}(x_{i})=\text{ReLU}(\text{FFN}_{in}(x_{i}))\] _where_ \(\text{ReLU}(x)=\max(0,x)\) _is the Rectified Linear Unit activation function._
3. _Second Linear Transformation:_ \[\text{FFN}_{out}(x_{i})=\text{FFN}_{\text{ReLU}}(x_{i})W_{2}+b_{2}\] _where_ \(W_{2}\) _is another weight matrix and_ \(b_{2}\) _is another bias vector._
Finally, the transformer-based model performs the following process
1. **Encoder Process** The encoder is composed of a stack of 6 layers, each consisting of 2 sub-layers. The first sub-layer is the multi-head self-attention layer and the second is the position-wise fully connected feed-forward network. A normalization is applied after each sub-layer, and each layer produces an output vector of dimension 512.
2. **Decoder Process** The decoder is also composed of a stack of 6 layers. The decoder inserts a second multi-head attention layer, which attends over the encoder output, after its first multi-head self-attention layer, followed by the feed-forward network. It also applies normalization after each sub-layer.
3. Finally, the output goes through a linear layer and a softmax function.
**BERT** : As mentioned before, BERT is a transformer-based model that learns bidirectional representations from unlabeled text by jointly conditioning on both left and right context in all layers, and then applies the Transformer process described above. For more details on how BERT is designed, please see [18].
### Classification by Feature Vector
The BERT model takes the text as input and outputs a vector called a feature vector. This feature vector is a quantification of the text that contains both semantic and order information. After extracting those vectors, we can create a label for each vector by calculating the VBL from **Definition** 1 or **Definition** 2. Next, we can formulate the classification problem as an optimization problem. For this paper, we consider the following 3 optimization problems.
* **Support Vector Machine with regularizer**: The \(\ell_{2}\) and \(\ell_{1}\) regularized support vector machine is formulated as \[\min_{w}\frac{1}{N}\sum_{i=1}^{N}\max(0,1-y_{i}x_{i}^{T}w)+\frac{\mu_{1}}{2}\|w\|_{2}^{2}+\mu_{2}\|w\|_{1}\] (1) where \(\{(x_{i},y_{i})\}_{i=1}^{N}\) are given training data with each \(y_{i}\in\{-1,1\}\), and \(\mu_{1}\) and \(\mu_{2}\) are non-negative real values that may be 0. Let \(w^{*}\) be the solution. Then a new data point \(x\) can be classified as \(\text{sign}(x^{T}w^{*})\). The support vector machine problem can be generalized to nonlinear classification by using the kernel trick.
* **Logistic Regression with regularizer**: The \(\ell_{2}\) regularized logistic regression is formulated as \[\min_{w}\frac{1}{N}\sum_{i=1}^{N}\log(1+\exp(-y_{i}x_{i}^{T}w))+\frac{\mu}{2}\|w\|_{2}^{2}\] (2) where \(\{(x_{i},y_{i})\}_{i=1}^{N}\) are given training data with each \(y_{i}\in\{-1,1\}\), and \(\mu\) is a non-negative real value that may be 0. Let \(w^{*}\) be the solution. Then a new data point \(x\) can be classified as \(\text{sign}(x^{T}w^{*})\).
* **Neural Network**: The neural network with \(\ell_{1}\) and \(\ell_{2}\) can be modeled as the following optimization problem: \[\min_{\theta}\sum_{i=1}^{N}\ell(f_{\theta}(x_{i}),y_{i})+\mu_{1}\|\theta\|_{1 }+\frac{\mu_{2}}{2}\|\theta\|_{2}^{2}\] (3)
where \(\ell\) is a loss function, such as the negative log-softmax (cross-entropy) loss; \(\mu_{1}\) and \(\mu_{2}\) are non-negative real values that may be 0; and \(f_{\theta}\) denotes a neural network parameterized by \(\theta\). The function \(f_{\theta}\) is a composite of several functions of the following form:
\[\sigma_{n}(\theta_{n}\sigma_{n-1}\ \cdots\sigma_{2}(\theta_{2}\sigma_{1}( \theta_{1}x)))\]
Each \(\sigma_{i}\) is usually called an activation function.
Having formulated classification as an optimization problem, we also need optimization methods to solve it. Logistic regression and the support vector machine are classical convex optimization problems whose global optimal solutions are tractable; for general neural networks, however, the optimization problems are nonconvex and only locally optimal solutions are attainable. In this paper we use the most common method in machine learning, stochastic gradient descent [26]. Other types of methods can certainly be applied here, such as momentum- and variance-reduction-based methods [27, 28, 29], flow-based methods [30, 31, 32], or adaptive methods [33]. A minimal sketch of SGD-based training is given below.
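As an illustrative sketch (not the exact training pipeline of this paper), both the support vector machine (1) and the logistic regression (2) can be trained with stochastic gradient descent via scikit-learn's SGDClassifier; the feature matrix, labels and hyperparameters below are placeholders, and the loss name "log_loss" assumes a recent scikit-learn version.

```
import numpy as np
from sklearn.linear_model import SGDClassifier

# X: feature vectors extracted by (Fin)BERT, y: VBL labels in {-1, +1}.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 768))          # placeholder features; 768 is a typical BERT dimension
y = rng.choice([-1, 1], size=200)        # placeholder labels

# Hinge loss + elastic-net penalty approximates problem (1); log loss + l2 approximates problem (2).
svm = SGDClassifier(loss="hinge", penalty="elasticnet", alpha=1e-4, l1_ratio=0.5)
logreg = SGDClassifier(loss="log_loss", penalty="l2", alpha=1e-4)

svm.fit(X, y)
logreg.fit(X, y)
pred = svm.predict(X[:5])                # new points are classified as sign(x^T w*)
```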
### Topic-Switching Index Calculation
Having acquired the essential techniques, we next outline the approach for deriving the Topic-Switching Index. For each earnings call transcript, we extract the questions and answers associated with individuals. Unlike previous literature, we compute the similarity between question and answer vectors by taking their dot product and normalizing by their respective magnitudes. We then use \(1-\)similarity as the Topic-Switching Index for each analyst's Q & A pair. Finally, we average the Topic-Switching scores and assign the value to the earnings call transcript.
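A minimal Python sketch of this step is given below. The FinBERT checkpoint name and the mean-pooling step used to obtain a fixed-length feature vector are illustrative assumptions and may differ from the exact feature extraction used in this paper.

```
import numpy as np
import torch
from transformers import AutoTokenizer, AutoModel

MODEL_NAME = "ProsusAI/finbert"          # one publicly available FinBERT-style checkpoint (assumption)
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModel.from_pretrained(MODEL_NAME)

def embed(text: str) -> np.ndarray:
    """Mean-pooled last hidden state as a fixed-length feature vector."""
    inputs = tokenizer(text, return_tensors="pt", truncation=True, max_length=512)
    with torch.no_grad():
        hidden = model(**inputs).last_hidden_state       # (1, sequence length, hidden size)
    return hidden.mean(dim=1).squeeze(0).numpy()

def topic_switching(question: str, answer: str) -> float:
    """1 - cosine similarity between the question and answer feature vectors."""
    q, a = embed(question), embed(answer)
    cos = float(np.dot(q, a) / (np.linalg.norm(q) * np.linalg.norm(a)))
    return 1.0 - cos
```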
For classification, each vector is labeled based on the VBL, as determined by either **Definition**\(1\) or **Definition**\(2\). To train our models -- whether a support vector machine, logistic regression, or a neural network -- we employ stochastic gradient descent. The outcome of this training is a set of weights suited to the classification task.
The pseudo-code is given in **Algorithm** 1.
## 3 Data
### Data Overview
We fetch the earnings call data of the \(S\&P\) 500 companies from the website [https://seekingalpha.com](https://seekingalpha.com), obtaining a total of 24,573 transcripts. Our transcript dataset ranges from January 2010 to December 2022. After excluding transcripts with missing values (such as missing NYSE symbols), transcripts whose text format differs from the others so that the content cannot be separated, and other miscellaneous problems that are hard to fix, 13,044 transcripts remain for which the Topic-Switching Index can be successfully calculated. We also obtain the corresponding stock prices before and after the earnings call day from Yahoo Finance and merge them into the dataset.
```
1:procedure TrainModel(EarningsCall)
2:\(TopicSwitchingList\leftarrow\) empty list \(\triangleright\) List to store TopicSwitching
3:for each earnings call in EarningsCall do\(\triangleright\) Iterate through each earnings call
4:\(TopicSwitching\leftarrow\) empty list
5:for each question, answer in EarningsCall do\(\triangleright\) Process each Q and A
6:\(feature_{question}\leftarrow\) FinBERT_Extract(\(question\))\(\triangleright\) Use FinBERT to extract features
7:\(feature_{answer}\leftarrow\) FinBERT_Extract(\(answer\))
8:\(norm_{question}\leftarrow\|feature_{question}\|\)
9:\(norm_{answer}\leftarrow\|feature_{answer}\|\)
10:if\(norm_{question}\neq 0\) and \(norm_{answer}\neq 0\)then\(\triangleright\) Avoid division by zero
11:\(similarity\leftarrow\frac{feature_{question}\cdot feature_{answer}}{norm_{question} \times norm_{answer}}\)
12:\(TopicSwitching.\)append(\(1-similarity\))\(\triangleright\) Store TopicSwitching
13:endif
14:endfor
15:\(avgTopicSwitching\leftarrow\) average(\(TopicSwitching\))\(\triangleright\) Compute average TopicSwitchingIndex
16:\(TopicSwitchingList.\)append(\(avgTopicSwitching\))
17:endfor
18:if labels are calculated using Definition 1 then\(\triangleright\) Assign label based on definition
19:for each stock in CompanyList do
20:\(label\leftarrow\) VBL according to Definition 1
21:endfor
22:endif
23:if labels are calculated using Definition 2 then
24:for each stock in CompanyList do
25:\(label\leftarrow\) VBL according to Definition 2
26:endfor
27:endif
28: Initialize a model (SVM, Logistic Regression, or Neural Network)
29: Use stochastic gradient descent to train the model with features and labels
30:\(weights\leftarrow\) trained model's weights
31:return\(weights\)
32:endprocedure
```
**Algorithm 1** Feature and Model Training Process
### Descriptive Analysis
In this subsection, our objective is to delve deeper into the intricacies of the data associated with the Topic-Switching Feature through visualizations.
To gain a clearer and more practical understanding of our index, we show 2 examples of discussion text in **Appendix A1**, one with a low Topic-Switching Index and one with a high score. In the low-score sample, the manager answers the analyst to the point and with confidence, and all the details provided support the argument. In the other one, the manager provides context and touches upon various related topics but never gives a straightforward answer about what the "new normal" might be regarding investment in ex-fuel gross margin, which was what the analyst was interested in. After checking samples together with their scores, we can preliminarily see that our methodology does extract information about the manager's attempts to evade answering questions directly and encodes it in the Topic-Switching Index. Moreover, we also show the scores, on the same plot as in the previous part, for the earnings calls that witness a large jump in the Chipotle Mexican Grill stock price. Notice that the average score for the whole sample is 0.24. The two upward jumps have Topic-Switching Indices of 0.11, while the downward one has a score of 0.25. It is clear from Figure 2 that in this case the stock price moves in the opposite direction to the Topic-Switching Index.
In Figure 3, we show the yearly trends of the Topic-Switching Index and the stock price change for the whole sample. For each plot, we include both median and mean values. For the Topic-Switching Index, the mean is on average higher than the median, indicating that the data are skewed to the right. The abnormal differences appearing in 2020 and 2021 might come from the Covid situation; still, the difference is only around 0.025. There is no clear yearly trend for the index. For the relative daily stock price change, the mean and median show no real difference except during the Covid period. To be consistent, we will use the median in our later analysis.
To structure our exploration, we categorize the entire dataset into 11 distinct segments: Consumer Discretionary, Health Care, Information Technology, Consumer Staples, Industrials, Communication Services, Financials, Materials, Energy, Real Estate, and Utilities, based on the Global Industry Classification Standard (GICS). This categorization allows for a more organized and insightful analysis. Figure 4 below shows the box plots of the Topic-Switching Index and the relative stock price change for each category. The color is based on the rank of the median in both plots: the higher the median of the industry, the darker its box appears. The boundaries of the whiskers are based on the 1.5 interquartile range. For the Topic-Switching Index, while Materials has the highest median and Communication Services the lowest, overall the medians are very close to each other, at around 0.25. The whiskers are also similar. For the relative daily stock price change, the medians are all close to 0, as expected. Although there is no strong evidence that the rank of an industry's index median is related to that of its daily stock price change, this is understandable as the differences among industries are insignificant.
In addition, we show the statistics of the Topic-Switching Index for all categories in **Table** 1, including the maximum, minimum, mean, standard deviation, and overall data count. As observed in the box plot, the distributions of the segments do not differ significantly from one another, with an average close to 0.25.
\begin{table}
\begin{tabular}{l c c c c c} \hline \hline Category & Mean & Std Dev & Total Data Points & Minimum & Maximum \\ \hline Consumer Discretionary & 0.24 & 0.07 & 1387 & 0 & 0.85 \\ Health Care & 0.22 & 0.07 & 1592 & 0 & 0.68 \\ Information Technology & 0.24 & 0.07 & 1647 & 0 & 0.88 \\ Consumer Staples & 0.24 & 0.07 & 941 & 0 & 0.89 \\ Industrials & 0.25 & 0.09 & 1710 & 0 & 0.95 \\ Communication Services & 0.21 & 0.09 & 407 & 0 & 0.96 \\ Financials & 0.23 & 0.07 & 1764 & 0 & 0.68 \\ Materials & 0.25 & 0.07 & 664 & 0 & 0.66 \\ Energy & 0.22 & 0.07 & 558 & 0 & 0.85 \\ Real Estate & 0.25 & 0.07 & 750 & 0 & 0.78 \\ Utilities & 0.22 & 0.09 & 751 & 0 & 0.72 \\ \hline \hline \end{tabular}
\end{table}
Table 1: Statistics for Topic-Switching Index of All Categories
Figure 2: Stock Price movement of Chipotle Mexican Grill
Figure 3: Yearly Trend for Topic-Switching Index and Relative Stock Price Change
Figure 4: Box plot for Topic-Switching Index and relative stock price change.
## 4 Experiment
### Linear Regression
In the following analysis, we will first employ basic linear regression to preliminarily represent the relationship between the Topic-Switching Index and the relative daily change in stock price.
First, let us introduce some notation to facilitate our discussion. Consider \(\xi_{d}^{c}\) to represent the Topic-Switching Feature of a given company \(c\) on day \(d\). Concurrently, the relative change in stock price can be captured by the expression \(\frac{S_{d+1}^{c}-S_{d-1}^{c}}{S_{d-1}^{c}}\). Here, \(S_{d}^{c}\) stands for the stock price of company \(c\) at time \(d\).
With the defined variables, our next step is to apply linear regression to see whether there exists a negative correlation between the Topic-Switching Index and the relative stock price change. The results are shown in **Table** 2.
From the table, we can see that 9 out of 11 industries have negative coefficients, and the whole sample itself also shows a negative correlation. The Materials sector has a positive coefficient that is both statistically and economically insignificant; moreover, this sector has a relatively small number of data points, as shown in **Table** 1. The coefficient for Industrials is also positive, which can be explained by its unusual data distribution: from the first box plot in Figure 4, we can see that the Topic-Switching scores for Industrials are very dispersed, with many of them lying well above the 1.5 interquartile range. Running the linear regression on the whole sample without segmentation, we obtain an estimated parameter of -2.07% with a t-value of -3.426. Based on the above analysis, we are confident in saying that there exists a negative correlation between our Topic-Switching Index and the stock price movement, meaning that the market does capture the manager's suspicious talking strategy and reflects it in the stock price.
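As an illustrative sketch (not the exact code behind Table 2), the per-sector regressions can be run with statsmodels; the dataframe layout and the column names 'sector', 'tsi' and 'ret' are assumptions.

```
import pandas as pd
import statsmodels.api as sm

# df is assumed to hold one row per earnings call with columns:
#   'sector', 'tsi' (the Topic-Switching Index xi) and 'ret' (the relative change (S_{d+1}-S_{d-1})/S_{d-1}).
def sector_regression(df: pd.DataFrame, sector: str) -> pd.Series:
    sub = df[df["sector"] == sector]
    X = sm.add_constant(sub["tsi"])           # intercept + Topic-Switching Index
    fit = sm.OLS(sub["ret"], X).fit()
    return pd.Series({"coef": fit.params["tsi"],
                      "std_err": fit.bse["tsi"],
                      "t_value": fit.tvalues["tsi"]})
```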
Moving forward to the classifier models section, we explore this hypothesis in more depth. Specifically, we focus on the predictability of the stock price movement using the Topic-Switching Feature as a key metric.
### Classifier Models
In this section, we use the 3 different classifier models as we mentioned before (SVM, Logistic Regression, and Neural Network) to illustrate the predictive power of the Topic-Switching
\begin{table}
\begin{tabular}{|l|c|c|c|} \hline \hline Sector & Coefficient & Std. Error & t-value \\ \hline Consumer Discretionary & -0.0665 & 0.0234 & -2.8363 \\ Health Care & -0.0237 & 0.0203 & -1.1654 \\ Information Technology & -0.0547 & 0.0229 & -2.3867 \\ Consumer Staples & -0.0150 & 0.0201 & -0.7469 \\ Industrials & 0.0248 & 0.0125 & 1.9836 \\ Communication Services & -0.0541 & 0.0366 & -1.4780 \\ Financials & -0.0236 & 0.0132 & -1.7865 \\ Materials & 0.0004 & 0.0276 & 0.0128 \\ Energy & -0.0003 & 0.0251 & -0.0106 \\ Real Estate & -0.0313 & 0.0140 & -2.2386 \\ Utilities & -0.0040 & 0.0097 & -0.4070 \\ Overall & -0.0207 & 0.0060 & -3.4264 \\ \hline \hline \end{tabular}
\end{table}
Table 2: Linear Regression Statistic for All Categories
Index. We compare our index to the benchmark feature, which is obtained by directly using the transformer model to extract the text feature from the earnings call. We also compare the results with the comprehensive feature, which adds the Topic-Switching Index to the text feature. We separate our data into two parts, with the training data being all the data before 2016 and the rest being the testing data.
The results for which the classification label is calculated by **Definition**1 are shown in Table 3. From this table, it is clear that using the Topic-Switching Index alone has the best out-of-sample accuracy for all models. The three models do not really differ from each other in performance, with Logistic Regression having a small advantage over the others.
Tables 4-9 show the results based on **Definition**2. Tables 4-6 predict whether the relative stock price goes down by more than 1%, 2% and 5%, while Tables 7-9 predict whether the price jumps up by more than 1%, 2% and 5%. According to the results, we notice that the Topic-Switching
\begin{table}
\begin{tabular}{|l|c|c|c|} \hline & Support Vector Machine & Logistic Regression & Neural Network \\ \hline \hline Benchmark Feature & 0.525 & 0.519 & 0.513 \\ \hline Benchmark with Topic-Switching Index & 0.526 & 0.521 & 0.508 \\ \hline \hline Topic-Switching Index & **0.548** & **0.548** & **0.548** \\ \hline \hline \end{tabular}
\end{table}
Table 4: The testing accuracy by using **Definition**2 and \(\tau=-0.01\)
\begin{table}
\begin{tabular}{|l|c|c|c|} \hline & Support Vector Machine & Logistic Regression & Neural Network \\ \hline \hline Benchmark Feature & 0.540 & 0.542 & 0.521 \\ \hline Benchmark with Topic-Switching Index & 0.539 & 0.544 & 0.543 \\ \hline \hline Topic-Switching Index & **0.570** & **0.570** & **0.570** \\ \hline \hline \end{tabular}
\end{table}
Table 3: The testing accuracy by using **Definition**1
\begin{table}
\begin{tabular}{|l|c|c|c|} \hline & Support Vector Machine & Logistic Regression & Neural Network \\ \hline Benchmark Feature & **0.888** & 0.886 & 0.878 \\ \hline Benchmark with Topic-Switching Index & **0.888** & 0.888 & 0.879 \\ \hline Topic-Switching Index & **0.888** & **0.888** & **0.888** \\ \hline \end{tabular}
\end{table}
Table 9: The testing accuracy by using **Definition** 2 and \(\tau=0.05\)
\begin{table}
\begin{tabular}{|l|c|c|c|} \hline & Support Vector Machine & Logistic Regression & Neural Network \\ \hline Benchmark Feature & 0.627 & 0.611 & 0.564 \\ \hline Benchmark with Topic-Switching Index & 0.627 & 0.614 & 0.597 \\ \hline Topic-Switching Index & **0.642** & **0.642** & **0.642** \\ \hline \end{tabular}
\end{table}
Table 5: The testing accuracy by using **Definition** 2 and \(\tau=-0.02\)
\begin{table}
\begin{tabular}{|l|c|c|c|} \hline & Support Vector Machine & Logistic Regression & Neural Network \\ \hline Benchmark Feature & **0.888** & 0.886 & 0.878 \\ \hline Benchmark with Topic-Switching Index & **0.888** & 0.888 & 0.879 \\ \hline Topic-Switching Index & **0.888** & **0.888** & **0.888** \\ \hline \end{tabular}
\end{table}
Table 6: The testing accuracy by using **Definition** 2 and \(\tau=-0.05\)
\begin{table}
\begin{tabular}{|l|c|c|c|} \hline & Support Vector Machine & Logistic Regression & Neural Network \\ \hline Benchmark Feature & **0.888** & 0.888 & 0.879 \\ \hline Topic-Switching Index & **0.888** & **0.888** & **0.888** \\ \hline Topic-Switching Index & **0.888** & **0.888** & **0.888** \\ \hline \end{tabular}
\end{table}
Table 7: The testing accuracy by using **Definition** 2 and \(\tau=0.01\)
Index outperforms the Benchmark Feature and the Benchmark with Topic-Switching Index in all cases. Additionally, there is no consistent major difference between the Benchmark with Topic-Switching Index and the Benchmark Feature alone.
The strong performance of using only the Topic-Switching Index indicates that our index has more predictive power than feeding the whole text to the FinBERT model. In addition, because the Topic-Switching Index is only a one-dimensional feature, it suffers less from overfitting, which explains why the index alone outperforms the index combined with the Benchmark Features. We also notice that the Topic-Switching Index is more robust across models: it achieves very similar accuracy with all three models, which again results from the feature having only one dimension. The accuracy of all methods increases under **Definition** 2 as \(\tau\) becomes smaller (larger) for the downward (upward) predictions. This reflects the fact that the percentage of stocks with such a relative change converges to 100% (0%) as \(\tau\rightarrow-\infty\) \((+\infty)\), so the labels become increasingly imbalanced and it is natural to expect that all models capture this information with all of the different features. This can also explain why the accuracy of predicting large upward jumps is better than that of predicting downward drops: there are fewer big upward jumps in the sample, and they are easier to predict.
## 5 Conclusion and Future Work
In this paper, we first introduce a new feature, topic-switching, to consider in analyzing earnings call transcripts. We then provide a novel way to quantify this feature based on a transformer model called FinBERT. The logic is that when the manager tries to evade answering the analyst's question directly, the similarity between both parties' content becomes lower, and we try to quantify the part where the question and answer do not overlap. By looking into the transcripts and the Topic-Switching Index both visually and manually, we find that our calculated index does capture this information. Next, we show that the stock price movement is negatively correlated with the Topic-Switching Index, confirming that investors notice the evasive talking strategy adopted by the manager and show their suspicion in their investment choices. Moreover, we demonstrate that this feature has predictive power and performs better than using FinBERT directly, which is our benchmark.
There is still much future work left to be done. It would be helpful to test our hypothesis on a larger dataset including companies not listed on the S&P 500. In addition, we will expand the testing time range to check whether the Topic-Switching Index has an effect on the stock price over a longer period such as a week. The next step of our research is to examine whether the market is over-reacting to the manager's evasion in firm disclosure by looking into the predictability of our index for the company's future performance.
Appendix
### Sample Transcript
The following is a sample of transcript with Low Topic Switching Score:
"_Shannon Cross: Tim, can you talk a bit about what you are seeing in China with 70% year over year growth in the Greater China revenue, and clearly very strong iPhone. If you can talk a bit about what consumers are saying, what the carriers are saying, in terms of demand and opportunity. Just any color, because clearly itis quite strong._
_Tim Cook: Yeah, it was an incredible quarter. We were up 71% year over year. We set a record in China for revenues. We did that now in a quarter that included Chinese New Year, and so we have the help of a strong holiday season, much like the U.S. has a strong season in December. China is obviously in the March quarter. iPhone led the way. It was up over 70% year on year. And the current estimates from Kantar are that that would mean that we would gain more than 9 points of share on a year over year basis. And so by everything I can see, we did extremely well. The Mac also had an unbelievable quarter in China, and I am particularly very happy with this, that Mac unit sales were up 31%. And like the rest of the world, or most of the rest of the world, IDC is projecting that PC sales in China contracted by 5% last quarter. And so once again bucking the tide. Also, in China, consistent with the company but at a much different rate, the App Store had a record quarter and grew over 100% year over year. And so you can see the iPhone, the Mac, and the App Store adding, and with the iPad in PRC, not in Greater China, but in the PRC, iPad had its best quarter ever, higher than all the others, and also grew in a market that contracted for the overall market. And
so really and truly, it's sort of everything you look at in China was extremely good. We have been working significantly on expanding our ecosystem there, and so we added Union Pay as a payment option for customers. We increased the iPhone point of sales to over 40,000 during the quarter. That is up about 9\(\%\) year on year. And more importantly than the total number, we are in many more cities than we were before. We worked significantly on our online store. Our online store revenue was up over three times year over year. As you probably heard us say before, we have opened several stores in China recently. We are now at 21 in Greater China and we are on track still to achieve 40 stores by the middle of next year. The online store will also be expanding from around 319 cities to where they can hit two day delivery to 365 cities. So adding about 50 new cities by the end of this quarter. And so the net is we are investing a lot across the board in our infrastructure, in our products, on partnering with different companies. The Chinese developers are coming on in significant numbers. We have now made payments to developers in Greater China of almost 5 billion over half of which was in the last 12 months. And so you can see this enormous momentum building in the developer community there as well. And so lots of positive things, and you know, as you probably heard me say before, I have never seen as many people coming into the middle class as they are in China. And that is where the bulk of our sales are going. And so we are really proud of the results there and continue to invest in the country._
The second example is a sample of transcript with high Topic Switching Score:
" _Judah Frommer Okay, that makes sense. And then touching on the gross margin performance, I mean you have lapped Harris Teeter fully now and we still don't see a lot of |
2309.13337 | On the Asymptotic Learning Curves of Kernel Ridge Regression under
Power-law Decay | The widely observed 'benign overfitting phenomenon' in the neural network
literature raises the challenge to the 'bias-variance trade-off' doctrine in
the statistical learning theory. Since the generalization ability of the 'lazy
trained' over-parametrized neural network can be well approximated by that of
the neural tangent kernel regression, the curve of the excess risk (namely, the
learning curve) of kernel ridge regression attracts increasing attention
recently. However, most recent arguments on the learning curve are heuristic
and are based on the 'Gaussian design' assumption. In this paper, under mild
and more realistic assumptions, we rigorously provide a full characterization
of the learning curve: elaborating the effect and the interplay of the choice
of the regularization parameter, the source condition and the noise. In
particular, our results suggest that the 'benign overfitting phenomenon' exists
in very wide neural networks only when the noise level is small. | Yicheng Li, Haobo Zhang, Qian Lin | 2023-09-23T11:18:13Z | http://arxiv.org/abs/2309.13337v1 | # On the Asymptotic Learning Curves of Kernel Ridge Regression under Power-law Decay
###### Abstract
The widely observed 'benign overfitting phenomenon' in the neural network literature raises the challenge to the 'bias-variance trade-off' doctrine in the statistical learning theory. Since the generalization ability of the 'lazy trained' over-parametrized neural network can be well approximated by that of the neural tangent kernel regression, the curve of the excess risk (namely, the learning curve) of kernel ridge regression attracts increasing attention recently. However, most recent arguments on the learning curve are heuristic and are based on the 'Gaussian design' assumption. In this paper, under mild and more realistic assumptions, we rigorously provide a full characterization of the learning curve: elaborating the effect and the interplay of the choice of the regularization parameter, the source condition and the noise. In particular, our results suggest that the 'benign overfitting phenomenon' exists in very wide neural networks only when the noise level is small.
## 1 Introduction
Kernel methods, in particular kernel ridge regression (KRR), have been one of the most popular algorithms in machine learning. Its optimality under various settings has been an active topic since Caponnetto and De Vito (2007), Andreas Christmann (2008). The renaissance of kernel methods arising from the neural tangent kernel (NTK) theory (Jacot et al., 2018), which shows that over-parametrized neural networks can be well approximated by certain kernel regression with the corresponding NTK, has posed further challenges about the interplay of generalization, regularization and noise level. For example, it has been observed empirically that over-parametrized neural networks can fit any data perfectly but also generalize well (Zhang et al., 2017), which contradicts to our traditional belief of bias-variance trade-off (Vapnik, 1999).
The aforementioned 'benign overfitting phenomenon' that overfitted neural networks generalize well attracts lots of attention recently. Researchers provide various explanations to reconcile the contradiction between it and the bias-variance trade-off principle. For example, Belkin et al. (2019) proposed the 'double descent theory' to explain why large model can generalize well; some other works (e.g., Liang and Rakhlin (2020)) argued that kernel interpolating estimators can generalize well in high dimensional settings. In contrast to the 'benign overfitting phenomenon', several other works (e.g., Rakhlin and Zhai (2018), Li et al. (2023a)) recently showed that kernel interpolation can
not generalize in traditional fixed dimension setting. In order to understand the 'benign overfitting phenomenon', it would be of great interest to characterize the learning curve: the curve of the exact order of the generalization error of a certain algorithm (e.g., KRR) varying with respect to different choices of regularization parameters.
Recently, several works (e.g., Bordelon et al. (2020); Cui et al. (2021)) depicted the learning curve of KRR under the Gaussian design assumption that the eigenfunctions (see (5)) are i.i.d. Gaussian random functions. Though it is easy to figure out that the Gaussian design assumption can not be true in most scenarios, with some heuristic arguments, Cui et al. (2021) provide a description of the learning curves of KRR with respect to the regularization, source condition and noise levels. These works offered us some insights on the learning curve of KRR which strongly suggests that the learning curve should be U-shaped if the observations are noisy or monotone decreasing if the observations are noiseless.
In this paper, we consider the learning curves of KRR under the usual settings (without the Gaussian design assumption). Under mild assumptions, we rigorously prove the asymptotic rates of the excess risk, including both upper and lower bounds. These rates show the interplay of the eigenvalue decay of the kernel, the relative smoothness of the regression function, the noise and the choice of the regularization parameter. As a result, we obtain the traditional U-shaped learning curve for the noisy observation case and a monotone decreasing learning curve for the noiseless case, providing a full picture of the generalization of KRR in the asymptotic sense. Combined with the NTK theory, our results may also suggest that 'the benign overfitting phenomenon' may not exist if one trains a very wide neural network.
### Our contributions
The main contribution of this paper is that we remove the unrealistic Gaussian design assumption in previous non-rigorous works (Bordelon et al., 2020; Cui et al., 2021) and provide mathematically solid proof of the exact asymptotic rates of KRR with matching upper and lower bounds.
To be precise, let us introduce the quantities \(\lambda\), the regularization parameter in (1); \(\beta\), the eigenvalue decay rate in (6), which characterizes the span of the underlying reproducing kernel Hilbert space (RKHS); and \(s\), the smoothness index in (12), describes the relative smoothness of the regression function with respect to the RKHS. Here we note that larger \(\beta\) implies better regularity the RKHS and also larger \(s\) also implies better relative smoothness. Then, the asymptotic rates of the generalization error (excess risk) \(R(\lambda)\) in the noisy case is roughly
\[R(\lambda)=\begin{cases}\Theta\big{(}\lambda^{\min(s,2)}+\sigma^{2}\lambda^{- 1/\beta}/n\big{)},&\text{if}\quad\lambda=\Omega(n^{-\beta});\\ \Omega(\sigma^{2}),&\text{if}\quad\lambda=O(n^{-\beta});\end{cases}\]
where \(n\) is the number of the samples and \(\sigma^{2}\) is the noise level. This result justifies the traditional U-shaped learning curve (see also Figure 1 on page 1) with respect to the regularization parameter.
For the technical part, we use the bias-variance decomposition and determine the exact rates of the both terms. Since the variance term was already considered in Li et al. (2023), the main focus of this work is the bias term. Our technical contributions include:
* When the regularization parameter \(\lambda\) is not so small, that is, \(\lambda=\Omega(n^{-\beta})\), we provide sharp estimates of the asymptotic orders (Lemma 4.1) of the bias term with both upper and lower bounds. Our result holds for both the well-specified case (\(s\geq 1\)) and the mis-specified case (\(s\in(0,1)\)), which improves the upper bounds given in Zhang et al. (2023).
* We further show an upper bound (Lemma A.12) of the bias term in the nearly interpolating case, i.e., \(\lambda=O(n^{-\beta})\). The upper bound is tight and matches the information-theoretic lower bound provided in Proposition 4.4.
* Combining these results, we provide learning curves of KRR for both the noisy case (Theorem 3.2) and the noiseless case (Theorem 3.4). The results justify our traditional belief of the bias-variance trade-off principle.
* Our new techniques can also be generalized to other settings and might be of independent interest.
### Related works
The optimality of kernel ridge regression has been studied extensively (Caponnetto and De Vito, 2007; Steinwart et al., 2009; Fischer and Steinwart, 2020; Zhang et al., 2023a). Caponnetto and De Vito (2007) provided the classical optimality result of KRR in the well-specified case and the subsequent works further considered the mis-specified case. However, these works only provided an upper bound and the worst-case (minimax) lower bound, which are not sufficient for determining the precise learning curve. In order to answer the "benign overfitting" phenomenon (Bartlett et al., 2020; Liang and Rakhlin, 2020), several works (Rakhlin and Zhai, 2018; Buchholz, 2022; Beaglehole et al., 2022) tried to provide a lower bound for the kernel interpolation, which is a limiting case of KRR, but these works only focused on particular kernels and their techniques can hardly be generalized to provide a lower bound for KRR.
Another line of recent works considered the generalization performance of KRR under the Gaussian design assumption of the eigenfunctions (Bordelon et al., 2020; Jacot et al., 2020; Cui et al., 2021; Mallinar et al., 2022). In particular, the learning curves of KRR was described in Bordelon et al. (2020); Cui et al. (2021), but heuristic arguments are also made in addition to the unrealistic Gaussian design assumption. Though the heuristic arguments are inspirational, a rigorous proof is indispensable if one plans to perform further investigations. In this work, we provide the first rigorous proof for most scenarios of the smoothness \(s\), eigenvalue decay rate \(\beta\), noise level \(\sigma^{2}\) and the regularization parameter \(\lambda\) based on the most common/realistic assumptions.
Recently, in order to show the so-called "saturation effect" in KRR, Li et al. (2023b) proved the exact asymptotic order of both the bias and the variance term when the regression function is very smooth and the regularization parameter \(\lambda\) is relatively large. Inspired by their analysis, Li et al. (2023a) showed the exact orders of the variance term. Our work further determines the orders of the bias term, completing the full learning curve of KRR.
KRR is also connected with Gaussian process regression (Kanagawa et al., 2018). Jin et al. (2021) claimed to establish the learning curves for Gaussian process regression and thus for KRR. However, as pointed out in Zhang et al. (2023b), there is a gap in their argument. Moreover, their results are also more restrictive than ours, see Section 3.3 for a comparison.
NotationsWe write \(L^{p}(\mathcal{X},\mathrm{d}\mu)\) for the Lebesgue space and sometimes abbreviate it as \(L^{p}\). We use asymptotic notations \(O(\cdot),\;o(\cdot),\;\Omega(\cdot)\) and \(\Theta(\cdot)\), and use \(\tilde{\Theta}(\cdot)\) to suppress logarithm terms. We also write \(a_{n}\asymp b_{n}\) for \(a_{n}=\Theta(b_{n})\). We will also use the probability versions of the asymptotic notations such as \(O_{\mathbb{P}}(\cdot)\). Moreover, to present the results more clearly, we denote \(a_{n}=O^{\mathrm{poly}}(b_{n})\) if \(a_{n}=O(n^{p}b_{n})\) for any \(p>0\), \(a_{n}=\Omega^{\mathrm{poly}}(b_{n})\) if \(a_{n}=\Omega(n^{-p}b_{n})\) for any \(p>0\), \(a_{n}=\Theta^{\mathrm{poly}}(b_{n})\) if \(a_{n}=O^{\mathrm{poly}}(b_{n})\), and \(a_{n}=\Omega^{\mathrm{poly}}(b_{n})\); and we add a subscript \({}_{\mathbb{P}}\) for their probability versions.
## 2 Preliminaries
Let \(\mathcal{X}\subset\mathbb{R}^{d}\) be compact and \(\rho\) be a probability measure on \(\mathcal{X}\times\mathbb{R}\), whose marginal distribution on \(\mathcal{X}\) is denoted by \(\mu\). Suppose that we are given \(n\) i.i.d. samples \((x_{1},y_{1}),\ldots,(x_{n},y_{n})\) from \(\rho\). Let \(k\) be a continuous positive definite kernel \(k\) over \(\mathcal{X}\) and \(\mathcal{H}\) be the separable reproducing kernel Hilbert space (RKHS) associated with \(k\). Then, kernel ridge regression (KRR) obtains the regressor \(\hat{f}_{\lambda}\) via the following convex optimization problem
\[\hat{f}_{\lambda}=\operatorname*{arg\,min}_{f\in\mathcal{H}}\left(\frac{1}{n} \sum_{i=1}^{n}(y_{i}-f(x_{i}))^{2}+\lambda\|f\|_{\mathcal{H}}^{2}\right), \tag{1}\]
where \(\lambda>0\) is the regularization parameter. Let us denote \(X=(x_{1},\ldots,x_{n})\) and \(\mathbf{y}=(y_{1},\ldots,y_{n})^{T}\). A closed form of (1) can be provided by the representer theorem (Andreas Christmann, 2008):
\[\hat{f}_{\lambda}(x)=\mathbb{K}(x,X)(\mathbb{K}(X,X)+n\lambda)^{-1}\mathbf{y} \tag{2}\]
where \(\mathbb{K}(x,X)=(k(x,x_{1}),\ldots,k(x,x_{n}))\) and \(\mathbb{K}(X,X)=\big{(}k(x_{i},x_{j})\big{)}_{n\times n}\).
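For illustration, the closed form (2) can be implemented in a few lines of NumPy; the kernel and the toy data below are placeholders (the kernel \(k(x,y)=\min(x,y)\) is the one used later in Section 5).

```
import numpy as np

def krr_fit(X, y, k, lam):
    """Return the predictor f_hat(x) = K(x, X) (K(X, X) + n * lam * I)^{-1} y from (2)."""
    n = len(X)
    K = np.array([[k(xi, xj) for xj in X] for xi in X])
    alpha = np.linalg.solve(K + n * lam * np.eye(n), y)
    return lambda x: np.array([k(x, xi) for xi in X]) @ alpha

# Toy example with the kernel k(x, y) = min(x, y) on [0, 1].
k = lambda u, v: min(u, v)
X = np.sort(np.random.rand(100))
y = np.sin(1.5 * np.pi * X) + 0.05 * np.random.randn(100)
f_hat = krr_fit(X, y, k, lam=1e-3)
f_hat(0.3)                                # prediction at a new point
```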
In terms of the generalization performance of \(\hat{f}_{\lambda}\), we consider the excess risk with respect to the squared loss
\[\mathbb{E}_{x\sim\mu}\left[\hat{f}_{\lambda}(x)-f_{\rho}^{*}(x)\right]^{2}=\left\| \hat{f}_{\lambda}-f_{\rho}^{*}\right\|_{L^{2}(\mathcal{X},\mathrm{d}\mu)}^{2}, \tag{3}\]
where \(f_{\rho}^{*}(x)\coloneqq\mathbb{E}_{\rho}[y\mid x]\) is the conditional expectation and is also referred to as the regression function. We aim to provide asymptotic orders of (3) with respect to \(n\).
### The integral operator
We will introduce the integral operator, which is crucial for the analysis, as the previous works (Caponnetto and De Vito, 2007; Lin et al., 2018). Denote by \(\mu\) the marginal probability measure of \(\rho\) on \(\mathcal{X}\). Since \(k\) is continuous and \(\mathcal{X}\) is compact, let us assume \(\sup_{x\in\mathcal{X}}k(x,x)\leq\kappa^{2}\). Then, it is known (Andreas Christmann, 2008; Steinwart and Scovel, 2012) that we have the natural embedding \(S_{\mu}:\mathcal{H}\to L^{2}\), which is a Hilbert-Schmidt operator with Hilbert-Schmidt norm \(\left\|S_{\mu}\right\|_{\mathrm{HS}}\leq\kappa\). Let \(S_{\mu}^{*}:L^{2}\to\mathcal{H}\) be the adjoint operator of \(S_{\mu}\) and \(T=S_{\mu}S_{\mu}^{*}:L^{2}\to L^{2}\). Then, it is easy to show that \(T\) is an integral operator given by
\[(Tf)(x)=\int_{\mathcal{X}}k(x,y)f(y)\mathrm{d}\mu(y), \tag{4}\]
and it is self-adjoint, positive and trace-class (thus compact) with trace norm \(\left\|T\right\|_{1}\leq\kappa^{2}\)(Caponnetto and De Vito, 2007; Steinwart and Scovel, 2012). Moreover, the spectral theorem of compact self-adjoint operators and Mercer's theorem (Steinwart and Scovel, 2012) yield the decompositions
\[T=\sum_{i\in N}\lambda_{i}\left\langle\cdot,e_{i}\right\rangle_{L^{2}}e_{i}, \qquad k(x,y)=\sum_{i\in N}\lambda_{i}e_{i}(x)e_{i}(y), \tag{5}\]
where \(N\subseteq\mathbb{N}\) is an index set, \(\left\{\lambda_{i}\right\}_{i\in N}\) is the set of positive eigenvalues of \(T\) in descending order, and \(e_{i}\) is the corresponding eigenfunction. Furthermore, \(\left\{e_{i}\right\}_{i\in N}\) forms an orthonormal basis of \(\overline{\mathrm{Ran}\,S_{\mu}}\subseteq L^{2}\) and \(\left\{\lambda_{i}^{1/2}e_{i}\right\}_{i\in N}\) forms an orthonormal basis of \(\overline{\mathrm{Ran}\,S_{\mu}^{*}}\subseteq\mathcal{H}\).
The eigenvalues \(\lambda_{i}\) actually characterize the span of the RKHS and the interplay between \(\mathcal{H}\) and \(\mu\). Since we are interested in the infinite-dimensional case, we will assume \(N=\mathbb{N}\) and assume the following polynomial eigenvalue decay as in the literature (Caponnetto and De Vito, 2007; Fischer and Steinwart, 2020; Li et al., 2023), which is also referred to as the capacity condition or effective dimension condition. Larger \(\beta\) implies better regularity of the functions in the RKHS.
**Assumption 1** (Eigenvalue decay).: There is some \(\beta>1\) and constants \(c_{\beta},C_{\beta}>0\) such that
\[c_{\beta}i^{-\beta}\leq\lambda_{i}\leq C_{\beta}i^{-\beta}\quad(i=1,2,\dots), \tag{6}\]
where \(\lambda_{i}\) is the eigenvalue of \(T\) defined in (5).
Such a polynomial decay is satisfied for the well-known Sobolev kernel (Fischer and Steinwart, 2020), Laplace kernel and, of most interest, neural tangent kernels for fully-connected multilayer neural networks (Bietti and Mairal, 2019; Bietti and Bach, 2020; Lai et al., 2023).
### The embedding index of an RKHS
We will consider the embedding index of an RKHS to sharpen our analysis. Let us first define the fractional power \(T^{s}:L^{2}\to L^{2}\) for \(s\geq 0\) by

\[T^{s}(f)=\sum_{i\in N}\lambda_{i}^{s}\left\langle f,e_{i}\right\rangle_{L^{2}} e_{i}. \tag{7}\]
Then, the interpolation space (Steinwart and Scovel, 2012; Fischer and Steinwart, 2020; Li et al., 2023) \([\mathcal{H}]^{s}\) is defined by
\[[\mathcal{H}]^{s}=\mathrm{Ran}\,T^{s/2}=\left\{\sum_{i\in N}a_{i}\lambda_{i}^{ s/2}e_{i}\ \Big{|}\ \sum_{i\in N}a_{i}^{2}<\infty\right\}\subseteq L^{2}, \tag{8}\]
with the norm \(\left\|\sum_{i\in N}a_{i}\lambda_{i}^{s/2}e_{i}\right\|_{\left[\mathcal{H}\right]^{ s}}=\left(\sum_{i\in N}a_{i}^{2}\right)^{1/2}\). One may easily verify that \([\mathcal{H}]^{s}\) is also a separable Hilbert space with an orthonormal basis \(\left\{\lambda_{i}^{s/2}e_{i}\right\}_{i\in N}\). Moreover, it is clear that \([\mathcal{H}]^{0}=\overline{\operatorname{Ran}S_{\mu}}\subseteq L^{2}\) and \([\mathcal{H}]^{1}=\overline{\operatorname{Ran}S_{\mu}^{*}}\subseteq\mathcal{H}\). It can also be shown that if \(s_{1}>s_{2}\geq 0\), the inclusions \([\mathcal{H}]^{s_{1}}\hookrightarrow[\mathcal{H}]^{s_{2}}\) are compact (Steinwart and Scovel, 2012).
Now, we say \(\mathcal{H}\) has an embedding property of order \(\alpha\in(0,1]\) if \([\mathcal{H}]^{\alpha}\) can be continuously embedded into \(L^{\infty}(\mathcal{X},\mathrm{d}\mu)\), that is, the operator norm
\[\left\|[\mathcal{H}]^{\alpha}\hookrightarrow L^{\infty}(\mathcal{X},\mu) \right\|=M_{\alpha}<\infty. \tag{9}\]
Moreover, Fischer and Steinwart (2020, Theorem 9) shows that
\[\left\|[\mathcal{H}]^{\alpha}\hookrightarrow L^{\infty}(\mathcal{X},\mu) \right\|=\left\|\kappa_{\mu}^{\alpha}\right\|_{L^{\infty}}\coloneqq\operatorname {ess\,sup}_{x\in\mathcal{X},\ \mu}\sum_{i\in N}\lambda_{i}^{\alpha}e_{i}(x)^{2}. \tag{10}\]
Therefore, since \(\sup_{x\in\mathcal{X}}k(x,x)\leq\kappa^{2}\), we know that (9) always holds for \(\alpha=1\). By the inclusion relation of interpolation spaces, it is clear that if \(\mathcal{H}\) has the embedding property of order \(\alpha\), then it has the embedding properties of order \(\alpha^{\prime}\) for any \(\alpha^{\prime}\geq\alpha\). Consequently, we may introduce the following definition (Zhang et al., 2023b):
**Definition 2.1**.: The embedding index \(\alpha_{0}\) of an RKHS \(\mathcal{H}\) is defined by
\[\alpha_{0}=\inf\left\{\alpha:\left\|[\mathcal{H}]^{\alpha}\hookrightarrow L^{ \infty}(\mathcal{X},\mu)\right\|=M_{\alpha}<\infty\right\}. \tag{11}\]
It is shown in Fischer and Steinwart (2020, Lemma 10) that \(\alpha_{0}\geq 1/\beta\), and we assume that equality holds, as stated in the following assumption.
**Assumption 2** (Embedding index).: The embedding index \(\alpha_{0}=1/\beta\), where \(\beta\) is the eigenvalue decay in (6).
Lots of the usual RKHSs satisfy this embedding index condition. It is shown in Steinwart et al. (2009) that Assumption 2 holds if the eigenfunctions are uniformly bounded, namely \(\sup_{i\in N}\left\|e_{i}\right\|_{L^{\infty}}<\infty\). Moreover, Assumption 2 also holds for the Sobolev RKHSs, RKHSs associated with periodic translation invariant kernels and RKHSs associated with dot-product kernels on spheres, see Zhang et al. (2023a, Section 4).
## 3 Main Results
Before presenting our main results, we have to introduce a source condition on the regression function. Since we will establish both precise learning rates, we have to characterize the exact smoothness order of \(f_{\rho}^{*}\) rather than merely assume \(f_{\rho}^{*}\) belongs to some interpolation space \([\mathcal{H}]^{s}\).
**Assumption 3** (Source condition).: There are some \(s>0\) and a sequence \((a_{i})_{i\geq 1}\) such that
\[f_{\rho}^{*}=\sum_{i=1}^{\infty}a_{i}\lambda_{i}^{s/2}i^{-1/2}e_{i} \tag{12}\]
and \(0<c\leq|a_{i}|\leq C\) for some constants \(c,C\).
**Remark 3.1**.: Assumption 3 is also considered in Cui et al. (2021, Eq. (8)) and a slightly weaker version of it is given in Jin et al. (2021, Assumption 5). We only consider this simple form since there is no essential difference in the proof to consider the weaker version. From the definition (8) we can see that Assumption 3 implies \(f_{\rho}^{*}\in[\mathcal{H}]^{t}\) for any \(t<s\) but \(f_{\rho}^{*}\notin[\mathcal{H}]^{s}\).
### Noisy case
Let us first consider the noisy case with the following assumption:
**Assumption 4** (Noise).: We assume
\[\mathbb{E}_{(x,y)\sim\rho}\left[\left(y-f_{\rho}^{*}(x)\right)^{2}\ \Big{|}\ x \right]=\sigma^{2}>0,\quad\mu\text{-a.e.}\ x\in\mathcal{X}. \tag{13}\]
For technical reasons, we further assume the kernel to be Hölder-continuous, a condition first introduced in Li et al. (2023b). This assumption is satisfied for the Laplace kernel, Sobolev kernels and neural tangent kernels.
**Assumption 5**.: The kernel \(k\) is Hölder-continuous, that is, there exist some \(p\in(0,1]\) and \(L>0\) such that
\[|k(x_{1},x_{2})-k(y_{1},y_{2})|\leq L\|(x_{1},x_{2})-(y_{1},y_{2})\|_{\mathbb{R }^{d\times d}}^{p},\quad\forall x_{1},x_{2},y_{1},y_{2}\in\mathcal{X}. \tag{14}\]
**Theorem 3.2**.: _Under Assumptions 1-5, suppose \(\lambda\asymp n^{-\theta}\) for \(\theta>0\). Then,_
\[\mathbb{E}\left[\left\|\hat{f}_{\lambda}-f_{\rho}^{\star}\right\|_{L^{2}}^{2} \,\,\big{|}\,X\right]=\begin{cases}\tilde{\Theta}_{\mathbb{P}}\big{(}n^{- \min(s,2)\theta}+\sigma^{2}n^{-(1-\theta/\beta)}\big{)},&\text{if}\quad\theta< \beta\\ \Omega_{\mathbb{P}}^{\mathrm{poly}}\big{(}\sigma^{2}\big{)},&\text{if}\quad \theta\geq\beta,\end{cases} \tag{15}\]
_where \(\tilde{\Theta}_{\mathbb{P}}\) can be replaced with \(\Theta_{\mathbb{P}}\) for the first case if \(s\neq 2\)._
**Remark 3.3**.: The two terms in the first case in Theorem 3.2 actually correspond to the bias and the variance term respectively. Balancing the two terms, we find the optimal regularization is \(\theta_{\mathrm{op}}=\frac{\beta}{\tilde{s}\beta+1}\) and the optimal rate is \(\frac{\tilde{s}\beta}{\tilde{s}\beta+1}\), where \(\tilde{s}=\min(s,2)\), which recovers the classical optimal rate results (Caponnetto and De Vito, 2007). Moreover, while we treat \(\sigma^{2}\) as fixed for simplicity, we can also allow \(\sigma^{2}\) to vary with \(n\). Then, we can recover the results in Cui et al. (2021).
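For the reader's convenience, the balancing computation behind Remark 3.3 is a one-line calculation: equating the exponents of the two terms in (15) gives

\[\tilde{s}\theta=1-\frac{\theta}{\beta}\iff\theta_{\mathrm{op}}=\frac{\beta}{\tilde{s}\beta+1},\qquad\text{so that}\qquad\tilde{s}\,\theta_{\mathrm{op}}=\frac{\tilde{s}\beta}{\tilde{s}\beta+1}.\]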
Figure 1: An illustration of the learning curves when choosing \(\lambda=n^{-\theta}\). First row: The bias-variance plot and the error curves for the noisy and noiseless cases. Second row: Two phase diagrams of the asymptotic rates of the excess risk with respect to the parameter pairs \((\theta,s)\) and \((\theta,\tau)\), where we set \(\sigma^{2}=n^{-\tau}\) and \(\tilde{s}=\min(s,2)\). In the “underfitting” (“overfitting”) region, bias (variance) is dominating. The “interpolating” region refers to the extreme case of overfitting in which the excess risk is lower bounded by a constant. For the first diagram we consider the case of constant noise. For the second diagram, the red vertical line shows the crossover from the noisy regime to the noiseless regime, and an upper bound for the blank area in the upper-right corner is not yet known.
### Noiseless case
**Theorem 3.4**.: _Under Assumptions 1-3, assume further that the noise is zero, i.e., \(y=f_{\rho}^{*}(x)\). Then, we have:_
* _Suppose_ \(\lambda\asymp n^{-\theta}\) _for_ \(\theta\in(0,\beta)\)_, we have_ \[\mathbb{E}\left[\left\|\hat{f}_{\lambda}-f_{\rho}^{*}\right\|_{L^{2}}^{2} \Bigm{|}X\right]=\tilde{\Theta}_{\mathbb{P}}\Big{(}n^{-\min(s,2)\theta}\Big{)},\] (16) _where_ \(\tilde{\Theta}_{\mathbb{P}}\) _can be replaced with_ \(\Theta_{\mathbb{P}}\) _if_ \(s\neq 2\)_._
* _Suppose_ \(\lambda\asymp n^{-\theta}\) _for_ \(\theta\geq\beta\) _and assume further that_ \(s>1\)_. Then,_ \[\mathbb{E}\left[\left\|\hat{f}_{\lambda}-f_{\rho}^{*}\right\|_{L^{2}}^{2} \Bigm{|}X\right]=O_{\mathbb{P}}^{\mathrm{poly}}\Big{(}n^{-\min(s,2)\beta} \Big{)}.\] (17) _Moreover, we have the information-theoretical lower rate:_ \[\sup_{\left\|f_{\rho}^{*}\right\|_{[\mathcal{H}]^{s}}\leq R} \mathbb{E}\left[\left\|\hat{f}_{\lambda}-f_{\rho}^{*}\right\|_{L^{2}}^{2} \Bigm{|}X\right]=\Omega(n^{-s\beta}),\] (18) _where_ \(R>0\) _is a fixed constant._
**Remark 3.5**.: Theorem 3.4 shows that the generalization error of KRR in the noiseless case is monotone decreasing when \(\theta\) increases and reaches the optimal rate \(n^{-\beta}\) when \(\theta\geq\beta\) if \(s\leq 2\). Since the case \(\theta\to\infty\) corresponds to kernel interpolation, our result implies that kernel interpolation is optimal when there is no noise. In contrast, as shown in Theorem 3.2 (or Li et al. [2023a]), kernel interpolation can not generalize in the noisy case. For the case \(s>2\), the KRR method suffers from saturation and the resulting convergence rate is limited to \(n^{-2\beta}\), while the possible lower rate is \(n^{-s\beta}\).
### Discussion
Our results provide a full picture of the generalization of KRR, which is in accordance with our traditional belief of the bias-variance trade-off principle: the generalization error is a U-shaped curve with respect to the regularization parameter \(\lambda\) in the noisy case and is monotone decreasing in the noiseless case. See Figure 1 on page 1 for an illustration.
Our rates coincide with the upper rates in the traditional KRR literature (Caponnetto and De Vito, 2007; Fischer and Steinwart, 2020). Moreover, our results also recover the learning curves in Cui et al. [2021], but we do not need the strong assumption of Gaussian design eigenfunctions as in Cui et al. [2021], which may not be true in most cases. Our assumptions are mild and hold for a large class of kernels including the Sobolev kernels and the neural tangent kernels (NTK) on spheres.
Our results are based on the bias-variance decomposition and determining the rates for each term respectively. In the proof of Li et al. [2023b], they determined the rates of the variance term under the condition that \(\theta<\frac{1}{2}\) and that of the bias term when \(s\geq 2\) and \(\theta<1\). The subsequent work Li et al. [2023a] proved the rates of the variance term when \(\theta<\beta\) and provided a near constant lower bound for \(\theta\geq\beta\). Considering the counterpart, our works further prove the rates of the bias term, which finally enables us to determine the complete learning curve of KRR.
The connection between KRR and Gaussian process regression also results in the connection between their learning curves. Jin et al. [2021] claimed to show learning curves for Gaussian process regression. However, regardless of the gap in their proof as pointed out in Zhang et al. [2023b], their results are more restrictive than ours. Considering a boundedness assumption of the eigenfunctions that \(\left\|e_{i}\right\|_{\infty}\leq Ci^{\tau}\) for some \(\tau\geq 0\), they could only cover the regime of \(\theta<\beta/(1+2\tau)\). Moreover, to approach the \(\theta=\beta\) regime for the \(\Omega(1)\) bound in the noisy case or the optimal rate in noiseless case, they have to require \(\tau=0\), that is, the eigenfunctions are uniformly bounded, but it is not true for some kernels such as dot-product kernels on spheres (and thus for NTK) since in general spherical harmonics are not uniformly bounded. In contrast, our embedding index assumption still holds in this case.
Proof sketch
We first introduce the following sample versions of the auxiliary integral operators, which are commonly used in the related literature (Caponnetto and De Vito, 2007; Fischer and Steinwart, 2020; Li et al., 2023b). We define the sampling operator \(K_{x}:\mathbb{R}\to\mathcal{H}\) by \(K_{x}y=yk(x,\cdot)\), whose adjoint \(K_{x}^{*}:\mathcal{H}\to\mathbb{R}\) is given by \(K_{x}^{*}f=f(x)\). The sample covariance operator \(T_{X}:\mathcal{H}\to\mathcal{H}\) is defined by
\[T_{X}\coloneqq\frac{1}{n}\sum_{i=1}^{n}K_{x_{i}}K_{x_{i}}^{*}, \tag{19}\]
and the sample basis function is \(g_{Z}\coloneqq\frac{1}{n}\sum_{i=1}^{n}K_{x_{i}}y_{i}\in\mathcal{H}\). As shown in Caponnetto and De Vito (2007), the operator form of KRR writes
\[\hat{f}_{\lambda}=(T_{X}+\lambda)^{-1}g_{Z}. \tag{20}\]
Let us further define
\[\tilde{g}_{Z}\coloneqq\mathbb{E}\left(g_{Z}|X\right)=\frac{1}{n}\sum_{i=1}^{ n}K_{x_{i}}f_{\rho}^{*}(x_{i})\in\mathcal{H}, \tag{21}\]
and
\[\tilde{f}_{\lambda}\coloneqq\mathbb{E}\left(\hat{f}_{\lambda}|X \right)=\left(T_{X}+\lambda\right)^{-1}\tilde{g}_{Z}\in\mathcal{H}. \tag{22}\]
Then, the traditional bias-variance decomposition (Li et al., 2023b; Zhang et al., 2023a) yields
\[\mathbb{E}\left(\left\|\hat{f}_{\lambda}-f_{\rho}^{*}\right\|_{L^{2}}^{2} \Bigm{|}X\right)=\mathbf{Bias}^{2}(\lambda)+\mathbf{Var}(\lambda), \tag{23}\]
where
\[\mathbf{Bias}^{2}(\lambda)\coloneqq\left\|\tilde{f}_{\lambda}-f_{\rho}^{*} \right\|_{L^{2}}^{2},\quad\mathbf{Var}(\lambda)\coloneqq\frac{\sigma^{2}}{n^{2 }}\sum_{i=1}^{n}\left\|(T_{X}+\lambda)^{-1}k(x_{i},\cdot)\right\|_{L^{2}}^{2}. \tag{24}\]
### The noisy case
To prove the desired result, we have to establish the asymptotic orders of both \(\mathbf{Bias}^{2}(\lambda)\) and \(\mathbf{Var}(\lambda)\). We first prove the asymptotic order of \(\mathbf{Bias}^{2}(\lambda)\) as one of our technical contributions. As far as we know, we are the first to provide such a lower bound in (25).
**Lemma 4.1**.: _Under Assumptions 1,2,3, suppose \(\lambda\asymp n^{-\theta}\) for \(\theta\in(0,\beta)\). Then,_
\[\mathbf{Bias}^{2}(\lambda)=\tilde{\Theta}_{\mathbb{P}}\Big{(}n^{- \min(s,2)\theta}\Big{)}, \tag{25}\]
_where \(\tilde{\Theta}_{\mathbb{P}}\) can be replaced with \(\Theta_{\mathbb{P}}\) if \(s\neq 2\)._
Proof sketch of Lemma 4.1.: Denote \(\tilde{s}=\min(s,2)\). We first introduce the regularized regression function \(f_{\lambda}\coloneqq T(T+\lambda)^{-1}f_{\rho}^{*}\) and triangle inequality implies
\[\mathbf{Bias}(\lambda)=\left\|\tilde{f}_{\lambda}-f_{\rho}^{*} \right\|_{L^{2}}\geq\left\|f_{\lambda}-f_{\rho}^{*}\right\|_{L^{2}}-\left\| \tilde{f}_{\lambda}-f_{\lambda}\right\|_{L^{2}}.\]
There is no randomness in the first term, and we can use the expansions (12) and (5) to show that \(\left\|f_{\lambda}-f_{\rho}^{*}\right\|_{L^{2}}=\tilde{\Theta}\left(n^{-\tilde{s}\theta}\right)\). Then, we have to show that the error term \(\left\|\tilde{f}_{\lambda}-f_{\lambda}\right\|_{L^{2}}\) is infinitesimal with respect to the main term, which is the main difficulty since it requires a refined analysis. Previous work only considers the case \(\theta=\frac{\beta}{\tilde{s}\beta+1}\) (corresponding to the optimal regularization) and shows an \(O(n^{-\tilde{s}\theta})\) bound rather than the \(o(n^{-\tilde{s}\theta})\) bound that we require. For the proof, we (1) apply the concentration techniques in Fischer and Steinwart (2020); (2) consider the \(L^{q}\)-embedding property in Zhang et al. (2023a) for the mis-specified case when \(s\) is small; (3) sharpen the estimation by exploiting the embedding property \(\alpha_{0}=1/\beta\) and \(\theta<\beta\). For the details, see Section 2.2 in the supplementary material.
The variance term has been analyzed in Li et al. (2023). We present the following proposition as a combination of Proposition 5.3 and Theorem 5.10 in Li et al. (2023).
**Proposition 4.2**.: Under Assumptions 1-5, suppose that \(\lambda\asymp n^{-\theta}\). Then,
\[\mathbf{Var}(\lambda)=\begin{cases}\Theta_{\mathbb{P}}^{\mathrm{poly}}\big{(} \sigma^{2}n^{-(1-\theta/\beta)}\big{)},&\quad\text{if}\quad\theta<\beta;\\ \Omega_{\mathbb{P}}^{\mathrm{poly}}\big{(}\sigma^{2}\big{)},&\quad\text{if} \quad\theta\geq\beta.\end{cases} \tag{26}\]
### The noiseless case
For the noiseless case, the variance term vanishes in (23), and thus we only need to consider the bias term. Since we have already established the estimation for large \(\lambda\) in Lemma 4.1, we focus on the case of small \(\lambda\).
**Lemma 4.3**.: _Under Assumptions 1,2,3, assume further \(s>1\). Suppose \(\lambda\asymp n^{-\theta}\) for \(\theta\geq\beta\). Then,_
\[\mathbf{Bias}^{2}(\lambda)=O_{\mathbb{P}}^{\mathrm{poly}}(n^{-\min(s,2)\beta }). \tag{27}\]
Proof sketch of Lemma 4.3.: Intuitively, we hope to bound \(\mathbf{Bias}^{2}(\lambda)\) with \(\mathbf{Bias}^{2}(\tilde{\lambda})\) for some \(\tilde{\lambda}>\lambda\) for which concentration still works. However, no monotonicity property of \(\mathbf{Bias}(\lambda)\) is directly available. Nevertheless, since \(f_{\rho}^{*}\in\mathcal{H}\) when \(s>1\), the bias term can be written as
\[\mathbf{Bias}(\lambda)=\big{\|}\lambda(T_{X}+\lambda)^{-1}f_{\rho}^{*}\big{\|} _{L^{2}}=\Big{\|}T^{\frac{1}{2}}\lambda(T_{X}+\lambda)^{-1}f_{\rho}^{*}\Big{\|} _{\mathcal{H}}\leq\Big{\|}T^{\frac{1}{2}}\lambda(T_{X}+\lambda)^{-1}\Big{\|} _{\mathscr{B}(\mathcal{H})}\big{\|}f_{\rho}^{*}\big{\|}_{\mathcal{H}}.\]
Then, by operator calculus we can show that
\[\big{\|}T^{\frac{1}{2}}\big{[}\lambda(T_{X}+\lambda)^{-1}\big{]}\big{\|}_{\mathscr{B}(\mathcal{H})}\leq\Big{\|}T^{\frac{1}{2}}\left[\tilde{\lambda}(T_{X}+\tilde{\lambda})^{-1}\right]\Big{\|}_{\mathscr{B}(\mathcal{H})}\]
reducing \(\lambda\) to \(\tilde{\lambda}\). Now, we can replace \(T_{X}\) with \(T\) using concentration results and derive the desired upper bound.
The following proposition shows that the upper bound in Lemma A.12 matches the information-theoretic lower bound. The proof follows the idea of the minimax principle (Micchelli and Wahba, 1979) and is deferred to the supplementary material.
**Proposition 4.4**.: Suppose Assumption 1 holds and \(s\geq 1\). For any \(X=(x_{1},\ldots,x_{n})\), we have
\[\sup_{\|f_{\rho}^{*}\|_{[\mathcal{H}]^{s}}\leq R}\mathbf{Bias}^{2}(\lambda)= \Omega\big{(}n^{-s\beta}\big{)}, \tag{28}\]
where we note that here \(\mathbf{Bias}(\lambda)\) is viewed as a function depending also on \(f_{\rho}^{*}\) and \(X\).
## 5 Experiments
Many numerical experiments on both synthetic and real data have been carried out to study the learning curves of KRR (Li et al., 2023; Cui et al., 2021). In this section, we consider numerical experiments on a toy model to verify our theory.
Let us consider the kernel \(k(x,y)=\min(x,y)\) and \(x\sim\mathcal{U}[0,1]\). Then, the corresponding RKHS is (Wainwright, 2019)
\[\mathcal{H}=\left\{f:[0,1]\to\mathbb{R}\ \Big{|}\ f\ \text{is absolutely continuous,}\ f(0)=0,\ \int_{0}^{1}(f^{\prime}(x))^{2}\mathrm{d}x<\infty\right\}\]
and the eigenvalue decay rate \(\beta=2\). Moreover, the eigensystem of \(k\) is known to be \(\lambda_{i}=\big{(}\frac{2i-1}{2}\pi\big{)}^{-2}\) and \(e_{i}(x)=\sqrt{2}\sin\big{(}\frac{2i-1}{2}\pi x\big{)}\), which allows us to directly compute the smoothness of certain functions. For some \(f^{*}\), we generate data from the model \(y=f^{*}(x)+\varepsilon\) where \(\varepsilon\sim\mathcal{N}(0,0.05)\) and perform KRR with \(\lambda=cn^{-\theta}\) for different \(\theta\)'s with some fixed constant \(c\). Then, we numerically compute the variance, bias and excess risk by Simpson's rule with \(N\gg n\) nodes. Repeating the experiment for \(n\) ranging from 1000 to 5000, we estimate the convergence rate \(r\) by a logarithmic least-squares fit \(\log\text{err}=r\log n+b\) on the resulting values (variance, bias and excess risk). The results are collected in Table 1. The estimated rates closely match the theoretical values, which supports our theory. For more experiments and further details, we refer to the supplementary material.
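To make the pipeline above easy to reproduce, the following Python sketch fits KRR for a single target function and reads off the excess-risk rate by the same log-log least-squares fit. It is an illustration rather than the code behind Table 1: the helper names are ours, the grid of sample sizes is coarser, only one noise draw is used per \(n\) (the reported results average over repetitions), and we read \(\mathcal{N}(0,0.05)\) as noise variance \(0.05\).

```python
import numpy as np
from scipy.integrate import simpson

rng = np.random.default_rng(0)

def krr_predictor(x_train, y_train, lam):
    """KRR with k(x, y) = min(x, y); returns a callable predictor."""
    n = len(x_train)
    K = np.minimum.outer(x_train, x_train)
    coef = np.linalg.solve(K + n * lam * np.eye(n), y_train)
    return lambda x: np.minimum.outer(x, x_train) @ coef

def excess_risk(f_hat, f_star, n_nodes=5001):
    """Approximate ||f_hat - f*||_{L^2([0,1])}^2 by Simpson's rule."""
    grid = np.linspace(0.0, 1.0, n_nodes)
    return simpson((f_hat(grid) - f_star(grid)) ** 2, x=grid)

f_star = lambda x: np.sin(2 * np.pi * x)   # smoothness s = 1.5
theta, c = 1.0, 0.005                      # lambda = c * n^{-theta}
noise_std = np.sqrt(0.05)                  # assumption: N(0, 0.05) means variance 0.05
ns = np.arange(1000, 5001, 1000)
errs = []
for n in ns:
    x = rng.uniform(0.0, 1.0, n)
    y = f_star(x) + noise_std * rng.normal(size=n)
    errs.append(excess_risk(krr_predictor(x, y, c * n ** (-theta)), f_star))

# log-log least squares: log(err) = r * log(n) + b
r, _ = np.polyfit(np.log(ns), np.log(errs), 1)
print(f"fitted excess-risk exponent: {r:.2f}")
```

For \(\theta=1.0\) and \(f^{*}(x)=\sin(2\pi x)\) the risk is variance-dominated, so the fitted exponent should be close to \(-(1-\theta/\beta)=-0.5\), in line with the corresponding cell of Table 1.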
## 6 Conclusion
In this paper, we rigorously establish the learning curves of KRR, showing the interplay between the eigenvalue decay of the kernel, the relative smoothness of the regression function, the noise and the choice of the regularization parameter. The results justify the traditional bias-variance trade-off principle and provide a full picture of the generalization performance of KRR. These results will help us better understand the generalization mystery of neural networks.
As for future work, we notice that for the nearly interpolating regime \(\theta\geq\beta\), some parts are still missing due to technical limitations. We expect that further analysis will establish the exact order of the variance term, similar to the one given in Mallinar et al. (2022) under the Gaussian design assumption. We also hypothesize that Lemma A.12 still holds in the mis-specified case (\(s<1\)).
## Acknowledgments and Disclosure of Funding
This work is supported in part by the Beijing Natural Science Foundation (Grant Z190001) and National Natural Science Foundation of China (Grant 11971257).
\begin{table}
\begin{tabular}{|c|c|c c|c c|c c|} \hline & \(f^{*}(x)=\) & \multicolumn{2}{c|}{\(\cos 2\pi x\) (\(s=\frac{1}{2}\))} & \multicolumn{2}{c|}{\(\sin 2\pi x\) (\(s=1.5\))} & \multicolumn{2}{c|}{\(\sin\frac{3}{2}\pi x\) (\(s=\infty\))} \\ \hline \(\theta\) & Variance & Bias & Risk & Bias & Risk & Bias & Risk \\ \hline
0.2 & 0.90 (0.90) & 0.13 (0.10) & 0.13 (0.10) & 0.34 (0.30) & 0.34 (0.30) & 0.40 (0.40) & 0.42 (0.40) \\ \hline
0.4 & 0.80 (0.80) & 0.22 (0.20) & 0.22 (0.20) & 0.68 (0.60) & 0.69 (0.60) & 0.82 (0.80) & **0.81 (0.80)** \\ \hline
0.5 & 0.75 (0.75) & 0.26 (0.25) & 0.26 (0.25) & 0.84 (0.75) & **0.79 (0.75)** & 1.04 (1.00) & 0.77 (0.75) \\ \hline
1.0 & 0.49 (0.50) & 0.54 (0.50) & **0.52 (0.50)** & 1.69 (1.50) & 0.49 (0.50) & 2.21 (2.00) & 0.49 (0.50) \\ \hline
2.0 & 0.00 (0.00) & 1.05 (1.00) & 0.09 (0.00) & 3.26 (3.00) & 0.00 (0.00) & 3.99 (4.00) & 0.00 (0.00) \\ \hline
3.0 & 0.00 (0.00) & 1.05 (1.00) & 0.09 (0.00) & 3.26 (3.00) & 0.00 (0.00) & 3.98 (4.00) & 0.00 (0.00) \\ \hline \end{tabular}
\end{table}
Table 1: Asymptotic rates of bias, variance and excess risk under three regressions and different choices of \(\theta\). The numbers in parenthesis are the theoretical values. The bolded cells correspond to the best rate over the choices of \(\theta\)’s.
## Appendix A Detailed proofs
The first step of the proof is the traditional bias-variance decomposition. Let us further define
\[\tilde{g}_{Z}\coloneqq\mathbb{E}\left(g_{Z}|X\right)=\frac{1}{n}\sum_{i=1}^{n}K _{x_{i}}f_{\rho}^{*}(x_{i})\in\mathcal{H}, \tag{29}\]
and
\[\tilde{f}_{\lambda}\coloneqq\mathbb{E}\left(\hat{f}_{\lambda}|X \right)=\left(T_{X}+\lambda\right)^{-1}\tilde{g}_{Z}\in\mathcal{H}. \tag{30}\]
Recalling (20), we have
\[\hat{f}_{\lambda} =\frac{1}{n}(T_{X}+\lambda)^{-1}\sum_{i=1}^{n}K_{x_{i}}y_{i}=\frac {1}{n}(T_{X}+\lambda)^{-1}\sum_{i=1}^{n}K_{x_{i}}(f_{\rho}^{*}(x_{i})+\epsilon_ {i})\] \[=(T_{X}+\lambda)^{-1}\tilde{g}_{Z}+\frac{1}{n}\sum_{i=1}^{n}(T_{X }+\lambda)^{-1}K_{x_{i}}\epsilon_{i},\]
so that
\[\hat{f}_{\lambda}-f_{\rho}^{*}=\left(\tilde{f}_{\lambda}-f_{\rho}^ {*}\right)+\frac{1}{n}\sum_{i=1}^{n}(T_{X}+\lambda)^{-1}K_{x_{i}}\epsilon_{i}.\]
Taking expectation over the noise \(\epsilon\) conditioned on \(X\), since \(\varepsilon|x\) are independent noise with mean 0 and variance \(\sigma^{2}\), we have
\[\mathbb{E}\left(\left\|\hat{f}_{\lambda}-f_{\rho}^{*}\right\|_{ L^{2}}^{2}\Bigm{|}X\right)=\mathbf{Bias}^{2}(\lambda)+\mathbf{Var}(\lambda), \tag{31}\]
where
\[\mathbf{Bias}^{2}(\lambda)\coloneqq\left\|\tilde{f}_{\lambda}-f_{ \rho}^{*}\right\|_{L^{2}}^{2},\quad\mathbf{Var}(\lambda)\coloneqq\frac{\sigma ^{2}}{n^{2}}\sum_{i=1}^{n}\left\|(T_{X}+\lambda)^{-1}k(x_{i},\cdot)\right\|_{ L^{2}}^{2}. \tag{32}\]
### The variance term
**Theorem A.1**.: _Under Assumptions 1-5, suppose that \(\lambda\asymp n^{-\theta}\). Then,_
\[\mathbf{Var}(\lambda)=\begin{cases}\Theta_{\mathbb{P}}^{\mathrm{ poly}}\big{(}\sigma^{2}n^{-(1-\theta/\beta)}\big{)},&\text{if}\quad\theta<\beta;\\ \Omega_{\mathbb{P}}^{\mathrm{poly}}\big{(}\sigma^{2}\big{)},&\text{if}\quad \theta\geq\beta.\end{cases} \tag{33}\]
The computation in Li et al. (2023b) shows that
\[\mathbf{Var}(\lambda)=\frac{\sigma^{2}}{n^{2}}\int_{\mathcal{X}} \mathbb{K}(x,X)(K+\lambda)^{-2}\mathbb{K}(X,x)\mathrm{d}\mu(x).\]
Then, Theorem A.1 directly follows from Proposition 5.3 and Theorem 5.10 in Li et al. (2023a).
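For concreteness, the display above can be evaluated numerically for the toy kernel \(k(x,y)=\min(x,y)\) of Section 5. In the sketch below we read \(K\) as the kernel matrix normalized by \(n\), i.e. \(K=\mathbb{K}(X,X)/n\) (a notational assumption on our part), under which the integrand equals \(\sigma^{2}\|(\mathbb{K}(X,X)+n\lambda I)^{-1}\mathbb{K}(X,x)\|^{2}\); the bias is computed from the matrix form of (30), \(\tilde{f}_{\lambda}(x)=\mathbb{K}(x,X)(\mathbb{K}(X,X)+n\lambda I)^{-1}f_{\rho}^{*}(X)\).

```python
import numpy as np
from scipy.integrate import simpson

def bias2_and_variance(x_train, f_star, lam, sigma2, n_nodes=2001):
    """Bias^2(lambda) and Var(lambda) for KRR with k(x, y) = min(x, y), x ~ U[0, 1]."""
    n = len(x_train)
    K_mat = np.minimum.outer(x_train, x_train)            # kernel matrix K(X, X)
    A = np.linalg.inv(K_mat + n * lam * np.eye(n))        # (K(X, X) + n*lam*I)^{-1}
    grid = np.linspace(0.0, 1.0, n_nodes)
    K_xg = np.minimum.outer(grid, x_train)                # K(x, X) on a quadrature grid
    f_tilde = K_xg @ (A @ f_star(x_train))                # conditional mean of f_hat given X
    bias2 = simpson((f_tilde - f_star(grid)) ** 2, x=grid)
    var = sigma2 * simpson(np.sum((K_xg @ A) ** 2, axis=1), x=grid)
    return bias2, var
```

Calling this with, say, \(f^{*}(x)=\sin(2\pi x)\), \(\sigma^{2}=0.05\) and \(\lambda=0.005\,n^{-\theta}\) for a few values of \(n\) lets one compare the decay of the two terms with the rates asserted in Theorem A.1 and Theorem A.2.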
### The bias term
**Theorem A.2**.: _Under Assumptions 1,2,3, suppose \(\lambda\asymp n^{-\theta}\) for \(\theta\in(0,\beta)\). Then,_
\[\mathbf{Bias}^{2}(\lambda)=\tilde{\Theta}_{\mathbb{P}}\left(n^{- \min(s,2)\theta}\right), \tag{34}\]
_where \(\tilde{\Theta}_{\mathbb{P}}\) can be replaced with \(\Theta_{\mathbb{P}}\) if \(s\neq 2\)._
Let us define the regularized version of the regression function
\[f_{\lambda}\coloneqq(T+\lambda)^{-1}Tf_{\rho}^{*}. \tag{35}\]
Then, the triangle inequality implies that
\[\mathbf{Bias}(\lambda)=\left\|\tilde{f}_{\lambda}-f_{\rho}^{*}\right\|_{L^{2}} \geq\left\|f_{\lambda}-f_{\rho}^{*}\right\|_{L^{2}}-\left\|\tilde{f}_{\lambda} -f_{\lambda}\right\|_{L^{2}} \tag{36}\]
Then, the proof of Theorem A.2 is the combination of the following Lemma A.3 (with \(\gamma=0\)) and Lemma A.4, showing that the main term \(\left\|f_{\lambda}-f_{\rho}^{*}\right\|_{L^{2}}=\tilde{\Theta}_{\mathbb{P}} \left(n^{-\min(s,2)\theta/2}\right)\) and the error term \(\left\|\tilde{f}_{\lambda}-f_{\lambda}\right\|_{L^{2}}=o_{\mathbb{P}}\left(n^ {-\min(s,2)\theta/2}\right)\).
**Lemma A.3**.: _Under Assumptions 1 and 3, for any \(0\leq\gamma<s\), we have_
\[\left\|f_{\lambda}-f_{\rho}^{*}\right\|_{[\mathcal{H}]^{\gamma}}^{2}\asymp\begin{cases}\lambda^{s-\gamma},&s-\gamma<2;\\ \lambda^{2}\ln\frac{1}{\lambda},&s-\gamma=2;\\ \lambda^{2},&s-\gamma>2.\end{cases} \tag{37}\]
Proof.: From the definition of interpolating norms, letting \(p=(s-\gamma)/2\), we have
\[\left\|f_{\lambda}-f_{\rho}^{*}\right\|_{[\mathcal{H}]^{\gamma}}^{2}=\sum_{i=1}^{\infty}a_{i}^{2}\frac{\lambda^{2}}{(\lambda_{i}+\lambda)^{2}}(\lambda_{i}^{s}i^{-1})\lambda_{i}^{-\gamma}\asymp\lambda^{2}\sum_{i=1}^{\infty}\left(\frac{\lambda_{i}^{p}}{\lambda_{i}+\lambda}\right)^{2}i^{-1}. \tag{38}\]
The result then follows by applying Proposition B.2 to the last series.
The following lemma shows that the error term in (36) is infinitesimal with respect to the main term; its proof relies on fine-grained concentration results established in Section A.3.
**Lemma A.4**.: _Under Assumptions 1-3. Suppose \(\lambda\asymp n^{-\theta}\) with \(\theta\in(0,\beta)\), then_
\[\left\|\tilde{f}_{\lambda}-f_{\lambda}\right\|_{L^{2}}=o_{\mathbb{P}}\left(n^ {-\min(s,2)\theta/2}\right) \tag{39}\]
Proof.: We begin with
\[\left\|\tilde{f}_{\lambda}-f_{\lambda}\right\|_{L^{2}} =\left\|T^{\frac{1}{2}}\left(\tilde{f}_{\lambda}-f_{\lambda}\right) \right\|_{\mathcal{H}}\] \[\leq\left\|T^{\frac{1}{2}}T_{\lambda}^{-\frac{1}{2}}\right\|\cdot \left\|T_{\lambda}^{\frac{1}{2}}T_{X\lambda}^{-1}T_{\lambda}^{\frac{1}{2}} \right\|\cdot\left\|T_{\lambda}^{-\frac{1}{2}}\left(\tilde{g}_{Z}-T_{X\lambda }f_{\lambda}\right)\right\|_{\mathcal{H}}. \tag{40}\]
From operator calculus we know \(\left\|T^{\frac{1}{2}}T_{\lambda}^{-\frac{1}{2}}\right\|\leq 1\). Moreover, since \(\theta<\beta\) and the embedding index \(\alpha_{0}=1/\beta\), by Lemma B.5 we get \(\left\|T_{\lambda}^{\frac{1}{2}}T_{X\lambda}^{-1}T_{\lambda}^{\frac{1}{2}} \right\|\leq 3\) with high probability as long as \(n\) is sufficiently large. For the last term in (40), we have
\[T_{\lambda}^{-\frac{1}{2}}\left(\tilde{g}_{Z}-T_{X\lambda}f_{ \lambda}\right) =T_{\lambda}^{-\frac{1}{2}}\left[\left(\tilde{g}_{Z}-\left(T_{X}+ \lambda+T-T\right)f_{\lambda}\right)\right]\] \[=T_{\lambda}^{-\frac{1}{2}}\left[\left(\tilde{g}_{Z}-T_{X}f_{ \lambda}\right)-\left(T+\lambda\right)f_{\lambda}+Tf_{\lambda}\right]\] \[=T_{\lambda}^{-\frac{1}{2}}\left[\left(\tilde{g}_{Z}-T_{X}f_{ \lambda}\right)-\left(g-Tf_{\lambda}\right)\right].\]
Therefore, Lemma A.5 and Lemma A.10 show that
\[\left\|T_{\lambda}^{-\frac{1}{2}}\left(\tilde{g}_{Z}-T_{X\lambda}f_{\lambda} \right)\right\|_{\mathcal{H}}=\left\|T_{\lambda}^{-\frac{1}{2}}\left[\left( \tilde{g}_{Z}-T_{X}f_{\lambda}\right)-\left(g-Tf_{\lambda}\right)\right] \right\|_{\mathcal{H}}=o_{\mathbb{P}}\left(n^{-\min(s,2)\theta/2}\right)\]
for both \(s>\alpha_{0}\) and \(s\leq\alpha_{0}\) cases.
### Approximation results
Let us further denote
\[\xi(x)=T_{\lambda}^{-\frac{1}{2}}(K_{x}f_{\rho}^{*}(x)-T_{x}f_{\lambda}). \tag{41}\]
Then, it is easy to check that
\[T_{\lambda}^{-\frac{1}{2}}\left[\left(\tilde{g}_{Z}-T_{X}f_{\lambda}\right)- \left(g-Tf_{\lambda}\right)\right]=\frac{1}{n}\sum_{i=1}^{n}\xi(x_{i})-\mathbb{ E}_{x\sim\mu}\xi(x).\]
The following lemma deals with the easy case when \(s>\alpha_{0}\).
**Lemma A.5**.: _Suppose Assumptions 1-3 hold and \(s>\alpha_{0}\). Let \(\lambda\asymp n^{-\theta}\) with \(\theta\in(0,\beta)\) and \(\delta\in(0,1)\). Then, for \(\alpha>\alpha_{0}=\beta^{-1}\) being sufficiently close, it holds with probability at least \(1-\delta\) that_
\[\left\|T_{\lambda}^{-\frac{1}{2}}\left[(\tilde{g}_{Z}-T_{X}f_{ \lambda})-(g-Tf_{\lambda})\right]\right\|_{\mathcal{H}}\leq C\ln\frac{2}{ \delta}\cdot\left(M_{\alpha}^{2}\frac{\lambda^{-\alpha}}{n}+M_{\alpha}\sqrt{ \frac{\lambda^{-\alpha}\ln n}{n}}\right)\lambda^{\tilde{s}/2}, \tag{42}\]
_where \(\tilde{s}=\min(s,2)\). Consequently,_
\[\left\|T_{\lambda}^{-\frac{1}{2}}\left[(\tilde{g}_{Z}-T_{X}f_{ \lambda})-(g-Tf_{\lambda})\right]\right\|=o_{\mathbb{P}}(\lambda^{\tilde{s}/2 })=o_{\mathbb{P}}(n^{-\tilde{s}\theta/2}). \tag{43}\]
Before proving Lemma A.5, we have to introduce the following proposition bounding the \(\gamma\)-norms of the regularized basis function, which is a part of Li et al. (2023a, Corollary 5.6).
**Proposition A.6**.: Suppose \(\mathcal{H}\) has embedding index \(\alpha_{0}\). Then for any \(\alpha>\alpha_{0}\),
\[\left\|T_{\lambda}^{-1/2}k(x,\cdot)\right\|_{\mathcal{H}}\leq M_ {\alpha}\lambda^{-\alpha/2},\quad\mu\text{-a.e.}\ x\in\mathcal{X}. \tag{44}\]
Proof of Lemma A.5.: To use Bernstein inequality in Lemma B.4, let us bound the \(m\)-th moment of \(\xi(x)\):
\[\mathbb{E}\|\xi(x)\|_{\mathcal{H}}^{m} =\mathbb{E}\Big{\|}T_{\lambda}^{-\frac{1}{2}}K_{x}(f_{\rho}^{*}( x)-f_{\lambda}(x))\Big{\|}_{\mathcal{H}}^{m}\] \[\leq\mathbb{E}\left[\left\|T_{\lambda}^{-\frac{1}{2}}k(x,\cdot) \right\|_{\mathcal{H}}^{m}\cdot\mathbb{E}\big{(}\big{|}f_{\rho}^{*}(x)-f_{ \lambda}(x)\big{|}^{m}\bigm{|}x\big{)}\right]. \tag{45}\]
The first term in (45) is bounded through (44). For the second term, since \(s>\alpha_{0}\), using the embedding condition and Lemma A.3, we have
\[\left\|f_{\lambda}-f_{\rho}^{*}\right\|_{L^{\infty}}\leq M_{\alpha}\big{\|}f_ {\lambda}-f_{\rho}^{*}\big{\|}_{[\mathcal{H}]^{\alpha}}\leq CM_{\alpha} \lambda^{\min(s-\alpha,2)/2}\leq CM_{\alpha}\lambda^{(\tilde{s}-\alpha)/2},\]
where we notice that \(\min(s-\alpha,2)=\min(s,2+\alpha)-\alpha\geq\tilde{s}-\alpha\) for the last inequality. Moreover, Lemma A.3 also implies
\[\mathbb{E}\big{|}f_{\lambda}(x)-f_{\rho}^{*}(x)\big{|}^{2}=\left\|f_{\lambda}( x)-f_{\rho}^{*}(x)\right\|_{L^{2}}^{2}\leq C\lambda^{\tilde{s}}\ln\frac{1}{ \lambda}\leq C\lambda^{\tilde{s}}\ln n.\]
Plugging in these estimations in (45), we get
(45) \[\leq(M_{\alpha}\lambda^{-\alpha/2})^{m}\cdot\left\|f_{\lambda}-f _{\rho}^{*}\right\|_{L^{\infty}}^{m-2}\cdot\mathbb{E}\big{|}f_{\lambda}(x)-f_ {\rho}^{*}(x)\big{|}^{2}\] \[\leq(M_{\alpha}\lambda^{-\alpha/2})^{m}\cdot\left(CM_{\alpha} \lambda^{(\tilde{s}-\alpha)/2}\right)^{m-2}\cdot(C\lambda^{\tilde{s}}\ln n)\] \[\leq\frac{1}{2}m!\left(CM_{\alpha}^{2}\lambda^{\tilde{s}-\alpha} \ln n\right)\cdot\left(CM_{\alpha}^{2}\lambda^{-\alpha+\tilde{s}/2}\right)^{m -2}.\] (46)
The proof is then complete by Lemma B.4.
The case of \(s\leq\alpha_{0}\) is more difficult. We will use the truncation technique introduced in Zhang et al. (2023a). The following lemma can be proven similarly to Lemma A.3.
**Lemma A.7**.: _Under Assumptions 1 and 3, for any \(0\leq\gamma<s+2\), we have_
\[\left\|f_{\lambda}\right\|_{[\mathcal{H}]^{\gamma}}^{2}\asymp\begin{cases} \lambda^{s-\gamma},&s<\gamma;\\ \ln\frac{1}{\lambda},&s=\gamma;\\ 1,&s>\gamma.\end{cases} \tag{47}\]
Proof.: Simply notice that
\[\left\|f_{\lambda}\right\|_{[\mathcal{H}]^{\gamma}}^{2}=\sum_{i=1}^{\infty}a_ {i}^{2}\frac{\lambda_{i}^{2}}{(\lambda_{i}+\lambda)^{2}}(\lambda_{i}^{s}i^{-1} )\lambda_{i}^{-\gamma}\asymp\sum_{i=1}^{\infty}\left(\frac{\lambda_{i}^{p}}{ \lambda_{i}+\lambda}\right)^{2}i^{-1},\]
where \(p=(s+2-\gamma)/2\). Then we can apply Proposition B.2.
Then, we are able to show the following concentration result about the truncated \(\xi_{i}\)'s, whose proof resembles that of Lemma A.5.
**Lemma A.8**.: _Suppose Assumptions 1-3 hold and \(s\leq\alpha_{0}\). Let \(\lambda\asymp n^{-\theta}\) with \(\theta\in(0,\beta)\) and \(\delta\in(0,1)\). For any \(t>0\), denote \(\Omega_{t}=\{x\in\mathcal{X}:\left|f_{\rho}^{*}(x)\right|\leq t\}\) and \(\bar{\xi}(x)=\xi(x)\mathbf{1}_{\{x\in\Omega_{t}\}}\). Then, for \(\alpha>\alpha_{0}=\beta^{-1}\) being sufficiently close, it holds with probability at least \(1-\delta\) that_
\[\left\|\frac{1}{n}\sum_{i=1}^{n}\bar{\xi}(x_{i})-\mathbb{E}\bar{\xi}(x) \right\|\leq C\ln\frac{2}{\delta}\cdot\left[\frac{M_{\alpha}}{n}\left(M_{ \alpha}\lambda^{-\alpha}+t\lambda^{-\frac{\alpha+s}{2}}\right)+M_{\alpha} \sqrt{\frac{\lambda^{-\alpha}\ln n}{n}}\right]\lambda^{s/2}. \tag{48}\]
_Consequently, if \(t\asymp n^{l}\) with \(l<1-\frac{\alpha+s}{2}\theta\), we have_
\[\left\|\frac{1}{n}\sum_{i=1}^{n}\bar{\xi}(x_{i})-\mathbb{E}\bar{\xi}(x) \right\|=o_{\mathbb{P}}(\lambda^{s/2}). \tag{49}\]
Proof.: We follow the same routine of the proof of Lemma A.5 and obtain (45) with \(\xi\) replaced with \(\bar{\xi}\). The only difference is that we have to control
\[\left\|\mathbf{1}\{x\in\Omega_{t}\}(f_{\lambda}-f_{\rho}^{*}) \right\|_{L^{\infty}} \leq\left\|f_{\lambda}\right\|_{L^{\infty}}+\left\|\mathbf{1}\{x \in\Omega_{t}\}f_{\rho}^{*}\right\|_{L^{\infty}}\] \[\leq M_{\alpha}\|f_{\lambda}\|_{[\mathcal{H}]^{\alpha}}+t\] \[\leq CM_{\alpha}\lambda^{(s-\alpha)/2}+t,\]
where we apply Lemma A.7 at the second inequality. Then, (46) changes to
\[\frac{1}{2}m!\left(CM_{\alpha}^{2}\lambda^{\bar{s}-\alpha}\ln n\right)\cdot \left(CM_{\alpha}^{2}\lambda^{-\alpha+\bar{s}/2}+M_{\alpha}\lambda^{-\alpha/2 }t\right)^{m-2}\]
and the rest follows.
To bound the extra error terms caused by truncation, we have to use the following proposition about the \(L^{q}\) embedding of the RKHS (Zhang et al., 2023a, Theorem 5).
**Proposition A.9**.: Under Assumption 2, for any \(0<s\leq\alpha_{0}\) and \(\alpha>\alpha_{0}\), we have embedding
\[[\mathcal{H}]^{s}\hookrightarrow L^{q_{s}}(\mathcal{X},\mathrm{d}\mu),\quad q _{s}=\frac{2\alpha}{\alpha-s}. \tag{50}\]
**Lemma A.10**.: _Suppose Assumptions 1-3 hold and \(s\leq\alpha_{0}\). Let \(\lambda\asymp n^{-\theta}\) with \(\theta\in(0,\beta)\) and \(\delta\in(0,1)\). Then_
\[\left\|T_{\lambda}^{-\frac{1}{2}}\left[(\bar{g}_{Z}-T_{X}f_{\lambda})-(g-Tf_{ \lambda})\right]\right\|=o_{\mathbb{P}}(\lambda^{s/2})=o_{\mathbb{P}}(n^{-s \theta/2}). \tag{51}\]
Proof.: We will choose \(t=n^{l}\) for some \(l\) that will be determined later and choose some \(\alpha>\alpha_{0}\) being sufficiently close. Using the same notations as in (49), we decompose
\[\left\|\frac{1}{n}\sum_{i=1}^{n}\xi(x_{i})-\mathbb{E}\xi(x) \right\|_{\mathcal{H}} \leq\left\|\frac{1}{n}\sum_{i=1}^{n}\bar{\xi}(x_{i})-\mathbb{E} \bar{\xi}(x)\right\|_{\mathcal{H}}+\left\|\frac{1}{n}\sum_{i=1}^{n}\xi(x_{i}) \mathbf{1}_{\{x_{i}\notin\Omega_{t}\}}\right\|_{\mathcal{H}} \tag{52}\] \[\quad+\left\|\mathbb{E}\xi(x)\mathbf{1}_{\{x\notin\Omega_{t}\}} \right\|_{\mathcal{H}}.\]
The first term in (52) is already bounded by (49) if \(l<1-\frac{\alpha+s}{2}\theta\). To bound the second term in (52), we notice that
\[x_{i}\in\Omega_{t},\;\forall i=1,\ldots,n\quad\text{implies}\quad\frac{1}{n} \sum_{i=1}^{n}\xi(x_{i})\mathbf{1}_{\{x_{i}\notin\Omega_{t}\}}=0.\]
Since Markov's inequality yields
\[\mathbb{P}_{x\sim\mu}\left\{x\notin\Omega_{t}\right\}\leq t^{-q}\left\|f_{ \rho}^{*}\right\|_{L^{q}}^{q}, \tag{53}\]
where \(q=\frac{2\alpha}{\alpha-s}\), we get
\[\mathbb{P}\left\{x_{i}\in\Omega_{t},\ \forall i\right\}=(\mathbb{P}_{x\sim\mu} \left\{x\in\Omega_{t}\right\})^{n}=(1-\mathbb{P}_{x\sim\mu}\left\{x\notin\Omega _{t}\right\})^{n}\geq(1-t^{-q}\big{\|}f_{\rho}^{*}\big{\|}_{L^{q}}^{q})^{n}.\]
So the second term vanishes with high probability as long as \(l>1/q\).
For the third term in (52), using (44), we get
\[\big{\|}\mathbb{E}\xi(x)\mathbf{1}_{\{x\notin\Omega_{t}\}}\big{\|} _{\mathcal{H}} \leq\mathbb{E}\big{\|}\xi(x)\mathbf{1}_{\{x\notin\Omega_{t}\}} \big{\|}_{\mathcal{H}}\] \[\leq M_{\alpha}\lambda^{-\alpha/2}\mathbb{E}\left[\mathbf{1}_{\{ x\notin\Omega_{t}\}}(f_{\rho}^{*}(x)-f_{\lambda}(x))\right]\] \[\leq M_{\alpha}\lambda^{-\alpha/2}\left[\mathbb{E}(f_{\rho}^{*}(x )-f_{\lambda}(x))^{2}\right]^{\frac{1}{2}}[\mathbb{P}\{x\notin\Omega_{t}\}]^{ \frac{1}{2}}\] \[\leq M_{\alpha}\lambda^{-\alpha/2}\lambda^{s/2}t^{-q/2}\big{\|}f _{\rho}^{*}\big{\|}_{L^{q}}^{q/2}.\]
Consequently, if \(l>\frac{\alpha\theta}{q}\), then
\[\big{\|}\mathbb{E}\xi(x)\mathbf{1}_{\{x\notin\Omega_{t}\}}\big{\|}_{\mathcal{H }}=o(\lambda^{s/2}).\]
Finally, the three requirements of \(l\) are
\[l<1-\frac{\alpha+s}{2}\theta,\quad l>\frac{1}{q},\quad\text{and}\quad l>\frac {\theta\alpha}{q},\]
where \(q=\frac{2\alpha}{\alpha-s}\). Since \(\theta<\beta=\alpha_{0}^{-1}\), we can choose \(\alpha\) sufficiently close to \(\alpha_{0}\) such that \(\theta\alpha<1\). Then,
\[(1-\frac{\alpha+s}{2}\theta)-\frac{1}{q}=(1-\theta\alpha)\left(\frac{\alpha+ s}{2\alpha}\right)>0,\]
and thus
\[\frac{\theta\alpha}{q}<\frac{1}{q}<1-\frac{\alpha+s}{2}\theta,\]
showing that we can choose some \(l\) satisfying all the requirements, and the proof is complete.
### The noiseless case
The case when \(\lambda=n^{-\theta}\) for \(\theta<\beta\) is already covered in Theorem A.2. For the case \(\theta\geq\beta\), the approximation Lemma B.5 no longer holds, and we must reduce it to the former case. However, there is no direct monotone property of \(\mathbf{Bias}(\lambda)\). Nevertheless, we have the following monotone relation about the operator norms, whose proof utilizes the idea in Lin et al. (2021, Proposition 6.1) with modification.
**Proposition A.11**.: Let \(\psi_{\lambda}=\lambda(T_{X}+\lambda)^{-1}\in\mathscr{B}(\mathcal{H})\). Suppose \(\lambda_{1}\leq\lambda_{2}\), then for any \(s,p\geq 0\),
\[\big{\|}T^{s}\psi_{\lambda_{1}}^{p}\big{\|}_{\mathscr{B}(\mathcal{H})}=\big{\|} \psi_{\lambda_{1}}^{p}T^{s}\big{\|}_{\mathscr{B}(\mathcal{H})}\leq\big{\|}T^{ s}\psi_{\lambda_{2}}^{p}\big{\|}_{\mathscr{B}(\mathcal{H})}=\big{\|}\psi_{ \lambda_{2}}^{p}T^{s}\big{\|}_{\mathscr{B}(\mathcal{H})}. \tag{54}\]
Proof.: Let us denote by \(\preceq\) the partial order induced by positive operators. Since for every \(z\geq 0\) the function \(\lambda\mapsto\frac{\lambda}{z+\lambda}\) is monotone increasing in \(\lambda\), we obtain \(\psi_{\lambda_{1}}^{2p}\preceq\psi_{\lambda_{2}}^{2p}\), which further implies
\[T^{s}\psi_{\lambda_{1}}^{2p}T^{s}\preceq T^{s}\psi_{\lambda_{2}}^{2p}T^{s}.\]
Then, since \(\left\|A\right\|^{2}=\left\|AA^{*}\right\|\), we have
\[\big{\|}T^{s}\psi_{\lambda_{1}}^{p}\big{\|}_{\mathscr{B}(\mathcal{H})}^{2}= \Big{\|}T^{s}\psi_{\lambda_{1}}^{2p}T^{s}\Big{\|}_{\mathscr{B}(\mathcal{H})} \leq\Big{\|}T^{s}\psi_{\lambda_{2}}^{2p}T^{s}\Big{\|}_{\mathscr{B}(\mathcal{H} )}=\big{\|}T^{s}\psi_{\lambda_{2}}^{p}\big{\|}_{\mathscr{B}(\mathcal{H})}^{2},\]
and the equality in (54) is proven by \(\left\|A\right\|=\left\|A^{*}\right\|\).
**Lemma A.12**.: _Under Assumptions 1,2,3, assume further \(s>1\). Suppose \(\lambda\asymp n^{-\theta}\) for \(\theta\geq\beta\). Then,_
\[\mathbf{Bias}^{2}(\lambda)=O_{\mathbb{P}}^{\mathrm{poly}}(n^{-\min(s,2)\beta}). \tag{55}\]
Proof.: Since \(f_{\rho}^{*}\) is given in (12) and \(s>1\), we have \(f_{\rho}^{*}\in[\mathcal{H}]^{t}\) for \(1\leq t<s\). In particular, \(f_{\rho}^{*}\in\mathcal{H}\), so the bias term can also be written as
\[\mathbf{Bias}(\lambda)=\left\|\lambda(T_{X}+\lambda)^{-1}f_{\rho}^{*}\right\|_ {L^{2}}. \tag{56}\]
Moreover, from the construction (8) of \([\mathcal{H}]^{t}\), we may assume \(f_{\rho}^{*}=T^{t/2}g\) for some \(g\in L^{2}\) with \(\left\|g\right\|_{L^{2}}\leq C\), and restrict further that \(t\leq 2\). Let \(\tilde{\lambda}\asymp n^{-l}\) for \(l\in(0,\beta)\). Then, using the same notation in Proposition A.11, we have
\[\mathbf{Bias}(\lambda) =\left\|\psi_{\lambda}f_{\rho}^{*}\right\|_{L^{2}}=\left\|T^{1/2} \psi_{\lambda}T^{\frac{t-1}{2}}\cdot T^{1/2}g\right\|_{\mathcal{H}}\] \[\leq\left\|T^{1/2}\psi_{\lambda}T^{(t-1)/2}\right\|\cdot\left\| T^{1/2}g\right\|_{\mathcal{H}}\] \[\leq C\left\|T^{1/2}\psi_{\lambda}^{1/2}\right\|\cdot\left\| \psi_{\lambda}^{1/2}T^{\frac{t-1}{2}}\right\|\] \[\leq C\left\|T^{1/2}\psi_{\lambda}^{1/2}\right\|\cdot\left\| \psi_{\tilde{\lambda}}^{(2-t)/2}\right\|\cdot\left\|\psi_{\tilde{\lambda}}^{ \frac{t-1}{2}}T^{\frac{t-1}{2}}\right\|\] \[=C\tilde{\lambda}^{t/2}\left\|\psi_{\tilde{\lambda}}^{(2-t)/2} \right\|\cdot\left\|T^{1/2}T_{X\tilde{\lambda}}^{-1/2}\right\|\cdot\left\|T^{ \frac{t-1}{2}}T_{X\tilde{\lambda}}^{-\frac{t-1}{2}}\right\|\] \[\leq C\tilde{\lambda}^{t/2}\left\|T^{1/2}T_{X\tilde{\lambda}}^{-1 /2}\right\|^{t},\]
where we use Lemma B.6 for the last inequality. Finally, since \(\tilde{\lambda}\asymp n^{-l}\) for \(l<\beta\), Lemma B.5 implies that with high probability we have
\[\left\|T^{\frac{1}{2}}T_{X\tilde{\lambda}}^{-\frac{1}{2}}\right\|=\left\|T^{\frac{1}{2}}T_{\tilde{\lambda}}^{-\frac{1}{2}}T_{\tilde{\lambda}}^{\frac{1}{2}}T_{X\tilde{\lambda}}^{-\frac{1}{2}}\right\|\leq\left\|T^{\frac{1}{2}}T_{\tilde{\lambda}}^{-\frac{1}{2}}\right\|\left\|T_{\tilde{\lambda}}^{\frac{1}{2}}T_{X\tilde{\lambda}}^{-\frac{1}{2}}\right\|\leq 1\cdot\sqrt{3}=\sqrt{3}.\]
Therefore, we obtain
\[\mathbf{Bias}(\lambda)=O_{\mathbb{P}}(\tilde{\lambda}^{t/2})=O_{\mathbb{P}}(n ^{-tl/2}).\]
Since \(t<\min(s,2)\) and \(l<\beta\) can be chosen arbitrarily close to \(\min(s,2)\) and \(\beta\), respectively, we conclude (55).
Proof of Proposition 4.4.: Let us denote \(\mathcal{F}=\left\{f:\left\|f\right\|_{[\mathcal{H}]^{s}}\leq R\right\}\) for convenience. Since \(f_{\rho}^{*}\in\mathcal{H}\), the bias term can be given by
\[\mathbf{Bias}(\lambda)=\left\|f_{\rho}^{*}-T_{X}(T_{X}+\lambda)^{-1}f_{\rho}^{ *}\right\|_{L^{2}}=\left\|(I-L_{X})f_{\rho}^{*}\right\|_{L^{2}}\]
for a linear operator \(L_{X}=T_{X}(T_{X}+\lambda)^{-1}\) on \(\mathcal{H}\). Then,
\[\sup_{f_{\rho}^{*}\in\mathcal{F}}\mathbf{Bias}(\lambda) =\sup_{f_{\rho}^{*}\in\mathcal{F}}\left\|(I-L_{X})f_{\rho}^{*} \right\|_{L^{2}}\overset{(a)}{=}\sup_{\left\|g\right\|_{\mathcal{H}}\leq R} \left\|T^{\frac{t}{2}}(I-L_{X})T^{\frac{s-1}{2}}g\right\|_{\mathcal{H}}\] \[=\sup_{\left\|g\right\|_{\mathcal{H}}\leq R}\left\|(T^{\frac{s}{2} }-T^{\frac{s}{2}}L_{X}T^{\frac{s-1}{2}})g\right\|_{\mathcal{H}}\] \[=\left\|T^{\frac{s}{2}}-T^{\frac{s}{2}}L_{X}T^{\frac{s-1}{2}} \right\|_{\mathcal{B}(\mathcal{H})}\] \[\overset{(b)}{\geq}\lambda_{n+1}^{s/2}=(n+1)^{-s\beta/2}=\Omega (n^{-s\beta/2}),\]
where in (a) we use the relation between the interpolation spaces and in (b) we use the fact that \(\left\|A-B\right\|\geq\lambda_{n+1}(A)\) for any operator \(B\) with rank at most \(n\) (see, for example, Simon, 2015, Section 3.5).
## Appendix B Auxiliary results
**Proposition B.1**.: Let
\[f(z)=\frac{z^{\alpha}}{z+\lambda}.\]
Then,
1. If \(\alpha=0\), then \(f(z)\) is monotone decreasing.
2. If \(\alpha\in(0,1)\), then \(f(z)\) is monotone increasing in \([0,\frac{\alpha\lambda}{1-\alpha}]\), and decreasing in \([\frac{\alpha\lambda}{1-\alpha},+\infty)\). Consequently, \(f(z)\leq\lambda^{\alpha-1}\).
3. If \(\alpha\geq 1\), then \(f(z)\) monotone increasing on \([0,+\infty)\).
Proof.: We simply notice that
\[f^{\prime}(z)=\frac{z^{\alpha-1}}{(z+\lambda)^{2}}(\alpha\lambda-(1-\alpha)z).\]
**Proposition B.2**.: Suppose \(c_{\beta}i^{-\beta}\leq\lambda_{i}\leq C_{\beta}i^{-\beta}\) and \(p>0\), then as \(\lambda\to 0\), we have
\[\sum_{i=1}^{\infty}\left(\frac{\lambda_{i}^{p}}{\lambda_{i}+\lambda}\right)^{ 2}i^{-1}\asymp\begin{cases}\lambda^{2(p-1)},&p<1;\\ \ln\frac{1}{\lambda},&p=1;\\ 1,&p>1.\end{cases} \tag{57}\]
Proof.: We first consider the case when \(p<1\). Since \(c_{\beta}i^{-\beta}\leq\lambda_{i}\leq C_{\beta}i^{-\beta}\), from Proposition B.1, letting \(q=\frac{p}{1-p}\), we have
\[\frac{\lambda_{i}^{p}}{\lambda_{i}+\lambda}\leq\begin{cases}\frac{C_{\beta}^{ p}i^{-p\beta}}{C_{\beta}i^{-\beta}+\lambda},&\text{if}\quad C_{\beta}i^{- \beta}\leq q\lambda;\\ \lambda_{i}^{p}/\lambda_{i}\leq C_{\beta}^{p}i^{-(p-1)\beta},&\text{if}\quad C _{\beta}i^{-\beta}>q\lambda;\end{cases}\]
Therefore,
\[\sum_{i=1}^{\infty}\left(\frac{\lambda_{i}^{p}}{\lambda_{i}+ \lambda}\right)^{2}i^{-1} \leq C\sum_{i:C_{\beta}i^{-\beta}>q\lambda}i^{-2(p-1)\beta-1}+C \sum_{i:C_{\beta}i^{-\beta}\leq q\lambda}\frac{i^{-2p\beta}}{(C_{\beta}i^{- \beta}+\lambda)^{2}}i^{-1}\] \[\asymp S_{1}+S_{2}.\]
For \(S_{1}\), noticing \(C_{\beta}i^{-\beta}>q\lambda\) implies \(i<(q\lambda/C_{\beta})^{-1/\beta}\), we have
\[S_{1}\leq C\sum_{i=1}^{\lfloor(q\lambda/C_{\beta})^{-1/\beta}\rfloor}i^{-2(p- 1)\beta-1}\leq C\lambda^{2(p-1)}.\]
For \(S_{2}\), using Proposition B.1 again we have
\[S_{2} \leq C\int_{(q\lambda/C_{\beta})^{-1/\beta}-1}^{\infty}\frac{x^{- 2p\beta}}{(C_{\beta}x^{-\beta}+\lambda)^{2}}x^{-1}\mathrm{d}x\] \[=C\lambda^{2p-2}\int_{(C_{\beta}/q)^{1/\beta}-\lambda^{1/\beta}} ^{\infty}\frac{y^{-2p\beta}}{(C_{\beta}y^{-\beta}+1)^{2}}y^{-1}\mathrm{d}y \quad(x=\lambda^{-1/\beta}y)\] \[\leq C\lambda^{2p-2},\]
where we note that the last integral is bounded above by a constant. Therefore, we conclude that \(\sum_{i=1}^{\infty}\left(\frac{\lambda_{i}^{p}}{\lambda_{i}+\lambda}\right)^{2}i^{-1}\leq C\lambda^{2(p-1)}\). For the lower bound, if \(C_{\beta}i^{-\beta}\leq q\lambda\), we have
\[\frac{\lambda_{i}^{p}}{\lambda_{i}+\lambda}\geq\frac{c_{\beta}^{p}i^{-p\beta} }{c_{\beta}i^{-\beta}+\lambda},\]
and hence
\[\sum_{i=1}^{\infty}\left(\frac{\lambda_{i}^{p}}{\lambda_{i}+ \lambda}\right)^{2}i^{-1} \geq C\int_{(q\lambda/C_{\beta})^{-1/\beta}}^{\infty}\frac{x^{-2p \beta}}{(C_{\beta}x^{-\beta}+\lambda)^{2}}x^{-1}\mathrm{d}x\] \[=C\lambda^{2p-2}\int_{(C_{\beta}/p)^{1/\beta}}^{\infty}\frac{y^{- 2p\beta}}{(C_{\beta}y^{-\beta}+1)^{2}}y^{-1}\mathrm{d}y\] \[\geq C\lambda^{2p-2},\]
where we note that the last integral is independent of \(\lambda\).
For the case \(p=1\), by Proposition B.1, we have
\[\sum_{i=1}^{\infty}\left(\frac{\lambda_{i}}{\lambda_{i}+\lambda} \right)^{2}i^{-1} \leq C\sum_{i=1}^{\infty}\left(\frac{i^{-\beta}}{C_{\beta}i^{- \beta}+\lambda}\right)^{2}i^{-1}\] \[\leq C\sum_{i=1}^{\lfloor 2\lambda^{-1/\beta}\rfloor}\left(\frac{i^ {-\beta}}{C_{\beta}i^{-\beta}+\lambda}\right)^{2}i^{-1}+C\sum_{i=\lfloor 2 \lambda^{-1/\beta}\rfloor+1}^{\infty}\left(\frac{i^{-\beta}}{C_{\beta}i^{- \beta}+\lambda}\right)^{2}i^{-1}\] \[\leq C\sum_{i=1}^{\lfloor 2\lambda^{-1/\beta}\rfloor}i^{-1}+C \int_{2\lambda^{-1/\beta}}^{\infty}\left(\frac{x^{-\beta}}{C_{\beta}x^{-\beta }+\lambda}\right)^{2}x^{-1}\mathrm{d}x\] \[\leq C\ln\frac{1}{\lambda}+C\int_{2}^{\infty}\left(\frac{y^{- \beta}}{C_{\beta}y^{-\beta}+1}\right)^{2}y^{-1}\mathrm{d}y\] \[\leq C\ln\frac{1}{\lambda}.\]
For the lower bound, we have
\[\sum_{i=1}^{\infty}\left(\frac{\lambda_{i}}{\lambda_{i}+\lambda} \right)^{2}i^{-1} \geq c\sum_{i=1}^{\lfloor\lambda^{-1/\beta}\rfloor}\left(\frac{i^ {-\beta}}{c_{\beta}i^{-\beta}+\lambda}\right)^{2}i^{-1}\] \[\geq c\sum_{i=1}^{\lfloor\lambda^{-1/\beta}\rfloor}i^{-1}\geq c\ln \frac{1}{\lambda}.\]
For the case \(p>1\), by Proposition B.1, we have
\[\sum_{i=1}^{\infty}\left(\frac{\lambda_{i}^{p}}{\lambda_{i}+\lambda}\right)^{ 2}i^{-1} \leq C\sum_{i=1}^{\infty}\frac{i^{-2p\beta}}{(C_{\beta}i^{-\beta}+ \lambda)^{2}}i^{-1}\leq C\sum_{i=1}^{\infty}i^{-2(p-1)\beta-1}\leq C,\]
since the last series is summable. The lower bound is derived by
\[\sum_{i=1}^{\infty}\left(\frac{\lambda_{i}^{p}}{\lambda_{i}+\lambda}\right)^{ 2}i^{-1} \geq\frac{\lambda_{1}^{p}}{\lambda_{1}+\lambda}\geq c.\]
**Proposition B.3**.: Under Assumption 1, for any \(p\geq 1\), we have
\[\mathcal{N}_{p}(\lambda)=\mathrm{tr}\left(TT_{\lambda}^{-1}\right)^{p}=\sum_{ i=1}^{\infty}\left(\frac{\lambda_{i}}{\lambda+\lambda_{i}}\right)^{p}\asymp \lambda^{-1/\beta}. \tag{58}\]
Proof.: Since \(c\)\(i^{-\beta}\leq\lambda_{i}\leq Ci^{-\beta}\), we have
\[\mathcal{N}_{p}(\lambda) =\sum_{i=1}^{\infty}\left(\frac{\lambda_{i}}{\lambda_{i}+\lambda }\right)^{p}\leq\sum_{i=1}^{\infty}\left(\frac{Ci^{-\beta}}{Ci^{-\beta}+ \lambda}\right)^{p}=\sum_{i=1}^{\infty}\left(\frac{C}{C+\lambda i^{\beta}} \right)^{p}\] \[\leq\int_{0}^{\infty}\left(\frac{C}{\lambda x^{\beta}+C}\right) ^{p}\mathrm{d}x=\lambda^{-1/\beta}\int_{0}^{\infty}\left(\frac{C}{y^{\beta}+C} \right)^{p}\mathrm{d}y\leq\tilde{C}\lambda^{-1/\beta}.\]
for some constant \(\tilde{C}>0\). The lower bound is similar.
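As a quick numerical sanity check (an illustration we add here, using the explicit eigenvalues \(\lambda_{i}=\big{(}\frac{2i-1}{2}\pi\big{)}^{-2}\) of the toy kernel from Section 5, for which \(\beta=2\)), one can evaluate the truncated series directly and fit its growth in \(\log\lambda\):

```python
import numpy as np

# lambda_i = ((2i-1) * pi / 2)^{-2}, so beta = 2 and N_1(lam) should scale like lam^{-1/2}
i = np.arange(1, 200_001)
eig = ((2 * i - 1) * np.pi / 2.0) ** (-2.0)

lams = 10.0 ** np.arange(-2.0, -7.0, -1.0)          # 1e-2, ..., 1e-6
N1 = np.array([np.sum(eig / (eig + lam)) for lam in lams])
slope = np.polyfit(np.log(lams), np.log(N1), 1)[0]
print(f"fitted exponent: {slope:.2f} (theory: -1/beta = -0.5)")
```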
The following inequality about vector-valued random variables is well-known in the literature [1].
**Lemma B.4**.: _Let \(H\) be a real separable Hilbert space. Let \(\xi,\xi_{1},\ldots,\xi_{n}\) be i.i.d. random variables taking values in \(H\). Assume that_
\[\mathbb{E}\|\xi-\mathbb{E}\xi\|_{H}^{m}\leq\frac{1}{2}m!\sigma^{2}L^{m-2},\quad \forall m=2,3,\ldots. \tag{59}\]
_Then for fixed \(\delta\in(0,1)\), one has_
\[\mathbb{P}\left\{\left\|\frac{1}{n}\sum_{i=1}^{n}\xi_{i}-\mathbb{E}\xi\right\| _{H}\leq 2\left(\frac{L}{n}+\frac{\sigma}{\sqrt{n}}\right)\ln\frac{2}{\delta} \right\}\geq 1-\delta. \tag{60}\]
_Particularly, a sufficient condition for (59) is_
\[\left\|\xi\right\|_{H}\leq\frac{L}{2}\text{ a.s., and }\mathbb{E}\|\xi\|_{H}^{2} \leq\sigma^{2}.\]
The following concentration result has been shown in Fischer and Steinwart (2020); Zhang et al. (2023). We use the form in Li et al. (2023, Proposition 5.8) for convenience, see also Zhang et al. (2023, Lemma 12).
**Lemma B.5**.: _Suppose \(\mathcal{H}\) has embedding index \(\alpha_{0}\) and Assumption 1 holds. Let \(\lambda=\lambda(n)\to 0\) satisfy \(\lambda=\Omega\left(n^{-1/\alpha_{0}+p}\right)\) for some \(p>0\) and fix arbitrary \(\alpha\in(\alpha_{0},\alpha_{0}+p)\). Then, for all \(\delta\in(0,1)\), when \(n\) is sufficiently large, with probability at least \(1-\delta\),_
\[\left\|T_{\lambda}^{-\frac{1}{2}}(T-T_{X})T_{\lambda}^{-\frac{1}{2}}\right\|_ {\mathcal{H}}\leq C\left(\frac{\lambda^{-\alpha}}{n}\ln n\right)^{1/2}, \tag{61}\]
_where \(C>0\) is a constant not depending on \(\delta,n,\alpha\), and we also have_
\[\left\|T_{\lambda}^{1/2}T_{X\lambda}^{-1/2}\right\|_{\mathcal{B}(\mathcal{H}) },\,\left\|T_{\lambda}^{-1/2}T_{X\lambda}^{1/2}\right\|_{\mathcal{B}(\mathcal{ H})}\leq\sqrt{3}. \tag{62}\]
The following operator inequality (Fujii et al., 1993) will be used in our proofs.
**Lemma B.6** (Cordes' Inequality).: _Let \(A,B\) be two positive semi-definite bounded linear operators on separable Hilbert space \(H\). Then_
\[\|A^{s}B^{s}\|_{\mathcal{B}(H)}\leq\|AB\|_{\mathcal{B}(H)}^{s},\quad\forall s \in[0,1]. \tag{63}\]
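As a quick numerical illustration of Cordes' inequality (added here; the random test matrices and trial count are arbitrary choices), one can check it on random positive semi-definite matrices:

```python
import numpy as np

rng = np.random.default_rng(1)

def mat_pow(A, s):
    """A^s for a symmetric positive semi-definite matrix A."""
    w, V = np.linalg.eigh(A)
    return (V * np.clip(w, 0.0, None) ** s) @ V.T

worst = -np.inf
for _ in range(1000):
    d = int(rng.integers(2, 8))
    M, N = rng.normal(size=(d, d)), rng.normal(size=(d, d))
    A, B = M @ M.T, N @ N.T
    s = rng.uniform(0.0, 1.0)
    lhs = np.linalg.norm(mat_pow(A, s) @ mat_pow(B, s), 2)   # spectral norm
    rhs = np.linalg.norm(A @ B, 2) ** s
    worst = max(worst, lhs - rhs)
print("largest violation over 1000 trials:", worst)   # nonpositive up to rounding error
```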
The following lemma is a consequence of the fact that \(x^{r}\) is operator monotone when \(r\in(0,1]\) and is Lipschitz when \(r>1\), see Zhang et al. (2023, Lemma 35) or Lin et al. (2018, Lemma 5.8).
**Lemma B.7**.: _Suppose that \(A\) and \(B\) are two positive self-adjoint operators on some Hilbert space, then_
* _for_ \(r\in(0,1]\)_, we have_ \[\|A^{r}-B^{r}\|\leq\|A-B\|^{r}.\]
* _for_ \(r\geq 1\)_, denote_ \(c=\max(\|A\|,\|B\|)\)_, we have_ \[\|A^{r}-B^{r}\|\leq rc^{r-1}\|A-B\|.\]
## Appendix C Experiments
### Details of experiments in the main text
Recall that in the experiments section of the main text, we considered the kernel \(k(x,y)=\min(x,y)\) and \(x\sim\mathcal{U}[0,1]\). We know that the eigensystem of \(k\) is given by \(\lambda_{i}=\left(\frac{2i-1}{2}\pi\right)^{-2}\) and \(e_{i}(x)=\sqrt{2}\sin(\frac{2i-1}{2}\pi x)\). For the three target functions used in the experiments, a simple calculation shows that the relative smoothness (source condition) of \(\cos(2\pi x),\sin(2\pi x),\sin(\frac{3}{2}\pi x)\) is \(0.5\), \(1.5\) and \(\infty\), respectively.
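As a sanity check of these values (not part of the original computation), one can expand each target in the eigenbasis numerically. Under the coefficient normalization used in the proof of Lemma A.3 (squared coefficients proportional to \(\lambda_{i}^{s}i^{-1}\)), \(|\langle f^{*},e_{i}\rangle|\) decays like \(i^{-(s\beta+1)/2}\), so fitting the decay exponent \(d\) gives \(s=(2d-1)/\beta\); the grid size and fitting range below are illustrative choices. For \(\sin\frac{3}{2}\pi x\) only one coefficient is nonzero, consistent with \(s=\infty\).

```python
import numpy as np
from scipy.integrate import simpson

beta = 2.0
x = np.linspace(0.0, 1.0, 40001)   # fine grid; the highest mode below has about 100 cycles

def coeff(f, i):
    """<f, e_i> with e_i(x) = sqrt(2) * sin((2i - 1) * pi * x / 2)."""
    return simpson(f(x) * np.sqrt(2.0) * np.sin((2 * i - 1) * np.pi * x / 2.0), x=x)

targets = {"cos(2 pi x)": lambda t: np.cos(2 * np.pi * t),
           "sin(2 pi x)": lambda t: np.sin(2 * np.pi * t)}
idx = np.arange(10, 200)
for name, f in targets.items():
    c = np.array([abs(coeff(f, int(i))) for i in idx])
    d = -np.polyfit(np.log(idx), np.log(c), 1)[0]        # |<f, e_i>| ~ i^{-d}
    print(f"{name}: decay d = {d:.2f}, implied s = (2d - 1) / beta = {(2 * d - 1) / beta:.2f}")
```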
For some \(f^{*}\), we generate data from the model \(y=f^{*}(x)+\varepsilon\) where \(\varepsilon\sim\mathcal{N}(0,0.05)\) and perform KRR with \(\lambda=cn^{-\theta}\) for different \(\theta\)'s with some fixed constant \(c\). Then, we numerically compute the variance, bias and excess risk by Simpson's rule with \(N\gg n\) nodes. Repeating the experiment for \(n\) ranging from 1000 to 5000 with an increment of 100, we estimate the convergence rate \(r\) by a logarithmic least-squares fit \(\log\text{err}=r\log n+b\) on the resulting values (variance, bias and excess risk). Figure 2 shows the corresponding curves for the results in Table 1 in the main text. Note that for each setting, we tried different \(c\)'s in the regularization parameter \(\lambda=cn^{-\theta}\) and show the curves under the best choice of \(c\) (\(c=0.005\)).
Figure 2: Decay curves of the variance, the bias and the excess risk for the three target functions. Both axes are logarithmic. The curves show the averages over 100 trials, and the regions within one standard deviation are shown in the corresponding colors.
### Learning curves with different noises
Cui et al. (2021) discussed the 'crossover from the noiseless to noisy regime' and showed the interaction between the magnitude of the noise and the sample size. As discussed in Remark 3.2 in the main text, our theory also reflects this interaction. In Figure 3, we exhibit the learning curves with different magnitudes of noise and visualize this interaction. Note that in the following the sample size is chosen as \(10,20,\cdots,100,120,\cdots,1000,1100,\cdots,5000\), and we use the same kernel and data generation process as before. We repeat the experiments 100 times for each sample size and present the average excess risk.
In the above settings, the bias decays faster than the variance. Figure 3 shows that the excess risk decays fast when \(n\) is relatively small and coincides with the theoretical asymptotic rate in Theorem 3.2 when \(n\) is large. The crossover happens for smaller \(n\) when the magnitude of the noise is larger. A similar phenomenon has also been reported by Cui et al. (2021, FIG.2, FIG.3). In addition, comparing the sample sizes at which the crossover happens for the three target functions, our results show that the crossover happens for smaller \(n\) when the function is smoother, which is also consistent with Theorem 3.2.
Figure 3: Learning curves of three target functions with different noises when choosing \(\lambda=cn^{-\theta}\), \(\theta=1.0,2.0\). Both axes are logarithmic. The black dashed lines represent the theoretical slopes under each choice of \(\theta\).
Theorem 3.2 shows that when \(\theta\geq\beta\), the excess risk is asymptotically a constant. Figure 4 shows the curves of kernel interpolation (\(\lambda=0\)). It can be seen that they are similar to the curves in the second column of Figure 3, where we choose \(\theta=\beta=2\).
|
2309.07342 | Scattering for the Wave Equation on de Sitter Space in All Even Spatial
Dimensions | For any $n\geq4$ even, we establish a complete scattering theory for the
linear wave equation on the $(n+1)$-dimensional de Sitter space. We prove the
existence and uniqueness of scattering states, and asymptotic completeness.
Moreover, we construct the scattering map taking asymptotic data at past
infinity $\mathscr{I}^-$ to asymptotic data at future infinity $\mathscr{I}^+$.
Identifying $\mathscr{I}^-$ and $\mathscr{I}^+$ with $S^n,$ we prove that the
scattering map is a Banach space isomorphism on $H^{s+n}(S^n)\times
H^{s}(S^n),$ for any $s\geq1.$
The main analysis is carried out at the level of the model equation obtained
by differentiating the linear wave equation $\frac{n}{2}$ times in the time
variable. The main result of the paper follows from proving a scattering theory
for this equation. In particular, for the model equation we construct a
scattering isomorphism from asymptotic data in $H^{s+\frac{1}{2}}(S^n)\times
H^s(S^n)\times H^s(S^n)$ to Cauchy initial data in
$H^{s+\frac{1}{2}}(S^n)\times H^{s+\frac{1}{2}}(S^n)\times
H^{s-\frac{1}{2}}(S^n)$. | Serban Cicortas | 2023-09-13T22:43:20Z | http://arxiv.org/abs/2309.07342v3 | # Scattering for the Wave Equation on de Sitter Space
###### Abstract
For any \(n\geq 4\) even, we establish a complete scattering theory for the linear wave equation on the \((n+1)\)-dimensional de Sitter space. We prove the existence and uniqueness of scattering states, and asymptotic completeness. Moreover, we construct the scattering map taking asymptotic data at past infinity \(\mathcal{I}^{-}\) to asymptotic data at future infinity \(\mathcal{I}^{+}\). Identifying \(\mathcal{I}^{-}\) and \(\mathcal{I}^{+}\) with \(S^{n}\), we prove that the scattering map is a Banach space isomorphism on \(H^{s+n}(S^{n})\times H^{s}(S^{n})\), for any \(s\geq 1\).
The main analysis is carried out at the level of the model equation obtained by differentiating the linear wave equation \(\frac{n}{2}\) times in the time variable. The main result of the paper follows from proving a scattering theory for this equation. In particular, for the model equation we construct a scattering isomorphism from asymptotic data in \(H^{s+\frac{1}{2}}(S^{n})\times H^{s}(S^{n})\times H^{s}(S^{n})\) to Cauchy initial data in \(H^{s+\frac{1}{2}}(S^{n})\times H^{s+\frac{1}{2}}(S^{n})\times H^{s-\frac{1}{2} }(S^{n})\).
###### Contents
* 1 Introduction
* 1.1 Main Result
* 1.2 Idea of the Proof
* 1.3 Outline of the Paper
* 1.4 Acknowledgements
* 2 Set-up
* 2.1 Coordinate Systems and Self-similarity
* 2.2 Derivation of the Main Model Equation
* 3 Estimates for the Main Model Equation
* 3.1 Estimates from \(\mathcal{I}^{-}\) to a Finite Time Hypersurface
* 3.2 Existence and Uniqueness of Scattering States
* 3.3 Estimates from a Finite Time Hypersurface to \(\mathcal{I}^{+}\)
* 3.4 Asymptotic Completeness
* 4 Scattering for Self-similar Solutions of the Wave Equation in Minkowski Space
* 4.1 Compatibility Relations
* 4.2 Existence and Uniqueness of Scattering States
* 4.3 Asymptotic Completeness
* 4.4 The Scattering Isomorphism
* 5 Scattering for Solutions of the Wave Equation on de Sitter Space
* 6 Appendix
* 6.1 Littlewood-Paley decomposition
## 1 Introduction
For any \(n\geq 3\), we consider the \((n+1)\)-dimensional de Sitter space \(\left(\mathbb{R}\times S^{n},g_{dS}\right)\) with metric:
\[g_{dS}=-dT^{2}+\cosh^{2}Td\sigma_{n}^{2} \tag{1}\]
where \(d\sigma_{n}^{2}\) denotes the standard metric on \(S^{n}\). We denote the past infinity \(\{T\rightarrow-\infty\}\) by \(\mathcal{I}^{-}\), and future infinity \(\{T\rightarrow\infty\}\) by \(\mathcal{I}^{+}\). Both \(\mathcal{I}^{-}\) and \(\mathcal{I}^{+}\) can be identified with \(S^{n}\).
The metric (1) introduced in [11] is the ground state solution to the Einstein vacuum equations with positive cosmological constant \(\Lambda=\frac{n(n-1)}{2}:\)
\[Ric_{\mu\nu}-\frac{1}{2}Rg_{\mu\nu}+\Lambda g_{\mu\nu}=0 \tag{2}\]
The study of de Sitter space has been of great interest in mathematical general relativity, starting with the works of Friedrich [10]-[11], who proved that the \((3+1)\)-dimensional de Sitter space is non-linearly stable to small perturbations of the asymptotic data at \(\mathcal{I}^{-}\). The argument uses the essential fact that in \((3+1)\) dimensions de Sitter space has a smooth conformal compactification. This method was generalized by Anderson [1] to all odd spatial dimensions, once again making use of the smooth conformal compactification. Both works establish a scattering theory in a neighborhood of de Sitter space by proving:
1. _Existence and uniqueness of scattering states_: for any suitable asymptotic data at \(\mathcal{I}^{-}\) in a neighborhood of the de Sitter data at \(\mathcal{I}^{-}\), there exists a unique solution of (2) which is globally close to (1).
2. _Asymptotic completeness_: any regular solution of (2) which is sufficiently close to de Sitter space induces unique asymptotic data at \(\mathcal{I}^{-}\) and \(\mathcal{I}^{+}\).
3. _Boundedness of the scattering map_: for the above solution, the map taking the asymptotic data at \(\mathcal{I}^{-}\) to the asymptotic data at \(\mathcal{I}^{+}\) is bounded.
We refer the reader to [14] and [15] for a more detailed introduction to scattering theory.
The above argument does not apply in even spatial dimensions or in the presence of general types of matter, because of the lack of smoothness of the conformal compactification. A robust proof of stability in all dimensions in the more general setting of the Einstein equations coupled to a non-linear scalar field was given by Ringstrom in [13]. Unlike the above works, [13] only considers Cauchy data on a finite time slice and does not address the problem of scattering.
Our discussion highlights the fact that without the conformal method it is significantly more challenging to prove a scattering theory for de Sitter space. Some of the difficulties that one faces in the even dimensional case are already present for the linear wave equation on a fixed de Sitter background with \(n\geq 4\) even:
\[\square_{g_{dS}}\tilde{\phi}=0 \tag{3}\]
The purpose of this paper is to prove a complete scattering theory for (3).
### Main Result
In this section, we state the main theorem of the paper. Beforehand, we briefly introduce the formal analysis of (3), which provides important insights into the statement of the theorem.
The formal expansion of \(\tilde{\phi}\) at \(\mathcal{I}^{-}\) is given by:
\[\tilde{\phi}(T)=\sum_{k=0}^{n/2-1}\frac{1}{k!}\phi_{k}\cdot e^{2kT}+\frac{1}{( n/2)!}\mathcal{O}\cdot Te^{nT}+\frac{1}{(n/2)!}\tilde{h}\cdot e^{nT}+O\big{(}T^{2}e ^{(n+2)T}\big{)} \tag{4}\]
This is an expansion in \(e^{T}\) and \(T\), which illustrates the lack of smoothness at infinity. On the contrary, in the odd dimensional case the above expansion would only contain powers of \(e^{T}\), allowing for a smooth compactification.
By further studying formal solutions of (3) of the form (4), we obtain a series of compatibility relations. These imply that each \(\phi_{a}\) is determined by \(\phi_{0}\) and its derivatives, and to leading order we have that \(\phi_{a}\sim\Delta^{a}\phi_{0}+\ldots\), for all \(1\leq a\leq\frac{n}{2}-1\). Here \(\Delta\) represents the Laplacian of the standard metric on \(S^{n}\). Similarly, we also have that \(\mathcal{O}\sim\Delta^{\frac{n}{2}}\phi_{0}+\ldots\) is determined by \(\phi_{0}.\) Moreover, we get that each \(O\big{(}T^{2}e^{(n+2)T}\big{)}\) term in (4) is formally determined by \(\phi_{0}\) and \(\tilde{h}\). This suggests that we could consider \(\big{(}\phi_{0},\tilde{h}\big{)}\) to represent asymptotic data at \(\mathcal{I}^{-}.\) While this choice would parameterize the space of smooth asymptotic data, it does not capture the quantitative properties of the solution. A key aspect that will be clear from our analysis is
the need to renormalize \(\tilde{h}\) by setting \(\mathfrak{h}=\tilde{h}-(\log\nabla)\mathcal{O},\) where \((\log\nabla)\mathcal{O}\) is defined using a suitable Fourier multiplier.
We state the main result of the paper:
**Theorem 1.1** (Scattering Theory for the Wave Equation).: _For any \(n\geq 4\) even integer, we have a complete scattering theory for (3):_
1. _Existence and uniqueness of scattering states: for any_ \(\phi_{0},\mathfrak{h}\in C^{\infty}(S^{n})\)_, there exists a unique smooth solution_ \(\tilde{\phi}:\mathbb{R}\times S^{n}\to\mathbb{R}\) _of (_3_) with asymptotic data at_ \(\mathcal{I}^{-}\) _given by_ \(\big{(}\phi_{0},\mathfrak{h}\big{)}\)_._
2. _Asymptotic completeness: any smooth solution_ \(\tilde{\phi}:\mathbb{R}\times S^{n}\to\mathbb{R}\) _of (_3_) induces_ \(\phi_{0},\mathfrak{h}\in C^{\infty}(S^{n})\) _unique asymptotic data at_ \(\mathcal{I}^{-}\) _and_ \(\underline{\phi_{0}},\underline{\mathfrak{h}}\in C^{\infty}(S^{n})\) _unique asymptotic data at_ \(\mathcal{I}^{+}.\)__
3. _The scattering isomorphism: for the above solution, we define the scattering map_ \(\big{(}\phi_{0},\mathfrak{h}\big{)}\mapsto\big{(}\underline{\phi_{0}}, \underline{\mathfrak{h}}\big{)}\)_. Identifying_ \(\mathcal{I}^{-}\) _and_ \(\mathcal{I}^{+}\) _with_ \(S^{n},\) _we have that the scattering map extends as a Banach space isomorphism on_ \(H^{s+n}(S^{n})\times H^{s}(S^{n})\) _for any_ \(s\geq 1\)_._
In the paper, we also introduce a class of "\(s\)-regularity solutions" with asymptotic initial data in \(H^{s+n}(S^{n})\times H^{s}(S^{n})\), for \(s\geq 2.\) We prove a similar scattering result for these low regularity solutions.
The same argument also applies in the case of the generalized de Sitter space, where we replace the standard sphere \(\big{(}S^{n},g_{S^{n}}\big{)}\) by any compact Riemannian manifold \(\big{(}M^{n},g_{M}\big{)}\) which satisfies the equation \(Ric(g_{M})=(n-1)g_{M}\).
We remark that the scattering problem for the Klein-Gordon equation on asymptotically de Sitter-like spaces was previously addressed in [20], which constructs the scattering isomorphism on \(C^{\infty}(S^{n})\times C^{\infty}(S^{n})\) using microlocal analysis techniques. On the other hand, in the present paper we take a different approach to construct the scattering map as a Banach space isomorphism on \(H^{s+n}(S^{n})\times H^{s}(S^{n})\). We also remark that the correspondence between Cauchy data on a finite time slice and asymptotic data at infinity for equation (3) was studied in the more general context of linear wave equations on cosmological metric backgrounds by Ringstrom in [19]. While this work deals with far more complex examples, applied to our situation it only controls \(H^{s}\)-type Sobolev norms with an \(\epsilon\)-loss of derivatives in each direction.
### Idea of the Proof
We briefly outline some of the most important ingredients of our proof. For the purpose of the exposition, we restrict to the case of smooth solutions for the rest of the introduction.
#### Self-similar Solutions of the Wave Equation in Minkowski Space
The first point is that de Sitter space \(\mathbb{R}\times S^{n}\) can be embedded as a hyperboloid in Minkowski space \(\mathbb{R}^{n+2}\), see [14]. Moreover, this hyperboloid is obtained as the quotient of the region \(\{u<0,\ v>0\}\subset\mathbb{R}^{n+2}\) by
the action of the scaling vector field \(S\). As a result, the study of (3) is equivalent to studying self-similar solutions of:
\[\Box\phi=0 \tag{5}\]
in the region \(\{u<0,\ v>0\}\subset\mathbb{R}^{n+2}.\)
The advantage of this perspective is that we can identify \(\mathcal{I}^{-}=\{T=-\infty\}\) to \(\{v=0\}\subset\mathbb{R}^{n+2},\) and similarly \(\mathcal{I}^{+}=\{T=\infty\}\) to \(\{u=0\}\subset\mathbb{R}^{n+2}.\) The goal is to prove a scattering theory with asymptotic data at \(\{v=0\}\) and \(\{u=0\}\). We can interpret this as a compactification which is useful even if the solution is not smooth up to the null cone, unlike the conformal compactification introduced previously. This allows us to reduce a global problem with data at infinity to a finite problem with singular data.
Another advantage is that self-similar vacuum spacetimes are well-studied in the region where \(S\) is spacelike in [14]. The paper develops a theory to explain the notion of asymptotic data on the incoming null cone \(\{v=0\},\) and it makes rigorous the Fefferman-Graham expansions of [13]. In our simplified setting of studying (3), this suggests an approach to define asymptotic data at \(\mathcal{I}^{-}\) and make the expansion (4) rigorous. Moreover, the similarities with [14] justify our notation for \(\mathcal{O}\) and \(h\). Finally, we point out that the Fefferman-Graham expansion at \(\mathcal{I}^{-}\) also appears in [1] in the odd dimensional case.
#### The Main Model Equation
The most important part of the analysis is not carried out at the level of (5), but only once we differentiate this \(\frac{n}{2}\) times in order to obtain the main model equation. The scattering theory for (5), and implicitly the proof of Theorem 1.1, will follow from the scattering theory for the main model equation.
To further explain this aspect, we notice that under the above correspondence we have that \(\phi\) satisfies the formal expansion on \(\{u=-1\}:\)
\[\phi(v)=\sum_{k=0}^{n/2-1}\frac{1}{k!}\phi_{k}\cdot v^{k}+\frac{1}{2(n/2)!} \mathcal{O}\cdot v^{\frac{n}{2}}\log v+\frac{1}{(n/2)!}\tilde{h}\cdot v^{ \frac{n}{2}}+O\big{(}v^{\frac{n}{2}+1}|\log v|^{2}\big{)} \tag{6}\]
The difficulty in this is that the freely prescribed data is at orders \(0\) and \(\frac{n}{2}.\) We define \(\alpha=\partial_{v}^{\frac{n}{2}}\phi,\) which satisfies the formal expansion on \(\{u=-1\}:\)
\[\alpha(v)=\frac{1}{2}\mathcal{O}\log v+h+O\big{(}v|\log v|^{2}\big{)} \tag{7}\]
where we renormalized \(\tilde{h}\) by a linear factor of \(\mathcal{O}.\) We also set \(\chi=\partial_{v}^{\frac{n}{2}-1}\phi\) and we introduce the new time variable \(\tau=\sqrt{v}.\) We obtain that \((\alpha,\chi)\) satisfy the main model equation:
\[\partial_{\tau}^{2}\alpha+\frac{1}{\tau}\partial_{\tau}\alpha+4q^{\prime} \alpha-\frac{4}{(\tau^{2}+1)^{2}}\cdot\Delta\alpha=f_{1}(\tau)\tau\partial_{ \tau}\alpha+f_{2}(\tau)\tau^{2}\alpha+f_{3}(\tau)\chi \tag{8}\]
\[\partial_{\tau}\chi(\tau)=2\tau\alpha(\tau) \tag{9}\]
where \(|f_{1}|+|f_{2}|+|f_{3}|=O(1)\). The asymptotic data for the system is given by \(\big{(}\chi(0),\mathcal{O},h\big{)}.\)
We use the system (8)-(9) to model the equation for \(\partial_{v}^{\frac{n}{2}}\phi\) along \(u=-1\) for \(v\in[0,1]\), and similarly the equation for \(\partial_{u}^{\frac{n}{2}}\phi\) along \(v=1\) for \(u\in[-1,0]\). We then recover the properties of \(\phi\) by integration and using the compatibility relation \(\mathcal{O}=\Delta^{\frac{n}{2}}\phi_{0}+\cdots.\) This proves the desired scattering statement for self-similar solutions of (5).
#### Scattering Theory for the Main Model Equation
Based on the argument outlined above, the essential step in proving Theorem 1.1 is the following scattering result for the main model equation (8)-(9):
**Theorem 1.2** (Scattering Theory for the Model Equation).: _We have a complete scattering theory for (8)-(9):_
1. _Existence and uniqueness of scattering states: for any_ \(\chi(0),\mathcal{O},h\in C^{\infty}(S^{n})\)_, there exists a unique smooth solution_ \(\big{(}\alpha,\chi\big{)}\) _of (_8_)-(_9_) with asymptotic data at_ \(\{\tau=0\}\) _given by_ \(\big{(}\chi(0),\mathcal{O},h\big{)}\)_, which satisfies the estimate:_ \[\left(\big{\|}\alpha\big{\|}_{H^{s+1/2}}+\big{\|}\partial_{\tau}\alpha\big{\|} _{H^{s-1/2}}+\big{\|}\chi\big{\|}_{H^{s+1/2}}\right)\bigg{|}_{\tau=1}\lesssim \big{\|}\mathfrak{h}\big{\|}_{H^{s}}+\big{\|}\mathcal{O}\big{\|}_{H^{s}}+ \big{\|}\chi(0)\big{\|}_{H^{s+1/2}},\] (10) _where_ \(\mathfrak{h}:=h-(\log\nabla)\mathcal{O}.\)__
2. _Asymptotic completeness: any smooth solution_ \(\big{(}\alpha,\chi\big{)}\) _of (_8_)-(_9_) with initial data at_ \(\{\tau=1\}\) _given by_ \(\chi(1),\alpha(1),\partial_{\tau}\alpha(1)\in C^{\infty}(S^{n})\) _induces smooth asymptotic data at_ \(\{\tau=0\}\) _given by_ \(\chi(0),\mathcal{O},\) _and_ \(h\)_, which satisfies the estimate:_ \[\big{\|}\mathfrak{h}\big{\|}_{H^{s}}+\big{\|}\mathcal{O}\big{\|}_{H^{s}}+ \big{\|}\chi(0)\big{\|}_{H^{s+1/2}}\lesssim\left(\big{\|}\alpha\big{\|}_{H^{s +1/2}}+\big{\|}\partial_{\tau}\alpha\big{\|}_{H^{s-1/2}}+\big{\|}\chi\big{\|}_ {H^{s+1/2}}\right)\bigg{|}_{\tau=1}\] (11)
3. _The scattering isomorphism: we define the scattering map_ \(\big{(}\chi(0),\mathcal{O},\mathfrak{h}\big{)}\mapsto\big{(}\chi(1),\alpha(1 ),\partial_{\tau}\alpha(1)\big{)}\)_. This extends as a Banach space isomorphism from_ \(H^{s+\frac{1}{2}}(S^{n})\times H^{s}(S^{n})\times H^{s}(S^{n})\) _to_ \(H^{s+\frac{1}{2}}(S^{n})\times H^{s+\frac{1}{2}}(S^{n})\times H^{s-\frac{1}{2} }(S^{n})\) _for any_ \(s\geq 1\)_._
One remarkable aspect of this statement is that at nonzero times the solution improves spatial regularity compared to the asymptotic data. We point out that a similar phenomenon is present in the more general context of linear wave equations on cosmological metric backgrounds in [14]. Restricted to our setting, [14] obtains an almost \(\frac{1}{2}\)-derivative improvement of regularity in terms of \(H^{s}\)-type Sobolev norms, but with an \(\epsilon\)-loss of derivatives in each direction (which is natural, since the quantity corresponding to \(h\) is not renormalized).
**Remark 1.1**.: _We point out that equation (8) also models the wave equation on the FLRW spacetime:_
\[-dt^{2}+t^{\frac{2}{3}}\big{(}dx_{1}^{2}+dx_{2}^{2}+dx_{3}^{2}\big{)}\]
_where we set \(\tau=t^{\frac{2}{3}}.\) The proof of Theorem 1.2 implies the existence of a scattering isomorphism between Cauchy data at \(\tau=1\) and asymptotic initial data at \(\tau=0.\)_
**Remark 1.2**.: _The methods used in the proof of Theorem 1.2 also apply to the scattering problem for the linearised Einstein vacuum equations with a positive cosmological constant, where the linearisation is done around de Sitter space in all even spatial dimensions \(n\geq 4\). According to [10],[11], any \((n+1)\)-dimensional solution of (2) corresponds to an \((n+2)\)-dimensional straight self-similar vacuum spacetime. In the gauge of [11], we consider the system of Bianchi equations satisfied by the curvature components of the \((n+2)\)-dimensional spacetime along an outgoing null hypersurface. This linearises to a system of equations of the form (8), which allows us to address the scattering problem in a future work._
#### Estimates from \(\mathcal{I}^{-}\) to a Finite Time Hypersurface
We briefly explain how to obtain the estimate (10). In the case of the wave equation (3), this gives an estimate on \(\frac{n}{2}\) time derivatives of \(\tilde{\phi}\) at finite times in terms of the asymptotic data at \(\mathcal{I}^{-}\). Denoting by \(\{\varphi_{i}\}\) the eigenfunctions of the Laplacian on \(S^{n}\) with eigenvalues \(\lambda_{i}\), we have the frequency decomposition \(\alpha=\sum_{i}\langle\alpha,\varphi_{i}\rangle\varphi_{i}.\) To simplify the discussion, we assume that the solution is supported on frequencies in the interval \([2^{l},2^{l+1})\), so \(\langle\alpha,\varphi_{i}\rangle=0\) for all \(\lambda_{i}\notin[2^{l},2^{l+1}).\) Also, we introduce the new time variable \(t=2^{l}\tau.\)
We decompose the solution into the regular and singular components which satisfy the expansions:
\[\alpha_{J}(t)=h-\mathcal{O}\log(2^{l})+O\big{(}t^{2}|\log(t)|^{2}\big{)},\]
\[\alpha_{Y}(t)=\mathcal{O}\log(t)+O\big{(}t^{2}|\log(t)|^{2}\big{)}.\]
This decomposition highlights the need to define \(\mathfrak{h}:=h-(\log\nabla)\mathcal{O}.\) We choose this notation since \(\big{(}\alpha_{J},\alpha_{Y}\big{)}\) will be shown to satisfy bounds similar to those of the Bessel functions of the first and second kind \(J_{0},\ Y_{0},\) as defined in [13].
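Heuristically, the analogy can be seen as follows (this computation is only meant to motivate the notation and is not used elsewhere): on a frequency block where \(-\Delta\) acts roughly as multiplication by \(2^{2l}\), after the rescaling \(t=2^{l}\tau\) described above, the leading part of (8) for small \(t\) is, up to bounded factors in the zeroth order coefficient,

\[\partial_{t}^{2}\alpha+\frac{1}{t}\partial_{t}\alpha+\alpha\approx 0,\]

which is Bessel's equation of order zero. Its solutions satisfy \(J_{0}(t)\sim 1\) and \(Y_{0}(t)\sim\frac{2}{\pi}\log t\) as \(t\to 0\), and both decay like \(t^{-1/2}\) as \(t\to\infty\), mirroring the low and high frequency behaviors described below.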
The time interval \(t\in[0,1]\) represents the low frequency regime for both components of the solution, with dominant behavior given by the first term in the above expansions. We capture this by energy estimates using multipliers suitable for each component. The time interval \(t\in[1,2^{l+1}]\) represents the high frequency regime for the solution, with leading behavior given by \(1/\sqrt{t},\) which is again captured using energy estimates. We point out that the high frequency regime behavior, together with the frequency dependent time of transition between the low and high regime, is responsible for the gain of regularity seen in (10).
The above argument is robust, since it relies only on energy estimates and multipliers. Moreover, most of the argument is carried out in physical space by using the time rescaling properties of the equation. The decomposition into frequency strips is only necessary for simple interpolation inequalities.
#### Estimates from a Finite Time Hypersurface to \(\mathcal{I}^{+}\)
We outline the proof of the estimate (11). In the case of the wave equation (3), this implies an estimate on the asymptotic data at \(\mathcal{I}^{+}\) in terms of \(\frac{n}{2}\) time derivatives of \(\tilde{\phi}\) at a finite time hypersurface. We assume as before that our solution is supported on frequencies in the interval \([2^{l},2^{l+1})\) and we introduce the time variable \(t=2^{l}\tau.\) We study the high frequency regime \(t\in[1,2^{l+1}]\) and the low frequency regime \(t\in[0,1]\)
using different energy estimates than in the previous case, in order to have good bulk terms. We remark that decomposing into frequency strips is now important for constructing multipliers.
A similar strategy of considering two separate regimes also appears in [14], which studies the ODEs obtained by projecting the solution on each eigenfunction. However, we can construct the desired scattering isomorphism in our situation since we anticipate the need to renormalize \(h\) to \(\mathfrak{h}\) before determining the asymptotics.
**Remark 1.3**.: _Once we prove the energy estimates (10) and (11), the existence results stated in Theorem 1.2 follow using standard iteration methods. Similarly, we introduce the notion of \(s\)-regularity solutions with non-smooth data in suitable Sobolev spaces and obtain analogous results by density arguments._
### Outline of the Paper
We outline the remainder of the paper. In Section 2, we introduce the necessary framework by proving the correspondence between solutions of the wave equation on de Sitter space and self-similar solutions of the wave equation in Minkowski space. We also compute the reduction to the main model system (8)-(9). In Section 3, we prove the scattering theory result in Theorem 1.2 for the main model system (8)-(9). We prove separately the energy estimates and the corresponding existence results for both smooth solutions and \(s\)-regularity solutions. In Section 4, we prove the scattering theory result in Theorem 4.1 for self-similar solutions of the wave equation in Minkowski space, which is based on Theorem 1.2 and the compatibility relations. In Section 5, we complete the proof of Theorem 1.1, which follows as a consequence of Theorem 4.1 and the correspondence in Section 2. We also prove a similar scattering result for \(s\)-regularity solutions in Theorem 5.1.
### Acknowledgements
The author would like to acknowledge Igor Rodnianski for his valuable advice and guidance in the process of writing this paper. The author would also like to thank Mihalis Dafermos and Warren Li for the very helpful discussions and suggestions.
## 2 Set-up
In this section we provide the framework needed in our proof. We introduce our coordinate conventions and the correspondence between solutions of the wave equation on de Sitter space and self-similar solutions of the wave equation in Minkowski space. Finally, we derive the main model system of equations (8)-(9).
### Coordinate Systems and Self-similarity
In the previous section we introduced the de Sitter metric in standard coordinates on \(\mathbb{R}\times S^{n}\):
\[g_{dS}=-dT^{2}+\cosh^{2}Td\sigma_{n}^{2}\]
where \(d\sigma_{n}^{2}\) denotes the standard metric on \(S^{n}.\) We want to write de Sitter space as a quotient of a region in Minkowski space by the action of the scaling vector field \(S.\)
We recall that the Minkowski metric in standard coordinates on \(\mathbb{R}^{n+2}\) is given by:
\[m=-dt^{2}+dr^{2}+r^{2}d\sigma_{n}^{2}\]
Moreover, this metric is self-similar, with the scaling vector field:
\[S=t\partial_{t}+r\partial_{r}\]
We introduce the standard double null coordinates:
\[u=\frac{t-r}{2},\ v=\frac{t+r}{2}\]
With respect to the double null coordinates, the Minkowski metric and the scaling vector field are:
\[m=-2\big{(}du\otimes dv+dv\otimes du\big{)}+r^{2}d\sigma_{n}^{2}\]
\[S=u\partial_{u}+v\partial_{v}\]
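For completeness, these formulas follow directly from \(t=u+v,\ r=v-u\), so that \(dt=du+dv,\ dr=dv-du\) and \(\partial_{t}=\frac{1}{2}(\partial_{u}+\partial_{v}),\ \partial_{r}=\frac{1}{2}(\partial_{v}-\partial_{u})\):

\[-dt^{2}+dr^{2}=-(du+dv)^{2}+(dv-du)^{2}=-2\big(du\otimes dv+dv\otimes du\big),\qquad t\partial_{t}+r\partial_{r}=u\partial_{u}+v\partial_{v}.\]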
In the region \(\{u<0,\ v>0\}\subset\mathbb{R}^{n+2}\) we define the self-similar coordinates:
\[x=2\sqrt{-uv},\ T=\frac{1}{2}\log\frac{v}{-u}\]
In self-similar coordinates we have that:
\[m=dx^{2}+x^{2}\big{(}-dT^{2}+\cosh^{2}Td\sigma_{n}^{2}\big{)}\]
\[S=x\partial_{x}\]
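A quick way to verify these formulas: in the region \(\{u<0,\ v>0\}\) we have \(x^{2}=-4uv=r^{2}-t^{2}\), and the definitions of \(x\) and \(T\) amount to \(t=x\sinh T,\ r=x\cosh T\). Therefore

\[-dt^{2}+dr^{2}=\big(\cosh^{2}T-\sinh^{2}T\big)\big(dx^{2}-x^{2}dT^{2}\big)=dx^{2}-x^{2}dT^{2},\qquad r^{2}d\sigma_{n}^{2}=x^{2}\cosh^{2}T\,d\sigma_{n}^{2},\]

and a direct computation gives \(S(x)=x,\ S(T)=0\), so \(S=x\partial_{x}\) as stated.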
The above formula shows that de Sitter space is the quotient of \(\{u<0,\ v>0\}\subset\mathbb{R}^{n+2}\) by the action of the scaling vector field \(S\). We define the embedding map \(\iota:\mathbb{R}\times S^{n}\rightarrow(0,\infty)\times\mathbb{R}\times S^{n}\) by \(\iota(T,\omega)=(1,T,\omega)\). We also define the projection map \(\pi:(0,\infty)\times\mathbb{R}\times S^{n}\rightarrow\mathbb{R}\times S^{n}\) by \(\pi(x,T,\omega)=(T,\omega)\). We proved that:
**Lemma 2.1**.: \(\tilde{\phi}:\mathbb{R}\times S^{n}\rightarrow\mathbb{R}\) _is a solution of \(\square_{dS}\tilde{\phi}=0\) if and only if \(\phi:(0,\infty)\times\mathbb{R}\times S^{n}\rightarrow\mathbb{R}\) given by \(\phi=\tilde{\phi}\circ\pi\) is a self-similar solution of \(\square\phi=0.\) If this holds, we also have \(\tilde{\phi}=\phi\circ\iota.\)_
We conclude that studying the linear wave equation on de Sitter space \(\mathbb{R}\times S^{n}\) is equivalent to studying self-similar solutions of the wave equation on the \(\{u<0,\ v>0\}\) region of Minkowski space \(\mathbb{R}^{n+2}\). Moreover, we can identify \(\mathcal{I}^{-}=\{T=-\infty\}\) to \(\{v=0\}\subset\mathbb{R}^{n+2}\), and similarly \(\mathcal{I}^{+}=\{T=\infty\}\) to \(\{u=0\}\subset\mathbb{R}^{n+2}\).
### Derivation of the Main Model Equation
For any even integer \(n\geq 4\), we have the wave equation in double null coordinates on \(\mathbb{R}^{n+2}:\)
\[\partial_{u}\partial_{v}\phi-\frac{n/2}{v-u}\partial_{v}\phi+\frac{n/2}{v-u} \partial_{u}\phi-(v-u)^{-2}\Delta\phi=0 \tag{12}\]
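This is the standard wave operator \(\square=-\partial_{t}^{2}+\partial_{r}^{2}+\frac{n}{r}\partial_{r}+r^{-2}\Delta\) on \(\mathbb{R}^{n+2}\) rewritten in double null coordinates: using \(\partial_{u}=\partial_{t}-\partial_{r},\ \partial_{v}=\partial_{t}+\partial_{r}\) and \(r=v-u\), we have

\[-\partial_{t}^{2}+\partial_{r}^{2}=-\partial_{u}\partial_{v},\qquad\frac{n}{r}\partial_{r}=\frac{n/2}{v-u}\big(\partial_{v}-\partial_{u}\big),\]

so \(\square\phi=0\) is equivalent to (12) after multiplying by \(-1\).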
As explained in the introduction, we want to derive an equation for \(\partial_{v}^{\frac{n}{2}}\phi\) on \(u=-1.\) We use the self-similarity assumption \(S\phi=0\) in order to rewrite the above wave equation on \(u=-1:\)
\[v\partial_{v}^{2}\phi+\bigg{(}1-\frac{n}{2}\bigg{)}\partial_{v}\phi+\frac{nv} {v+1}\partial_{v}\phi-\frac{1}{(v+1)^{2}}\Delta\phi=0 \tag{13}\]
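The two substitutions used here follow from a short computation with \(S\phi=u\partial_{u}\phi+v\partial_{v}\phi=0\): on \(\{u=-1\}\) we have \(\partial_{u}\phi=v\partial_{v}\phi\), and differentiating \(S\phi=0\) with respect to \(v\) gives \(\partial_{u}\partial_{v}\phi=\partial_{v}\phi+v\partial_{v}^{2}\phi\) on \(\{u=-1\}\); substituting these into (12), with \(v-u=v+1\), yields (13).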
We multiply by \((v+1)^{2}\) and differentiate with respect to \(v\):
\[v(v+1)^{2}\partial_{v}^{3}\phi+\bigg{(}2-\frac{n}{2}\bigg{)}(v+1)^{2}\partial _{v}^{2}\phi+(n+2)(v+1)v\partial_{v}^{2}\phi+(nv+2v+2)\partial_{v}\phi-\Delta \partial_{v}\phi=0 \tag{14}\]
By induction we obtain that for all \(2\leq a\leq\frac{n}{2}\), there exist constants \(p_{a},q_{a},p^{\prime}_{a},q^{\prime}_{a},q^{\prime\prime}_{a}\geq 1\) such that:
\[v(v+1)^{2}\partial_{v}^{a+2}\phi+\bigg{(}a+1-\frac{n}{2}\bigg{)}(v+1)^{2} \partial_{v}^{a+1}\phi+\big{[}n(v+1)+p_{a}v+q_{a}\big{]}v\partial_{v}^{a+1}\phi+ \tag{15}\]
\[+(p^{\prime}_{a}v+q^{\prime}_{a})\partial_{v}^{a}\phi+q^{\prime\prime}_{a} \partial_{v}^{a-1}\phi-\Delta\partial_{v}^{a}\phi=0\]
In particular, in the case \(a=\frac{n}{2}\) we obtain the desired equation for \(\partial_{v}^{\frac{n}{2}}\phi\):
\[v\partial_{v}^{\frac{n}{2}+2}\phi+\partial_{v}^{\frac{n}{2}+1}\phi+\frac{pv+q }{(v+1)^{2}}v\partial_{v}^{\frac{n}{2}+1}\phi+\frac{p^{\prime}v+q^{\prime}}{(v +1)^{2}}\partial_{v}^{\frac{n}{2}}\phi+\frac{q^{\prime\prime}}{(v+1)^{2}} \partial_{v}^{\frac{n}{2}-1}\phi-\frac{1}{(v+1)^{2}}\Delta\partial_{v}^{ \frac{n}{2}}\phi=0\]
for some constants \(p,q,p^{\prime},q^{\prime},q^{\prime\prime}\) with \(q^{\prime}\geq 1\).
We introduce the notation \(\alpha=\partial_{v}^{\frac{n}{2}}\phi,\ \chi=\partial_{v}^{\frac{n}{2}-1}\phi\). Thus, we have deduced the following equation:
\[v\partial_{v}^{2}\alpha+\partial_{v}\alpha+\frac{pv+q}{(v+1)^{2}}v\partial_{ v}\alpha+\frac{p^{\prime}v+q^{\prime}}{(v+1)^{2}}\alpha+\frac{q^{\prime\prime}}{(v +1)^{2}}\chi-\frac{1}{(v+1)^{2}}\Delta\alpha=0 \tag{16}\]
Finally, under the change of variables \(\tau=\sqrt{v}\) we obtain:
\[\partial_{\tau}^{2}\alpha+\frac{1}{\tau}\partial_{\tau}\alpha+4q^{\prime} \alpha-\frac{4}{(\tau^{2}+1)^{2}}\cdot\Delta\alpha=f_{1}(\tau)\tau\partial_{ \tau}\alpha+f_{2}(\tau)\tau^{2}\alpha+f_{3}(\tau)\chi\]
\[\partial_{\tau}\chi(\tau)=2\tau\alpha(\tau)\]
where \(f_{1},f_{2},f_{3}\) can be computed explicitly and satisfy \(|f_{1}|+|f_{2}|+|f_{3}|=O(1)\), but their exact values are irrelevant. This completes the derivation of the main model system of equations (8)-(9).
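We record the elementary computation behind the change of variables: since \(v=\tau^{2}\) we have \(\partial_{v}=\frac{1}{2\tau}\partial_{\tau}\), and therefore

\[v\partial_{v}^{2}\alpha+\partial_{v}\alpha=\frac{1}{4}\Big(\partial_{\tau}^{2}\alpha+\frac{1}{\tau}\partial_{\tau}\alpha\Big),\]

so (8) is obtained from (16) by multiplying by \(4\), with the remaining lower order terms absorbed into \(f_{1},f_{2},f_{3}\).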
## 3 Estimates for the Main Model Equation
The essential part of the analysis needed for the scattering theory for the wave equation on de Sitter space in Theorem 1.1 is carried out at the level of (8)-(9). In this section, we study this model system of equations:
\[\partial_{\tau}^{2}\alpha+\frac{1}{\tau}\partial_{\tau}\alpha+4q^{\prime} \alpha-\frac{4}{(\tau^{2}+1)^{2}}\cdot\Delta\alpha=f_{1}(\tau)\tau\partial_{ \tau}\alpha+f_{2}(\tau)\tau^{2}\alpha+f_{3}(\tau)\chi \tag{17}\]
\[\partial_{\tau}\chi(\tau)=2\tau\alpha(\tau) \tag{18}\]
where \(|f_{1}|+|f_{2}|+|f_{3}|=O(1).\)
We first introduce the notion of a smooth solution of the above system:
**Definition 3.1**.: _Let \(\chi(0),\mathcal{O},h\in C^{\infty}(S^{n}).\) We say that \(\big{(}\alpha,\chi\big{)}\) is a smooth solution of (17)-(18) with asymptotic initial data given by \(\chi(0),\mathcal{O},\) and \(h\) if:_
\[\alpha-\mathcal{O}\log(\tau)-h\in C^{1}_{\tau}([0,\infty))C^{\infty}(S^{n}), \ \chi\in C^{1}_{\tau}([0,\infty))C^{\infty}(S^{n})\]
_and \(\alpha\) satisfies the expansions:_
\[\alpha(\tau)=\mathcal{O}\log(\tau)+h+O\big{(}\tau^{2}|\log(\tau)|^{2}\big{)}, \ \partial_{\tau}\alpha(\tau)=\frac{\mathcal{O}}{\tau}+O\big{(}\tau|\log(\tau)|^{ 2}\big{)}\ \text{in}\ C^{\infty}(S^{n}) \tag{19}\]
_Given \(\alpha,\chi\in C^{\infty}\big{(}(0,\infty)\times S^{n}\big{)}\) solving (17)-(18), we say that \(\big{(}\chi(0),\mathcal{O},h\big{)}\) determine the asymptotic data of the solution if the above conditions hold._
The main result of the section is the proof of Theorem 1.2. This consists of proving energy estimates from \(\mathcal{I}^{-}\) to finite times in Section 3.1, the existence and uniqueness of scattering states in Section 3.2, energy estimates from finite times to \(\mathcal{I}^{+}\) in Section 3.3, and asymptotic completeness in Section 3.4. As a result, we can construct the scattering map from asymptotic data at \(\tau=0\) to initial data at \(\tau=1\), and we show that it is an isomorphism. By density, we also prove a similar result in the case of the \(s\)-regularity solutions defined in Section 3.2.
### Estimates from \(\mathcal{I}^{-}\) to a Finite Time Hypersurface
In this section we assume the existence of smooth solutions of (17)-(18) with prescribed asymptotic initial data, and prove estimates on the solutions at nonzero times \(\tau\in(0,1]\) in terms of the asymptotic data. In the context of (3), these correspond to estimates on \(\frac{n}{2}\) time derivatives of \(\tilde{\phi}\) at finite times in terms of the asymptotic data at \(\mathcal{I}^{-}\). The main result in Theorem 3.1 proves the estimate (10) of Theorem 1.2. As a consequence of our estimates, we also obtain the uniqueness of solutions with given smooth asymptotic initial data (_uniqueness of scattering states_).
Let \(\chi(0),\mathcal{O},h\) be smooth functions. For any parameters \(m_{J},m_{Y}\geq 1,\) we consider \(\big{(}\alpha_{J},\chi_{J}\big{)}\) and \(\big{(}\alpha_{Y},\chi_{Y}\big{)}\) to be smooth solutions of (17)-(18) with asymptotic initial data given by \(\big{(}\chi_{J}(0)=\frac{1}{2}\chi(0),0,\mathfrak{h}\big{)}\) and \(\big{(}\chi_{Y}(0)=\frac{1}{2}\chi(0),\mathcal{O},\mathcal{O}\log(m_{Y})\big{)},\) where \(\mathfrak{h}=h-\mathcal{O}\log(m_{Y}).\) In particular, the solutions satisfy the expansions:
\[\alpha_{J}(\tau)=\mathfrak{h}+O\big{(}\tau^{2}|\log(\tau)|^{2}\big{)},\]
\[\alpha_{Y}(\tau)=\mathcal{O}\log(m_{Y}\tau)+O\big{(}\tau^{2}|\log(\tau)|^{2} \big{)}.\]
Using the fact that (17) and (18) are linear, we obtain that \(\alpha=\alpha_{J}+\alpha_{Y},\ \chi=\chi_{J}+\chi_{Y}\) also solve (17)-(18), with asymptotic initial data given by \(\big{(}\chi(0),\mathcal{O},h\big{)}.\) We refer to \(\alpha_{J}\) as the regular component of \(\alpha\), and to \(\alpha_{Y}\)
as the singular component of \(\alpha\). We recall that our notation suggests the fact that \(\alpha_{J}\) and \(\alpha_{Y}\) satisfy similar bounds as the first and second Bessel functions.
When proving estimates for the solution, we treat separately the low frequency regime \(\tau\in[0,(2m)^{-1}]\) with dominant behavior given by the first term in the above expansions, and the high frequency regime \(\tau\in[(2m)^{-1},1]\) with dominant behavior bounded by \(1/\sqrt{m\tau}.\) We remark that the transition time depends on our choice of the parameter \(m.\) This time rescaling property of the equations allows us to carry out most of the estimates without localizing in frequency.
We also remark that in order to prove estimates for the two components of the solution, we use different multipliers adapted to their asymptotic expansion at \(\tau=0\). We start by proving a low frequency regime estimate for the regular component of the solution:
**Proposition 3.1**.: _For any \(\tau\leq(m_{J})^{-1}\), we have that \(\alpha_{J}\) and \(\chi_{J}\) satisfy the estimates:_
\[\big{\|}\alpha_{J}\big{\|}_{H^{1}}^{2}+\big{\|}\partial_{\tau}\alpha_{J}\big{\|} _{L^{2}}^{2}\lesssim\big{\|}\mathfrak{h}\big{\|}_{H^{1}}^{2}+\frac{1}{m_{J}^{ 2}}\big{\|}\chi(0)\big{\|}_{L^{2}}^{2} \tag{20}\]
\[\big{\|}\alpha_{J}\big{\|}_{L^{2}}^{2}\lesssim\big{\|}\mathfrak{h}\big{\|}_{L ^{2}}^{2}+\tau^{2}\big{\|}\nabla\mathfrak{h}\big{\|}_{L^{2}}^{2}+\frac{\tau^{2 }}{m_{J}^{2}}\big{\|}\chi(0)\big{\|}_{L^{2}}^{2} \tag{21}\]
\[\big{\|}\chi_{J}\big{\|}_{L^{2}}^{2}\lesssim\big{\|}\chi(0)\big{\|}_{L^{2}}^{ 2}+\tau^{4}\big{\|}\mathfrak{h}\big{\|}_{H^{1}}^{2} \tag{22}\]
Proof.: We introduce the new time variable \(t=m_{J}\tau\). Equation (17) can be written as:
\[\partial_{t}^{2}\alpha_{J}+\frac{1}{t}\partial_{t}\alpha_{J}+4q^{\prime}\frac {\alpha_{J}}{m_{J}^{2}}-\frac{4}{(t^{2}/m_{J}^{2}+1)^{2}}\cdot\Delta\frac{ \alpha_{J}}{m_{J}^{2}}=\frac{f_{1}^{\prime}(t)}{m_{J}^{2}}t\partial_{t} \alpha_{J}+\frac{f_{2}^{\prime}(t)}{m_{J}^{4}}t^{2}\alpha_{J}+\frac{f_{3}^{ \prime}(t)}{m_{J}^{2}}\chi_{J}\]
where \(f_{1}^{\prime},f_{2}^{\prime},f_{3}^{\prime}\) are bounded functions of \(t.\) We also notice that from (18) we have:
\[\chi_{J}(t)=\chi(0)+\frac{1}{m_{J}^{2}}\int_{0}^{t}2t^{\prime}\alpha_{J}(t^{ \prime})dt^{\prime}\]
Multiplying the equation by \(\partial_{t}\alpha_{J}\) and integrating by parts, we obtain the standard energy estimate:
\[\big{\|}\partial_{t}\alpha_{J}\big{\|}_{L^{2}}^{2}(t)+\int_{0}^{t}\frac{1}{t^{ \prime}}\big{\|}\partial_{t}\alpha_{J}\big{\|}_{L^{2}}^{2}(t^{\prime})dt^{ \prime}+\frac{1}{m_{J}^{2}}\big{\|}\alpha_{J}\big{\|}_{H^{1}}^{2}(t)+\frac{1} {m_{J}^{4}}\int_{0}^{t}t^{\prime}\big{\|}\nabla\alpha_{J}\big{\|}_{L^{2}}^{2}( t^{\prime})dt^{\prime}\lesssim\]
\[\lesssim\frac{1}{m_{J}^{2}}\big{\|}\mathfrak{h}\big{\|}_{H^{1}}^{2}+\int_{0}^ {t}\frac{t^{\prime}}{m_{J}^{2}}\big{\|}\partial_{t}\alpha_{J}\big{\|}_{L^{2}}^ {2}+\int_{0}^{t}\int_{S}\frac{1}{m_{J}^{2}}\bigg{(}|\chi_{J}|+\frac{t^{\prime 2 }}{m_{J}^{2}}|\alpha_{J}|\bigg{)}\cdot|\partial_{t}\alpha_{J}|dt^{\prime}\]
We notice that for \(t\in\big{[}0,1\big{]}\), we can apply Gronwall to \(\partial_{t}\alpha_{J}\) to obtain:
\[\big{\|}\partial_{t}\alpha_{J}\big{\|}_{L^{2}}^{2}+\frac{1}{m_{J}^{2}}\big{\|} \alpha_{J}\big{\|}_{H^{1}}^{2}\lesssim\frac{1}{m_{J}^{2}}\big{\|}\mathfrak{h} \big{\|}_{H^{1}}^{2}+\int_{0}^{t}\frac{1}{m_{J}^{4}}\bigg{(}\big{\|}\chi_{J} \big{\|}_{L^{2}}^{2}+\frac{1}{m_{J}^{4}}\big{\|}\alpha_{J}\big{\|}_{L^{2}}^{2} \bigg{)}\]
We also have the bound:
\[\big{\|}\chi_{J}\big{\|}_{L^{2}}^{2}\lesssim\big{\|}\chi(0)\big{\|}_{L^{2}}^{2} +\frac{1}{m_{J}^{4}}\int_{0}^{t}\big{\|}\alpha_{J}\big{\|}_{L^{2}}^{2}dt^{\prime}\]
We combine our previous two estimates to obtain:
\[\left\|\partial_{t}\alpha_{J}\right\|_{L^{2}}^{2}+\frac{1}{m_{J}^{2}}\cdot\left\| \alpha_{J}\right\|_{H^{1}}^{2}\lesssim\frac{1}{m_{J}^{2}}\left\|\mathfrak{h} \right\|_{H^{1}}^{2}+\frac{1}{m_{J}^{4}}\left\|\chi(0)\right\|_{L^{2}}^{2}+ \frac{1}{m_{J}^{8}}\int_{0}^{t}\left\|\alpha_{J}\right\|_{L^{2}}^{2}dt^{\prime}\]
We apply Gronwall to obtain (20). We use this in the inequality:
\[\left\|\alpha_{J}\right\|_{L^{2}}\lesssim\left\|\mathfrak{h}\right\|_{L^{2}}+ \int_{0}^{\tau}\left\|\partial_{\tau}\alpha_{J}\right\|_{L^{2}}d\tau^{\prime}\]
in order to prove (21). Finally, these bounds together with (18) imply (22).
We prove a similar low frequency regime estimate for the singular component of the solution:
**Proposition 3.2**.: _For any \(\tau\leq(2m_{Y})^{-1}\), we have that \(\alpha_{Y}\) satisfies the estimate:_
\[\left\|\frac{\alpha_{Y}}{\log(m_{Y}\tau)}\right\|_{H^{1}}^{2}+\left\|\partial _{\tau}\frac{\alpha_{Y}}{\log(m_{Y}\tau)}\right\|_{L^{2}}^{2}\lesssim\left\| \mathcal{O}\right\|_{H^{1}}^{2}+\frac{1}{m_{Y}^{2}}\left\|\chi(0)\right\|_{L^{ 2}}^{2} \tag{23}\]
\[\left\|\frac{\alpha_{Y}}{\log(m_{Y}\tau)}\right\|_{L^{2}}^{2}\lesssim\left\| \mathcal{O}\right\|_{L^{2}}^{2}+\tau^{2}\left\|\nabla\mathcal{O}\right\|_{L^{ 2}}^{2}+\frac{\tau^{2}}{m_{Y}^{2}}\left\|\chi(0)\right\|_{L^{2}}^{2} \tag{24}\]
\[\left\|\chi_{Y}\right\|_{L^{2}}^{2}\lesssim\left\|\chi(0)\right\|_{L^{2}}^{2}+ \tau^{2}\left\|\mathcal{O}\right\|_{H^{1}}^{2} \tag{25}\]
Proof.: We introduce the new time variable \(t=m_{Y}\tau\). We notice that the expansion for \(\alpha_{Y}\) implies:
\[\frac{\alpha_{Y}}{\log t}\bigg{|}_{t=0}=\mathcal{O},\ \partial_{t}\bigg{(} \frac{\alpha_{Y}}{\log t}\bigg{)}\bigg{|}_{t=0}=0\]
As before, equation (17) can be written as:
\[\partial_{t}^{2}\alpha_{Y}+\frac{1}{t}\partial_{t}\alpha_{Y}+4q^{\prime}\frac {\alpha_{Y}}{m_{Y}^{2}}-\frac{4}{(t^{2}/m_{Y}^{2}+1)^{2}}\cdot\Delta\frac{ \alpha_{Y}}{m_{Y}^{2}}=\frac{f_{1}^{\prime}(t)}{m_{Y}^{2}}t\partial_{t}\alpha _{Y}+\frac{f_{2}^{\prime}(t)}{m_{Y}^{4}}t^{2}\alpha_{Y}+\frac{f_{3}^{\prime}(t )}{m_{Y}^{2}}\chi_{Y}\]
where \(f_{1}^{\prime},f_{2}^{\prime},f_{3}^{\prime}\) are bounded functions of \(t.\) We multiply by \(\frac{1}{\log t}\) to get:
\[\partial_{t}^{2}\bigg{(}\frac{\alpha_{Y}}{\log t}\bigg{)}+\frac{1}{t}\bigg{(} 1+\frac{2}{\log t}\bigg{)}\partial_{t}\bigg{(}\frac{\alpha_{Y}}{\log t} \bigg{)}+\frac{4q^{\prime}}{m_{Y}^{2}}\cdot\frac{\alpha_{Y}}{\log t}-\frac{4}{ (t^{2}/m_{Y}^{2}+1)^{2}}\cdot\frac{1}{m_{Y}^{2}}\Delta\frac{\alpha_{Y}}{\log t}=\]
\[=\frac{f_{1}^{\prime\prime}(t)}{m_{Y}^{2}}t\partial_{t}\bigg{(}\frac{\alpha_{Y }}{\log t}\bigg{)}+\frac{f_{2}^{\prime\prime}(t)}{m_{Y}^{2}}\cdot\frac{\alpha _{Y}}{\log t}+\frac{f_{3}^{\prime\prime}(t)}{m_{Y}^{2}}\chi_{Y}\]
where \(f_{1}^{\prime\prime},f_{2}^{\prime\prime},f_{3}^{\prime\prime}\) are bounded functions of \(t.\) We also have from (18):
\[\chi_{Y}(t)=\chi(0)+\frac{1}{m_{Y}^{2}}\int_{0}^{t}2t^{\prime}\alpha_{Y}(t^{ \prime})dt^{\prime}\]
which implies the bound:
\[\left\|\chi_{Y}\right\|_{L^{2}}^{2}\lesssim\left\|\chi(0)\right\|_{L^{2}}^{2} +\frac{1}{m_{Y}^{4}}\int_{0}^{t}\left\|\frac{\alpha_{Y}}{\log t}\right\|_{L^{2} }^{2}dt^{\prime}\]
Using the equation for \(\frac{\alpha_{Y}}{\log t}\), we obtain the standard energy estimate:
\[\left\|\partial_{t}\frac{\alpha_{Y}}{\log t}\right\|_{L^{2}}^{2}(t)+\int_{0}^{ t}\frac{1}{t^{\prime}}\left\|\partial_{t}\frac{\alpha_{Y}}{\log t}\right\|_{L^{2}}^{2}(t^{ \prime})dt^{\prime}+\frac{1}{m_{Y}^{2}}\left\|\frac{\alpha_{Y}}{\log t} \right\|_{H^{1}}^{2}(t)+\frac{1}{m_{Y}^{4}}\int_{0}^{t}t^{\prime}\left\|\frac {\nabla\alpha_{Y}}{\log t}\right\|_{L^{2}}^{2}(t^{\prime})dt^{\prime}\lesssim\]
\[\lesssim\frac{1}{m_{Y}^{2}}\big{\|}\mathcal{O}\big{\|}_{H^{1}}^{2}+\int_{0}^{t}\left(\frac{t^{\prime}}{m_{Y}^{2}}+\frac{\mathbf{1}_{[1/10,1/2]}}{t^{\prime}|\log t^{\prime}|}\right)\bigg{\|}\partial_{t}\frac{\alpha_{Y}}{\log t}\bigg{\|}_{L^{2}}^{2}+\int_{0}^{t}\int_{S}\frac{1}{m_{Y}^{2}}\bigg{(}|\chi_{Y}|+\bigg{|}\frac{\alpha_{Y}}{\log t}\bigg{|}\bigg{)}\cdot\bigg{|}\partial_{t}\frac{\alpha_{Y}}{\log t}\bigg{|}\]
We point out that the error term \(\frac{1}{t^{\prime}|\log t^{\prime}|}\big{\|}\partial_{t}\frac{\alpha_{Y}}{\log t}\big{\|}_{L^{2}}^{2}\cdot\mathbf{1}_{[1/10,1/2]}\) appears on the RHS because for \(t\in[0,1/10]\) we have \(1+2/\log t\gtrsim 1.\) Since \(t\in\big{[}0,\frac{1}{2}\big{]},\) we apply Gronwall to \(\partial_{t}\frac{\alpha_{Y}}{\log t}\) to obtain:
\[\bigg{\|}\partial_{t}\frac{\alpha_{Y}}{\log t}\bigg{\|}_{L^{2}}^{2}+\frac{1}{ m_{Y}^{2}}\cdot\bigg{\|}\frac{\alpha_{Y}}{\log t}\bigg{\|}_{H^{1}}^{2}(t)\lesssim \frac{1}{m_{Y}^{2}}\big{\|}\mathcal{O}\big{\|}_{H^{1}}^{2}+\frac{1}{m_{Y}^{4 }}\int_{0}^{t}\bigg{(}\big{\|}\chi_{Y}\big{\|}_{L^{2}}^{2}+\bigg{\|}\frac{ \alpha_{Y}}{\log t}\bigg{\|}_{L^{2}}^{2}\bigg{)}\lesssim\]
\[\lesssim\frac{1}{m_{Y}^{2}}\big{\|}\mathcal{O}\big{\|}_{H^{1}}^{2}+\frac{1}{m _{Y}^{4}}\big{\|}\chi(0)\big{\|}_{L^{2}}^{2}+\frac{1}{m_{Y}^{4}}\int_{0}^{t} \bigg{\|}\frac{\alpha_{Y}}{\log t}\bigg{\|}_{L^{2}}^{2}(t^{\prime})dt^{\prime}\]
We apply Gronwall to obtain (23). We use this in the inequality:
\[\bigg{\|}\frac{\alpha_{Y}}{\log(m_{Y}\tau)}\bigg{\|}_{L^{2}}\lesssim\big{\|} \mathcal{O}\big{\|}_{L^{2}}+\int_{0}^{\tau}\bigg{\|}\partial_{\tau}\frac{ \alpha_{Y}}{\log(m_{Y}\tau)}\bigg{\|}_{L^{2}}d\tau^{\prime}\]
in order to prove (24). Finally, these bounds together with (18) imply (25).
Next, we prove a high frequency regime estimate which applies both to the regular component and to the singular component of the solution:
**Proposition 3.3**.: _For any parameter \(m\geq 1,\) and any \(\tau\in\big{[}(2m)^{-1},1\big{]}\) we have that \(\alpha\) satisfies the estimate:_

\[\frac{1}{\tau}\big{\|}\alpha\big{\|}_{L^{2}}^{2}+\tau\big{\|}\nabla\alpha\big{\|}_{L^{2}}^{2}+\tau\big{\|}\partial_{\tau}\alpha\big{\|}_{L^{2}}^{2}\lesssim\frac{1}{m}\cdot\left(m^{2}\big{\|}\alpha\big{\|}_{L^{2}}^{2}+\big{\|}\nabla\alpha\big{\|}_{L^{2}}^{2}+\big{\|}\partial_{\tau}\alpha\big{\|}_{L^{2}}^{2}+m\big{\|}\chi\big{\|}_{L^{2}}^{2}\right)\bigg{|}_{\tau=(2m)^{-1}}\]
Proof.: We introduce the new time variable \(t=m\tau\). As before, equation (17) can be written as:
\[\partial_{t}^{2}\alpha+\frac{1}{t}\partial_{t}\alpha+4q^{\prime}\frac{\alpha} {m^{2}}-\frac{4}{(t^{2}/m^{2}+1)^{2}}\cdot\Delta\frac{\alpha}{m^{2}}=\frac{f_{ 1}^{\prime}(t)}{m^{2}}t\partial_{t}\alpha+\frac{f_{2}^{\prime}(t)}{m^{4}}t^{2 }\alpha+\frac{f_{3}^{\prime}(t)}{m^{2}}\chi\]
where \(f_{1}^{\prime},f_{2}^{\prime},f_{3}^{\prime}\) are bounded functions of \(t.\) We multiply by \(\sqrt{t}\) to get:
\[\partial_{t}^{2}\big{(}\alpha\sqrt{t}\big{)}+\frac{1}{4t^{2}}\alpha\sqrt{t}+ \frac{4q^{\prime}}{m^{2}}\alpha\sqrt{t}-\frac{4}{(t^{2}/m^{2}+1)^{2}}\cdot \Delta\frac{\alpha\sqrt{t}}{m^{2}}=\frac{f_{1}^{\prime\prime}(t)}{m^{2}}t \partial_{t}\big{(}\alpha\sqrt{t}\big{)}+\frac{f_{2}^{\prime\prime}(t)}{m^{2}} \alpha\sqrt{t}+\frac{f_{3}^{\prime\prime}(t)}{m^{2}}\chi\sqrt{t}\]
where \(f_{1}^{\prime\prime},f_{2}^{\prime\prime},f_{3}^{\prime\prime}\) are bounded functions of \(t.\) We also have from (18):
\[\chi(t)=\chi(1/2)+\frac{1}{m^{2}}\int_{1/2}^{t}2t^{\prime}\alpha(t^{\prime})dt^{\prime}\]
which implies the bound for \(t\in[1/2,m]\):
\[\big{\|}\chi\big{\|}_{L^{2}}^{2}\lesssim\big{\|}\chi(1/2)\big{\|}_{L^{2}}^{2}+ \frac{1}{m^{2}}\int_{1/2}^{t}\big{\|}\alpha\sqrt{t^{\prime}}\big{\|}_{L^{2}}^{2} dt^{\prime}\]
Using the equation for \(\alpha\sqrt{t}\), we obtain the standard energy estimate:
\[\big{\|}\partial_{t}\big{(}\alpha\sqrt{t}\big{)}\big{\|}_{L^{2}}^{2}(t)+\frac{ 1}{t^{2}}\big{\|}\alpha\sqrt{t}\big{\|}_{L^{2}}^{2}(t)+\int_{1/2}^{t}\frac{1}{t^ {\prime 3}}\big{\|}\alpha\sqrt{t^{\prime}}\big{\|}_{L^{2}}^{2}dt^{\prime}+\frac{1}{m^{2 }}\big{\|}\nabla\alpha\sqrt{t}\big{\|}_{L^{2}}^{2}(t)+\frac{1}{m^{4}}\int_{1/2}^ {t}t^{\prime}\big{\|}\nabla\alpha\sqrt{t^{\prime}}\big{\|}_{L^{2}}^{2}dt^{ \prime}\lesssim\]
\[\lesssim\left(\big{\|}\partial_{t}\alpha\big{\|}_{L^{2}}^{2}+\big{\|}\alpha\big{\|}_{L^{2}}^{2}+\frac{1}{m^{2}}\big{\|}\nabla\alpha\big{\|}_{L^{2}}^{2}\right)\bigg{|}_{t=\frac{1}{2}}+\int_{1/2}^{t}\frac{t^{\prime}}{m^{2}}\big{\|}\partial_{t}\big{(}\alpha\sqrt{t^{\prime}}\big{)}\big{\|}_{L^{2}}^{2}+\int_{1/2}^{t}\int_{S}\frac{\sqrt{t^{\prime}}}{m^{2}}\big{(}|\chi|+|\alpha|\big{)}\cdot\big{|}\partial_{t}\big{(}\alpha\sqrt{t^{\prime}}\big{)}\big{|}\]
We remark that we can bound:
\[\int_{1/2}^{t}\int_{S}\frac{\sqrt{t^{\prime}}}{m^{2}}\big{(}|\chi|+|\alpha|\big{)}\cdot\big{|}\partial_{t}\big{(}\alpha\sqrt{t^{\prime}}\big{)}\big{|}\lesssim\int_{1/2}^{t}\frac{t^{\prime}}{m^{2}}\big{\|}\partial_{t}\big{(}\alpha\sqrt{t^{\prime}}\big{)}\big{\|}_{L^{2}}^{2}+\frac{1}{m^{2}}\int_{1/2}^{t}\big{\|}\alpha\big{\|}_{L^{2}}^{2}+\big{\|}\chi\big{\|}_{L^{2}}^{2}\]
Combining the previous two inequalities we have:
\[\big{\|}\partial_{t}\big{(}\alpha\sqrt{t}\big{)}\big{\|}_{L^{2}}^{2}(t)+\frac{1}{t^{2}}\big{\|}\alpha\sqrt{t}\big{\|}_{L^{2}}^{2}(t)+\frac{1}{m^{2}}\big{\|}\nabla\alpha\sqrt{t}\big{\|}_{L^{2}}^{2}(t)\lesssim\left(\big{\|}\partial_{t}\alpha\big{\|}_{L^{2}}^{2}+\big{\|}\alpha\big{\|}_{L^{2}}^{2}+\frac{1}{m^{2}}\big{\|}\nabla\alpha\big{\|}_{L^{2}}^{2}+\frac{1}{m}\big{\|}\chi\big{\|}_{L^{2}}^{2}\right)\bigg{|}_{t=\frac{1}{2}}+\int_{1/2}^{t}\frac{t^{\prime}}{m^{2}}\bigg{(}\big{\|}\partial_{t}\big{(}\alpha\sqrt{t^{\prime}}\big{)}\big{\|}_{L^{2}}^{2}+\frac{1}{t^{\prime 2}}\big{\|}\alpha\sqrt{t^{\prime}}\big{\|}_{L^{2}}^{2}\bigg{)}dt^{\prime}\]
Since \(t\leq m\), the coefficient \(t^{\prime}/m^{2}\) has integral \(\lesssim 1\) on \([1/2,m]\), so we can apply Gronwall and rewrite the resulting bound in terms of \(\tau\) as:
\[\frac{1}{\tau}\big{\|}\alpha\big{\|}_{L^{2}}^{2}+\tau\big{\|}\nabla\alpha \big{\|}_{L^{2}}^{2}+\tau\big{\|}\partial_{\tau}\alpha\big{\|}_{L^{2}}^{2} \lesssim\frac{1}{m}\cdot\left(m^{2}\big{\|}\alpha\big{\|}_{L^{2}}^{2}+\big{\|} \nabla\alpha\big{\|}_{L^{2}}^{2}+\big{\|}\partial_{\tau}\alpha\big{\|}_{L^{2} }^{2}+m\big{\|}\chi\big{\|}_{L^{2}}^{2}\right)\bigg{|}_{\tau=(2m)^{-1}}\]
As previously explained, we combine the low frequency regime and the high frequency regime estimates for the regular component of the solution, and we obtain the following result:
**Corollary 3.1**.: \(\alpha_{J}\) _satisfies the estimate:_

\[\left(\big{\|}\alpha_{J}\big{\|}_{H^{1}}^{2}+\big{\|}\partial_{\tau}\alpha_{J}\big{\|}_{L^{2}}^{2}\right)\bigg{|}_{\tau=1}\lesssim m_{J}\big{\|}\mathfrak{h}\big{\|}_{L^{2}}^{2}+\frac{1}{m_{J}}\big{\|}\mathfrak{h}\big{\|}_{H^{1}}^{2}+\big{\|}\chi(0)\big{\|}_{L^{2}}^{2}\]
Proof.: We apply Proposition 3.3 to \(\alpha_{J},\chi_{J}\), and \(m_{J}:\)
\[\left(\big{\|}\alpha_{J}\big{\|}_{H^{1}}^{2}+\big{\|}\partial_{\tau}\alpha_{J }\big{\|}_{L^{2}}^{2}\right)\bigg{|}_{\tau=1}\lesssim\frac{1}{m_{J}}\bigg{(}m _{J}^{2}\big{\|}\alpha_{J}\big{\|}_{L^{2}}^{2}+\big{\|}\nabla\alpha_{J}\big{\|} _{L^{2}}^{2}+\big{\|}\partial_{\tau}\alpha_{J}\big{\|}_{L^{2}}^{2}+m_{J} \big{\|}\chi_{J}\big{\|}_{L^{2}}^{2}\bigg{)}\bigg{|}_{\tau=(2m_{J})^{-1}}\]
Next, we use (20),(21), and (22) to get:
\[\big{\|}\alpha_{J}\big{\|}_{L^{2}}^{2}\big{(}(2m_{J})^{-1}\big{)}\lesssim \big{\|}\mathfrak{h}\big{\|}_{L^{2}}^{2}+\frac{1}{m_{J}^{2}}\big{\|}\nabla \mathfrak{h}\big{\|}_{L^{2}}^{2}+\frac{1}{m_{J}^{4}}\big{\|}\chi(0)\big{\|}_{L^ {2}}^{2}\]
\[\big{\|}\chi_{J}\big{\|}_{L^{2}}^{2}\big{(}(2m_{J})^{-1}\big{)}\lesssim\big{\|} \chi(0)\big{\|}_{L^{2}}^{2}+\frac{1}{m_{J}^{4}}\big{\|}\mathfrak{h}\big{\|}_{H^ {1}}^{2}\]
Combining all these inequalities we obtain the desired estimate.
Similarly, we use the low frequency regime and the high frequency regime estimates for the singular component of the solution in order to obtain:
**Corollary 3.2**.: \(\alpha_{Y}\) _satisfies the estimate:_

\[\left(\big{\|}\alpha_{Y}\big{\|}_{H^{1}}^{2}+\big{\|}\partial_{\tau}\alpha_{Y}\big{\|}_{L^{2}}^{2}\right)\bigg{|}_{\tau=1}\lesssim m_{Y}\big{\|}\mathcal{O}\big{\|}_{L^{2}}^{2}+\frac{1}{m_{Y}}\big{\|}\mathcal{O}\big{\|}_{H^{1}}^{2}+\big{\|}\chi(0)\big{\|}_{L^{2}}^{2}\]
Proof.: We apply Proposition 3.3 to \(\alpha_{Y},\chi_{Y},\) and \(m_{Y}:\)
\[\left(\left\|\alpha_{Y}\right\|_{H^{1}}^{2}+\left\|\partial_{\tau}\alpha_{Y} \right\|_{L^{2}}^{2}\right)\biggr{|}_{\tau=1}\lesssim\frac{1}{m_{Y}}\biggl{(} m_{Y}^{2}\bigl{\|}\alpha_{Y}\bigr{\|}_{L^{2}}^{2}+\bigl{\|}\nabla\alpha_{Y} \bigr{\|}_{L^{2}}^{2}+\bigl{\|}\partial_{\tau}\alpha_{Y}\bigr{\|}_{L^{2}}^{2} +m_{Y}\bigl{\|}\chi_{Y}\bigr{\|}_{L^{2}}^{2}\biggr{)}\biggr{|}_{\tau=(2m_{Y}) ^{-1}}\]
We notice that:
\[\bigl{\|}\partial_{\tau}\alpha_{Y}\bigr{\|}_{L^{2}}^{2}\bigl{(}(2m_{Y})^{-1} \bigr{)}\lesssim m_{Y}^{2}\bigl{\|}\alpha_{Y}\bigr{\|}_{L^{2}}^{2}\bigl{(}(2m _{Y})^{-1}\bigr{)}+\biggl{\|}\partial_{\tau}\frac{\alpha_{Y}}{\log(m_{Y}\tau )}\biggr{\|}_{L^{2}}^{2}\bigl{(}(2m_{Y})^{-1}\bigr{)}\]
Next, we use (23),(24), and (25) to get:
\[\bigl{\|}\alpha_{Y}\bigr{\|}_{L^{2}}^{2}\bigl{(}(2m_{Y})^{-1}\bigr{)}\lesssim \bigl{\|}\mathcal{O}\bigr{\|}_{L^{2}}^{2}+\frac{1}{m_{Y}^{2}}\bigl{\|}\nabla \mathcal{O}\bigr{\|}_{L^{2}}^{2}+\frac{1}{m_{Y}^{4}}\bigl{\|}\chi(0)\bigr{\|} _{L^{2}}^{2}\]
Combining all these inequalities we obtain the desired estimate.
An immediate consequence of our previous estimates is the uniqueness of solutions to the initial value problem (17)-(18) with given smooth asymptotic initial data:
**Proposition 3.4** (Uniqueness of scattering states).: _Let \(\bigl{(}\alpha,\chi\bigr{)}\) and \(\bigl{(}\alpha^{\prime},\chi^{\prime}\bigr{)}\) be smooth solutions of the system (17)-(18), with the same asymptotic initial data given by \(\chi(0),\mathcal{O},h\in C^{\infty}(S^{n}).\) Then \(\alpha\equiv\alpha^{\prime}\) and \(\chi\equiv\chi^{\prime}.\)_
Proof.: We define \(\alpha_{0}=\alpha-\alpha^{\prime},\ \chi_{0}=\chi-\chi^{\prime}.\) By linearity, \(\bigl{(}\alpha_{0},\chi_{0}\bigr{)}\) solves the system with vanishing asymptotic initial data. In particular, \(\alpha_{0}\) satisfies the expansion:
\[\alpha_{0}(\tau)=O(\tau^{2}|\log\tau|^{2}),\ \partial_{\tau}\alpha_{0}(\tau)=O( \tau|\log\tau|^{2}),\]
Thus, we can apply the estimate in Corollary 3.2 to get \(\alpha_{0}(1)=\partial_{\tau}\alpha_{0}(1)=0.\) The standard uniqueness result for hyperbolic equations implies that \(\alpha_{0}\) vanishes identically.
We are now in a position to prove the main result of this section, estimate (10). In particular, we prove that at \(\tau=1\) the solution gains spatial regularity compared to the asymptotic initial data. To capture
this from our previous estimates, we first need to introduce the Littlewood-Paley frequency decomposition, according to Appendix 6.1. Thus, for any \(f\in L^{2}(S^{n})\) we can write:
\[f=f_{0}+\sum_{l=1}^{\infty}f_{l},\ f_{0}=P_{\leq 1}f,\ f_{l}=P_{(2^{l-1},2^{l}]}f\]
We prove the main estimate of this section:
**Theorem 3.1**.: _Let any \(s\geq 1.\) Let \(\big{(}\alpha,\chi\big{)}\) be a smooth solution of (17)-(18) with asymptotic initial data given by \(\big{(}\chi(0),\mathcal{O},h\big{)}\). We define:_
\[\mathfrak{h}:=h-(\log\nabla)\mathcal{O}:=h-\sum_{l=1}^{\infty}l\log 2\cdot \mathcal{O}_{l}\]
_Then, the solution \(\big{(}\alpha,\chi\big{)}\) satisfies the estimates:_
\[\bigg{(}\big{\|}\alpha\big{\|}_{H^{s+1/2}}+\big{\|}\partial_{ \tau}\alpha\big{\|}_{H^{s-1/2}}\bigg{)}\bigg{|}_{\tau=1}\lesssim\big{\|} \mathfrak{h}\big{\|}_{H^{s}}+\big{\|}\mathcal{O}\big{\|}_{H^{s}}+\big{\|} \chi(0)\big{\|}_{H^{s-1/2}}\] \[\big{\|}\chi(1)\big{\|}_{H^{s+1/2}}\lesssim\big{\|}\mathfrak{h} \big{\|}_{H^{s}}+\big{\|}\mathcal{O}\big{\|}_{H^{s}}+\big{\|}\chi(0)\big{\|}_{ H^{s+1/2}}\]
Proof.: We notice that we can write \(\alpha=\sum_{l=0}^{\infty}\alpha_{l},\ \chi=\sum_{l=0}^{\infty}\chi_{l},\) and each \(\big{(}\alpha_{l},\chi_{l}\big{)}\) is the solution of (17)-(18) with asymptotic initial data given by \(\big{(}\chi_{l}(0),\mathcal{O}_{l},h_{l}\big{)}.\) Using the uniqueness result for smooth solutions proved above, we get that \(\alpha_{l}=(\alpha_{J})_{l}+(\alpha_{Y})_{l}\) and \(\chi_{l}=(\chi_{J})_{l}+(\chi_{Y})_{l},\) where \(\big{(}(\alpha_{J})_{l},(\chi_{J})_{l}\big{)}\) is the solution of (17)-(18) with asymptotic initial data given by \(\big{(}\frac{1}{2}\chi_{l}(0),0,\mathfrak{h}_{l}\big{)}\) and \(\big{(}(\alpha_{Y})_{l},(\chi_{Y})_{l}\big{)}\) is the solution of (17)-(18) with asymptotic initial data given by \(\big{(}\frac{1}{2}\chi_{l}(0),\mathcal{O}_{l},l\log 2\cdot\mathcal{O}_{l} \big{)}.\)
The first estimate follows by applying the estimates in Corollary 3.1 and Corollary 3.2 for each component, with \(m_{Y}=m_{J}=2^{l}.\) We note that (17)-(18) is linear with coefficients independent of space, so we can commute with spatial derivatives.
To prove the second estimate, we notice that for any fixed \(l\geq 0\) we have:
\[\big{\|}\chi_{l}(1)\big{\|}_{H^{s+1/2}}^{2}\lesssim\big{\|}\chi_{l}(0)\big{\|} _{H^{s+1/2}}^{2}+\int_{0}^{2^{-l-1}}\tau^{2}\big{\|}\alpha_{l}\big{\|}_{H^{s+1 /2}}^{2}d\tau+\int_{2^{-l-1}}^{1}\tau^{2}\big{\|}\alpha_{l}\big{\|}_{H^{s+1/2} }^{2}d\tau\lesssim\]
\[\lesssim\big{\|}\chi_{l}(0)\big{\|}_{H^{s+1/2}}^{2}+\bigg{(}\big{\|}\mathfrak{ h}_{l}\big{\|}_{H^{s+1/2}}^{2}+\big{\|}\mathcal{O}_{l}\big{\|}_{H^{s+1/2}}^{2}+ \big{\|}\chi_{l}(0)\big{\|}_{H^{s-1/2}}^{2}\bigg{)}\cdot\int_{0}^{2^{-l-1}} \tau^{2}|\log\tau|^{2}d\tau+\]
\[+\bigg{(}\big{\|}\mathfrak{h}_{l}\big{\|}_{H^{s}}^{2}+\big{\|}\mathcal{O}_{l} \big{\|}_{H^{s}}^{2}+\big{\|}\chi_{l}(0)\big{\|}_{H^{s-1/2}}^{2}\bigg{)}\cdot \int_{2^{-l-1}}^{1}\tau d\tau\lesssim\big{\|}\mathfrak{h}_{l}\big{\|}_{H^{s}}^ {2}+\big{\|}\mathcal{O}_{l}\big{\|}_{H^{s}}^{2}+\big{\|}\chi_{l}(0)\big{\|}_{H^{ s+1/2}}^{2}\]
where we used Proposition 3.1, Proposition 3.2, and Proposition 3.3. This completes the proof of the second estimate.
### Existence and Uniqueness of Scattering States
In this section we prove the existence and uniqueness of solutions of (17)-(18) with smooth asymptotic initial data in Theorem 3.2. Together with the estimates proved in Theorem 3.1, this completes the proof of the first statement in Theorem 1.2, establishing the _existence and uniqueness of scattering states_. We also introduce a notion of solutions with limited regularity, for which the asymptotic initial data is in suitable Sobolev spaces. For this class of solutions we prove results analogous to the first statement in Theorem 1.2 by density arguments, obtaining existence, uniqueness, and energy estimates.
We begin by proving existence and uniqueness for smooth solutions:
**Theorem 3.2** (Existence and uniqueness of scattering states).: _For any asymptotic initial data \(\chi(0),\mathcal{O},h\in C^{\infty}(S^{n})\), there exists a unique smooth solution \(\big{(}\alpha,\chi\big{)}\) of (17)-(18) with this initial data._
Proof.: We point out that we already proved uniqueness in Proposition 3.4. We first prove the existence result for each component in the frequency decomposition using an iteration argument. For any \(l\), we introduce the time variable \(t=2^{l}\tau\). The main equation can be written as:
\[\partial_{t}^{2}\alpha_{l}+\frac{1}{t}\partial_{t}\alpha_{l}=-4q^{\prime} \frac{\alpha_{l}}{2^{2l}}+\frac{4}{(t^{2}/2^{2l}+1)^{2}}\cdot\Delta\frac{ \alpha_{l}}{2^{2l}}+\frac{f_{1}^{\prime}(t)}{2^{2l}}t\partial_{t}\alpha_{l}+ \frac{f_{2}^{\prime}(t)}{2^{4l}}t^{2}\alpha_{l}+\frac{f_{3}^{\prime}(t)}{2^{2l }}\cdot\bigg{(}\chi_{l}(0)+\frac{1}{2^{2l}}\int_{0}^{t}2t^{\prime}\alpha_{l}(t ^{\prime})dt^{\prime}\bigg{)}\]
where \(f_{1}^{\prime},f_{2}^{\prime},f_{3}^{\prime}\) are bounded functions of \(t.\) We treat the RHS as an inhomogeneous term, so we can write the equation as:
\[\partial_{t}^{2}\alpha_{l}+\frac{1}{t}\partial_{t}\alpha_{l}=F\bigg{(}\alpha_ {l},\partial_{t}\alpha_{l},\Delta\alpha_{l},\int_{0}^{t}t^{\prime}\alpha_{l}( t^{\prime})dt^{\prime},\chi_{l}(0)\bigg{)} \tag{26}\]
We prove local existence for this equation using the following iteration scheme:
\[\alpha_{l}^{0}=\mathcal{O}_{l}\log t+\mathfrak{h}_{l}\]
\[\alpha_{l}^{n+1}=\mathcal{O}_{l}\log t+\mathfrak{h}_{l} +\log t\int_{0}^{t}t^{\prime}F\bigg{(}\alpha_{l}^{n},\partial_{t} \alpha_{l}^{n},\Delta\alpha_{l}^{n},\int_{0}^{t^{\prime}}t^{\prime\prime} \alpha_{l}^{n}(t^{\prime\prime})dt^{\prime\prime},\chi_{l}(0)\bigg{)}dt^{ \prime}- \tag{27}\] \[-\int_{0}^{t}t^{\prime}\log t^{\prime}F\bigg{(}\alpha_{l}^{n}, \partial_{t}\alpha_{l}^{n},\Delta\alpha_{l}^{n},\int_{0}^{t^{\prime}}t^{\prime \prime}\alpha_{l}^{n}(t^{\prime\prime})dt^{\prime\prime},\chi_{l}(0)\bigg{)}dt ^{\prime}\]
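For the reader's convenience, the integral terms in (27) are just the variation of parameters (Duhamel) formula for the operator \(\partial_{t}^{2}+\frac{1}{t}\partial_{t}=\frac{1}{t}\partial_{t}\big(t\partial_{t}\,\cdot\,\big)\), whose homogeneous solutions are \(1\) and \(\log t\): integrating twice from \(t=0\) and exchanging the order of integration,

\[\int_{0}^{t}\frac{1}{s}\int_{0}^{s}t^{\prime}F(t^{\prime})\,dt^{\prime}\,ds=\int_{0}^{t}t^{\prime}\big(\log t-\log t^{\prime}\big)F(t^{\prime})\,dt^{\prime}=\log t\int_{0}^{t}t^{\prime}F(t^{\prime})\,dt^{\prime}-\int_{0}^{t}t^{\prime}\log t^{\prime}\,F(t^{\prime})\,dt^{\prime}.\]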
We define the renormalized sequence:
\[\widetilde{\alpha_{l}^{n}}:=\alpha_{l}^{n}-\mathcal{O}_{l}\log t-\mathfrak{h}_ {l}\]
Note that \(\widetilde{\alpha_{l}^{n}}\in C_{t}^{1}([0,\infty))C^{\infty}(S^{n})\). For any \(s\geq 1\) we have the bound:
\[\bigg{\|}F\bigg{(}\alpha_{l}^{n},\partial_{t}\alpha_{l}^{n}, \Delta\alpha_{l}^{n},\int_{0}^{t^{\prime}}t^{\prime\prime}\alpha_{l}^{n}(t^{ \prime\prime})dt^{\prime\prime},\chi_{l}(0)\bigg{)}-F\bigg{(}\alpha_{l}^{n-1},\partial_{t}\alpha_{l}^{n-1},\Delta\alpha_{l}^{n-1},\int_{0}^{t^{\prime}}t^{ \prime\prime}\alpha_{l}^{n-1}(t^{\prime\prime})dt^{\prime\prime},\chi_{l}(0) \bigg{)}\bigg{\|}_{H^{s}}\lesssim\\ \lesssim\sup_{t^{\prime}\in[0,t]}\big{\|}\widetilde{\alpha_{l}^{n }}-\widetilde{\alpha_{l}^{n-1}}\big{\|}_{H^{s}}+t\big{\|}\partial_{t} \widetilde{\alpha_{l}^{n}}-\partial_{t}\widetilde{\alpha_{l}^{n-1}}\big{\|}_{ H^{s}}\]
Using this bound, we have for \(t\in[0,1]\) and any \(s\geq 1:\)
\[\big{\|}\widetilde{\alpha_{l}^{n+1}}-\widetilde{\alpha_{l}^{n}}\big{\|}_{H^{s}}( t)+\big{\|}\partial_{t}\widetilde{\alpha_{l}^{n+1}}-\partial_{t}\widetilde{ \alpha_{l}^{n}}\big{\|}_{H^{s}}(t)\lesssim\big{(}t^{2}|\log(t)|+t\big{)} \bigg{[}\sup_{t^{\prime}\in[0,t]}\big{\|}\widetilde{\alpha_{l}^{n}}- \widetilde{\alpha_{l}^{n-1}}\big{\|}_{H^{s}}+\sup_{t^{\prime}\in[0,t]}\big{\|} \partial_{t}\widetilde{\alpha_{l}^{n}}-\partial_{t}\widetilde{\alpha_{l}^{n-1} }\big{\|}_{H^{s}}\bigg{]}.\]
The constant in the above inequality is independent of \(l.\) Thus, there exists \(\epsilon>0\) such that for \(t\in[0,\epsilon],\) the sequence \(\big{\{}(\widetilde{\alpha_{l}^{n}},\partial_{t}\widetilde{\alpha_{l}^{n}})\big{\}}\) is Cauchy in \(H^{s},\) so there exists \(\widetilde{\alpha_{l}}\in C^{1}_{t}\big{(}[0,\epsilon];H^{s}(S^{n})\big{)}\) such that \(\widetilde{\alpha_{l}^{n}}\to\widetilde{\alpha_{l}}.\) We define \(\alpha_{l}=\widetilde{\alpha_{l}}+\mathcal{O}_{l}\log t+\mathfrak{h}_{l}.\) Taking the limit in (27), we obtain that \(\alpha_{l}\) satisfies the equation for \(t\in[0,\epsilon]\):
\[\alpha_{l}=\mathcal{O}_{l}\log t+\mathfrak{h}_{l} +\log t\int_{0}^{t}t^{\prime}F\bigg{(}\alpha_{l},\partial_{t} \alpha_{l},\Delta\alpha_{l},\int_{0}^{t^{\prime}}t^{\prime\prime}\alpha_{l}(t ^{\prime\prime})dt^{\prime\prime},\chi_{l}(0)\bigg{)}dt^{\prime}- \tag{28}\] \[-\int_{0}^{t}t^{\prime}\log t^{\prime}F\bigg{(}\alpha_{l}, \partial_{t}\alpha_{l},\Delta\alpha_{l},\int_{0}^{t^{\prime}}t^{\prime\prime} \alpha_{l}(t^{\prime\prime})dt^{\prime\prime},\chi_{l}(0)\bigg{)}dt^{\prime}\]
This implies that \(\alpha_{l}\) satisfies (26) for \(t\in[0,\epsilon].\) Since (26) is a linear hyperbolic equation, we obtain that \(\alpha_{l}\) extends as a \(C^{1}_{t}\big{(}(0,\infty);H^{s}(S^{n})\big{)}\) solution. This holds for any \(s\geq 1,\) so \(\alpha_{l}\) is a smooth solution on \((0,\infty)\times S^{n}.\) A direct computation shows that the RHS of (28) also defines a smooth solution on \((0,\infty)\times S^{n}.\) By uniqueness, we obtain that \(\alpha_{l}\) satisfies (28) for \(t\in[0,\infty).\) This also implies that \(\widetilde{\alpha_{l}}\in C^{1}_{t}([0,\infty))C^{\infty}(S^{n}).\) We notice that for \(t\in(0,2^{l}]\) we have the bound:
\[\bigg{\|}F\bigg{(}\alpha_{l},\partial_{t}\alpha_{l},\Delta\alpha_{l},\int_{0}^ {t}t^{\prime}\alpha_{l}(t^{\prime}),\chi_{l}(0)\bigg{)}\bigg{\|}_{H^{s}} \lesssim\|\mathcal{O}_{l}\|_{H^{s}}|\log t|+\|\mathfrak{h}_{l}\|_{H^{s}}+\| \chi_{l}(0)\|_{H^{s}}+\sup_{t^{\prime}\in[0,t]}\big{(}\|\widetilde{\alpha_{l} }\|_{H^{s}}+\|t\partial_{t}\widetilde{\alpha_{l}}\|_{H^{s}}\big{)}\]
Using this bound in (28), we obtain that \(\alpha_{l}\) satisfies the expansions:
\[\alpha_{l}(t)=\mathcal{O}_{l}\log(t)+\mathfrak{h}_{l}+O\big{(}t^{2}|\log(t)|^{2}\big{)},\ \partial_{t}\alpha_{l}(t)=\frac{\mathcal{O}_{l}}{t}+O\big{(}t|\log(t)|^{2}\big{)}\ \text{in}\ C^{\infty}(S^{n})\]
In particular, this proves that \(\big{(}\alpha_{l},\chi_{l}\big{)}\) defines a smooth solution of (17)-(18) with asymptotic initial data \(\big{(}\chi_{l}(0),\mathcal{O}_{l},h_{l}\big{)}.\)
In order to sum up all components in the frequency decomposition, we need to make the above estimates quantitative. According to the first part of the proof, we can now use the estimates from the previous section. By Proposition 3.1 and Proposition 3.2, we obtain that for any \(s\geq 1\) and any \(t\in[0,1/2]:\)
\[\bigg{\|}F\bigg{(}\alpha_{l},\partial_{t}\alpha_{l},\Delta\alpha_{l},\int_{0}^ {t}t^{\prime}\alpha_{l}(t^{\prime})dt^{\prime},\chi_{l}(0)\bigg{)}\bigg{\|}_{H^ {s}}\lesssim|\log t|\cdot\big{(}\|\mathcal{O}_{l}\|_{H^{s}}+\|\mathfrak{h}_{l} \|_{H^{s}}+\|\chi_{l}(0)\|_{H^{s}}\big{)}\]
Using the estimates in Proposition 3.3 (as in the proofs of Corollary 3.1 and Corollary 3.2), we have for any \(s\geq 1\) and any \(t\in[1/2,2^{l}]:\)
\[\big{\|}\alpha_{l}\big{\|}_{H^{s}}+\big{\|}t\partial_{t}\alpha_{l}\big{\|}_{H^ {s}}\lesssim\sqrt{t}\cdot\big{(}\|\mathcal{O}_{l}\|_{H^{s}}+\|\mathfrak{h}_{l} \|_{H^{s}}+\|\chi_{l}(0)\|_{H^{s}}\big{)}\]
This implies that for any \(s\geq 1\) and any \(t\in[1/2,2^{l}]:\)
\[\bigg{\|}F\bigg{(}\alpha_{l},\partial_{t}\alpha_{l},\Delta\alpha_{l},\int_{0}^ {t}t^{\prime}\alpha_{l}(t^{\prime})dt^{\prime},\chi_{l}(0)\bigg{)}\bigg{\|}_{H^{ s}}\lesssim 2^{l/2}\cdot\big{(}\|\mathcal{O}_{l}\|_{H^{s}}+\|\mathfrak{h}_{l} \|_{H^{s}}+\|\chi_{l}(0)\|_{H^{s}}\big{)}\]
Using these bounds on \(F\) in (28), we get for any \(s\geq 1\) and any \(t\in[0,2^{l}]:\)
\[\left\|\alpha_{l}-\mathcal{O}_{l}\log t-\mathfrak{h}_{l}\right\|_{H^{s}}\lesssim t ^{2}|\log t|^{2}2^{l/2}\cdot\left(\|\mathcal{O}_{l}\|_{H^{s}}+\|\mathfrak{h}_{l }\|_{H^{s}}+\|\chi_{l}(0)\|_{H^{s}}\right)\]
\[\left\|\partial_{t}\alpha_{l}-\mathcal{O}_{l}/t\right\|_{H^{s}}\lesssim t| \log t|^{2}2^{l/2}\cdot\left(\|\mathcal{O}_{l}\|_{H^{s}}+\|\mathfrak{h}_{l}\|_ {H^{s}}+\|\chi_{l}(0)\|_{H^{s}}\right)\]
In terms of the coordinate \(\tau\), we proved that for any \(s\geq 1\) and any \(\tau\in[0,1]:\)
\[\left\|\alpha_{l}-\mathcal{O}_{l}\log(2^{l}\tau)-\mathfrak{h}_{l}\right\|_{H^ {s}}\lesssim\tau^{2}|\log\tau|^{2}\cdot\left(\|\mathcal{O}_{l}\|_{H^{s+3}}+\| \mathfrak{h}_{l}\|_{H^{s+3}}+\|\chi_{l}(0)\|_{H^{s+3}}\right)\]
\[\left\|\partial_{\tau}\alpha_{l}-\frac{\mathcal{O}_{l}}{\tau}\right\|_{H^{s}} \lesssim\tau|\log\tau|^{2}\cdot\left(\|\mathcal{O}_{l}\|_{H^{s+3}}+\| \mathfrak{h}_{l}\|_{H^{s+3}}+\|\chi_{l}(0)\|_{H^{s+3}}\right)\]
The constants in the above inequalities are independent of \(l\), so we can define \(\alpha=\sum_{l=0}^{\infty}\alpha_{l}\) and we proved that for any \(s\geq 1\) and any \(\tau\in[0,1]:\)
\[\left\|\alpha(\tau)-\mathcal{O}\log(\tau)-h\right\|_{H^{s}}\lesssim\tau^{2}| \log\tau|^{2}\cdot\left(\|\mathcal{O}\|_{H^{s+3}}+\|\mathfrak{h}\|_{H^{s+3}}+ \|\chi(0)\|_{H^{s+3}}\right)\]
\[\left\|\partial_{\tau}\alpha-\frac{\mathcal{O}}{\tau}\right\|_{H^{s}}\lesssim \tau|\log\tau|^{2}\cdot\left(\|\mathcal{O}\|_{H^{s+3}}+\|\mathfrak{h}\|_{H^{s +3}}+\|\chi(0)\|_{H^{s+3}}\right)\]
We obtain that \(\alpha-\mathcal{O}\log(\tau)-h\in C^{1}_{\tau}([0,1])C^{\infty}(S)\) and that the expansions (19) hold. By propagation of regularity for the linear system (17)-(18) with initial data at \(\tau=1\), we also have \(\alpha\in C^{\infty}\big{(}(0,\infty)\times S\big{)}.\) In conclusion, \(\big{(}\alpha,\chi\big{)}\) defines a smooth solution of (17)-(18) with asymptotic initial data \(\big{(}\chi(0),\mathcal{O},h\big{)}.\)
We define a class of solutions with limited regularity:
**Definition 3.2**.: _Let any \(s\geq 2\). We consider any initial data \(\chi(0)\in H^{s},\ \mathcal{O}\in H^{s},\ \mathfrak{h}=h-(\log\nabla)\mathcal{O}\in H ^{s}.\) We say that \(\big{(}\alpha,\chi\big{)}\) is an "\(s\)-regularity solution" of (17)-(18) with asymptotic initial data \(\big{(}\chi(0),\mathcal{O},\mathfrak{h}\big{)}\) if:_
* \(\alpha\in L^{1}_{loc}\big{(}[0,\infty);H^{s}\big{)}\cap C^{0}\big{(}(0,\infty );H^{s}\big{)}\cap C^{1}\big{(}(0,\infty);H^{s-1}\big{)}\cap C^{2}\big{(}(0, \infty);H^{s-2}\big{)}\)__
* \(\chi\in C^{1}\big{(}[0,\infty);H^{s}\big{)}\cap C^{2}\big{(}(0,\infty);H^{s- 1}\big{)}\)__
* \(\alpha=\alpha_{J}+\alpha_{Y}\) _and_ \(\chi=\chi_{J}+\chi_{Y}\)__
* _The regular component of the solution_ \(\big{(}\alpha_{J},\chi_{J}\big{)}\) _solves (_17_)-(_18_) such that:_ \[\alpha_{J}\in C^{0}\big{(}[0,\infty);H^{s}\big{)}\cap C^{1}\big{(}[0,\infty);H ^{s-1}\big{)}\cap C^{2}\big{(}(0,\infty);H^{s-2}\big{)}\] \[\chi_{J}\in C^{1}\big{(}[0,\infty);H^{s}\big{)}\cap C^{2}\big{(}[0, \infty);H^{s-1}\big{)}\] \[\alpha_{J}(0)=\mathfrak{h},\ \partial_{\tau}\alpha_{J}(0)=0,\ \chi_{J}(0)= \frac{1}{2}\chi(0)\]
* _The singular component of the solution_ \(\big{(}\alpha_{Y},\chi_{Y}\big{)}\) _solves (_17_)-(_18_) such that:_ \[\alpha_{Y}\in L^{1}_{loc}\big{(}[0,\infty);H^{s}\big{)}\cap C^{0}\big{(}(0, \infty);H^{s}\big{)}\cap C^{1}\big{(}(0,\infty);H^{s-1}\big{)}\cap C^{2}\big{(} (0,\infty);H^{s-2}\big{)}\] \[\chi_{Y}\in C^{1}\big{(}[0,\infty);H^{s}\big{)}\cap C^{2}\big{(}(0, \infty);H^{s-1}\big{)}\] \[\lim_{\tau\to 0}\frac{\big{(}\alpha_{Y}\big{)}_{l}}{\log(2^{l}\tau)}= \mathcal{O}_{l},\ \lim_{\tau\to 0}\partial_{\tau}\bigg{(}\frac{\big{(}\alpha_{Y} \big{)}_{l}}{\log(2^{l}\tau)}\bigg{)}=0,\ \chi_{Y}(0)=\frac{1}{2}\chi(0)\]
A similar argument as in the case of smooth solutions gives uniqueness for this class of solutions:
**Proposition 3.5**.: _Let \(\big{(}\alpha,\chi\big{)}\) and \(\big{(}\alpha^{\prime},\chi^{\prime}\big{)}\) be \(s\)-regularity solutions of the system (17)-(18), with the same asymptotic initial data \(\chi(0)\in H^{s},\ \mathcal{O}\in H^{s},\ \mathfrak{h}\in H^{s}.\) Then \(\alpha_{J}\equiv\alpha_{J}^{\prime}\), \(\alpha_{Y}\equiv\alpha_{Y}^{\prime}\), \(\chi_{J}\equiv\chi_{J}^{\prime}\), and \(\chi_{Y}\equiv\chi_{Y}^{\prime}\)._
Proof.: By linearity, we have that \(\big{(}\alpha_{J}-\alpha_{J}^{\prime},\chi_{J}-\chi_{J}^{\prime}\big{)}\) is an \(s\)-regularity solution with vanishing initial data. The regularity assumptions allow us to repeat the argument of Proposition 3.1 with \(m_{J}=1\) in order to obtain \(\alpha_{J}(1)=\alpha_{J}^{\prime}(1),\ \partial_{\tau}\alpha_{J}(1)=\partial_{\tau} \alpha_{J}^{\prime}(1)\), and \(\chi_{J}(1)=\chi_{J}^{\prime}(1)\). We immediately get \(\alpha_{J}\equiv\alpha_{J}^{\prime}\) and \(\chi_{J}\equiv\chi_{J}^{\prime}\).
Similarly, we have that \(\big{(}\alpha_{Y}-\alpha_{Y}^{\prime},\chi_{Y}-\chi_{Y}^{\prime}\big{)}\) is an \(s\)-regularity solution with vanishing initial data, which implies that for each \(l\) we have that \(\big{(}(\alpha_{Y}-\alpha_{Y}^{\prime})_{l},(\chi_{Y}-\chi_{Y}^{\prime})_{l} \big{)}\) is an \(s\)-regularity solution with vanishing initial data. The regularity assumptions allow us to repeat the argument of Proposition 3.2 with \(m_{Y}=2^{l}\) in order to obtain \(\big{(}\alpha_{Y}\big{)}_{l}(2^{-l-1})=\big{(}\alpha_{Y}^{\prime}\big{)}_{l} (2^{-l-1}),\ \partial_{\tau}\big{(}\alpha_{Y}\big{)}_{l}(2^{-l-1})=\partial_{\tau} \big{(}\alpha_{Y}^{\prime}\big{)}_{l}(2^{-l-1})\), and \(\big{(}\chi_{Y}\big{)}_{l}(2^{-l-1})=\big{(}\chi_{Y}^{\prime}\big{)}_{l}(2^{- l-1})\). Thus, we get that for each \(l\) we have \(\big{(}\alpha_{Y}\big{)}_{l}\equiv\big{(}\alpha_{Y}^{\prime}\big{)}_{l}\) and \(\big{(}\chi_{Y}\big{)}_{l}\equiv\big{(}\chi_{Y}^{\prime}\big{)}_{l}\).
We use a standard density argument to prove existence of solutions of (17)-(18) with asymptotic initial data in \(H^{s}(S^{n}):\)
**Proposition 3.6** (Existence and uniqueness of scattering states).: _Let any \(s\geq 2\). For any asymptotic initial data \(\chi(0)\in H^{s},\ \mathcal{O}\in H^{s},\ \mathfrak{h}=h-(\log\nabla)\mathcal{O}\in H ^{s},\) there exists a unique \(s\)-regularity solution \(\big{(}\alpha,\chi\big{)}\) of (17)-(18) which achieves the initial data._
Proof.: We recall that according to our definition of a solution, we need to construct both the regular and the singular component of the solution. We begin by constructing the regular component \(\big{(}\alpha_{J},\chi_{J}\big{)}\). Let \(\{\chi_{J}^{n}(0)\},\ \{\mathfrak{h}^{n}\}\) be two sequences of smooth functions such that:
\[\chi_{J}^{n}(0)\longrightarrow\chi_{J}(0)=\frac{1}{2}\chi(0),\ \mathfrak{h}^{n} \longrightarrow\mathfrak{h}\ \text{in}\ H^{s}\]
Let \(\big{(}\alpha_{J}^{n},\chi_{J}^{n}\big{)}\) be the smooth solution of (17)-(18) with asymptotic initial data \(\big{(}\chi_{J}^{n}(0),0,\mathfrak{h}^{n}\big{)}\). For any \(n,k\) we take \(m_{J}=1\) in Proposition 3.1 to obtain:
\[\big{\|}\alpha_{J}^{n}-\alpha_{J}^{k}\big{\|}_{H^{s}}^{2}+\big{\|}\partial_{ \tau}\alpha_{J}^{n}-\partial_{\tau}\alpha_{J}^{k}\big{\|}_{H^{s-1}}^{2} \lesssim\big{\|}\mathfrak{h}^{n}-\mathfrak{h}^{k}\big{\|}_{H^{s}}^{2}+\big{\|} \chi_{J}^{n}(0)-\chi_{J}^{k}(0)\big{\|}_{H^{s-1}}^{2}\]
As a result, we obtain a solution \(\alpha_{J}\in C^{0}\big{(}[0,1];H^{s})\cap C^{1}\big{(}[0,1];H^{s-1}\big{)}\cap C ^{2}\big{(}(0,1];H^{s-2}\big{)},\) such that \(\alpha_{J}(0)=\mathfrak{h},\ \partial_{\tau}\alpha_{J}(0)=0,\) and \(\chi_{J}\in C^{1}\big{(}[0,1];H^{s}\big{)}\cap C^{2}\big{(}[0,1];H^{s-1}\big{)}.\) By the propagation of regularity for the linear system (17)-(18) with initial data at \(\tau=1\), we can extend the regularity statements from \([0,1]\) to \([0,\infty)\).
We now construct the singular component \(\big{(}\alpha_{Y},\chi_{Y}\big{)}\). Let \(\{\chi_{Y}^{n}(0)\},\ \{\mathcal{O}^{n}\}\) be two sequences of smooth functions such that:
\[\chi_{Y}^{n}(0)\longrightarrow\chi_{Y}(0)=\frac{1}{2}\chi(0),\ \mathcal{O}^{n} \longrightarrow\mathcal{O}\ \text{in}\ H^{s}\]
Then we also have:
\[\big{(}\chi_{Y}^{n}(0)\big{)}_{l}\longrightarrow\big{(}\chi_{Y}(0)\big{)}_{l},\ \mathcal{O}_{l}^{n} \longrightarrow\mathcal{O}_{l}\ \text{in}\ H^{s}\ \text{uniformly in}\ l\]
Let \(\big{(}\alpha_{Y}^{n},\chi_{Y}^{n}\big{)}\) be the smooth solution of (17)-(18) with asymptotic initial data \(\big{(}\chi_{Y}^{n}(0),\mathcal{O}^{n},(\log\nabla)\mathcal{O}^{n}\big{)}\). Then we also have that \(\big{(}(\alpha_{Y}^{n})_{l},(\chi_{Y}^{n})_{l}\big{)}\) is the smooth solution of (17)-(18) with asymptotic initial data given by \(\big{(}(\chi_{Y}^{n}(0))_{l},\mathcal{O}_{l}^{n},l\log 2\cdot\mathcal{O}_{l}^{n} \big{)}\).
We fix any \(l\), and we construct the \(l\)-frequency component of the singular solution on \([0,1]\). Then for any \(\tau\in[0,2^{-l-1}]\) and any \(n,k\) we have by Proposition 3.2 with \(m_{Y}=2^{l}\):
\[\bigg{\|}\frac{(\alpha_{Y}^{n})_{l}-(\alpha_{Y}^{k})_{l}}{\log(2^{l}\tau)} \bigg{\|}_{H^{s}}+\bigg{\|}\partial_{\tau}\bigg{(}\frac{(\alpha_{Y}^{n})_{l}- (\alpha_{Y}^{k})_{l}}{\log(2^{l}\tau)}\bigg{)}\bigg{\|}_{H^{s-1}}\lesssim\big{\|} \mathcal{O}_{l}^{n}-\mathcal{O}_{l}^{k}\big{\|}_{H^{s}}+\big{\|}(\chi_{Y}^{n}( 0))_{l}-(\chi_{Y}^{k}(0))_{l}\big{\|}_{H^{s-1}} \tag{29}\]
Thus, there exists \((\alpha_{Y})_{l}\in L^{1}\big{(}[0,2^{-l-1}];H^{s}\big{)}\cap C^{0}\big{(}(0, 2^{-l-1}];H^{s}\big{)}\cap C^{1}\big{(}(0,2^{-l-1}];H^{s-1}\big{)}\), such that:
\[\frac{(\alpha_{Y}^{n})_{l}}{\log(2^{l}\tau)}\to\frac{(\alpha_{Y})_{l}}{\log( 2^{l}\tau)}\text{ in }C^{0}\big{(}[0,2^{-l-1}];H^{s}\big{)}\cap C^{1}\big{(}[0,2^{-l-1}];H^{s-1} \big{)}\]
In particular, this implies:
\[\lim_{\tau\to 0}\frac{\big{(}\alpha_{Y}\big{)}_{l}}{\log(2^{l}\tau)}=\mathcal{O }_{l},\text{ }\lim_{\tau\to 0}\partial_{\tau}\bigg{(}\frac{\big{(}\alpha_{Y} \big{)}_{l}}{\log(2^{l}\tau)}\bigg{)}=0\]
We can write for \(\tau\in[0,2^{-l-1}]\):
\[\big{(}\chi_{Y}^{n}\big{)}_{l}(\tau)=\big{(}\chi_{Y}^{n}(0)\big{)}_{l}+\int_ {0}^{\tau}2\tau^{\prime}\log(2^{l}\tau^{\prime})\cdot\frac{\big{(}\alpha_{Y}^{ n}\big{)}_{l}}{\log(2^{l}\tau^{\prime})}d\tau^{\prime}\]
Using (29), we obtain that \(\big{(}\chi_{Y}^{n}\big{)}_{l}\to\big{(}\chi_{Y}\big{)}_{l}\) in \(C^{1}\big{(}[0,2^{-l-1}];H^{s}\big{)}\cap C^{2}\big{(}(0,2^{-l-1}];H^{s-1} \big{)}\). Moreover, since \(\big{(}\big{(}\alpha_{Y}^{n}\big{)}_{l},\big{(}\chi_{Y}^{n}\big{)}_{l}\big{)}\) solves (17)-(18), the above relations imply that \(\big{(}\big{(}\alpha_{Y}\big{)}_{l},\big{(}\chi_{Y}\big{)}_{l}\big{)}\) solves (17)-(18) on \((0,2^{-l-1}]\). As a result, we obtain that \(\big{(}(\alpha_{Y})_{l},(\chi_{Y})_{l}\big{)}\) is an \(s\)-regularity solution of (17)-(18) on \([0,2^{-l-1}]\) with asymptotic initial data \(\big{(}(\chi_{Y}(0))_{l},\mathcal{O}_{l},l\log 2\cdot\mathcal{O}_{l}\big{)}\). By the propagation of regularity for the linear system (17)-(18) with initial data at \(\tau=2^{-l-1}\), we obtain that \(\big{(}(\alpha_{Y})_{l},(\chi_{Y})_{l}\big{)}\) extends as an \(s\)-regularity solution on \([0,1]\), and is still approximated by \(\big{(}\big{(}\alpha_{Y}^{n}\big{)}_{l},\big{(}\chi_{Y}^{n}\big{)}_{l}\big{)}\).
In order to sum over all frequencies, we need quantitative estimates on the above solutions. For any \(\tau\in[2^{-l-1},1]\) and any \(n,k\) we have by Proposition 3.2 and Proposition 3.3 with \(m_{Y}=2^{l}\) (as in the proof of Corollary 3.2):
\[\sqrt{\tau}\big{\|}(\alpha_{Y}^{n})_{l}-(\alpha_{Y}^{k})_{l}\big{\|}_{H^{s}}+ \sqrt{\tau}\big{\|}\partial_{\tau}\big{[}(\alpha_{Y}^{n})_{l}-(\alpha_{Y}^{k})_ {l}\big{]}\big{\|}_{H^{s-1}}\lesssim\big{\|}\mathcal{O}_{l}^{n}-\mathcal{O}_{l} ^{k}\big{\|}_{H^{s}}+\big{\|}(\chi_{Y}^{n}(0))_{l}-(\chi_{Y}^{k}(0))_{l}\big{\|} _{H^{s-1}} \tag{30}\]
We fix any \(\tau_{0}\in(0,1).\) For any \(\tau\in[\tau_{0},1]\) we add (29) for all \(2^{l+1}\leq\tau^{-1}\) and (30) for all \(2^{l+1}>\tau^{-1}:\)
\[\sum_{l}\big{\|}(\alpha_{Y}^{n})_{l}-(\alpha_{Y}^{k})_{l}\big{\|}_{H^{s}}^{2}+ \big{\|}\partial_{\tau}\big{[}(\alpha_{Y}^{n})_{l}-(\alpha_{Y}^{k})_{l}\big{]} \big{\|}_{H^{s-1}}^{2}\lesssim_{\tau_{0}}\big{\|}\mathcal{O}^{n}-\mathcal{O}^{k} \big{\|}_{H^{s}}^{2}+\big{\|}\chi_{Y}^{n}(0)-\chi_{Y}^{k}(0)\big{\|}_{H^{s-1}}^{2}\]
This implies that \(\alpha_{Y}=\sum_{l}\big{(}\alpha_{Y}\big{)}_{l}\in C^{0}\big{(}(0,1];H^{s}\big{)} \cap C^{1}\big{(}(0,1];H^{s-1}\big{)}\cap C^{2}\big{(}(0,1];H^{s-2}\big{)}\).
We remark that analogous estimates to (29) and (30) also imply for any \(l\) and \(\tau\in(0,1]\):
\[\big{\|}(\alpha_{Y})_{l}\big{\|}_{H^{s}}\lesssim\big{(}|\log\tau|+\tau^{-\frac{ 1}{2}}\big{)}\cdot\bigg{(}\big{\|}\mathcal{O}_{l}\big{\|}_{H^{s}}+\big{\|} \big{(}\chi_{Y}(0)\big{)}_{l}\big{\|}_{H^{s-1}}\bigg{)}\]
This implies that \(\alpha_{Y}\in L^{1}\big{(}[0,1];H^{s}\big{)}\) and \(\tau\alpha_{Y}\in C^{0}\big{(}[0,1];H^{s}\big{)}\). The regularity statements on \(\alpha_{Y}\) proved so far also imply \(\chi_{Y}\in C^{1}\big{(}[0,1];H^{s}\big{)}\cap C^{2}\big{(}(0,1];H^{s-1}\big{)}\).
Finally, by the propagation of regularity for the linear system (17)-(18) with initial data at \(\tau=1\), we obtain that the solution \(\big{(}\alpha_{Y},\chi_{Y}\big{)}\) is defined for \(\tau\in(0,\infty)\) and the above statements regarding the regularity of \(\alpha_{Y}\) and \(\chi_{Y}\) extend from \([0,1]\) to \([0,\infty)\).
By a similar density argument, we obtain that the improved spatial regularity result in Theorem 3.1 also holds for the \(s\)-regularity solutions:
**Corollary 3.3**.: _Let any \(s\geq 2\) and let \(\big{(}\alpha,\chi\big{)}\) be the \(s\)-regularity solution of (17)-(18) with asymptotic initial data \(\big{(}\chi(0),\mathcal{O},\mathfrak{h}\big{)}\), where \(\chi(0)\in H^{s+1/2},\ \mathcal{O}\in H^{s},\ \mathfrak{h}=h-(\log\nabla) \mathcal{O}\in H^{s}.\) We have improved spatial regularity \(\alpha\in C^{0}\big{(}(0,\infty);H^{s+\frac{1}{2}}(S)\big{)}\cap C^{1}\big{(} (0,\infty);H^{s-\frac{1}{2}}(S)\big{)}\) and we have the estimate:_
### Estimates from a Finite Time Hypersurface to \(\mathcal{I}^{+}\)
In this section we assume the existence of smooth solutions of (17)-(18) which satisfy the standard asymptotic expansion at \(\tau=0\), and prove estimates on the asymptotic data in terms of the initial data at time \(\tau=1\). In the context of (3), these correspond to estimates on the asymptotic data at \(\mathcal{I}^{-}\) in terms of \(\frac{n}{2}\) time derivatives of \(\tilde{\phi}\) at a finite time hypersurface. Since (3) is time reversible, we also obtain estimates on the asymptotic data at \(\mathcal{I}^{+}\) in terms of \(\frac{n}{2}\) time derivatives of \(\tilde{\phi}\) at a finite time hypersurface. The main result in Theorem 3.3 proves estimate (11) of Theorem 1.2.
As in Section 3.1, we treat separately the low frequency regime and the high frequency regime. In this case, the transition time is determined by the frequency strip that our solution is localized in. We begin with the following high frequency regime estimate, which is similar to Proposition 3.3:
**Proposition 3.7**.: _Let any \(s\geq 1.\) For any \(\tau_{0}\in(0,1],\) we set \(\alpha_{H}=P_{\geq 1/4\tau_{0}}\alpha.\) Then for any \(\tau\in[\tau_{0},1]\):_
\[\frac{1}{\tau}\big{\|}\alpha_{H}\big{\|}_{H^{s}}^{2}+\tau\big{\|}\nabla\alpha_{H}\big{\|}_{H^{s}}^{2}+\tau\big{\|}\partial_{\tau}\alpha_{H}\big{\|}_{H^{s}}^{2}\lesssim\left(\big{\|}\alpha_{H}\big{\|}_{H^{s+1}}^{2}+\big{\|}\partial_{\tau}\alpha_{H}\big{\|}_{H^{s}}^{2}+\big{\|}\chi_{H}\big{\|}_{H^{s}}^{2}\right)\bigg{|}_{\tau=1}\]
Proof.: We set \(m=\frac{1}{\tau_{0}}\), and we introduce the new time variable \(t=m\tau\). As before, equation (17) can be written as:
\[\partial_{t}^{2}\big{(}\alpha_{H}\sqrt{t}\big{)}+\frac{1}{4t^{2}}\alpha_{H}\sqrt{t}+\frac{4q^{\prime}}{m^{2}}\alpha_{H}\sqrt{t}-\frac{4}{(t^{2}/m^{2}+1)^{2}}\cdot\Delta\frac{\alpha_{H}\sqrt{t}}{m^{2}}=\frac{f_{1}^{\prime}(t)}{m^{2}}t\partial_{t}\big{(}\alpha_{H}\sqrt{t}\big{)}+\frac{f_{2}^{\prime}(t)}{m^{2}}\alpha_{H}\sqrt{t}+\frac{f_{3}^{\prime}(t)}{m^{2}}\chi_{H}\sqrt{t}\]
where \(f_{1}^{\prime},f_{2}^{\prime},f_{3}^{\prime}\) are bounded functions of \(t.\) We also have from (18):
\[\chi_{H}(t)=\chi_{H}(m)-\frac{1}{m^{2}}\int_{t}^{m}2t^{\prime}\alpha_{H}(t^{ \prime})dt^{\prime}\]
which implies the bound for \(t\in[1,m]\):
\[\left\|\chi_{H}\right\|_{L^{2}}^{2}\lesssim\left\|\chi_{H}(m)\right\|_{L^{2}}^{2}+\frac{1}{m^{2}}\int_{t}^{m}\left\|\alpha_{H}\sqrt{t^{\prime}}\right\|_{L^{2}}^{2}dt^{\prime}\]
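For instance, this follows from the previous identity by Cauchy-Schwarz in \(t^{\prime}\), using \(\int_{t}^{m}2t^{\prime}dt^{\prime}\leq m^{2}\):

\[\frac{1}{m^{4}}\bigg{(}\int_{t}^{m}2t^{\prime}\big{\|}\alpha_{H}(t^{\prime})\big{\|}_{L^{2}}dt^{\prime}\bigg{)}^{2}\leq\frac{1}{m^{4}}\int_{t}^{m}2t^{\prime}dt^{\prime}\cdot\int_{t}^{m}2t^{\prime}\big{\|}\alpha_{H}(t^{\prime})\big{\|}_{L^{2}}^{2}dt^{\prime}\lesssim\frac{1}{m^{2}}\int_{t}^{m}\big{\|}\alpha_{H}\sqrt{t^{\prime}}\big{\|}_{L^{2}}^{2}dt^{\prime}.\]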
We have the standard energy estimate:
\[\left\|\partial_{t}\big{(}\alpha_{H}\sqrt{t}\big{)}\right\|_{L^{2}}^{2}+\frac{ 1}{t^{2}}\big{\|}\alpha_{H}\sqrt{t}\big{\|}_{L^{2}}^{2}+\frac{1}{m^{2}}\big{\|} \nabla\alpha_{H}\sqrt{t}\big{\|}_{L^{2}}^{2}\lesssim\]
\[\lesssim\bigg{(}t\big{\|}\partial_{t}\alpha_{H}\big{\|}_{L^{2}}^{2}+\frac{1}{ t}\big{\|}\alpha_{H}\big{\|}_{L^{2}}^{2}+\frac{1}{m}\big{\|}\nabla\alpha_{H} \big{\|}_{L^{2}}^{2}\bigg{)}\bigg{|}_{t=m}+\int_{t}^{m}\frac{1}{t^{\prime 3}} \big{\|}\alpha_{H}\sqrt{t^{\prime}}\big{\|}_{L^{2}}^{2}dt^{\prime}+\]
\[+\frac{1}{m^{4}}\int_{t}^{m}t^{\prime}\big{\|}\nabla\alpha_{H}\sqrt{t^{\prime }}\big{\|}_{L^{2}}^{2}dt^{\prime}+\int_{t}^{m}\frac{t^{\prime}}{m^{2}}\big{\|} \partial_{t}\big{(}\alpha_{H}\sqrt{t^{\prime}}\big{)}\big{\|}_{L^{2}}^{2}+ \int_{t}^{m}\int_{S}\frac{\sqrt{t^{\prime}}}{m^{2}}\big{(}|\chi_{H}|+|\alpha_ {H}|\big{)}\cdot\big{|}\partial_{t}\big{(}\alpha_{H}\sqrt{t^{\prime}}\big{)} \big{|}\]
We remark that the bulk terms have an unfavorable sign, which forces us to restrict to high frequencies. We notice that we can bound:
\[\int_{t}^{m}\int_{S}\frac{\sqrt{t^{\prime}}}{m^{2}}\big{(}|\chi_{H}|+|\alpha_ {H}|\big{)}\cdot\big{|}\partial_{t}\big{(}\alpha_{H}\sqrt{t^{\prime}}\big{)} \big{|}\lesssim\int_{t}^{m}\frac{t^{\prime}}{m^{2}}\big{\|}\partial_{t}\big{(} \alpha_{H}\sqrt{t^{\prime}}\big{)}\big{\|}_{L^{2}}^{2}+\frac{1}{m^{2}}\int_{t }^{m}\big{\|}\alpha_{H}\big{\|}_{L^{2}}^{2}+\big{\|}\chi_{H}\big{\|}_{L^{2}}^{ 2}\lesssim\]
\[\lesssim\frac{1}{m}\big{\|}\chi_{H}(m)\big{\|}_{L^{2}}^{2}+\int_{t}^{m}\frac {t^{\prime}}{m^{2}}\big{\|}\partial_{t}\big{(}\alpha_{H}\sqrt{t^{\prime}} \big{)}\big{\|}_{L^{2}}^{2}dt^{\prime}+\frac{1}{m^{2}}\int_{t}^{m}\big{\|} \alpha_{H}\big{\|}_{L^{2}}^{2}dt^{\prime}\]
Combining the previous two inequalities we have:
\[+\int_{t}^{m}\frac{1}{t^{\prime 2}}\big{\|}\alpha_{H}\big{\|}_{L^{2}}^{2} dt^{\prime}+\frac{1}{m^{4}}\int_{t}^{m}t^{\prime 2}\big{\|}\nabla\alpha_{H} \big{\|}_{L^{2}}^{2}dt^{\prime}+\int_{t}^{m}\frac{t^{\prime}}{m^{2}}\big{\|} \partial_{t}\big{(}\alpha_{H}\sqrt{t^{\prime}}\big{)}\big{\|}_{L^{2}}^{2}\]
To deal with the first bulk term, we use the Poincaré inequality:
\[\big{\|}\alpha_{H}\big{\|}_{L^{2}}\lesssim\frac{1}{m}\big{\|}\nabla\alpha_{H} \big{\|}_{L^{2}},\]
which follows from the definition of \(\alpha_{H}\), since \(\alpha_{H}\) is localized at frequencies at least \(m/4\). Since \(1\leq t\leq m\), we can apply Gronwall to the resulting energy inequality; rewriting the conclusion in terms of \(\tau=t/m\) then gives the estimate in the statement at the level of \(L^{2}\).
As before, we note that (17)-(18) is linear with coefficients independent of space, so we can commute with spatial derivatives to complete the proof.
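To justify the Poincaré-type inequality used above, note that since \(\alpha_{H}=P_{\geq 1/4\tau_{0}}\alpha=P_{\geq m/4}\alpha\), and assuming the frequencies \(\lambda_{i}\) from the appendix are normalized so that \(\|\nabla f\|_{L^{2}}^{2}=\sum_{i}\lambda_{i}^{2}|\langle f,\varphi_{i}\rangle|^{2}\) (which is consistent with the dyadic description of the \(H^{s}\) norms there), we have:

\[\big{\|}\nabla\alpha_{H}\big{\|}_{L^{2}}^{2}=\sum_{\lambda_{i}\geq m/4}\lambda_{i}^{2}\big{|}\langle\alpha_{H},\varphi_{i}\rangle\big{|}^{2}\geq\frac{m^{2}}{16}\sum_{\lambda_{i}\geq m/4}\big{|}\langle\alpha_{H},\varphi_{i}\rangle\big{|}^{2}=\frac{m^{2}}{16}\big{\|}\alpha_{H}\big{\|}_{L^{2}}^{2}.\]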
In the low frequency regime we use different multipliers from Section 3.1, in order to obtain favorable bulk terms. As pointed out in the introduction, we anticipate the need to renormalize \(h\) to \(\mathfrak{h}\) at the level of \(\alpha\), without explicit reference to the asymptotics of the solution. This can be seen in the second term controlled in the energy estimate below.
We prove the following low frequency estimate:
**Proposition 3.8**.: _For any \(\tau_{0}\in(0,1]\) and for all \(\tau\in[\tau_{0},2^{-l-1}]\subset(0,1]\) we have:_
\[\|\tau\partial_{\tau}\alpha_{l}\|_{H^{s}}^{2}+\|\alpha_{l}-\log(2^{l}\tau)\cdot \tau\partial_{\tau}\alpha_{l}\|_{H^{s}}^{2}\lesssim\left(\|\alpha_{l}\|_{H^{s} }^{2}+\|\tau\nabla\alpha_{l}\|_{H^{s}}^{2}+\|\tau\partial_{\tau}\alpha_{l}\|_ {H^{s}}^{2}+\big{\|}\tau\chi_{l}\big{\|}_{H^{s}}^{2}\right)\bigg{|}_{\tau=2^{- l-1}}\]
Proof.: We set \(m=2^{l}\) and we introduce the new time variable \(t=m\tau\). We point out that the first part of the proof does not require \(\alpha\) to be localized in frequency. Equation (17) can be written as:
\[\partial_{t}\big{(}t\partial_{t}\alpha\big{)}+4q^{\prime}t\frac{\alpha}{m^{2 }}-\frac{4t}{(t^{2}/m^{2}+1)^{2}}\cdot\Delta\frac{\alpha}{m^{2}}=\frac{f_{1}^ {\prime}(t)}{m^{2}}t^{2}\partial_{t}\alpha+\frac{f_{2}^{\prime}(t)}{m^{4}}t^ {3}\alpha+\frac{f_{3}^{\prime}(t)}{m^{2}}t\chi\]
where \(f_{1}^{\prime},f_{2}^{\prime},f_{3}^{\prime}\) are bounded functions of \(t.\) We also have from (18):
\[\chi(t)=\chi(1/2)-\frac{1}{m^{2}}\int_{t}^{1/2}2t^{\prime}\alpha(t^{\prime}) dt^{\prime}\]
which implies the bound for \(t\in[m\tau_{0},1/2]\):
\[\big{\|}\chi\big{\|}_{L^{2}}^{2}\lesssim\big{\|}\chi(1/2)\big{\|}_{L^{2}}^{2} +\frac{1}{m^{4}}\int_{t}^{1/2}\big{\|}t^{\prime}\alpha\big{\|}_{L^{2}}^{2}dt^ {\prime} \tag{31}\]
We multiply the equation by \(t\partial_{t}\alpha\) and integrate by parts. Since for \(t\in[m\tau_{0},1/2]\) we have:
\[\frac{d}{dt}\bigg{(}\frac{4q^{\prime}t^{2}}{m^{2}}\bigg{)}\geq 0,\ \frac{d}{dt}\bigg{(}\frac{4t^{2}/m^{2}}{(t^{2}/m^{2}+1)^{2}}\bigg{)}\geq 0,\]
we obtain that the bulk terms will have favorable signs in the energy estimate:
\[\big{\|}t\partial_{t}\alpha\big{\|}_{L^{2}}^{2}+\frac{t^{2}}{m^{2}}\big{\|} \alpha\big{\|}_{L^{2}}^{2}+\int_{t}^{1/2}\frac{t^{\prime}}{m^{2}}\big{\|} \alpha\big{\|}_{L^{2}}^{2}dt^{\prime}+\frac{t^{2}}{m^{2}}\big{\|}\nabla\alpha \big{\|}_{L^{2}}^{2}+\int_{t}^{1/2}\frac{t^{\prime}}{m^{2}}\big{\|}\nabla \alpha\big{\|}_{L^{2}}^{2}dt^{\prime}\lesssim\]
We remark that we can bound:
\[\int_{t}^{1/2}\int_{S}\bigg{(}\frac{t^{\prime}}{m^{2}}|\chi|+\frac{t^{\prime 3 }}{m^{4}}|\alpha|\bigg{)}\cdot\big{|}t^{\prime}\partial_{t}\alpha\big{|}\lesssim \int_{t}^{1/2}\frac{t^{\prime}}{m^{2}}\big{\|}t^{\prime}\partial_{t}\alpha \big{\|}_{L^{2}}^{2}+\frac{1}{m^{2}}\int_{t}^{1/2}\frac{1}{m^{2}}\big{\|}t^{ \prime}\alpha\big{\|}_{L^{2}}^{2}+\big{\|}\chi\big{\|}_{L^{2}}^{2}\lesssim\]
Combining the previous two inequalities we have:
\[\big{\|}t\partial_{t}\alpha\big{\|}_{L^{2}}^{2}+\frac{t^{2}}{m^{2}}\big{\|} \alpha\big{\|}_{H^{1}}^{2}\lesssim\bigg{(}\big{\|}t\partial_{t}\alpha\big{\|} _{L^{2}}^{2}+\frac{1}{m^{2}}\big{\|}\alpha\big{\|}_{H^{1}}^{2}+\frac{1}{m^{2}} \big{\|}\chi\big{\|}_{L^{2}}^{2}\bigg{)}\bigg{|}_{t=\frac{1}{2}}+\int_{t}^{1/2} \frac{t^{\prime}}{m^{2}}\big{\|}t^{\prime}\partial_{t}\alpha\big{\|}_{L^{2}}^ {2}+\frac{1}{m^{4}}\big{\|}t^{\prime}\alpha\big{\|}_{L^{2}}^{2}\]
By Gronwall we obtain that for all \(t\in[m\tau_{0},1/2]\):
\[\big{\|}t\partial_{t}\alpha\big{\|}_{L^{2}}^{2}+\frac{t^{2}}{m^{2}}\big{\|} \alpha\big{\|}_{H^{1}}^{2}\lesssim\bigg{(}\big{\|}t\partial_{t}\alpha\big{\|}_ {L^{2}}^{2}+\frac{1}{m^{2}}\big{\|}\alpha\big{\|}_{H^{1}}^{2}+\frac{1}{m^{2}} \big{\|}\chi\big{\|}_{L^{2}}^{2}\bigg{)}\bigg{|}_{t=\frac{1}{2}} \tag{32}\]
For the second part of the proof, it is essential that \(\alpha\) is localized in frequency. We write (17) as:
\[\partial_{t}\big{(}\alpha_{l}-t\log t\partial_{t}\alpha_{l}\big{)}-4q^{\prime}t\log t\frac{\alpha_{l}}{m^{2}}+\frac{4t\log t}{(t^{2}/m^{2}+1)^{2}}\cdot\Delta\frac{\alpha_{l}}{m^{2}}=\frac{f_{1}^{\prime}(t)}{m^{2}}t^{2}\log t\partial_{t}\alpha_{l}+\frac{f_{2}^{\prime}(t)}{m^{4}}t^{3}\log t\alpha_{l}+\frac{f_{3}^{\prime}(t)}{m^{2}}t\log t\chi_{l},\]
where \(f_{1}^{\prime},f_{2}^{\prime},f_{3}^{\prime}\) are bounded functions of \(t.\) It is convenient to introduce the notation:
\[\overline{\alpha}_{l}=\alpha_{l}-t\log t\partial_{t}\alpha_{l}\]
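This rewritten form of the equation is obtained from the previous one by multiplying by \(-\log t\) (relabeling the bounded functions \(f_{i}^{\prime}\) accordingly), together with the elementary identity:

\[\partial_{t}\overline{\alpha}_{l}=\partial_{t}\alpha_{l}-\big{(}\log t+1\big{)}\partial_{t}\alpha_{l}-t\log t\,\partial_{t}^{2}\alpha_{l}=-\log t\cdot\partial_{t}\big{(}t\partial_{t}\alpha_{l}\big{)}.\]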
We multiply the equation by \(\overline{\alpha}_{l}\) and we obtain:
\[\big{\|}\overline{\alpha}_{l}\big{\|}_{L^{2}}^{2}(t)\lesssim \big{\|}\overline{\alpha}_{l}\big{\|}_{L^{2}}^{2}(1/2)+\int_{t}^{1/2}\int_{S }t^{\prime}|\log t^{\prime}|\cdot|\overline{\alpha}_{l}|\cdot\bigg{(}|\alpha_ {l}|+\frac{|\Delta\alpha_{l}|}{m^{2}}+\frac{|t^{\prime}\partial_{t}\alpha_{l}| }{m^{2}}+\frac{|\chi_{l}|}{m^{2}}\bigg{)}dt^{\prime}\lesssim\] \[\lesssim\big{\|}\overline{\alpha}_{l}\big{\|}_{L^{2}}^{2}(1/2)+ \int_{t}^{1/2}t^{\prime}|\log t^{\prime}|\cdot\big{\|}\overline{\alpha}_{l} \big{\|}_{L^{2}}^{2}dt^{\prime}+\int_{t}^{1/2}t^{\prime}|\log t^{\prime}|^{3} \cdot\big{\|}t^{\prime}\partial_{t}\alpha_{l}\big{\|}_{L^{2}}^{2}+\frac{t^{ \prime}|\log t^{\prime}|}{m^{4}}\cdot\big{\|}\chi_{l}\big{\|}_{L^{2}}^{2}dt^{\prime}\]
We now use (31) and (32) to obtain:
\[\big{\|}\overline{\alpha}_{l}\big{\|}_{L^{2}}^{2}(t)\lesssim\bigg{(}\big{\|} \overline{\alpha}_{l}\big{\|}_{L^{2}}^{2}+\big{\|}t\partial_{t}\alpha_{l} \big{\|}_{L^{2}}^{2}+\frac{1}{m^{2}}\big{\|}\alpha_{l}\big{\|}_{H^{1}}^{2}+ \frac{1}{m^{2}}\big{\|}\chi_{l}\big{\|}_{L^{2}}^{2}\bigg{)}\bigg{|}_{t=\frac{ 1}{2}}+\int_{t}^{1/2}t^{\prime}|\log t^{\prime}|\cdot\big{\|}\overline{\alpha }_{l}\big{\|}_{L^{2}}^{2}dt^{\prime}\]
Applying Gronwall and commuting by spatial derivatives, we obtain the desired conclusion.
We combine the low frequency and high frequency estimates for our solution in order to obtain:
**Corollary 3.4**.: _For any \(\tau_{0}\in(0,1],\) we have:_
\[\sum_{l=0}^{\infty}\boldsymbol{I}_{\{2^{l+1}\leq\tau_{0}^{-1}\}}\cdot\bigg{(} \big{\|}\tau\partial_{\tau}\alpha_{l}\big{\|}_{H^{s}}^{2}+\big{\|}\alpha_{l}- \log(2^{l}\tau)\cdot\tau\partial_{\tau}\alpha_{l}\big{\|}_{H^{s}}^{2}\bigg{)} \bigg{|}_{\tau=\tau_{0}}\lesssim\bigg{(}\big{\|}\alpha\big{\|}_{H^{s+1/2}}^{2}+ \big{\|}\partial_{\tau}\alpha\big{\|}_{H^{s-1/2}}^{2}+\big{\|}\chi\big{\|}_{H^{ s-1/2}}^{2}\bigg{)}\bigg{|}_{\tau=1}.\]
Proof.: For each \(l\) such that \(2^{l+1}\leq\tau_{0}^{-1},\) we apply Proposition 3.8 to obtain:
\[\bigg{(}\big{\|}\tau\partial_{\tau}\alpha_{l}\big{\|}_{H^{s}}^{2}+\big{\|} \alpha_{l}-\log(2^{l}\tau)\cdot\tau\partial_{\tau}\alpha_{l}\big{\|}_{H^{s}}^{ 2}\bigg{)}\bigg{|}_{\tau=\tau_{0}}\lesssim\bigg{(}\|\alpha_{l}\|_{H^{s}}^{2}+ \big{\|}\tau\nabla\alpha_{l}\big{\|}_{H^{s}}^{2}+\big{\|}\tau\partial_{\tau} \alpha_{l}\big{\|}_{H^{s}}^{2}+\big{\|}\tau\chi_{l}\big{\|}_{H^{s}}^{2}\bigg{)} \bigg{|}_{\tau=2^{-l-1}}.\]
Next, we remark that on \([2^{-l-1},1]\) we have \((\alpha_{l})_{H}=P_{\geq 2^{l-1}}\alpha_{l}=\alpha_{l},\) so we get by Proposition 3.7:
\[\bigg{(}\big{\|}\alpha_{l}\big{\|}_{H^{s}}^{2}+\big{\|}\tau\nabla\alpha_{l} \big{\|}_{H^{s}}^{2}+\big{\|}\tau\partial_{\tau}\alpha_{l}\big{\|}_{H^{s}}^{2} +\big{\|}\tau\chi_{l}\big{\|}_{H^{s}}^{2}\bigg{)}\bigg{|}_{\tau=2^{-l-1}} \lesssim 2^{-l}\cdot\bigg{(}\big{\|}\alpha_{l}\big{\|}_{H^{s+1}}^{2}+\big{\|} \partial_{\tau}\alpha_{l}\big{\|}_{H^{s}}^{2}+\big{\|}\chi_{l}\big{\|}_{H^{s}} ^{2}\bigg{)}\bigg{|}_{\tau=1}\]
The constants are independent of \(l,\) so we can sum the above inequalities.
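In the summation, the factors \(2^{-l}\) convert the norms on the right-hand side into the ones appearing in the statement of the corollary, by the dyadic characterization of the Sobolev norms from the appendix: for functions localized at frequencies comparable to \(2^{l}\) (with the obvious modification for \(l=0\)),

\[2^{-l}\big{\|}\alpha_{l}\big{\|}_{H^{s+1}}^{2}\sim\big{\|}\alpha_{l}\big{\|}_{H^{s+1/2}}^{2},\qquad 2^{-l}\big{\|}\partial_{\tau}\alpha_{l}\big{\|}_{H^{s}}^{2}\sim\big{\|}\partial_{\tau}\alpha_{l}\big{\|}_{H^{s-1/2}}^{2},\qquad 2^{-l}\big{\|}\chi_{l}\big{\|}_{H^{s}}^{2}\sim\big{\|}\chi_{l}\big{\|}_{H^{s-1/2}}^{2}.\]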
Putting together the estimates proved so far, we complete the proof of (11):
**Theorem 3.3**.: _Let \(\big{(}\alpha,\chi\big{)}\) be a smooth solution of (17)-(18) with asymptotic data given by \(\chi(0),\mathcal{O},\) and \(h.\) Then we have the estimates:_
\[\big{\|}\mathcal{O}\big{\|}_{H^{s}}+\big{\|}\mathfrak{h}\big{\|}_{H^{s}}\lesssim\bigg{(}\big{\|}\alpha\big{\|}_{H^{s+1/2}}+\big{\|}\partial_{\tau}\alpha\big{\|}_{H^{s-1/2}}+\big{\|}\chi\big{\|}_{H^{s-1/2}}\bigg{)}\bigg{|}_{\tau=1}\]
\[\big{\|}\chi(0)\big{\|}_{H^{s+1/2}}\lesssim\bigg{(}\big{\|}\alpha\big{\|}_{H^{s+ 1/2}}+\big{\|}\partial_{\tau}\alpha\big{\|}_{H^{s-1/2}}+\big{\|}\chi\big{\|}_{H^{ s+1/2}}\bigg{)}\bigg{|}_{\tau=1}\]
Proof.: The asymptotic expansion of the solution implies that:
\[\lim_{\tau_{0}\to 0}\mathbf{1}_{\{2^{l+1}\leq\tau_{0}^{-1}\}}\cdot\left(\|\tau \partial_{\tau}\alpha_{l}\|_{H^{s}}^{2}+\|\alpha_{l}-\log(2^{l}\tau)\cdot\tau \partial_{\tau}\alpha_{l}\|_{H^{s}}^{2}\right)\bigg{|}_{\tau=\tau_{0}}=\big{\|} \mathcal{O}_{l}\big{\|}_{H^{s}}^{2}+\big{\|}\mathfrak{h}_{l}\big{\|}_{H^{s}}^{2}\]
We use Corollary 3.4 and Fatou's lemma to obtain the first estimate.
In order to prove the second estimate, we first notice that by the proof of Corollary 3.4, we have that for any \(l\) such that \(2^{l+1}\leq\tau^{-1}:\)
\[\|\tau\partial_{\tau}\alpha_{l}\|_{H^{s}}^{2}+\|\alpha_{l}-\log(2^{l}\tau) \cdot\tau\partial_{\tau}\alpha_{l}\|_{H^{s}}^{2}\lesssim 2^{-l}\cdot\left(\big{\|} \alpha_{l}\big{\|}_{H^{s+1}}^{2}+\big{\|}\partial_{\tau}\alpha_{l}\big{\|}_{ H^{s}}^{2}+\big{\|}\chi_{l}\big{\|}_{H^{s}}^{2}\right)\bigg{|}_{1}\]
In particular, this implies that for any \(l\) with \(2^{l+1}\leq\tau^{-1}:\)
\[\big{\|}\alpha_{l}\big{\|}_{H^{s}}^{2}\lesssim\big{(}1+|\log\tau|^{2}\big{)} \bigg{(}\big{\|}\alpha_{l}\big{\|}_{H^{s+1/2}}^{2}+\big{\|}\partial_{\tau} \alpha_{l}\big{\|}_{H^{s-1/2}}^{2}+\big{\|}\chi_{l}\big{\|}_{H^{s-1/2}}^{2} \bigg{)}\bigg{|}_{1} \tag{33}\]
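Indeed, writing \(\alpha_{l}=\big{(}\alpha_{l}-\log(2^{l}\tau)\cdot\tau\partial_{\tau}\alpha_{l}\big{)}+\log(2^{l}\tau)\cdot\tau\partial_{\tau}\alpha_{l}\) and using \(|\log(2^{l}\tau)|\leq|\log\tau|\) for \(2^{l+1}\leq\tau^{-1}\), the previous bound gives

\[\big{\|}\alpha_{l}\big{\|}_{H^{s}}^{2}\lesssim\big{(}1+|\log\tau|^{2}\big{)}\cdot 2^{-l}\bigg{(}\big{\|}\alpha_{l}\big{\|}_{H^{s+1}}^{2}+\big{\|}\partial_{\tau}\alpha_{l}\big{\|}_{H^{s}}^{2}+\big{\|}\chi_{l}\big{\|}_{H^{s}}^{2}\bigg{)}\bigg{|}_{1},\]

and the factor \(2^{-l}\) is absorbed using once more that \(2^{-l}\|f_{l}\|_{H^{\sigma}}^{2}\sim\|f_{l}\|_{H^{\sigma-1/2}}^{2}\) for functions localized at frequencies comparable to \(2^{l}\).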
For any fixed \(l\geq 0\) we use this relation and Proposition 3.7 to get:
\[\big{\|}\chi_{l}(0)\big{\|}_{H^{s+1/2}}^{2}\lesssim\big{\|}\chi_{l}(1)\big{\|} _{H^{s+1/2}}^{2}+\int_{0}^{2^{-l-1}}\tau^{2}\big{\|}\alpha_{l}\big{\|}_{H^{s+1 /2}}^{2}d\tau+\int_{2^{-l-1}}^{1}\tau^{2}\big{\|}\alpha_{l}\big{\|}_{H^{s+1/2}} ^{2}d\tau\lesssim\]
\[\lesssim\big{\|}\chi_{l}(1)\big{\|}_{H^{s+1/2}}^{2}+\bigg{(}\big{\|}\alpha_{l }\big{\|}_{H^{s+1}}^{2}+\big{\|}\partial_{\tau}\alpha_{l}\big{\|}_{H^{s}}^{2} +\big{\|}\chi_{l}\big{\|}_{H^{s}}^{2}\bigg{)}\bigg{|}_{\tau=1}\cdot\int_{0}^{2^ {-l-1}}\tau^{2}|\log\tau|^{2}d\tau+\]
This completes the proof of the second estimate.
Finally, we also prove some estimates which are not sharp in terms of spatial regularity, but will nevertheless be useful in the next section:
**Proposition 3.9**.: _Let \(\big{(}\alpha,\chi\big{)}\) be the smooth solution of the system (17)-(18) with initial data at \(\tau=1\) given by \(\chi(1),\alpha(1),\partial_{\tau}\alpha(1)\in C^{\infty}(S^{n}).\) Then we have the estimates for \(\tau\in(0,1]\):_
\[\big{\|}\alpha\big{\|}_{H^{s}}^{2}(\tau)\lesssim\big{(}1+|\log\tau|^{2}\big{)} \cdot\bigg{(}\big{\|}\alpha\big{\|}_{H^{s+1}}^{2}+\big{\|}\partial_{\tau} \alpha\big{\|}_{H^{s}}^{2}+\big{\|}\chi\big{\|}_{H^{s}}^{2}\bigg{)}\bigg{|}_{ \tau=1}\]
Proof.: We fix \(\tau_{0}\in(0,1].\) By the proof of Theorem 3.3 and (33), we have that for any \(l\) such that \(2^{l+1}\leq\tau_{0}^{-1}:\)
\[\big{\|}\alpha_{l}\big{\|}_{H^{s}}^{2}(\tau_{0})+\|\tau\partial_{\tau}\alpha_{ l}\|_{H^{s}}^{2}(\tau_{0})\lesssim\big{(}1+|\log\tau_{0}|^{2}\big{)}\bigg{(} \big{\|}\alpha_{l}\big{\|}_{H^{s+1/2}}^{2}+\big{\|}\partial_{\tau}\alpha_{l} \big{\|}_{H^{s-1/2}}^{2}+\big{\|}\chi_{l}\big{\|}_{H^{s-1/2}}^{2}\bigg{)} \bigg{|}_{\tau=1} \tag{34}\]
To deal with the high frequencies, we recall that \(\alpha_{H}=P_{\geq 1/4\tau_{0}}\alpha\) satisfies the estimate in Proposition 3.7:
\[\frac{1}{\tau}\big{\|}\alpha_{H}\big{\|}_{H^{s}}^{2}+\tau\big{\|}\nabla\alpha_{H}\big{\|}_{H^{s}}^{2}+\tau\big{\|}\partial_{\tau}\alpha_{H}\big{\|}_{H^{s}}^{2}\lesssim\bigg{(}\big{\|}\alpha_{H}\big{\|}_{H^{s+1}}^{2}+\big{\|}\partial_{\tau}\alpha_{H}\big{\|}_{H^{s}}^{2}+\big{\|}\chi_{H}\big{\|}_{H^{s}}^{2}\bigg{)}\bigg{|}_{\tau=1}\]
Combining these two inequalities, we obtain the conclusion.
### Asymptotic Completeness
In this section, we prove that smooth solutions defined on \((0,\infty)\) induce asymptotic data at \(\tau=0\). Together with the estimates proved in Theorem 3.3, this completes the proof of the second statement in Theorem 1.2, establishing _asymptotic completeness_. By a standard density argument, a similar result holds for \(s\)-regularity solutions.
**Proposition 3.10** (Asymptotic completeness).: _Let \(\big{(}\alpha,\chi\big{)}\) be the smooth solution of the system (17)-(18) with initial data at \(\tau=1\) given by \(\chi(1),\alpha(1),\partial_{\tau}\alpha(1)\in C^{\infty}(S^{n}).\) Then there exist smooth \(\chi(0),\mathcal{O},\) and \(h\), which determine the asymptotic data of the solution at \(\tau=0\)._
Proof.: We write equation (17) as:
\[\partial_{\tau}^{2}\alpha+\frac{1}{\tau}\partial_{\tau}\alpha=-4q^{\prime} \alpha+\frac{4}{(\tau^{2}+1)^{2}}\cdot\Delta\alpha+f_{1}(\tau)\tau\partial_{ \tau}\alpha+f_{2}(\tau)\tau^{2}\alpha+f_{3}(\tau)\cdot\bigg{(}\chi(1)-\int_{ \tau}^{1}2\tau^{\prime}\alpha(\tau^{\prime})d\tau^{\prime}\bigg{)}\]
As before, we treat the RHS as an inhomogeneous term, so we can write the equation as:
\[\partial_{\tau}^{2}\alpha+\frac{1}{\tau}\partial_{\tau}\alpha=F\bigg{(}\alpha,\partial_{\tau}\alpha,\Delta\alpha,\int_{\tau}^{1}\tau^{\prime}\alpha(\tau^{ \prime})d\tau^{\prime},\chi(1)\bigg{)} \tag{35}\]
We can solve this equation as an inhomogeneous ODE to obtain:
\[\alpha(\tau)=\alpha(1)+\partial_{\tau}\alpha(1)\cdot\log\tau-\log\tau\cdot \int_{\tau}^{1}\tau^{\prime}\cdot Fd\tau^{\prime}+\int_{\tau}^{1}\tau^{\prime} \log\tau^{\prime}\cdot Fd\tau^{\prime}\]
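One way to see this (a brief sketch): multiplying (35) by \(\tau\) gives \(\partial_{\tau}\big{(}\tau\partial_{\tau}\alpha\big{)}=\tau F\), and integrating twice from \(\tau=1\) yields

\[\tau\partial_{\tau}\alpha(\tau)=\partial_{\tau}\alpha(1)-\int_{\tau}^{1}\tau^{\prime}F\,d\tau^{\prime},\qquad\alpha(\tau)=\alpha(1)+\partial_{\tau}\alpha(1)\cdot\log\tau+\int_{\tau}^{1}\frac{1}{\sigma}\int_{\sigma}^{1}\tau^{\prime}F\,d\tau^{\prime}\,d\sigma,\]

and exchanging the order of integration in the last term, \(\int_{\tau}^{1}\frac{1}{\sigma}\int_{\sigma}^{1}\tau^{\prime}F\,d\tau^{\prime}d\sigma=\int_{\tau}^{1}\tau^{\prime}\big{(}\log\tau^{\prime}-\log\tau\big{)}F\,d\tau^{\prime}\), gives the stated formula.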
We notice that using the estimates in Proposition 3.9, we get for any \(s\geq 1\):
\[\bigg{\|}F\bigg{(}\alpha,\partial_{\tau}\alpha,\Delta\alpha,\int_{\tau}^{1} \tau^{\prime}\alpha(\tau^{\prime})d\tau^{\prime},\chi(1)\bigg{)}\bigg{\|}_{H^{ s}}\lesssim\big{(}1+|\log\tau|\big{)}\cdot\bigg{(}\big{\|}\alpha\big{\|}_{H^{s+3}}+ \big{\|}\partial_{\tau}\alpha\big{\|}_{H^{s+2}}+\big{\|}\chi\big{\|}_{H^{s+2} }\bigg{)}\bigg{|}_{\tau=1}\]
In particular, we can define the smooth functions:
\[\chi(0)=\chi(1)-\int_{0}^{1}2\tau^{\prime}\alpha(\tau^{\prime})d\tau^{\prime },\ \mathcal{O}:=\partial_{\tau}\alpha(1)-\int_{0}^{1}\tau^{\prime}\cdot Fd\tau^{ \prime},\ h:=\alpha(1)+\int_{0}^{1}\tau^{\prime}\log\tau^{\prime}\cdot Fd\tau ^{\prime}\]
As a result, we have:
\[\alpha(\tau)=\mathcal{O}\cdot\log\tau+h+\log\tau\cdot\int_{0}^{\tau}\tau^{ \prime}\cdot Fd\tau^{\prime}-\int_{0}^{\tau}\tau^{\prime}\log\tau^{\prime} \cdot Fd\tau^{\prime}\]
Using our previous bound on \(F,\) we see that \(\alpha-\mathcal{O}\log(\tau)-h\in C^{1}_{\tau}\big{(}[0,\infty);C^{\infty}(S^{n})\big{)}\) and:
\[\alpha(\tau)-\mathcal{O}\log(\tau)-h=O\big{(}\tau^{2}|\log(\tau)|^{2}\big{)}, \ \partial_{\tau}\alpha(\tau)-\frac{\mathcal{O}}{\tau}=O\big{(}\tau|\log(\tau)|^{ 2}\big{)}\ \text{in}\ C^{\infty}(S^{n}).\]
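For instance, the first of these rates follows from the formula for \(\alpha\) and the logarithmic bound on \(F\) (a sketch, for each fixed \(s\), with implicit constants depending on the data at \(\tau=1\)):

\[\Big{\|}\log\tau\int_{0}^{\tau}\tau^{\prime}F\,d\tau^{\prime}\Big{\|}_{H^{s}}+\Big{\|}\int_{0}^{\tau}\tau^{\prime}\log\tau^{\prime}\cdot F\,d\tau^{\prime}\Big{\|}_{H^{s}}\lesssim\big{(}1+|\log\tau|\big{)}\int_{0}^{\tau}\tau^{\prime}\big{(}1+|\log\tau^{\prime}|\big{)}d\tau^{\prime}+\int_{0}^{\tau}\tau^{\prime}\big{(}1+|\log\tau^{\prime}|\big{)}^{2}d\tau^{\prime}\lesssim\tau^{2}\big{(}1+|\log\tau|\big{)}^{2}.\]

The bound for \(\partial_{\tau}\alpha-\mathcal{O}/\tau\) is obtained similarly, after differentiating the formula for \(\alpha\).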
We conclude that \(\big{(}\chi(0),\mathcal{O},h\big{)}\) determine the asymptotic data of \(\big{(}\alpha,\chi\big{)}\) at \(\tau=0\).
We now prove the existence of asymptotic data at \(\tau=0\) for \(s\)-regularity solutions:
**Proposition 3.11** (Asymptotic completeness).: _Let any \(s\geq 2.\) We consider \(\big{(}\alpha,\chi\big{)}\) to be the solution of the system (17)-(18) with initial data at \(\tau=1\) given by \(\chi(1)\in H^{s+1/2}(S^{n}),\alpha(1)\in H^{s+1/2}(S^{n}),\partial_{\tau}\alpha( 1)\in H^{s-1/2}(S^{n}).\) Then there exist functions \(\chi(0)\in H^{s+1/2}(S^{n}),\mathcal{O},\mathfrak{h}\in H^{s}(S^{n})\), which represent the asymptotic initial data at \(\tau=0\) for the solution, in the sense of Definition 3.2. Moreover, we have the estimate:_
\[\big{\|}\mathfrak{h}\big{\|}_{H^{s}}+\big{\|}\mathcal{O}\big{\|}_{H^{s}}+\big{\|} \chi(0)\big{\|}_{H^{s+1/2}}\lesssim\bigg{(}\big{\|}\alpha\big{\|}_{H^{s+1/2}}+ \big{\|}\partial_{\tau}\alpha\big{\|}_{H^{s-1/2}}+\big{\|}\chi\big{\|}_{H^{s+1 /2}}\bigg{)}\bigg{|}_{\tau=1}\]
Proof.: Let \(\{\chi^{n}(1)\},\ \{\alpha^{n}(1)\},\ \{\partial_{\tau}\alpha^{n}(1)\}\) be sequences of smooth functions such that:
\[\chi^{n}(1)\longrightarrow\chi(1)\ \text{in}\ H^{s+1/2},\ \alpha^{n}(1) \longrightarrow\alpha(1)\ \text{in}\ H^{s+1/2},\ \partial_{\tau}\alpha^{n}(1) \longrightarrow\partial_{\tau}\alpha(1)\ \text{in}\ H^{s-1/2} \tag{36}\]
Let \(\big{(}\alpha^{n},\chi^{n}\big{)}\) be the smooth solution of (17)-(18) with initial data at \(\tau=1\) given by \(\big{(}\chi^{n}(1),\alpha^{n}(1),\partial_{\tau}\alpha^{n}(1)\big{)}\). Using the previous proposition, we know that there exist smooth \(\chi^{n}(0),\mathcal{O}^{n},\) and \(h^{n}\), which determine the asymptotic expansion of \(\big{(}\alpha^{n},\chi^{n}\big{)}\) at \(\tau=0\). For any \(n,k\), applying Theorem 3.3 to the difference \(\big{(}\alpha^{n}-\alpha^{k},\chi^{n}-\chi^{k}\big{)}\), which is again a smooth solution by linearity, we have:
\[\big{\|}\mathcal{O}^{n}-\mathcal{O}^{k}\big{\|}_{H^{s}}+\big{\|}\mathfrak{h}^{n}-\mathfrak{h}^{k}\big{\|}_{H^{s}}+\big{\|}\chi^{n}(0)-\chi^{k}(0)\big{\|}_{H^{s+1/2}}\lesssim\bigg{(}\big{\|}\alpha^{n}-\alpha^{k}\big{\|}_{H^{s+1/2}}+\big{\|}\partial_{\tau}\big{(}\alpha^{n}-\alpha^{k}\big{)}\big{\|}_{H^{s-1/2}}+\big{\|}\chi^{n}-\chi^{k}\big{\|}_{H^{s+1/2}}\bigg{)}\bigg{|}_{\tau=1}\]
As a result, there exist \(\chi(0)\in H^{s+1/2}(S^{n}),\mathcal{O},\mathfrak{h}\in H^{s}(S^{n})\) the limits of the sequences \(\{\chi^{n}(0)\},\ \{\mathcal{O}^{n}\},\ \{\mathfrak{h}^{n}\}.\) We denote by \(\big{(}\overline{\alpha},\overline{\chi}\big{)}\) the \(s\)-regularity solution of (17)-(18) with asymptotic initial data \(\big{(}\chi(0),\mathcal{O},\mathfrak{h}\big{)}.\) Applying Corollary 3.3 to the solution \(\big{(}\overline{\alpha}-\alpha^{n},\overline{\chi}-\chi^{n}\big{)}\), and recalling relation (36), we conclude that \(\big{(}\overline{\alpha},\overline{\chi}\big{)}\equiv\big{(}\alpha,\chi\big{)}.\)
In conclusion, the results proved in this section imply the existence and uniqueness of scattering states, and the asymptotic completeness statements in Theorem 1.2. As a result, we can define the scattering map \(\big{(}\chi(0),\mathcal{O},\mathfrak{h}\big{)}\mapsto\big{(}\chi(1),\alpha(1 ),\partial_{\tau}\alpha(1)\big{)}\). The estimates (10) and (11) imply that the scattering map extends as a Banach space isomorphism from \(H^{s+\frac{1}{2}}(S^{n})\times H^{s}(S^{n})\times H^{s}(S^{n})\) to \(H^{s+\frac{1}{2}}(S^{n})\times H^{s+\frac{1}{2}}(S^{n})\times H^{s-\frac{1}{2} }(S^{n})\) for any \(s\geq 1\). Thus, we complete the proof of Theorem 1.2 in the case of smooth solutions. The results proved for \(s\)-regularity solutions with \(s\geq 2\) imply that an analogous result to Theorem 1.2 holds for this class of solutions.
## 4 Scattering for Self-similar Solutions of the Wave Equation in Minkowski Space
The main result of this section is the proof of Theorem 4.1, which establishes a scattering theory for smooth self-similar solutions of the wave equation in the \(\{u<0,\ v>0\}\) region of Minkowski space \(\mathbb{R}^{n+2}.\) A similar scattering result also holds in the case of \(s\)-regularity solutions with \(s\geq 2\), in Theorem 4.2.
The proof relies on the scattering result in Theorem 1.2 proved in Section 3, together with a series of compatibility relations which we briefly explain now. We recall that the system (8)-(9) models the equation for \(\partial_{v}^{\frac{n}{2}}\phi\) along \(u=-1\) for \(v\in[0,1]\), and similarly the equation for \(\partial_{u}^{\frac{n}{2}}\phi\) along \(v=1\) for \(u\in[-1,0]\). Integrating
these functions, we obtain a self-similar solution \(\phi\) of the wave equation (12) up to the choice of \(\frac{n}{2}\) integration constants. Determining the constants such that \(\phi\) solves (12) at \(\{v=0\}\) and \(\{u=0\}\) implies a series of compatibility conditions.
In Section 4.1 we derive the compatibility relations mentioned above. In Section 4.2 we prove the _existence and uniqueness of scattering states_. In Section 4.3 we prove _asymptotic completeness_. Finally, in Section 4.4 we complete the proofs of Theorem 4.1 and Theorem 4.2, by constructing the _scattering isomorphism_.
### Compatibility Relations
We assume that \(\phi\) is a self-similar solution of the wave equation (12) in the region \(\{u<0,\ v>0\}\subset\mathbb{R}^{n+2}\). In this section we prove that under some regularity conditions, \(\phi\) satisfies the compatibility relations referred to above. Based on this, we provide a description of the asymptotic behavior of the solution near \(v=0\) and near \(u=0\).
We assume for now that \(\phi\) satisfies the smoothness condition:
\[\phi(-1,v)\in W^{\frac{n}{2},1}_{loc}\big{(}[0,\infty);C^{\infty}(S^{n})\big{)} \cap C^{\infty}((0,\infty)\times S^{n})\]
By self-similarity, we have that \(\phi\) solves (13), the wave equation along \(u=-1.\) In particular, the computations in Section 2.2 imply that \(\chi=\partial_{v}^{\frac{n}{2}-1}\phi,\ \alpha=\partial_{v}^{\frac{n}{2}}\phi\) are smooth solutions of the model problem from Section 3. By Proposition 3.10, we know that there exist \(\chi(0),\mathcal{O},h\in C^{\infty}(S)\) which determine the asymptotic data of \((\alpha,\chi)\) at \(v=0.\) We note that we changed coordinates from \(\tau\) to \(v=\tau^{2}.\)
We notice that the regularity assumptions on \(\phi\) imply that along \(u=-1\):
\[\phi(-1,v)=\sum_{k=0}^{n/2-1}\frac{1}{k!}\phi_{k}v^{k}+\int_{0}^{v}\int_{0}^{v _{n/2}}\cdots\int_{0}^{v_{2}}\alpha(v_{1})dv_{1}dv_{2}\ldots dv_{n/2} \tag{37}\]
where we use the notation \(\phi_{k}=\partial_{v}^{k}\phi(0).\) The above equation implies that for any \(1\leq a\leq\frac{n}{2}-1\) :
\[\partial_{v}^{a}\phi(-1,v)=\sum_{k=a}^{n/2-1}\frac{1}{(k-a)!}\phi_{k}v^{k-a}+ \int_{0}^{v}\int_{0}^{v_{n/2-a}}\cdots\int_{0}^{v_{2}}\alpha(v_{1})dv_{1}dv_{2 }\ldots dv_{n/2-a} \tag{38}\]
We use the above relations in order to evaluate equation (13) at \(v=0\), and we obtain the compatibility relation:
\[\bigg{(}1-\frac{n}{2}\bigg{)}\phi_{1}=\Delta\phi_{0}\]
Similarly, we evaluate equation (14) at \(v=0\), and we obtain the compatibility relation:
\[\bigg{(}2-\frac{n}{2}\bigg{)}\phi_{2}+2\phi_{1}=\Delta\phi_{1}\]
We repeat the same process in equation (15) for each \(2\leq a\leq\frac{n}{2}-2\), and we obtain the compatibility relations:
\[\bigg{(}a+1-\frac{n}{2}\bigg{)}\phi_{a+1}+q_{a}^{\prime}\phi_{a}+q_{a}^{ \prime\prime}\phi_{a-1}=\Delta\phi_{a}\]
In summary, for each \(1\leq a\leq\frac{n}{2}-1\), we have the compatibility relation:
\[\phi_{a}=\Phi_{a}\big{(}\Delta^{a}\phi_{0},\ldots,\Delta\phi_{0}\big{)}, \tag{39}\]
where each \(\Phi_{a}\) is a first order multi-linear function such that the coefficient of \(\Delta^{a}\phi_{0}\) is nonzero. In particular, we point out that \(\chi(0)\) is determined by \(\phi_{0}\) as well:
\[\chi(0)=\phi_{\frac{n}{2}-1}=\Phi_{\frac{n}{2}-1}\big{(}\Delta^{\frac{n}{2}-1} \phi_{0},\ldots,\Delta\phi_{0}\big{)} \tag{40}\]
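For illustration, the first two relations can be solved explicitly (the second one being relevant when \(n\geq 6\), so that \(2-\frac{n}{2}\neq 0\)):

\[\phi_{1}=\Big{(}1-\frac{n}{2}\Big{)}^{-1}\Delta\phi_{0},\qquad\phi_{2}=\Big{(}2-\frac{n}{2}\Big{)}^{-1}\big{(}\Delta\phi_{1}-2\phi_{1}\big{)}=\Big{(}2-\frac{n}{2}\Big{)}^{-1}\Big{(}1-\frac{n}{2}\Big{)}^{-1}\big{(}\Delta^{2}\phi_{0}-2\Delta\phi_{0}\big{)},\]

which illustrates the structure of \(\Phi_{a}\): a first order multi-linear expression in \(\Delta^{a}\phi_{0},\ldots,\Delta\phi_{0}\) with nonzero coefficient of \(\Delta^{a}\phi_{0}\).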
We consider now equation (15) for \(a=\frac{n}{2}-1\):
\[v(v+1)^{2}\partial_{v}\alpha+\big{[}n(v+1)+p_{a}v+q_{a}\big{]}v\alpha+(p_{a}^{ \prime}v+q_{a}^{\prime})\chi+q_{a}^{\prime\prime}\partial_{v}^{a-1}\phi- \Delta\chi=0\]
Using the expansion (19), we obtain the compatibility relation:
\[\mathcal{O}=\Phi_{\frac{n}{2}}\big{(}\Delta^{\frac{n}{2}}\phi_{0},\ldots, \Delta\phi_{0}\big{)}=2\Delta\chi(0)-2q_{\frac{n}{2}-1}^{\prime}\chi(0)-2q_{ \frac{n}{2}-1}^{\prime\prime}\phi_{\frac{n}{2}-2}, \tag{41}\]
where \(\Phi_{\frac{n}{2}}\) is a first order multi-linear function such that the coefficient of \(\Delta^{\frac{n}{2}}\phi_{0}\) is nonzero.
We prove the following result:
**Proposition 4.1**.: _For any \(\phi\) self-similar solution of (12) in the region \(\{u<0,\ v>0\},\) we have:_
* _If_ \(\phi(-1,v)\in W^{\frac{n}{2},1}_{loc}\big{(}[0,\infty);C^{\infty}(S^{n})\big{)} \cap C^{\infty}((0,\infty)\times S^{n})\)_, there exist_ \(\phi_{0},h\in C^{\infty}(S^{n})\) _such that for_ \(u<0\)_:_ \[\phi(u,v)=\sum_{k=0}^{n/2-1}\frac{1}{k!}\phi_{k}\bigg{(}\frac{v}{-u}\bigg{)}^{ k}+\int_{0}^{\frac{v}{-u}}\int_{0}^{v_{n/2}}\cdots\int_{0}^{v_{2}}\alpha(v_{1})dv_{1 }dv_{2}\ldots dv_{n/2},\] (42) _where_ \(\phi_{1},\ldots,\phi_{\frac{n}{2}-1}\in C^{\infty}(S^{n})\) _are defined by the compatibility relation (_39_),_ \(\chi(0),\mathcal{O}\in C^{\infty}(S^{n})\) _are defined by the compatibility relations (_40_)-(_41_), and_ \(\big{(}\alpha,\chi\big{)}\) _is the smooth solution of (_17_)-(_18_) with asymptotic initial data_ \(\big{(}\chi(0),\mathcal{O},h\big{)}.\) _We say that_ \(\big{(}\phi_{0},h\big{)}\) _determine the asymptotic data of_ \(\phi\) _at_ \(v=0.\)__
* _If_ \(\phi(u,1)\in W^{\frac{n}{2},1}_{loc}\big{(}(-\infty,0];C^{\infty}(S^{n})\big{)} \cap C^{\infty}((-\infty,0)\times S^{n})\)_, there exist_ \(\underline{\phi_{0}},\underline{h}\in C^{\infty}(S^{n})\) _such that for_ \(v>0\)_:_ \[\phi(u,v)=\sum_{k=0}^{n/2-1}\frac{1}{k!}\underline{\phi_{k}}\bigg{(}\frac{-u}{ v}\bigg{)}^{k}+\int_{0}^{\frac{-u}{v}}\int_{0}^{u_{n/2}}\cdots\int_{0}^{u_{2}} \underline{\alpha}(u_{1})du_{1}du_{2}\ldots du_{n/2},\] (43) _where_ \(\underline{\phi_{1}},\ldots,\underline{\phi_{\frac{n}{2}-1}}\in C^{\infty}(S^ {n})\) _are defined by the compatibility relation (_39_),_ \(\underline{\chi}(0),\underline{\mathcal{O}}\in C^{\infty}(S^{n})\) _are defined by the compatibility relations (_40_)-(_41_), and_ \(\big{(}\underline{\alpha},\underline{\chi}\big{)}\) _is the smooth solution of (_17_)-(_18_) with asymptotic initial data_ \(\big{(}\underline{\chi}(0),\underline{\mathcal{O}},\underline{h}\big{)}.\) _We say that_ \(\big{(}\underline{\phi_{0}},\underline{h}\big{)}\) _determine the asymptotic data of_ \(\phi\) _at_ \(u=0.\)__
Proof.: In the discussion preceding the proposition we already proved the first part of the statement. Equation (42) follows from (37) using the fact that:
\[\phi(u,v)=\phi\bigg{(}-1,\frac{v}{-u}\bigg{)}.\]
We define \(\psi(u,v)=\phi(-v,-u).\) Then \(\psi(-1,v)\in W^{\frac{n}{2},1}_{loc}\big{(}[0,\infty);C^{\infty}(S^{n})\big{)} \cap C^{\infty}((0,\infty)\times S^{n})\). By the first part, we know that there exist \(\underline{\phi}_{0}:=\psi_{0},\underline{h}\in C^{\infty}(S^{n})\) which determine the asymptotic data of \(\psi\) at \(v=0\). Let \(\underline{\phi}_{1}:=\psi_{1},\ldots,\underline{\phi}_{\frac{n}{2}-1}:=\psi_ {\frac{n}{2}-1},\underline{\chi}(0),\underline{\mathcal{O}}\in C^{\infty}(S^{n})\) satisfy the compatibility relations (39)-(41). We denote by \(\big{(}\underline{\alpha},\underline{\chi}\big{)}\) the smooth solution of (17)-(18) with asymptotic initial data \(\big{(}\underline{\chi}(0),\underline{\mathcal{O}},\underline{h}\big{)}.\) Then we have that for \(v>0:\)
\[\phi(-v,-u)=\psi(u,v)=\sum_{k=0}^{n/2-1}\frac{1}{k!}\underline{\phi_{k}}\bigg{(}\frac{v}{-u}\bigg{)}^{k}+\int_{0}^{\frac{v}{-u}}\int_{0}^{v_{n/2}}\cdots\int_{0}^{v_{2}}\underline{\alpha}(v_{1})dv_{1}dv_{2}\ldots dv_{n/2}\]
Renaming the variables \((u,v)\mapsto(-v,-u)\) in this identity yields exactly (43).
### Existence and Uniqueness of Scattering States
We prove the existence and uniqueness of solutions with given asymptotic initial data. We remark that our previous result regarding the asymptotic behavior of smooth solutions provides us a guideline for constructing the solutions. The key ingredient is the existence and uniqueness of scattering states for the model problem (17)-(18).
**Proposition 4.2** (Existence and uniqueness of scattering states).: _For any \(\phi_{0},h\in C^{\infty}(S^{n})\), there exists a unique \(\phi\) self-similar solution of (12), which satisfies the smoothness condition_
\[\phi(-1,v)\in W^{\frac{n}{2},1}_{loc}\big{(}[0,\infty);C^{\infty}(S^{n})\big{)} \cap C^{\infty}((0,\infty)\times S^{n}),\]
_such that \(\big{(}\phi_{0},h\big{)}\) determine the asymptotic data of \(\phi\) at \(v=0\) in the sense of Proposition 4.1. Moreover, for any \(s\geq 1\) we have:_
\[\bigg{(}\sum_{a=0}^{\frac{n}{2}}\big{\|}\partial_{v}^{a}\phi\big{\|}_{H^{s+1/2 }}+\big{\|}\partial_{v}^{\frac{n}{2}+1}\phi\big{\|}_{H^{s-1/2}}\bigg{)}\bigg{|} _{(u,v)=(-1,1)}\lesssim\big{\|}\mathfrak{h}\big{\|}_{H^{s}}+\big{\|}\phi_{0} \big{\|}_{H^{s+n}} \tag{44}\]
_where \(\mathfrak{h}:=h-(\log\nabla)\mathcal{O}:=h-\sum_{l=1}^{\infty}l\log 2\cdot \mathcal{O}_{l}\) and \(\mathcal{O}\) is defined in terms of \(\phi_{0}\) by (41)._
Proof.: We consider the functions \(\phi_{1},\ldots,\phi_{\frac{n}{2}-1}\in C^{\infty}(S^{n})\) defined by the compatibility relation (39), and we consider \(\chi(0),\mathcal{O}\in C^{\infty}(S^{n})\) given by the compatibility relations (40)-(41). Let \(\big{(}\alpha,\chi\big{)}\) be the smooth solution of (17)-(18) with asymptotic initial data \(\big{(}\chi(0),\mathcal{O},h\big{)}.\) We define \(\phi\) in the region \(\{u<0,\ v\geq 0\}\) according to (42). By this formula, we already know that \(\phi\) is self-similar and \(\phi(-1,v)\in W^{\frac{n}{2},1}_{loc}\big{(}[0,\infty);C^{\infty}(S^{n}) \big{)}\cap C^{\infty}((0,\infty)\times S^{n}).\) Moreover, we compute using (18):
\[\partial_{v}^{\frac{n}{2}-1}\phi(-1,v)=\phi_{\frac{n}{2}-1}+\int_{0}^{v} \alpha(v_{1})dv_{1}=\chi(0)+\int_{0}^{v}\alpha(v_{1})dv_{1}=\chi(v)\]
We check that \(\phi\) is indeed a solution of (12). Since \(\phi\) is self-similar, we know that the wave equation (12) in \(\{u<0,\ v\geq 0\}\) is equivalent to (13) on \(\{u=-1,\ v\geq 0\}.\) Using our computation in Section 2.2, we have:
\[\partial_{v}^{\frac{n}{2}}\bigg{(}v(v+1)^{2}\partial_{v}^{2}\phi+\bigg{(}1- \frac{n}{2}\bigg{)}(v+1)^{2}\partial_{v}\phi+nv(v+1)\partial_{v}\phi-\Delta \phi\bigg{)}=\]
\[=v(v+1)^{2}\partial_{v}^{2}\alpha+(v+1)^{2}\partial_{v}\alpha+(pv+q)v\partial_{v} \alpha+(p^{\prime}v+q^{\prime})\alpha+q^{\prime\prime}\chi-\Delta\alpha=0\]
Using the compatibility relations from the previous section we also have that for any \(0\leq a\leq\frac{n}{2}-1:\)
\[\left.\left[\partial_{v}^{a}\bigg{(}v(v+1)^{2}\partial_{v}^{2}\phi+\bigg{(}1-\frac{n}{2}\bigg{)}(v+1)^{2}\partial_{v}\phi+nv(v+1)\partial_{v}\phi-\Delta\phi\bigg{)}\right]\right|_{v=0}=0\]
Moreover, we recall that:
\[\partial_{v}^{\frac{n}{2}-1}\bigg{(}v(v+1)^{2}\partial_{v}^{2}\phi+\bigg{(}1- \frac{n}{2}\bigg{)}(v+1)^{2}\partial_{v}\phi+nv(v+1)\partial_{v}\phi-\Delta \phi\bigg{)}=\]
\[=v(v+1)^{2}\partial_{v}\alpha+\big{[}n(v+1)+p_{a}v+q_{a}\big{]}v\alpha+(p_{a}^ {\prime}v+q_{a}^{\prime})\chi+q_{a}^{\prime\prime}\partial_{v}^{a-1}\phi-\Delta\chi\]
The above is in \(W^{1,1}_{loc}\big{(}[0,\infty);C^{\infty}(S^{n})\big{)}\cap C^{\infty}((0, \infty)\times S^{n}),\) since \(\partial_{v}\big{(}v\partial_{v}\alpha\big{)}\in L^{1}_{loc}\big{(}[0,\infty );C^{\infty}(S^{n})\big{)}\) by (16). Thus, we can apply the fundamental theorem of calculus and obtain that \(\phi\) defined by (42) solves (12).
We notice that the definition of \(\phi\) in the beginning of the proof implies that \(\big{(}\phi_{0},h\big{)}\) determine the asymptotic data of \(\phi\) at \(v=0\) in the sense of Proposition 4.1. To prove uniqueness, we notice that for a solution with \(\phi_{0}=h=0\) we have \(\phi_{a}=0\) for all \(1\leq a\leq\frac{n}{2}-1,\) and also \(\alpha\equiv\chi\equiv 0.\) Thus, by Proposition 4.1 we conclude that the solution vanishes identically.
We notice that by Theorem 3.1 and the compatibility relations we have that:
\[\bigg{(}\big{\|}\partial_{v}^{\frac{n}{2}}\phi\big{\|}_{H^{s+1/2}}+\big{\|} \partial_{v}^{\frac{n}{2}+1}\phi\big{\|}_{H^{s-1/2}}\bigg{)}\bigg{|}_{(u,v)=(- 1,1)}\lesssim\big{\|}\mathfrak{h}\big{\|}_{H^{s}}+\big{\|}\mathcal{O}\big{\|} _{H^{s}}+\big{\|}\phi_{\frac{n}{2}-1}\big{\|}_{H^{s}}\lesssim\big{\|} \mathfrak{h}\big{\|}_{H^{s}}+\big{\|}\phi_{0}\big{\|}_{H^{s+n}}\]
By equation (38) we obtain that for any \(0\leq a\leq\frac{n}{2}-1\) and any \(l:\)
\[\big{\|}\partial_{v}^{a}\phi_{l}\big{\|}_{H^{s+1/2}}^{2}(-1,1)\lesssim\sum_{k= a}^{n/2-1}\big{\|}\big{(}\phi_{k}\big{)}_{l}\big{\|}_{H^{s+1/2}}^{2}+\bigg{(} \int_{0}^{1}\int_{0}^{v_{n/2-a}}\cdots\int_{0}^{v_{2}}\big{\|}\alpha_{l}(v_{1} )\big{\|}_{H^{s+1/2}}dv_{1}dv_{2}\ldots dv_{n/2-a}\bigg{)}^{2}\lesssim\]
where we denote \(\tau^{2}=v\) as in Section 3. Finally, this implies:
\[\big{\|}\partial_{v}^{a}\phi_{l}\big{\|}_{H^{s+1/2}}^{2}(-1,1)\lesssim\big{\|} \big{(}\phi_{0}\big{)}_{l}\big{\|}_{H^{s+n}}^{2}+\int_{0}^{1}\tau^{2}\big{\|} \alpha_{l}(\tau)\big{\|}_{H^{s+1/2}}^{2}d\tau\lesssim\big{\|}\mathfrak{h}_{l} \big{\|}_{H^{s}}^{2}+\big{\|}\big{(}\phi_{0}\big{)}_{l}\big{\|}_{H^{s+n}}^{2}\]
where the last inequality follows by the proof of Theorem 3.1. Summing over all \(l\) we obtain (44).
We prove a similar result in the case of asymptotic initial data with limited regularity:
**Proposition 4.3**.: _Let any \(s\geq 2,\) and \(\phi_{0}\in H^{s+n}(S^{n}),\ \mathfrak{h}\in H^{s}(S^{n})\). For every \(1\leq a\leq\frac{n}{2}-1,\) we consider the function \(\phi_{a}\in H^{s+n-2a}(S^{n})\) defined by the compatibility relation (39), and we consider \(\chi(0)\in H^{s+2}(S^{n}),\mathcal{O}\in H^{s}(S^{n})\) given by the compatibility relations (40)-(41). Let \(\big{(}\alpha,\chi\big{)}\) be the \(s\)-regularity solution of (17)-(18) with asymptotic initial data \(\big{(}\chi(0),\mathcal{O},\mathfrak{h}\big{)}.\) We have that \(\phi\) defined in the region \(\{u<0,\ v\geq 0\}\) by (42) is a self-similar solution of (12) such that:_
\[\phi(-1,v)\in W^{\frac{n}{2},1}_{loc}\big{(}[0,\infty);H^{s}\big{)}\cap C^{ \frac{n}{2}}\big{(}(0,\infty);H^{s+\frac{1}{2}}\big{)}\cap C^{\frac{n}{2}+1} \big{(}(0,\infty);H^{s-\frac{1}{2}}\big{)}\cap C^{\frac{n}{2}+2}\big{(}(0, \infty);H^{s-\frac{3}{2}}\big{)}\]
\[\big{\|}\phi_{a}\big{\|}_{H^{s+1/2}}\lesssim\sum_{k=a}^{n/2-1}\big{\|}\partial_{v}^{k}\phi\big{\|}_{H^{s+1/2}}\big{|}_{(-1,1)}+\big{\|}\alpha\big{\|}_{H^{s+1/2}}\big{|}_{v=1}+\big{\|}\partial_{v}\alpha\big{\|}_{H^{s-1/2}}\big{|}_{v=1} \tag{46}\]
The case \(a=\frac{n}{2}-1\) follows from Theorem 3.3. We assume now that (46) holds for all \(k\in[a+1,n/2-1]\). Using (38), we have that for any \(l:\)
\[\big{\|}(\phi_{a})_{l}\big{\|}_{H^{s+1/2}}^{2}\lesssim\] \[\lesssim\big{\|}\partial_{v}^{a}\phi_{l}\big{\|}_{H^{s+1/2}}^{2} \big{|}_{(-1,1)}+\sum_{k=a+1}^{n/2-1}\big{\|}\big{(}\phi_{k})_{l}\big{\|}_{H^ {s+1/2}}^{2}+\bigg{(}\int_{0}^{1}\int_{0}^{v_{n/2-a}}\cdots\int_{0}^{v_{2}} \big{\|}\alpha_{l}(v_{1})\big{\|}_{H^{s+1/2}}dv_{1}\ldots dv_{n/2-a}\bigg{)}^{ 2}\lesssim\]
\[\lesssim\sum_{k=a}^{n/2-1}\left\|\partial_{v}^{k}\phi_{l}\right\|_{H^{s+1/2}}^{2}\big{|}_{(-1,1)}+\left\|\alpha_{l}\right\|_{H^{s+1/2}}^{2}\big{|}_{v=1}+\left\|\partial_{v}\alpha_{l}\right\|_{H^{s-1/2}}^{2}\big{|}_{v=1},\]
where the last inequality follows by the proof of Theorem 3.3. As a result, we proved (46).
Finally, we remark that the compatibility relation (41) implies that:
\[\left\|\phi_{0}\right\|_{H^{s+n}}\lesssim\left\|\phi_{0}\right\|_{H^{s+1/2}}+ \left\|\mathcal{O}\right\|_{H^{s}}\]
Thus, Theorem 3.3 and the estimate (46) for \(a=0\) imply (45).
We prove a similar result in the case of solutions with limited regularity:
**Proposition 4.5**.: _Let \(s\geq 2\) and let \(\phi\) be a self-similar solution of (12) in the region \(\{u<0,v>0\}\) such that \(\phi(-1,v)\in C^{\frac{n}{2}}\big{(}(0,\infty);H^{s+\frac{1}{2}}(S^{n})\big{)} \cap C^{\frac{n}{2}+1}\big{(}(0,\infty);H^{s-\frac{1}{2}}(S^{n})\big{)}\cap C ^{\frac{n}{2}+2}\big{(}(0,\infty);H^{s-\frac{3}{2}}(S^{n})\big{)}\). Then we also have \(\phi(-1,v)\in W_{loc}^{\frac{n}{2},1}\big{(}[0,\infty);H^{s}(S^{n})\big{)}\), and there exist \(\phi_{0}\in H^{s+n}(S^{n}),\mathfrak{h}\in H^{s}(S^{n})\) such that \(\phi\) is the \(s\)-regularity self-similar solution of (12) with asymptotic initial data at \(v=0\) given by \(\phi_{0}\) and \(\mathfrak{h}.\) Moreover, the estimate (45) holds._
Proof.: The proof follows the same steps as the previous result, but using the limited regularity results such as Proposition 3.11 and Proposition 4.3.
### The Scattering Isomorphism
In this section we complete the proof of the scattering theory for self-similar solutions of the wave equation in the \(\{u<0,\ v>0\}\) region of Minkowski space \(\mathbb{R}^{n+2}\), by constructing the scattering isomorphism between asymptotic data at \(\{v=0\}\) and asymptotic data at \(\{u=0\}\). We prove the result in the case of smooth solutions in Theorem 4.1, and we prove a similar scattering result in the case of \(s\)-regularity solutions with \(s\geq 2\) in Theorem 4.2.
**Theorem 4.1**.: _For any \(n\geq 4\) even integer, we have a complete scattering theory for smooth self-similar solutions of the wave equation in the \(\{u<0,\ v>0\}\) region of Minkowski space \(\mathbb{R}^{n+2}\):_
1. _Existence and uniqueness of scattering states: for any_ \(\phi_{0},h\in C^{\infty}(S^{n})\)_, there exists a unique smooth self-similar solution of (_12_) with asymptotic data at_ \(v=0\) _given by_ \(\big{(}\phi_{0},h\big{)}\)_, such that:_ \[\phi(-1,v)\in W_{loc}^{\frac{n}{2},1}\big{(}[0,\infty);C^{\infty}(S^{n})\big{)} \cap C^{\infty}((0,\infty)\times S^{n}),\] _and the estimate (_44_) holds._
2. _Asymptotic completeness: any smooth self-similar solution of (_12_) satisfies:_ \[\phi(u,1)\in W^{\frac{n}{2},1}_{loc}\big{(}(-\infty,0];C^{\infty}(S^{n})\big{)} \cap C^{\infty}((-\infty,0)\times S^{n}),\] _induces asymptotic data at_ \(u=0\) _given by_ \(\underline{\phi_{0}},\underline{h}\in C^{\infty}(S^{n})\)_, and satisfies the estimate (_45_)._
3. _The scattering isomorphism: for the above solution, we define the scattering map_ \(\big{(}\phi_{0},\mathfrak{h}\big{)}\mapsto\big{(}\underline{\phi_{0}}, \underline{\mathfrak{h}}\big{)}\)_, where_ \(\mathfrak{h}:=h-(\log\nabla)\mathcal{O},\ \underline{\mathfrak{h}}:= \underline{h}-(\log\nabla)\underline{\mathcal{O}}\) _and_ \(\mathcal{O},\ \underline{\mathcal{O}}\) _are defined in terms of_ \(\phi_{0},\underline{\phi_{0}}\) _by (_41_). Then, for any_ \(s\geq 1\) _we have the estimate:_ \[\big{\|}\underline{\mathfrak{h}}\big{\|}_{H^{s}}+\big{\|}\underline{\phi_{0}} \big{\|}_{H^{s+n}}\lesssim\big{\|}\mathfrak{h}\big{\|}_{H^{s}}+\big{\|}\phi_ {0}\big{\|}_{H^{s+n}},\] (47) _and the scattering map extends as a Banach space isomorphism on_ \(H^{s+n}(S^{n})\times H^{s}(S^{n})\) _for any_ \(s\geq 1\)_._
Proof.: The existence and uniqueness of scattering states follows by Proposition 4.2. Moreover, for any \(s\geq 1\), the estimate (44) holds.
We define \(\psi\in C^{\infty}\) in the region \(\{u<0,\ v>0\}\) by \(\psi(u,v)=\phi(-v,-u)\), so \(\psi\) is also a smooth self-similar solution of (12). By Proposition 4.4, we have that \(\psi(-1,v)\in W^{\frac{n}{2},1}_{loc}\big{(}[0,\infty);C^{\infty}(S^{n})\big{)} \cap C^{\infty}((0,\infty)\times S^{n})\), and there exist \(\underline{\phi_{0}},\underline{h}\in C^{\infty}(S^{n})\) that determine the asymptotic data of \(\psi\) at \(v=0.\) Equivalently, we have that \(\phi(u,1)\in W^{\frac{n}{2},1}_{loc}\big{(}(-\infty,0];C^{\infty}(S^{n})\big{)} \cap C^{\infty}((-\infty,0)\times S^{n})\), and \(\big{(}\underline{\phi_{0}},\underline{h}\big{)}\) determine the asymptotic data of \(\phi\) at \(u=0\), as defined in Proposition 4.1. Thus, we proved asymptotic completeness.
According to Proposition 4.4, we also have the estimate:
\[\big{\|}\underline{\mathfrak{h}}\big{\|}_{H^{s}}+\big{\|}\underline{\phi_{0}} \big{\|}_{H^{s+n}}\lesssim\bigg{(}\sum_{a=0}^{\frac{n}{2}}\big{\|}\partial_{v }^{a}\psi\big{\|}_{H^{s+1/2}}+\big{\|}\partial_{v}^{\frac{n}{2}+1}\psi\big{\|} _{H^{s-1/2}}\bigg{)}\bigg{|}_{(u,v)=(-1,1)} \tag{48}\]
where \(\underline{\mathfrak{h}}:=\underline{h}-(\log\nabla)\underline{\mathcal{O}}\) and \(\underline{\mathcal{O}}\) is defined in terms of \(\underline{\phi_{0}}\) by (41). We use the self-similarity of \(\phi\) to get:
\[\bigg{(}\sum_{a=0}^{\frac{n}{2}}\big{\|}\partial_{v}^{a}\psi\big{\|}_{H^{s+1/2 }}+\big{\|}\partial_{v}^{\frac{n}{2}+1}\psi\big{\|}_{H^{s-1/2}}\bigg{)}\bigg{|} _{(u,v)=(-1,1)}\lesssim\bigg{(}\sum_{a=0}^{\frac{n}{2}}\big{\|}\partial_{u}^{a }\phi\big{\|}_{H^{s+1/2}}+\big{\|}\partial_{u}^{\frac{n}{2}+1}\phi\big{\|}_{H^ {s-1/2}}\bigg{)}\bigg{|}_{(u,v)=(-1,1)}\lesssim\]
which proves the estimate (45).
Combining inequalities (44) and (45), we obtain the estimate (47) and we conclude that the scattering map is an isomorphism.
We prove a similar result for \(s\)-regularity self-similar solutions:
**Theorem 4.2**.: _For any \(n\geq 4\) even integer and \(s\geq 2\), we have a complete scattering theory for \(s\)-regularity self-similar solutions of the wave equation in the \(\{u<0,\ v>0\}\) region of Minkowski space \(\mathbb{R}^{n+2}\):_
1. _Existence and uniqueness of scattering states: for any_ \(\phi_{0}\in H^{s+n}(S^{n}),\ \mathfrak{h}\in H^{s}(S^{n})\)_, there exists a unique_ \(s\)_-regularity self-similar solution of (_12_) with asymptotic data at_ \(v=0\) _given by_ \(\big{(}\phi_{0},\mathfrak{h}\big{)}\)_, such that:_ \[\phi(-1,v)\in W^{\frac{n}{2},1}_{loc}\big{(}[0,\infty);H^{s}\big{)}\cap C^{ \frac{n}{2}}\big{(}(0,\infty);H^{s+\frac{1}{2}}\big{)}\cap C^{\frac{n}{2}+1} \big{(}(0,\infty);H^{s-\frac{1}{2}}\big{)}\cap C^{\frac{n}{2}+2}\big{(}(0, \infty);H^{s-\frac{3}{2}}\big{)}\] _and the estimate (_44_) holds._
2. _Asymptotic completeness: any_ \(\phi\) _self-similar solution of (_12_) with:_ \[\phi(-1,v)\in C^{\frac{n}{2}}\big{(}(0,\infty);H^{s+\frac{1}{2}}\big{)}\cap C ^{\frac{n}{2}+1}\big{(}(0,\infty);H^{s-\frac{1}{2}}\big{)}\cap C^{\frac{n}{2}+ 2}\big{(}(0,\infty);H^{s-\frac{3}{2}}\big{)},\] _also satisfies:_ \[\phi(u,1)\in W^{\frac{n}{2},1}_{loc}\big{(}(-\infty,0];H^{s}\big{)}\cap C^{ \frac{n}{2}}\big{(}(-\infty,0);H^{s+\frac{1}{2}}\big{)}\cap C^{\frac{n}{2}+1} \big{(}(-\infty,0);H^{s-\frac{1}{2}}\big{)}\cap C^{\frac{n}{2}+2}\big{(}(- \infty,0);H^{s-\frac{3}{2}}\big{)},\] _induces asymptotic data at_ \(u=0\) _given by_ \(\underline{\phi_{0}}\in H^{s+n}(S^{n}),\underline{\mathfrak{h}}\in H^{s}(S^{n})\)_, and satisfies the estimate (_45_)._
3. _The scattering isomorphism: for the above solution, we define the scattering map_ \(\big{(}\phi_{0},\mathfrak{h}\big{)}\mapsto\big{(}\underline{\phi_{0}}, \underline{\mathfrak{h}}\big{)}\)_. Then, the scattering map is a Banach space isomorphism on_ \(H^{s+n}(S^{n})\times H^{s}(S^{n})\) _and the estimate (_47_) holds._
Proof.: The proof follows the same steps as the previous result, but using the limited regularity results such as Proposition 4.3 and Proposition 4.5.
## 5 Scattering for Solutions of the Wave Equation on de Sitter Space
In this section we prove Theorem 1.1, by completing the construction of the scattering isomorphism between asymptotic data at \(\mathcal{I}^{-}\) and asymptotic data at \(\mathcal{I}^{+}\) for the wave equation on de Sitter space.
We first need to define a precise notion of asymptotic data for (3). This is based on the definitions from Section 4 and the correspondence from Section 2. Let \(\tilde{\phi}\) be a solution of (3). We recall that according to Lemma 2.1, the function \(\phi:\{u<0,v>0\}\subset\mathbb{R}^{n+2}\to\mathbb{R}\) defined by \(\phi=\tilde{\phi}\circ\pi\) is the corresponding self-similar solution of (12). In particular, we have:
\[\tilde{\phi}\circ\bigg{(}\frac{1}{2}\log\bigg{)}(v)=\phi(-1,v),\ \tilde{\phi}\circ\bigg{(}-\frac{1}{2}\log\bigg{)}(-u)=\phi(u,1)\]
Moreover, asymptotic data for \(\tilde{\phi}\) at \(\mathcal{I}^{-}\) corresponds to asymptotic data for \(\phi\) at \(v=0\), and similarly asymptotic data for \(\tilde{\phi}\) at \(\mathcal{I}^{+}\) corresponds to asymptotic data for \(\phi\) at \(u=0\).
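Equivalently, in the coordinates appearing in the expansions (49)-(50) below, this correspondence reads

\[v=e^{2T}\ \text{on}\ \{u=-1\},\qquad-u=e^{-2T}\ \text{on}\ \{v=1\},\]

so that \(T\to-\infty\) corresponds to \(v\to 0\) (i.e. \(\mathcal{I}^{-}\)) and \(T\to+\infty\) corresponds to \(u\to 0\) (i.e. \(\mathcal{I}^{+}\)).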
**Definition 5.1**.: _Let \(\phi_{0},h\in C^{\infty}(S^{n})\). We say that \(\tilde{\phi}:\mathbb{R}\times S^{n}\to\mathbb{R}\) solves (3) with asymptotic data at \(\mathcal{I}^{-}\) given by \(\big{(}\phi_{0},h\big{)}\) if \(\tilde{\phi}\circ\frac{1}{2}\log\in W^{\frac{n}{2},1}_{loc}\big{(}[0,\infty);C^{\infty}(S^{n})\big{)}\cap C^{\infty}((0,\infty)\times S^{n})\) and:_
\[\tilde{\phi}(T)=\sum_{k=0}^{n/2-1}\frac{1}{k!}\phi_{k}\cdot e^{2kT}+\int_{0}^{ e^{2T}}\int_{0}^{v_{n/2}}\cdots\int_{0}^{v_{2}}\alpha(v_{1})dv_{1}dv_{2}\ldots dv_{n/2}, \tag{49}\]
_where \(\phi_{1},\ldots,\phi_{\frac{n}{2}-1}\in C^{\infty}(S^{n})\) are defined by the compatibility relation (39), \(\chi(0),\mathcal{O}\in C^{\infty}(S^{n})\) are defined by the compatibility relations (40)-(41), and \(\big{(}\alpha,\chi\big{)}\) is the smooth solution of (17)-(18) with asymptotic initial data \(\big{(}\chi(0),\mathcal{O},h\big{)}\)._
**Definition 5.2**.: _Let \(\underline{\phi_{0}},\underline{h}\in C^{\infty}(S^{n})\). We say that \(\tilde{\phi}:\mathbb{R}\times S^{n}\to\mathbb{R}\) solves (3) with asymptotic data at \(\mathcal{I}^{+}\) given by \(\big{(}\underline{\phi_{0}},\underline{h}\big{)}\) if \(\tilde{\phi}\circ\big{(}-\frac{1}{2}\log\big{)}\in W^{\frac{n}{2},1}_{loc}\big{(}[0,\infty);C^{\infty}(S^{n})\big{)}\cap C^{\infty}((0,\infty)\times S^{n})\) and:_
\[\tilde{\phi}(T)=\sum_{k=0}^{n/2-1}\frac{1}{k!}\underline{\phi_{k}}\cdot e^{-2 kT}+\int_{0}^{e^{-2T}}\int_{0}^{v_{n/2}}\cdots\int_{0}^{v_{2}}\underline{\alpha}(v_{1 })dv_{1}dv_{2}\ldots dv_{n/2}, \tag{50}\]
_where \(\underline{\phi_{1}},\ldots,\underline{\phi_{\frac{n}{2}-1}}\in C^{\infty}(S^ {n})\) are defined by the compatibility relation (39), \(\underline{\chi}(0),\underline{\mathcal{O}}\in C^{\infty}(S^{n})\) are defined by the compatibility relations (40)-(41), and \(\big{(}\underline{\alpha},\underline{\chi}\big{)}\) is the smooth solution of (17)-(18) with asymptotic initial data \(\big{(}\underline{\chi}(0),\underline{\mathcal{O}},\underline{h}\big{)}\)._
**Definition 5.3**.: _Let any \(s\geq 2\) and let \(\phi_{0}\in H^{s+n}(S^{n}),\mathfrak{h}\in H^{s}(S^{n})\). We say that \(\tilde{\phi}:\mathbb{R}\times S^{n}\to\mathbb{R}\) is the \(s\)-regularity solution of (3) with asymptotic data at \(\mathcal{I}^{-}\) given by \(\big{(}\phi_{0},\mathfrak{h}\big{)}\) if:_
\[\tilde{\phi}\circ\frac{1}{2}\log\in W^{\frac{n}{2},1}_{loc}\big{(}[0,\infty); H^{s}\big{)}\cap C^{\frac{n}{2}}\big{(}(0,\infty);H^{s+\frac{1}{2}}\big{)}\cap C ^{\frac{n}{2}+1}\big{(}(0,\infty);H^{s-\frac{1}{2}}\big{)}\cap C^{\frac{n}{2} +2}\big{(}(0,\infty);H^{s-\frac{3}{2}}\big{)}\]
_and (49) holds, such that \(\phi_{a}\in H^{s+n-2a}(S^{n})\) for every \(1\leq a\leq\frac{n}{2}-1,\,\chi(0)\in H^{s+2}(S^{n}),\mathcal{O}\in H^{s}(S^{n})\), and \(\big{(}\alpha,\chi\big{)}\) is the \(s\)-regularity solution of (17)-(18) with asymptotic initial data \(\big{(}\chi(0),\mathcal{O},\mathfrak{h}\big{)}\)._
_A similar definition holds for \(s\)-regularity solutions with asymptotic data at \(\mathcal{I}^{+}\)._
Using the correspondence between \(\tilde{\phi}\) and \(\phi\) that we explained above, the proof of Theorem 1.1 is a direct consequence of Theorem 4.1:
_Proof of Theorem 1.1._ We begin by proving the existence and uniqueness of scattering states. By Theorem 4.1, there exists a unique \(\phi\) self-similar solution of (12) with asymptotic data at \(v=0\) given by \(\big{(}\phi_{0},h\big{)}\). We define \(\tilde{\phi}:\mathbb{R}\times S^{n}\to\mathbb{R}\) to be \(\tilde{\phi}=\phi\circ\iota\), so \(\tilde{\phi}\) solves (3) by Section 2.1. Using the self-similarity of \(\phi\) we have:
\[\phi(-1,v)=\phi\bigg{(}x=1,T=\frac{1}{2}\log v\bigg{)}=\tilde{\phi}\bigg{(}T= \frac{1}{2}\log v\bigg{)}\]
As a result, all the requirements in the definition of a smooth solution of (3) with asymptotic data at \(\mathcal{I}^{-}\) are satisfied as a consequence of Theorem 4.1. Uniqueness also follows from Theorem 4.1, since to any smooth solution of (3) with asymptotic data at \(\mathcal{I}^{-}\) we associate the corresponding self-similar solution \(\phi=\tilde{\phi}\circ\pi\) of (12) with asymptotic data at \(v=0\).
Next, we prove asymptotic completeness. Note that by the self-similarity of \(\phi\) we have:
\[\phi(u,1)=\phi\bigg{(}x=1,T=-\frac{1}{2}\log(-u)\bigg{)}=\tilde{\phi}\bigg{(}T= -\frac{1}{2}\log(-u)\bigg{)}\]
Moreover, we have from Theorem 4.1 that \(\phi(u,1)\in W^{\frac{n}{2},1}_{loc}\big{(}(-\infty,0];C^{\infty}(S^{n})\big{)} \cap C^{\infty}((-\infty,0)\times S^{n})\), and there exist \(\underline{\phi_{0}},\underline{h}\in C^{\infty}(S^{n})\) which determine the asymptotic data of \(\phi\) at \(u=0.\) Thus, we obtain that \(\tilde{\phi}\) solves (3) with asymptotic data at \(\mathcal{I}^{+}\) given by \(\big{(}\underline{\phi_{0}},\underline{h}\big{)}\).
Finally, we can define the scattering map \(\big{(}\phi_{0},\mathfrak{h}\big{)}\mapsto\big{(}\underline{\phi_{0}},\underline{ \mathfrak{h}}\big{)}\). This is bounded by (47), and similarly its inverse is also bounded. Thus, the scattering map is an isomorphism on \(H^{s+n}(S^{n})\times H^{s}(S^{n})\).
We state a similar scattering result for \(s\)-regularity solutions:
**Theorem 5.1**.: _For any \(n\geq 4\) even integer and \(s\geq 2\), we have a complete scattering theory for \(s\)-regularity solutions of the wave equation on de Sitter space:_
1. _Existence and uniqueness of scattering states: for any_ \(\phi_{0}\in H^{s+n}(S^{n}),\ \mathfrak{h}\in H^{s}(S^{n})\)_, there exists a unique_ \(s\)_-regularity solution_ \(\tilde{\phi}:\mathbb{R}\times S^{n}\to\mathbb{R}\) _of (_3_) with asymptotic data at_ \(\mathcal{I}^{-}\) _given by_ \(\big{(}\phi_{0},\mathfrak{h}\big{)}\)_._
2. _Asymptotic completeness: any_ \(\tilde{\phi}:\mathbb{R}\times S^{n}\to\mathbb{R}\) _solution of (_3_) with:_ \[\tilde{\phi}\circ\frac{1}{2}\log\in C^{\frac{n}{2}}\big{(}(0,\infty);H^{s+ \frac{1}{2}}\big{)}\cap C^{\frac{n}{2}+1}\big{(}(0,\infty);H^{s-\frac{1}{2}} \big{)}\cap C^{\frac{n}{2}+2}\big{(}(0,\infty);H^{s-\frac{3}{2}}\big{)}\] _induces asymptotic data at_ \(\mathcal{I}^{-}\) _given by_ \(\phi_{0}\in H^{s+n}(S^{n}),\mathfrak{h}\in H^{s}(S^{n})\)_, and asymptotic data at_ \(\mathcal{I}^{+}\) _given by_ \(\underline{\phi_{0}}\in H^{s+n}(S^{n}),\underline{\mathfrak{h}}\in H^{s}(S^{n})\)_._
3. _The scattering isomorphism: for the above solution, we define the scattering map_ \(\big{(}\phi_{0},\mathfrak{h}\big{)}\mapsto\big{(}\underline{\phi_{0}}, \underline{\mathfrak{h}}\big{)}\)_. Then, the scattering map is a Banach space isomorphism on_ \(H^{s+n}(S^{n})\times H^{s}(S^{n})\)_._
Proof.: The proof follows the same steps as the previous result, but using the \(s\)-regularity results such as Proposition 4.3, Proposition 4.5, and Theorem 4.2.
## 6 Appendix
### Littlewood-Paley decomposition
We present some of the elementary properties of the Laplacian operator on the sphere that we use throughout the paper for frequency decomposition arguments. We point out that all these properties also hold in the case of a general compact Riemannian manifold \(\big{(}M^{n},g_{M}\big{)}\). As remarked in the introduction, this implies that our arguments also apply in the case of the generalized de Sitter space.
In the paper, we denote by \(\Delta=\Delta_{g^{n}}\) the Laplace operator on the standard sphere. This satisfies the following properties, according to [11]:
* Each eigenvalue of \(\Delta\) is real and has finite multiplicity.
* If we repeat each eigenvalue according to its multiplicity we have \(\Sigma=\{\lambda_{i}\}_{i=0}^{\infty}\). Moreover, we have: \[0=\lambda_{0}<\lambda_{1}\leq\lambda_{2}\leq\cdots\]
* There exists an orthonormal basis of \(L^{2}(S^{n})\) given by \(\{\varphi_{i}\}_{i=0}^{\infty}\), where \(\varphi_{i}\) is an eigenfunction of \(\lambda_{i}\). Moreover, we have \(\varphi_{i}\in C^{\infty}(S^{n})\).
We denote by \(\langle\cdot,\cdot\rangle\) the standard inner product on \(L^{2}(S^{n})\). Using the orthonormal basis of \(L^{2}(S^{n})\), we can write for any \(f\in L^{2}(S^{n}):\)
\[\left\|f\right\|_{L^{2}}^{2}=\sum_{i=0}^{\infty}\left|\langle f,\varphi_{i} \rangle\right|^{2}\]
Similarly, for any \(s\geq 0\) we define the Sobolev space:
\[H^{s}(S^{n})=\left\{f\in L^{2}(S^{n}):\ \left\|f\right\|_{H^{s}}^{2}:=\sum_{i=0}^ {\infty}\left(1+\lambda_{i}^{2}\right)^{s}\cdot\left|\langle f,\varphi_{i} \rangle\right|^{2}<\infty\right\}\]
We define the frequency projection operators for any \(f\in L^{2}(S^{n}):\)
\[P_{\leq a}f=\sum_{\lambda_{i}\leq a}\langle f,\varphi_{i}\rangle\varphi_{i},\ P_{\geq a}f=\sum_{\lambda_{i}\geq a} \langle f,\varphi_{i}\rangle\varphi_{i},\ P_{(a,b]}f=\sum_{a<\lambda_{i}\leq b }\langle f,\varphi_{i}\rangle\varphi_{i}\]
We also introduce the notation \(f_{l}=P_{(2^{l-1},2^{l}]}f\) for any \(l\geq 1\), and \(f_{0}=P_{\leq 1}f.\) We obtain the Littlewood-Paley decomposition for any \(f\in L^{2}(S^{n}):\)
\[f=f_{0}+\sum_{l=1}^{\infty}f_{l}\]
We point out that the sequence \(\{2^{l}\}\) can be replaced by something more general, without changing any of the arguments. Using this decomposition, we can equivalently write the norms on \(H^{s}(S^{n})\) as:
\[\left\|f\right\|_{H^{s}}^{2}\sim\sum_{l=0}^{\infty}2^{2ls}\left\|f_{l}\right\| _{L^{2}}^{2}\]
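The constants behind this equivalence come from a standard dyadic comparison, which we record here only as a brief sketch: for \(l\geq 1\) and \(\lambda_{i}\in(2^{l-1},2^{l}]\),

\[4^{(l-1)s}\leq\left(1+\lambda_{i}^{2}\right)^{s}\leq 2^{s}\,4^{ls},\]

so summing \(\left|\langle f,\varphi_{i}\rangle\right|^{2}\) over each dyadic block gives

\[4^{-s}\sum_{l=1}^{\infty}2^{2ls}\left\|f_{l}\right\|_{L^{2}}^{2}\leq\sum_{l=1}^{\infty}\sum_{\lambda_{i}\in(2^{l-1},2^{l}]}\left(1+\lambda_{i}^{2}\right)^{s}\left|\langle f,\varphi_{i}\rangle\right|^{2}\leq 2^{s}\sum_{l=1}^{\infty}2^{2ls}\left\|f_{l}\right\|_{L^{2}}^{2},\]

while the block \(l=0\) is handled by \(1\leq 1+\lambda_{i}^{2}\leq 2\) for \(\lambda_{i}\leq 1\).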
In the paper, we will construct Fourier multipliers at the level of \(f_{l}\), which is more robust than constructing Fourier multipliers at the level of each individual mode. For example, a key definition is:
\[\mathfrak{h}:=h-(\log\nabla)\mathcal{O}:=h-\sum_{l=1}^{\infty}l\log 2\cdot \mathcal{O}_{l}\]
Alternatively, one could instead define \(\mathfrak{h}^{\prime}:=h-\sum_{i=0}^{\infty}\log\langle\lambda_{i}\rangle \cdot\langle\mathcal{O},\varphi_{i}\rangle\varphi_{i}\). All our estimates would still hold in this case, since we have:
\[\mathfrak{h}-\mathfrak{h}^{\prime}=\sum_{\lambda_{i}\in[0,1]}\log\langle \lambda_{i}\rangle\cdot\langle\mathcal{O},\varphi_{i}\rangle\varphi_{i}+\sum _{l=1}^{\infty}\sum_{\lambda_{i}\in(2^{l-1},2^{l}]}\log\frac{\langle\lambda_{ i}\rangle}{2^{l}}\cdot\langle\mathcal{O},\varphi_{i}\rangle\varphi_{i}\]
which implies \(\left\|\mathfrak{h}-\mathfrak{h}^{\prime}\right\|_{H^{s}}^{2}\lesssim\left\| \mathcal{O}\right\|_{H^{s}}^{2}\).
|
2309.07356 | Dynamics of cell-type transition mediated by epigenetic modifications | Maintaining tissue homeostasis requires appropriate regulation of stem cell
differentiation. The Waddington landscape posits that gene circuits in a cell
form a potential landscape of different cell types, wherein cells follow
attractors of the probability landscape to develop into distinct cell types.
However, how adult stem cells achieve a delicate balance between self-renewal
and differentiation remains unclear. We propose that random inheritance of
epigenetic states plays a pivotal role in stem cell differentiation and present
a hybrid model of stem cell differentiation induced by epigenetic
modifications. Our comprehensive model integrates gene regulation networks,
epigenetic state inheritance, and cell regeneration, encompassing multi-scale
dynamics ranging from transcription regulation to cell population. Through
model simulations, we demonstrate that random inheritance of epigenetic states
during cell divisions can spontaneously induce cell differentiation,
dedifferentiation, and transdifferentiation. Furthermore, we investigate the
influences of interfering with epigenetic modifications and introducing
additional transcription factors on the probabilities of dedifferentiation and
transdifferentiation, revealing the underlying mechanism of cell reprogramming.
This \textit{in silico} model provides valuable insights into the intricate
mechanism governing stem cell differentiation and cell reprogramming and offers
a promising path to enhance the field of regenerative medicine. | Rongsheng Huang, Qiaojun Situ, Jinzhi Lei | 2023-09-13T23:54:56Z | http://arxiv.org/abs/2309.07356v1 | # Dynamics of cell-type transition mediated by epigenetic modifications
###### Abstract
Maintaining tissue homeostasis requires appropriate regulation of stem cell differentiation. The Waddington landscape posits that gene circuits in a cell form a potential landscape of different cell types, wherein cells follow attractors of the probability landscape to develop into distinct cell types. However, how adult stem cells achieve a delicate balance between self-renewal and differentiation remains unclear. We propose that random inheritance of epigenetic states plays a pivotal role in stem cell differentiation and present a hybrid model of stem cell differentiation induced by epigenetic modifications. Our comprehensive model integrates gene regulation networks, epigenetic state inheritance, and cell regeneration, encompassing multi-scale dynamics ranging from transcription regulation to cell population. Through model simulations, we demonstrate that random inheritance of epigenetic states during cell divisions can spontaneously induce cell differentiation, dedifferentiation, and transdifferentiation. Furthermore, we investigate the influences of interfering with epigenetic modifications and introducing additional transcription factors on the probabilities of dedifferentiation and transdifferentiation, revealing the underlying mechanism of cell reprogramming. This _in silico_ model provides valuable insights into the intricate mechanism governing stem cell differentiation and cell reprogramming and offers a promising path to enhance the field of regenerative medicine.
keywords: stem cell differentiation; epigenetic state; Waddington landscape; cell reprogramming; multi-scale model +
Footnote †: journal: Journal of Theoretical Biology
## 1 Introduction
Adult stem cells play a vital role in maintaining tissue homeostasis by replenishing dying cells and regenerating damaged tissues through controlled self-renewal and differentiation[1]. Understanding the mechanism underlying cell
fate decisions and the regulation of self-renewal and differentiation in stem cell biology is of utmost significance.
Waddington's epigenetic landscape is a fundamental concept in comprehending cell fate decisions and cell differentiation[2]. The landscape analogy visualizes a cell as a ball rolling on a mountain, with valleys representing stable cell phenotype and ridges signifying cell fate choice leading to new phenotypes. While Waddington's epigenetic landscape provides an intuitive understanding of the biological process, the mechanisms driving cellular development and reprogramming remain elusive[3]. Specifically, how gene circuits define Waddington's epigenetic landscape in a multicellular system, the driving force behind cell fate decision, and how adult stem cell systems balance self-renewal and differentiation pose intriguing questions.
Gene circuits consisting of two transcription factors, such as PU.1-GATA1 or SOX2-OCT4, have been extensively studied as they are associated with cell fate decisions[4; 5; 6; 7]. These circuits exhibit three stable equilibrium points corresponding to stem cells and two differentiated cell fates, validating the concepts of Waddington landscape. The regulatory interplay between these transcription factors determines cell fate. Despite such discoveries, the driving force triggering cell-type switches remains a topic of debate, with stochastic fluctuation, gene regulation, cell-cell communications, and artificial induction being proposed as potential mechanisms[8; 9; 10; 11; 12; 13; 14; 15; 16]. Remarkably, self-renewal and differentiation of stem cells maintain a state of dynamic equilibrium even in the face of random environmental changes. This highlights the importance of regulating the transitions between cell types during stem cell regeneration to ensure reliable tissue function.
Recent advancements in single-cell sequencing techniques have illuminated cell heterogeneity, revealing macro-heterogeneities in gene expression among cells with different phenotypes and even microscopic heterogeneity within cells of the same phenotype[17; 18; 19; 20; 21; 22; 23]. Epigenetic regulation, encompassing histone modification and DNA methylation, has emerged as a key player in cellular heterogeneity and phenotype switching[24; 25; 26; 27].
Histones are structural chromosomal proteins, comprising the linker histone H1 and the core subunits H3, H4, H2A, and H2B. Histone modifications take various forms and serve multiple functions in regulating gene expression. For instance, trimethylation of lysine 4 on the histone H3 protein subunit (H3K4me3) and H4K12 acetylation can promote gene expression, while ubiquitination, such as ub-H2A, can inhibit gene expression[28; 29]. Histone modifications are heritable during cell division. The parental modifications are recognized by a binding protein or a reading protein, which then recruits a chromatin modifier or writer protein to alter the histone modifications[30].
DNA methylation is a process in which a methyl (CH\({}_{3}\)) group is added to DNA; it can alter the function of genes and affect gene expression. The inheritance of both histone modifications and DNA methylation is semi-conservative but inevitably accompanied by stochastic changes at each generation, which may lead to cell heterogeneity and plasticity[31].
In this study, we present a hybrid model of stem cell regeneration that incorporates cell phenotype changes induced by epigenetic modifications. The model integrates a gene regulation network, epigenetic state inheritance, and cell regeneration, enabling multi-scale dynamics from transcription regulation to cell population. Through model simulations, we explore how random inheritance of epigenetic states during cell division can automatically induce cell differentiation, dedifferentiation, and transdifferentiation. Our _in silico_ model offers valuable insights into the mechanism of stem cell differentiation and cell reprogramming.
## 2 Hybrid model of stem cell differentiation
The hybrid model established in this study is illustrated in Figure 1. It consists of individual-based modeling of a multi-cellular system (Fig. 1A), the dynamics of a gene regulation network (GRN) of two genes that self-activate and repress each other (Fig. 1B), a G0 cell cycle model for cell regeneration (Fig. 1C), and stochastic inheritance of epigenetic states during cell divisions (Fig. 1D). These dynamics are coupled with each other through the cell cycle processes and cell-type transitions. The detailed formulations of the model are given below.
### Gene regulation network
To investigate how epigenetic modifications can drive cell lineage commitment, we consider a GRN consisting of two master transcription factors (TFs) \(X_{1}\) and \(X_{2}\), which self-activate and repress each other (Fig. 1B). This gene network frequently appears in many cell-fate decision-making systems and has been extensively studied[7, 32, 33, 34, 35, 36, 37]. Mathematically, the gene network can be modeled using two different approaches: additive and multiplicative regulations, where the production rates from positive and negative feedbacks are either added or multiplied, respectively[4, 9]. This study adopts the additive modeling approach, following the model described in [9]. However, it is essential to note that the model formulations and results presented in this study are also consistent with multiplicative feedback regulations.

Figure 1: Illustration of the hybrid model. (A) Individual-based model of a multi-cellular system. (B) Dynamic system of gene circuit motif. (C) G0 cell cycle model of cell regeneration. (D) Stochastic inheritance of epigenetic state during cell divisions.
Let \(x_{1}\) and \(x_{2}\) represent the expression level (protein concentration) of genes \(X_{1}\) and \(X_{2}\), respectively. The gene expression dynamics within one cell cycle are modeled with the following ordinary differential equations:
\[\begin{cases}\dfrac{\mathrm{d}x_{1}}{\mathrm{d}t}=a_{1}\left(\rho_{1}+(1-\rho _{1})\dfrac{x_{1}^{n}}{s_{1}^{n}+x_{1}^{n}}\right)+b_{1}\dfrac{s_{2}^{n}}{s_{2 }^{n}+x_{2}^{n}}-k_{1}x_{1},\\ \dfrac{\mathrm{d}x_{2}}{\mathrm{d}t}=a_{2}\left(\rho_{2}+(1-\rho_{2})\dfrac{x _{2}^{n}}{s_{2}^{n}+x_{2}^{n}}\right)+b_{2}\dfrac{s_{1}^{n}}{s_{1}^{n}+x_{1}^ {n}}-k_{2}x_{2},\end{cases} \tag{1}\]
where \(a_{1}\), \(a_{2}\), \(\rho_{1}\), \(\rho_{2}\), \(s_{1}\), \(s_{2}\), \(n\), \(b_{1}\), \(b_{2}\), \(k_{1}\), and \(k_{2}\) are non-negative parameters. The parameters \(a_{1}\) and \(a_{2}\) denote the maximum expression rates of the self-activation of the two genes, while \(\rho_{1}\) and \(\rho_{2}\) (\(0\leq\rho_{i}\leq 1\)) represent the ratios between the basal level to the maximum level of the regulation of each gene. The parameters \(b_{1}\) and \(b_{2}\) denote the basal expression rates of the two genes without repression. The parameters \(s_{1}\) and \(s_{2}\) represent the half-effective concentration of the two proteins \(X_{1}\) and \(X_{2}\), respectively, in the transcription regulation, and \(n\) represents the corresponding Hill coefficient. The degradation rates of the two proteins are represented by \(k_{1}\) and \(k_{2}\), respectively.
To address the influence of extrinsic noise perturbation, we introduced stochastic fluctuations to \(a_{1}\) and \(a_{2}\), resulting in the following expressions:
\[a_{i}(\eta_{i})=\alpha_{i}e^{\sigma_{i}\eta_{i}-\sigma_{i}^{2}/2},\quad i=1,2,\]
where \(\alpha_{1}\) and \(\alpha_{2}\) are positive parameters to represent the average expression rates. Here, \(\sigma_{1}\) and \(\sigma_{2}\) represent the intensities of the noise perturbations, and \(\eta_{1}\) and \(\eta_{2}\) are color noises defined by the Ornstein-Uhlenbeck processes:
\[\mathrm{d}\eta_{i}=-(\eta_{i}/\zeta_{i})\mathrm{d}t+\sqrt{2/\zeta_{i}}\mathrm{ d}W_{i}(t),\quad i=1,2, \tag{2}\]
where \(W_{1}(t)\) and \(W_{2}(t)\) are independent Wiener process, and \(\zeta_{1}\) and \(\zeta_{2}\) are relaxation coefficients. In the stationary state, we have \(\mathrm{E}(a_{i}(\eta_{i}))=\alpha_{i}\) and \(\mathrm{E}\left(\eta_{i}(t_{1})\eta_{i}(t_{2})\right)=e^{-|t_{1}-t_{2}|/\zeta_ {i}}\) (\(i=1,2\)), where \(\mathrm{E}(\cdot)\) represents the mathematical expectation.
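As a concrete illustration of how the noise in equation (2) can be realized numerically, the following is a minimal Euler-Maruyama sketch (in Python, not the authors' C++ code); the step size and relaxation coefficient used here are illustrative values.

```python
import numpy as np

def ou_step(eta, dt, zeta, rng):
    """One Euler-Maruyama step of d(eta) = -(eta/zeta) dt + sqrt(2/zeta) dW."""
    return eta - (eta / zeta) * dt + np.sqrt(2.0 / zeta) * rng.normal(scale=np.sqrt(dt))

rng = np.random.default_rng(1)
zeta, dt = 1.0, 0.25          # relaxation coefficient and time step (illustrative)
eta, path = 0.0, []
for _ in range(2000):
    eta = ou_step(eta, dt, zeta, rng)
    path.append(eta)
# In the stationary regime this process has zero mean, unit variance, and the
# exponential autocorrelation stated above, so that E(a_i(eta_i)) = alpha_i.
```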
To further account for the effects of epigenetic modification on gene regulation dynamics, we note that epigenetic regulations, such as histone modification and DNA methylation, can interfere with chromatin structure and alter the basal expression rates. Therefore, we introduced the assumption that the expression rates \(a_{1}\) and \(a_{2}\) depend on the epigenetic modification states
of the two genes, denoted by \(u_{1}\) and \(u_{2}\), respectively. Here, by the epigenetic state, we mean the fractions of marked nucleosomes or methylated CpG sites in a DNA segment of interest. Hence, the epigenetic state \(\mathbf{u}=(u_{1},u_{2})\) is assumed to lie in the domain \(\Omega=[0,1]\times[0,1]\). Additionally, we considered that the epigenetic states \(u_{1}\) and \(u_{2}\) undergo random changes only during cell division, which is discussed in more detail below.
Since epigenetic states primarily affect chromatin structure, they might influence the chemical potential required to initiate the transcription process. Thus, along with the extrinsic noise perturbations, we can express the expression rates \(a_{1}\) and \(a_{2}\) as follows
\[a_{i}(u_{i},\eta_{i})=\alpha_{i}e^{\lambda_{i}u_{i}}e^{\sigma_{i}\eta_{i}- \sigma_{i}^{2}/2},\quad i=1,2, \tag{3}\]
where \(\lambda_{i}(i=1,2)\) represent the impact of the epigenetic modification states on the expression levels. Specifically, \(\lambda_{i}>0\) indicates an epigenetic modification that enhances the strength of self-activation, while \(\lambda_{i}<0\) indicates a modification that reduces this strength.
The expression state \(\mathbf{x}=(x_{1},x_{2})\) depends on the epigenetic state \(\mathbf{u}=(u_{1},u_{2})\) within one cell cycle through equations (1)-(3). With properly selected parameter values, the model exhibits three stable steady states that correspond to three cell types (see Fig. 2 below). For instance, the PU.1-GATA1 motif is involved in the gene circuit that determines the cell fate choice between the erythroid/megakaryocyte and granulocyte/monocyte lineages. PU.1 and GATA1 are both expressed in the precursor cells (PC). Specifically, the granulocyte/monocyte lineage (GMC) cells exhibit high PU.1 expression and low GATA1 expression, while erythroid/megakaryocyte lineage (EMC) cells show high GATA1 expression and low PU.1 expression. For the sake of generality, we refer to the three cell types as stem cell (SC), transit-amplifying cell 1 (TA1), and transit-amplifying cell 2 (TA2). Additionally, we use transition cells (TC) for cells not classified as SC, TA1, or TA2. Mathematically, the phenotypes are defined from \(\mathbf{x}=(x_{1},x_{2})\) as (see Section 3.1 below):
\[\text{phenotype}=\begin{cases}\text{SC}:&\text{both $X_{1}$ and $X_{2}$ are medium expression},\\ \text{TA1}:&\text{$X_{1}$ is high expression and $X_{2}$ is low expression},\\ \text{TA2}:&\text{$X_{1}$ is low expression and $X_{2}$ is high expression},\\ \text{TC}:&\text{otherwise}.\end{cases} \tag{4}\]
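To make the within-cycle dynamics concrete, the following Python sketch integrates equations (1)-(3) with a simple Euler/Euler-Maruyama scheme and classifies the resulting phenotype according to (4). It is an illustrative sketch only (the authors' implementation is in C++); in particular, the thresholds `lo` and `hi` used to separate low, medium, and high expression are assumptions chosen for illustration, whereas the paper defines the phenotypes through the bifurcation analysis of Section 3.1.

```python
import numpy as np

def grn_step(x, u, eta, p, dt, rng):
    """One Euler step of equations (1)-(3); eta is updated by Euler-Maruyama."""
    x1, x2 = x
    u1, u2 = u
    # OU noise update, equation (2)
    eta = eta - (eta / p["zeta"]) * dt + np.sqrt(2.0 / p["zeta"]) * rng.normal(size=2) * np.sqrt(dt)
    # Epigenetic- and noise-dependent expression rates, equation (3)
    a1 = p["alpha"] * np.exp(p["lam"] * u1) * np.exp(p["sigma"] * eta[0] - p["sigma"]**2 / 2)
    a2 = p["alpha"] * np.exp(p["lam"] * u2) * np.exp(p["sigma"] * eta[1] - p["sigma"]**2 / 2)
    n, s, rho, b, k = p["n"], p["s"], p["rho"], p["b"], p["k"]
    dx1 = a1 * (rho + (1 - rho) * x1**n / (s**n + x1**n)) + b * s**n / (s**n + x2**n) - k * x1
    dx2 = a2 * (rho + (1 - rho) * x2**n / (s**n + x2**n)) + b * s**n / (s**n + x1**n) - k * x2
    return np.array([x1 + dx1 * dt, x2 + dx2 * dt]), eta

def phenotype(x, lo=0.4, hi=1.2):
    """Classify (x1, x2) following equation (4); lo/hi thresholds are illustrative."""
    x1, x2 = x
    if x1 > hi and x2 < lo:
        return "TA1"
    if x1 < lo and x2 > hi:
        return "TA2"
    if lo <= x1 <= hi and lo <= x2 <= hi:
        return "SC"
    return "TC"

params = dict(alpha=0.4, lam=1.9, sigma=0.05, zeta=1.0, n=4, s=0.5, rho=0.1, b=1.0, k=1.0)
rng = np.random.default_rng(2)
x, eta = np.array([1.0, 1.0]), np.zeros(2)
u = (0.6, 0.6)                       # epigenetic state, fixed within a cell cycle
for _ in range(400):                 # 400 steps of dt = 0.25 h, i.e. 100 h
    x, eta = grn_step(x, u, eta, params, 0.25, rng)
print(x, phenotype(x))
```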
We note that both the positive and negative feedback parameters \(a_{i}\) and \(b_{i}\) are subject to noise perturbations and epigenetic regulation. Here, we only consider changes in the positive feedback parameters, for simplicity of model introduction and discussion. The methods and results in this study can be extended to general cases with modifications in both feedback parameters.
Our model formulates random perturbations to the expression rate by multiplying it by a log-normally distributed random factor driven by an Ornstein-Uhlenbeck process. This type of formulation was introduced in our previous studies[38] and differs from the conventional Langevin approach. Biologically, gene expression rates have been observed to follow a log-normal distribution rather than a normal distribution[39]. Mathematically, the expression in equation (3) prevents negative expression rates, which could arise when adding a Gaussian perturbation to the rate \(a_{i}\). This formulation is biologically more appropriate for modeling extrinsic random perturbations.
### G0 cell cycle model
To incorporate the above gene regulation network dynamics with cell division, we referred to the G0 cell cycle model of heterogeneous cell regeneration [40; 41; 42]. In this model, we only consider cells with the ability to undergo cell cycling, and each cell has different rates of proliferation and cell death dependent on its cell phenotype (SC, TA1, TA2, or TC). Cells that have lost the ability to undergo cell cycling were not considered and were removed from the system. The cycling cells are classified into resting (G0) or proliferating phases. Resting phase cells can either re-enter the proliferating phase with a rate \(\beta\) or be removed from the resting phase with a rate \(\kappa\) due to cell death or senescence. Proliferating phase cells can either randomly exit with a rate \(\mu\) due to apoptosis or divide into two daughter cells after a time \(\tau\) following entry into the proliferative compartment (Fig. 1C). The kinetic rates of each cell, including the proliferation rate \(\beta\), the removal rate \(\kappa\), the apoptosis rate \(\mu\), and the proliferation duration \(\tau\), depend on the corresponding cell phenotype.
The SC, TA1, TA2, and TC cells differ in their regulation of cell proliferation. For SCs, the self-renewal ability is biologically associated with microenvironmental conditions and intracellular signaling pathways [43; 44]. Despite the complexity of signaling pathways, the phenomenological formulation of the Hill function dependence can be derived from simple assumptions regarding the interactions between signaling molecules and receptors [41; 45], and is given by
\[\beta_{\mathrm{SC}}=\beta_{0}\frac{\theta}{\theta+Q(t)},\]
where \(Q(t)\) represents the number of SC at time \(t\), \(\beta_{0}\) represents the maximum proliferation rate, and \(\theta\) is a constant for the half-effective cell number. We define the removal rate \(\kappa\), the apoptosis rate \(\mu\), and the proliferation duration \(\tau\) of stem cells as:
\[\kappa_{\mathrm{SC}}=\kappa_{0},\quad\mu_{\mathrm{SC}}=\mu_{0},\quad\tau_{ \mathrm{SC}}=\tau_{0}.\]
For TA1 or TA2 cells, we assumed that they have unconstrained proliferation rates, higher removal rates, and shorter proliferation durations than stem cells. Moreover, each TA1 or TA2 cell can undergo only a limited number of cell divisions, _i.e._, a TA1 or TA2 cell is removed from the system when it reaches the maximum number of divisions (here set to 15). Hence, we have
\[\beta_{\mathrm{TA1}}=\beta_{\mathrm{TA2}}=\beta_{0},\quad\mu_{\mathrm{TA1}}= \mu_{\mathrm{TA2}}=\mu_{0},\quad\tau_{\mathrm{TA1}}=\tau_{\mathrm{TA2}}=\tau_{ 0}/2,\]
\[\kappa_{\mathrm{TA1}}=\kappa_{\mathrm{TA2}}=\left\{\begin{array}{ll}2\kappa _{0},&\mathrm{divisions}<15,\\ +\infty,&\mathrm{divisions}\geq 15,\end{array}\right.\]
The transition state is usually very short between cell divisions; hence, cell proliferation and apoptosis are not considered for TCs. Thus, we set
\[\beta_{\mathrm{TC}}=\mu_{\mathrm{TC}}=\tau_{\mathrm{TC}}=\kappa_{\mathrm{TC}}=0.\]
Table 1 summarizes the kinetic rates for different phenotypes.
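For concreteness, a minimal Python helper implementing the kinetic rates of Table 1 might look as follows; the parameter values in the signature are those of Table 2, and the maximum division count of 15 is the one stated above. This is an illustrative sketch, not the authors' implementation.

```python
def kinetic_rates(phenotype, Q, divisions,
                  beta0=0.04, kappa0=0.01, mu0=2e-4, tau0=4.0, theta=200):
    """Return (beta, kappa, mu, tau) for one cell, following Table 1."""
    if phenotype == "SC":
        return beta0 * theta / (theta + Q), kappa0, mu0, tau0
    if phenotype in ("TA1", "TA2"):
        kappa = 2 * kappa0 if divisions < 15 else float("inf")
        return beta0, kappa, mu0, tau0 / 2
    return 0.0, 0.0, 0.0, 0.0            # TC: no proliferation or apoptosis
```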
From the above model description, given the epigenetic state \(\mathbf{u}=(u_{1},u_{2})\) of each cell, the gene expression state \(\mathbf{x}=(x_{1},x_{2})\) dynamically evolves according to the stochastic differential equations (1)-(3). Accordingly, the cell phenotype and the kinetic rates \(\beta\), \(\kappa\), \(\mu\) and \(\tau\) can change during a cell cycle. In stochastic simulations, we model each cell's random proliferation, apoptosis, and cell-type switches in a multiple-cell system. Each cell has its cell state and randomly undergoes proliferation, apoptosis, and death with a probability depending on the cell state. Finally, when a cell undergoes mitosis, the cell divides into two cells, and the epigenetic states of the two daughter cells are calculated based on the inheritance probability functions below.
The effect of cell volume change was not considered in the model. Cell growth and volume changes are important biological processes and play important roles in cell-fate decisions[46; 47]. Moreover, biological mechanisms controlling cell growth and division are complicated and unclear for mammalian cells[48; 49; 50]. In this study, to avoid the complexity and be more focused on the epigenetic regulations, we assumed that the volume is unchanged during cell cycling so that cell growth does not affect the protein concentration.
### Stochastic inheritance of epigenetic states
Histone modifications and DNA methylations in the daughter cells are reconstructed during cell division based on those in the mother cells. The epigenetic states of the daughter cells usually differ from those of their mother cells, and there is a random transition of the epigenetic states during cell division[30]. Moreover, the molecules (proteins and mRNAs) in the mother cells undergo random partition during mitosis, leading to their reallocation to the two daughter cells. Therefore, after mitosis, the epigenetic state \(\mathbf{u}\) and the expression level (protein concentration) \(\mathbf{x}\) of the model (1)-(3) for each newborn cell are reset. Here, for simplicity, we assumed symmetric division, meaning that the gene expression state \(\mathbf{x}\) of the two daughter cells is the same as that of the mother cell. However, the epigenetic state \(\mathbf{u}\) may undergo random transitions during cell division.
\begin{table}
\begin{tabular}{c|c c c c} \hline \hline cell phenotype & \(\beta\) & \(\kappa\) & \(\mu\) & \(\tau\) \\ \hline SC & \(\beta_{0}\theta/(\theta+Q)\) & \(\kappa_{0}\) & \(\mu_{0}\) & \(\tau_{0}\) \\ TA1 & \(\beta_{0}\) & \(2\kappa_{0}\) or \(+\infty\) & \(\mu_{0}\) & \(\tau_{0}/2\) \\ TA2 & \(\beta_{0}\) & \(2\kappa_{0}\) or \(+\infty\) & \(\mu_{0}\) & \(\tau_{0}/2\) \\ TC & 0 & 0 & 0 & 0 \\ \hline \hline \end{tabular}
\end{table}
Table 1: The kinetic rates for different phenotypes
To model the stochastic inheritance of epigenetic states during cell division, we assumed that the epigenetic states of the two daughter cells are independent of each other, and we introduced an inheritance function \(p(\mathbf{u},\mathbf{v})\) to represent the conditional probability that a daughter cell of state \(\mathbf{u}\) comes from a mother cell of state \(\mathbf{v}\) after cell division. In other words,
\[p(\mathbf{u},\mathbf{v})=P(\text{state of daughter cell}=\mathbf{u}\ |\ \text{state of mother cell}=\mathbf{v}).\]
The inheritance function represents cell plasticity in each cell cycle, while the detailed biochemical processes of cell division are ignored. Obviously, we have
\[\int_{\Omega}p(\mathbf{u},\mathbf{v})\mathrm{d}\mathbf{u}=1,\quad\forall\mathbf{v}\in\Omega.\]
Biologically, the exact formulation of the inheritance function \(p(\mathbf{u},\mathbf{v})\) is difficult to determine, as it depends on the complex biochemical reactions during the cell division process. Nevertheless, since we treat \(p(\mathbf{u},\mathbf{v})\) as a conditional probability density, we can focus on the epigenetic states before and after cell division and omit the intermediate complex processes. This allows us to introduce a phenomenological function through numerical simulation based on a computational model of histone modification inheritance[41, 51, 52].
The states \(u_{1}\) and \(u_{2}\) represent the epigenetic states at two DNA segments corresponding to the genes \(X_{1}\) and \(X_{2}\), respectively, and we assumed that they vary independently during cell division. Otherwise, further assumptions about their interdependence would be needed. Thus, we have
\[p(\mathbf{u},\mathbf{v})=p_{1}(u_{1},\mathbf{v})p_{2}(u_{2},\mathbf{v}), \tag{5}\]
where \(p_{i}(u_{i},\mathbf{v})\) represents the transition function of \(u_{i}\), given the state \(\mathbf{v}\) of the mother cell. According to [51, 52], the normalized nucleosome modification level of daughter cells can be described by a random beta-distribution number dependent on the mother cell. Thus, we can write the inheritance function \(p_{i}(u_{i},\mathbf{v})\) through the density function of beta-distribution as
\[p_{i}(u_{i},\mathbf{v})=\frac{u_{i}^{g_{i}(\mathbf{v})-1}(1-u_{i})^{h_{i}(\mathbf{v})-1}}{ B(g_{i}(\mathbf{v}),h_{i}(\mathbf{v}))},\quad B(g,h)=\frac{\Gamma(g)\Gamma(h)}{ \Gamma(g+h)}, \tag{6}\]
where \(\Gamma(z)\) is the gamma function, \(g_{i}(\mathbf{v})\) and \(h_{i}(\mathbf{v})\) are shape parameters that depend on the epigenetic state of the mother cell. We assumed that the conditional expectation and conditional variance of \(u_{i}\) are (given the state \(\mathbf{v}\))
\[\mathrm{E}(u_{i}|\mathbf{v})=\phi_{i}(\mathbf{v}),\quad\mathrm{Var}(u_{i}|\mathbf{v})= \frac{1}{1+\psi_{i}(\mathbf{v})}\phi_{i}(\mathbf{v})(1-\phi_{i}(\mathbf{v})),\]
where the shape parameters can be expressed as
\[g_{i}(\mathbf{v})=\psi_{i}(\mathbf{v})\phi_{i}(\mathbf{v}),\quad h_{i}(\mathbf{v})=\psi_{i}( \mathbf{v})(1-\phi_{i}(\mathbf{v})).\]
Here, we note that \(\phi_{i}(\mathbf{v})\) and \(\psi_{i}(\mathbf{v})\) always satisfy
\[0<\phi_{i}(\mathbf{v})<1,\quad\psi_{i}(\mathbf{v})>0.\]
Hence, the inheritance function \(p(\mathbf{u},\mathbf{v})\) can be determined by the predefined conditional expectation and conditional variance, _i.e._, the functions \(\phi_{i}(\mathbf{v})\) and \(\psi_{i}(\mathbf{v})\). Here, we assumed that \(\psi_{i}(\mathbf{v})\) remains constant, while \(\phi_{i}(\mathbf{v})\) increases with \(v_{i}\) and is expressed by a Hill function as:
\[\psi_{i}(\mathbf{v})=m_{0},\quad\phi_{i}(\mathbf{v})=m_{1}+m_{2}\frac{(m_{3}v_{i})^{m_ {4}}}{1+(m_{3}v_{i})^{m_{4}}},\quad\mathbf{v}=(v_{1},v_{2}),\;i=1,2, \tag{7}\]
where \(m_{j}(j=0,1,2,3,4)\) are all positive parameters.
From the above formulation, given the functions \(\psi_{i}(\mathbf{v})\) and \(\phi_{i}(\mathbf{v})\), the inheritance function \(p(\mathbf{u},\mathbf{v})\) can be expressed as a density function of beta-distribution random numbers.
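A minimal sketch of sampling a daughter cell's epigenetic state from the beta-distributed inheritance function (5)-(7) is given below; the shape functions use the \(m_{0},\dots,m_{4}\) values of Table 2, and the independence of \(u_{1}\) and \(u_{2}\) is as assumed above. It is illustrative only.

```python
import numpy as np

def phi(v, m1=0.08, m2=1.0, m3=2.0, m4=2.0):
    """Conditional expectation of the daughter state, equation (7)."""
    return m1 + m2 * (m3 * v)**m4 / (1.0 + (m3 * v)**m4)

def daughter_state(v, rng, m0=60.0):
    """Sample (u1, u2) of a daughter cell given the mother state v = (v1, v2)."""
    u = []
    for vi in v:
        mean = phi(vi)
        g, h = m0 * mean, m0 * (1.0 - mean)   # shape parameters of the beta law
        u.append(rng.beta(g, h))
    return tuple(u)

rng = np.random.default_rng(3)
print(daughter_state((0.6, 0.6), rng))
```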
### Numerical scheme
The proposed hybrid model describes the dynamics of gene regulation networks and cell-type switches of individual cells in a multicellular system. Here, we present an individual-based numerical scheme aimed at simulating the dynamics of each cell in the system.
* **Initialization**: Set the time \(t=0\), the cell number \(N\), and the state of all cells \(\Sigma=\{[C_{i}(\mathbf{x}_{i},\mathbf{u}_{i})]_{i=1}^{N}\}\). Determine the phenotype (SC, TA1, TA2, or TC) and proliferation state (resting or proliferating phase) of each cell, and compute the count \(Q\) of stem cells. All cells are initially set as stem cells in the resting phase. Accordingly, set the division number of each cell as \(\text{div}_{i}=0\), and the corresponding age at the proliferating phase (starting from the entry of proliferating phase) as \(a_{i}=0\).
* **Iteration**: **for** \(t\) from \(0\) to \(T\) with a time step of \(\Delta t\), **do**:
  * **for** each cell in \(\Sigma\), **do**:
    * Numerically solve equations (1)-(3) for a step \(\Delta t\) and update the expression state \(\mathbf{x}\). If the cell is in the resting phase, update the phenotype of the cell based on the state \(\mathbf{x}\).
    * Calculate the proliferation rate \(\beta\), the apoptosis rate \(\mu\), the removal rate \(\kappa\), and the proliferation duration \(\tau\).
    * Determine the cell fate during the time interval \((t,t+\Delta t)\):
      * When the cell is in the resting phase, remove the cell from the simulating pool with probability \(\kappa\Delta t\), or let it enter the proliferating phase with probability \(\beta\Delta t\). If the cell enters the proliferating phase, set the age \(a_{i}=0\), and if the cell is a TA cell, set \(\text{div}_{i}=\text{div}_{i}+1\).
      * When the cell is in the proliferating phase, if the age \(a_{i}<\tau\), the cell is either removed (through apoptosis) with probability \(\mu\Delta t\), or remains unchanged with \(a_{i}=a_{i}+\Delta t\). If the age \(a_{i}\geq\tau\), the cell undergoes mitosis and divides into two cells. When mitosis occurs, the states of the two daughter cells are set as follows: set the age \(a_{i}=0\); set the epigenetic state \(\mathbf{u}\) of each daughter cell according to the inheritance probability function \(p(\mathbf{u},\mathbf{v})\); set the gene expression state \(\mathbf{x}\) to that of the mother cell.
    * After mitosis, check the division number of TA cells. If the division number \(\text{div}_{i}\) of a TA cell is larger than the maximum value, remove the cell from the simulating pool.
  * **end for**
  * **Update** the system \(\Sigma\) with the cell number, the epigenetic and gene expression states of all surviving cells, and the ages of the proliferating-phase cells, and set \(t=t+\Delta t\).
* **end for**
In model simulations, we set the time step \(\Delta t=0.25\)h. The numerical scheme can be implemented in the object-oriented programming language C++\({}^{1}\).
Footnote 1: The source codes are available at [https://github.com/jinzhilei/Cell-type-transition](https://github.com/jinzhilei/Cell-type-transition).
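To make the control flow of the scheme explicit, the following is a highly condensed Python sketch of the per-cell update; it reuses the illustrative helpers sketched earlier (`grn_step`, `phenotype`, `kinetic_rates`, `daughter_state`, and the `params` dictionary) and is only our reading of the scheme, not the authors' C++ implementation.

```python
def update_cell(cell, Q, dt, rng):
    """Advance one cell by dt and return the list of surviving / newborn cells.

    `cell` is a dict with keys: x, u, eta, phase ('G0' or 'prolif'), age,
    div, phen.  grn_step, phenotype, kinetic_rates, daughter_state and
    params are the illustrative helpers sketched earlier in this section.
    """
    cell["x"], cell["eta"] = grn_step(cell["x"], cell["u"], cell["eta"], params, dt, rng)
    if cell["phase"] == "G0":
        cell["phen"] = phenotype(cell["x"])      # phenotype updated only in G0
    beta, kappa, mu, tau = kinetic_rates(cell["phen"], Q, cell["div"])
    if cell["phase"] == "G0":
        r = rng.random()
        if r < kappa * dt:
            return []                            # removed (death or senescence)
        if r < (kappa + beta) * dt:
            cell["phase"], cell["age"] = "prolif", 0.0
            if cell["phen"] in ("TA1", "TA2"):
                cell["div"] += 1
        return [cell]
    if cell["age"] < tau:                        # proliferating, not yet ready
        if rng.random() < mu * dt:
            return []                            # apoptosis
        cell["age"] += dt
        return [cell]
    # mitosis: two daughters keep x, resample u, and restart in G0
    daughters = []
    for _ in range(2):
        d = dict(cell, phase="G0", age=0.0,
                 x=cell["x"].copy(), eta=cell["eta"].copy(),
                 u=daughter_state(cell["u"], rng))
        if d["phen"] in ("TA1", "TA2") and d["div"] >= 15:
            continue                             # maximum division count reached
        daughters.append(d)
    return daughters
```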
Table 2 lists the parameter values used in the current study. The parameters illustrate the general process of phenotype switches due to epigenetic modifications along the cell regeneration process and are not specific to a particular cell type. The parameters for the gene regulatory network were selected so that the corresponding gene network has multiple stable states for different cell types. The parameters for the epigenetic regulation were chosen so that cell-type transitions can occur with a reasonable frequency. The parameters for cell regeneration (rates of proliferation, differentiation, and cell death) were taken within biologically reasonable ranges for the kinetic rates. Additionally, we only considered symmetric parameters, which means that \(b_{1}=b_{2},k_{1}=k_{2},s_{1}=s_{2},\rho_{1}=\rho_{2},\alpha_{1}=\alpha_{2}\). Nevertheless, the main results are insensitive to these parameter values.
## 3 Results
### Phenotype defined by the epigenetic state of cells
To quantitatively define the phenotypes of SC and TA cells based on the gene expression state \(\mathbf{x}=(x_{1},x_{2})\), we performed a bifurcation analysis of the ordinary differential equation (1). Here, we assumed a symmetric situation so that \(a_{1}=a_{2}=a\) and considered the bifurcation with respect to the expression rate \(a\). Figure 2A shows the dependence of the equilibrium state on the parameter \(a\). When \(a\) is large (\(a>1.65\)), there is a stable steady state with high expressions in both
\begin{table}
\begin{tabular}{l l l l} \hline \hline Parameter & Description & Value & Unit\({}^{(a)}\) \\ \hline \(b_{1},b_{2}\) & Strengths of the mutual inhibition & 1 & \(\mathrm{AU}\times\mathrm{h}^{-1}\) \\ \(k_{1},k_{2}\) & Degradation rates of the proteins & 1 & \(\mathrm{AU}\times\mathrm{h}^{-1}\) \\ \(s_{1},s_{2}\) & 50\% effective concentration of the feedback & 0.5 & \(\mathrm{AU}\) \\ & loops & & \\ \(\rho_{1},\rho_{2}\) & The ratio between the basal level to the maximum level of the regulation of each gene & 0.1 & - \\ \(\sigma_{1},\sigma_{2}\) & Intensity of the noise & 0.05 & - \\ \(\zeta_{1},\zeta_{2}\) & Relaxation coefficient of the Ornstein-Uhlenbeck process & 1 & \(\mathrm{h}^{-1}\) \\ \(\alpha_{1},\alpha_{2}\) & Expression rate of each gene & 0.4 & \(\mathrm{AU}\times\mathrm{h}^{-1}\) \\ \(\lambda_{1},\lambda_{2}\) & Impact coefficient of epigenetic modifications & 1.9 & - \\ \(n\) & Hill coefficient & 4 & - \\ \hline \(m_{0}\) & constant & 60 & - \\ \(m_{1}\) & constant & 0.08 & - \\ \(m_{2}\) & constant & 1.0 & - \\ \(m_{3}\) & constant & 2.0 & - \\ \(m_{4}\) & Hill coefficient & 2.0 & - \\ \hline \(\theta\) & Constant for the half-effective cell number & 200 & cells \\ & for SC population control & & \\ \(\beta_{0}\) & Maximum proliferation rate & 0.04 & \(\mathrm{h}^{-1}\) \\ \(\kappa_{0}\) & The rate of removing cells out of the resting phase & 0.01 & \(\mathrm{h}^{-1}\) \\ \(\mu_{0}\) & The apoptosis rate of cells in the proliferating phase & 0.0002 & \(\mathrm{h}^{-1}\) \\ & ating phase & & \\ \(\tau_{0}\) & Proliferating phase time & 4 & \(\mathrm{h}\) \\ \hline \hline \multicolumn{4}{c}{\({}^{(a)}\) AU means arbitrary unit.} \\ \end{tabular}
\end{table}
Table 2: Parameter values for model simulation
\(X_{1}\) and \(X_{2}\), which is denoted as \((+,+)\). When \(a\) decreases (\(0.75<a<1.65\)), in addition to the state \((+,+)\), there are two other stable steady states: one has high expression in \(X_{1}\) and low expression in \(X_{2}\); the other is the opposite, with low expression in \(X_{1}\) and high expression in \(X_{2}\). We denote these two states as \((++,-)\) and \((-,++)\), respectively. When \(a\) further decreases (\(a<0.75\)), the state \((+,+)\) vanishes, and the two states \((++,-)\) and \((-,++)\) persist. Biologically, when \(a\) decreases from a large (\(a>1.65\)) to a small (\(a<0.75\)) value, the cell type \((+,+)\) emerges first, followed by the coexistence of the three states, and finally a transition to either the \((++,-)\) or the \((-,++)\) cell type. This is akin to the differentiation of precursor cells to either the granulocyte/monocyte lineage or the erythroid/megakaryocyte lineage described with the PU.1-GATA1 gene circuit of hematopoietic stem cells. Thus, we consider the state \((+,+)\) as stem cells (SC), while the states \((++,-)\) and \((-,++)\) are downstream transit-amplifying cells (TA1 and TA2). Additionally, the states not included in the above steady states are termed transition cells (TC). These arguments define the cell types given by equation (4).
Considering the effects of epigenetic modifications and extrinsic noise perturbations, the expression rates \(a_{i}\) are expressed as in equation (3).

Figure 2: Bifurcation analysis of the gene expression dynamics. (A) Dependence of the steady state solution (in \(x_{1}\)) on the expression rate \(a\) (\(a_{1}=a_{2}=a\)). Solid lines represent stable steady states, and dashed lines represent unstable steady states. (B) Dependence of the cell types on the epigenetic states \(u_{1}\) and \(u_{2}\). The color shows the number of steady states. (C) Sample dynamics of cell state transition obtained by solving the stochastic differential equations (1)-(3). Here, \(u_{1}=u_{2}=0.34\) (the yellow dot in B and in D) and \(\sigma=0.05\). (D) Average duration of the SC state with the epigenetic state \((u_{1},u_{2})\).

To begin with, we omitted the extrinsic noise by setting \(\sigma_{1}=\sigma_{2}=0\). The dependence of the steady-state phenotypes on the epigenetic states \(u_{1}\) and \(u_{2}\) is shown in Figure 2B, which suggests that any of the three cell types SC, TA1, or TA2 may occur as the epigenetic states of the two genes vary. Specifically, the stem cell state can emerge when \(u_{1}>0.4\) and \(u_{2}>0.4\).
Next, we set \(\sigma=0.05\) to introduce the noise perturbation. In this case, the gene expression dynamics are described by the random differential equations (1)-(3). To explore the dynamics of cell-type transition under noise perturbations, given the epigenetic states \(u_{1}\) and \(u_{2}\), we set the initial condition following Figure 2B and numerically solved equations (1)-(3) for 100h. Simulations show that TA cells remained unchanged during the simulation. However, for an SC whose \((u_{1},u_{2})\) takes values near the edge of the SC zone (\(u_{1}=u_{2}=0.34\)), the cell switched to a TA cell following random perturbations (Fig. 2C). Figure 2D shows the average duration of the SC state for different epigenetic states \((u_{1},u_{2})\). These results indicate that the definition of the SC, TA1, and TA2 cell types is consistent with the gene regulation dynamics (1)-(3), and that the epigenetic state \((u_{1},u_{2})\) is important for the cell phenotype.
To further explore the dynamics of cell differentiation induced by noise perturbations in the absence of epigenetic state changes, we investigated the dynamics with the coexistence of SC and TA cells by fixing four sets of epigenetic states: \((u_{1},u_{2})=(0.5,0.5),(0.5,0.6),(0.6,0.5)\), and \((0.6,0.6)\). We varied the noise perturbation strength \(\sigma\) ranging from 0 to 1 and ran the model 100 times for each combination of epigenetic states and noise perturbation strength, with an initial population of 100 stem cells and a simulation duration of 1000h.
Figure 3 shows the average ratios of SC and TA cells with different noise perturbation strengths and epigenetic states at the end of the simulation in 100 model runs. The results demonstrate that TA cells emerge only when the extrinsic noise strength is larger than a certain threshold. When \((u_{1},u_{2})=(0.5,0.5),(0.5,0.6)\) or \((0.6,0.5)\), the TA cells appear only when \(\sigma>0.15\), and when \((u_{1},u_{2})=(0.6,0.6)\), TA cells appear only when \(\sigma>0.25\). These imply that strong extrinsic noise is required to drive SC differentiation in the absence of epigenetic state changes. Moreover, Figure 3B and C show that the final fractions of cell types are highly sensitive to the epigenetic state \((u_{1},u_{2})\) and the extrinsic noise strength \(\sigma\). The interplay between these two factors plays a crucial role in determining the cell fate and type in the multicellular system.
### Population dynamics of stem cell regeneration and differentiation
To investigate the population dynamics of stem cell regeneration and differentiation, we considered the full model introduced in Section 2. Initially, we ran the model simulation with 100 cells, where the epigenetic state \(\mathbf{u}\) and gene expression state \(\mathbf{x}\) of each cell were randomly distributed over the ranges \(0<u_{i}<1\) and \(0<x_{i}<3\), respectively. To examine the population dynamics with different expression rates of the two genes, we tested four sets of parameters: \((\alpha_{1},\alpha_{2})=(0.4,0.4),(0.4,1.0),(1.0,0.4)\), and \((1.0,1.0)\). For each set of parameters, we ran the model 30 times up to 2000h to reach the stationary equilibrium. In each case, the cells underwent proliferation, differentiation, and cell death, eventually reaching a homeostatic state.

Figure 3: The dynamics of cell differentiation induced by noise perturbations in the absence of epigenetic state changes. Figures show the average ratios of SC and TA cells at the end of 100 model runs with different noise perturbation strengths for different sets of epigenetic states: (A) \((u_{1},u_{2})=(0.5,0.5)\), (B) \((u_{1},u_{2})=(0.5,0.6)\), (C) \((u_{1},u_{2})=(0.6,0.5)\), (D) \((u_{1},u_{2})=(0.6,0.6)\). Other parameters remained the same as in Table 2. The statistics are obtained from 100 model runs.

Figure 4 shows the population dynamics for the four parameter sets. When \((\alpha_{1},\alpha_{2})=(0.4,0.4)\), SC, TA1, and TA2 cells coexisted, indicating the self-renewal of stem cells and their differentiation to TA1 and TA2 cells (Fig. 4A). When \((\alpha_{1},\alpha_{2})=(0.4,1.0)\), only SC and TA2 cells were present at homeostasis, indicating the blockage of differentiation from stem cells to TA1 cells (Fig. 4B). Similarly, when \((\alpha_{1},\alpha_{2})=(1.0,0.4)\), TA2 cells were absent at homeostasis (Fig. 4C), suggesting the blockage of differentiation from stem cells to TA2 cells. When \((\alpha_{1},\alpha_{2})=(1.0,1.0)\), only stem cells were present at homeostasis, and no stem cell differentiation events occurred during the simulation (Fig. 4D). These results indicate that the kinetic parameters of the underlying gene regulation dynamics are crucial for the cell phenotypes at homeostasis. In this study, we were interested in the coexistence of SC, TA1, and TA2 cells; thus, we used \((\alpha_{1},\alpha_{2})=(0.4,0.4)\) in the following discussions.
Next, we further investigated the molecular-level dynamics of individual cells. Figure 5A shows the scatter plots of \(\mathbf{x}=(x_{1},x_{2})\) of all cells at different time points (\(t=5,50,100,1000\)h). The initially randomly distributed cell states rapidly developed into three clusters corresponding to SC, TA1, and TA2 cells. Accordingly, the epigenetic state \(\mathbf{u}=(u_{1},u_{2})\) of cells rapidly converged to a steady distribution at homeostasis (Fig. 5B). Interestingly, despite the continuous distribution of \((u_{1},u_{2})\), the expression states \((x_{1},x_{2})\) exhibited discrete cell types at homeostasis.

Figure 4: Population dynamics of stem cell regeneration and differentiation. Figures show the evolution dynamics of SC and TA cell numbers with different parameter sets: (A) \((\alpha_{1},\alpha_{2})=(0.4,0.4)\), (B) \((\alpha_{1},\alpha_{2})=(0.4,1.0)\), (C) \((\alpha_{1},\alpha_{2})=(1.0,0.4)\), (D) \((\alpha_{1},\alpha_{2})=(1.0,1.0)\). Other parameters remained the same as in Table 2. The dashed lines represent the averages of the total cell numbers of 30 model runs.

The bifurcation analysis in Figure 2 suggests that continuous changes in the epigenetic state \(\mathbf{u}\) can lead to transitions of the cell types defined by the gene expression state \(\mathbf{x}\). These results indicate that continuous changes in the epigenetic state during stem cell regeneration can lead to discontinuous cell fate decisions, providing a mechanism of stem cell differentiation driven by the random inheritance of epigenetic states during cell division.
### Dynamics of transdifferentiation and dedifferentiation
The above simulations demonstrated the differentiation dynamics from stem cells to TA cells, a process commonly occurring during development and tissue homeostasis. However, in various biological processes, transdifferentiation and dedifferentiation are also observed, wherein differentiated cells may lose their phenotype and convert to another cell type or revert to an undifferentiated state[11; 53; 54; 55]. In this context, we have observed that the random inheritance of epigenetic states can induce the differentiation of stem cells. We now explore whether this mechanism can also lead to transdifferentiation and dedifferentiation. To investigate this, we recorded the cell type changes over a long simulation and counted the number of events of cell type changes in each cell division. The average results of 30 model runs are summarized as the transition probability in each cell division in Table 3.
\begin{table}
\begin{tabular}{|c|c|c|c|} \hline & SC & TA1 & TA2 \\ \hline SC & 73.9402 & 12.9889 & 13.0709 \\ \hline TA1 & 0.0013 & 99.9955 & 0.0032 \\ \hline TA2 & 0.0013 & 0.0035 & 99.9952 \\ \hline \end{tabular}
\end{table}
Table 3: Cell state transition probabilities (%) from mother to daughter cells at one cell division. The probability values were calculated from the average over 30 model runs.
Figure 5: The evolution of gene and epigenetic modifications for the population dynamics shown in Figure 4A. (A) Scatter plots of \((x_{1},x_{2})\) at different time points (\(t=1,50,100,1000\)h). (B) Scatter plots of \((u_{1},u_{2})\) at different points (\(t=5,50,100,1000\)h). Parameters are the same as in Table 2.
From the results in Table 3, stem cells predominantly underwent self-renewal with a probability of 73.9402%, or transited to either TA1 or TA2 cells with probabilities of 12.9889% or 13.0709%, respectively, during each cell cycle. However, the TA cells are highly inclined toward self-renewal, with probabilities exceeding 99.99%. Nevertheless, rare events of transdifferentiation (from TA1 to TA2 or from TA2 to TA1) and dedifferentiation (from TA1 or TA2 to SC) did occur in our simulations, albeit with extremely low frequencies. It is important to note that the transition probabilities depend on the model parameters. In the next section, we discuss how changes in model parameters may affect the frequencies of transdifferentiation and dedifferentiation. Figure 6 shows the transition trajectories of dedifferentiation and transdifferentiation from model simulations in the phase plane of gene expressions. Figure 6C-D shows that the TA states can switch to each other without an intermediate SC state, which provides evidence of direct transdifferentiation.
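As an illustration of how such a table can be assembled, the sketch below estimates a Table-3-style matrix from a hypothetical log of division events; the `events` list of (mother phenotype, daughter phenotype) pairs is an assumed data structure, not part of the published code.

```python
import numpy as np

types = ["SC", "TA1", "TA2"]

def transition_matrix(events):
    """Estimate row-normalized transition percentages from division events."""
    counts = np.zeros((len(types), len(types)))
    for mother, daughter in events:
        counts[types.index(mother), types.index(daughter)] += 1
    row_sums = counts.sum(axis=1, keepdims=True)
    return 100.0 * counts / np.where(row_sums == 0, 1, row_sums)

# e.g. transition_matrix([("SC", "SC"), ("SC", "TA1"), ("TA1", "TA1")])
```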
### The effects of extrinsic noise and epigenetic state inheritance on homeostasis and cell-type transitions
In this section, we explore the impact of extrinsic noise and epigenetic state inheritance on the system's homeostasis and the probabilities of cell-type transitions. As shown in Figure 3, cell-type transitions do not occur when the extrinsic noise is weak and epigenetic state changes are absent. Moreover, when the noise strength \(\sigma\) increases to \(\sigma=0.2\), stem cells may become extinct due to the large differentiation rate (data not shown). Here, we considered the combination of extrinsic noise and epigenetic state changes and assumed a weak extrinsic noise (\(\sigma\leq 0.1\)).

Figure 6: Transition trajectories of dedifferentiation and transdifferentiation in the phase plane. (A) The dedifferentiation trajectory TA1-SC. (B) The dedifferentiation trajectory TA2-SC. (C) The transdifferentiation trajectory TA1-TA2. (D) The transdifferentiation trajectory TA2-TA1. Color bars show the timing in the simulation. Insets show the corresponding temporal dynamics \(x_{1}(t)\) and \(x_{2}(t)\). Parameters were the same as in Table 2.
#### 3.4.1 Effect of extrinsic noise
We began by varying the noise perturbation strength \(\sigma\) from 0 to 0.1 and analyzing the homeostatic cell numbers and transition probabilities. For each value of \(\sigma\), we conducted model simulations for a time scope of 4000h so that there were enough data for statistics, and considered the results from 2000h to 4000h for further analysis. Similar to the previous simulations, the cells developed into SC or TA cells and approached homeostasis. The numbers of different cell types at homeostasis were insensitive to changes in the noise strength (Fig. 7A). The ratios of different cell types were also largely unaffected by the noise strength (Fig. 7B). These findings indicate that changes in extrinsic noise have minimal impact on the system's state at homeostasis.
Figure 7: The impact of extrinsic noise on homeostasis and cell state transition. (A) Numbers of different cell types at homeostasis. (B) Ratios of SC and TA cells at homeostasis. (C) The probability of self-renewal in each cell division for different cell types. (D) The probability of differentiation (SC-TA1, SC-TA2) in each cell division. (E) The probability of dedifferentiation (TA1-SC, TA2-SC) in each cell division. (F) The probabilities of transdifferentiation (TA1-TA2, TA2-TA1) in each cell division. All values are calculated from the time scope of 2000h to 4000h in the model simulation. Other parameters were the same as in Table 2. The statistics are obtained from 30 model runs.
Next, we investigated the transition probabilities between different cell types under varying noise strengths (Figure 7C-F). The probabilities of self-renewal (SC-SC, TA1-TA1, TA2-TA2) were found to be unaffected by changes in the noise strength (Fig. 7C). The probabilities of differentiation (SC-TA1 and SC-TA2) showed increases with the noise strength \(\sigma\) (Fig. 7D); accordingly, the probabilities of dedifferentiation (TA1-SC and TA2-SC) increased with \(\sigma\) (Fig. 7E). Likewise, the probabilities of transdifferentiation (TA1-TA2 and TA2-TA1) showed slight increases with \(\sigma\) (Fig. 7F). These results suggest that alterations in the extrinsic noise perturbation have minor effects on cell-type transition rates. Therefore, weak extrinsic noise might not be the primary driving force behind cell-type transitions.
#### 3.4.2 Effect of epigenetic state inheritance
Next, we investigated how random changes in epigenetic modifications may affect the system's homeostasis and cell-type transitions. To this end, we considered the functions \(\phi_{i}(\mathbf{v})\) in (7), which define the inheritance function of epigenetic states. We varied the parameter \(m_{2}\) over the interval \([0.8,0.9]\) and investigated how the cell numbers and transition probabilities may change with \(m_{2}\). Here, we fixed \(\sigma=0.05\), while other parameters remained the same as in Table 2. The simulations revealed that as \(m_{2}\) increased, the numbers of TA cells decreased, while the SC number increased, leading to a decrease in the total cell number (Fig. 8A). Consequently, the ratio of SC increased while the ratio of TA1 and TA2 decreased (Fig. 8B). These results demonstrate that alterations in the inheritance functions of epigenetic states significantly impact the system's homeostasis.
Further examining the probabilities of cell differentiation, transdifferentiation, and dedifferentiation, we found that the probabilities of self-renewal for TA cells were mostly unaffected by changes in \(m_{2}\), while the self-renewal probability of SC increased with \(m_{2}\) (Fig. 8C). Consequently, the differentiation probabilities (SC-TA1 and SC-TA2) markedly decreased with increasing \(m_{2}\) (Fig. 8D), and the dedifferentiation probabilities (TA1-SC and TA2-SC) increased with \(m_{2}\) (Fig. 8E). The transdifferentiation probabilities (TA1-TA2 and TA2-TA1) showed an initial increase followed by a decrease as \(m_{2}\) was raised. Biologically, the function \(\phi_{i}(\mathbf{v})\) represents the expectation of the epigenetic state of daughter cells given the state \(\mathbf{v}\) of the mother cells, and \(m_{2}\) can be used to quantify the activities of enzymes that regulate epigenetic modification. Increasing the parameter \(m_{2}\) upregulates the epigenetic states \(u_{i}\) of daughter cells. These results suggest that manipulating epigenetic modifications can be a viable strategy to induce transdifferentiation and dedifferentiation.
The aforementioned results were obtained assuming symmetric parameters. In order to explore the impact of asymmetric parameters, we manipulated the parameters of gene \(X_{1}\) and conducted model simulations. The transition probabilities in each cell division under distinct parameters are shown in Figure 9. The results reveal that asymmetric parameters do not lead to significant deviations from the principal findings obtained under the assumption of symmetric parameters. It is worth noting that increases in \(k_{1}\) and \(s_{1}\) correspond to an increase in the differentiation rate SC-TA2 (Fig. 9B) and a decrease in the dedifferentiation rate TA2-SC (Fig. 9D). Conversely, increases in \(\alpha_{1}\) and \(\rho_{1}\) lead to a reduction in the differentiation rate SC-TA2 (Fig. 9B) and an increase in the dedifferentiation rate TA2-SC (Fig. 9D).

Figure 8: The impact of epigenetic state inheritance function on homeostasis and cell state transition. (A) Numbers of different cell types at homeostasis. (B) Ratios of SC and TA cells at homeostasis. (C) The probability of self-renewal in each cell division of different cell types. (D) The probability of differentiation (SC-TA1, SC-TA2) in each cell division. (E) The probability of dedifferentiation (TA1-SC, TA2-SC) in each cell division. (F) The probabilities of transdifferentiation (TA1-TA2, TA2-TA1) in each cell division. All values are calculated from the time scope of 2000h to 4000h in the model simulation. Other parameters were the same as in Table 2. The statistics are obtained from 30 model runs.
### Dynamics of cell reprogramming through the induction of transcription factors
The previous simulations explored the mechanisms of cell transdifferentiation and dedifferentiation induced by extrinsic noise and epigenetic modifications. In this section, we investigate the dynamics of cell reprogramming through the induction of transcription factors.
We considered introducing an external transcription factor that enhances the self-activation of the \(X_{1}\) gene. Consequently, the equations governing the gene regulation network dynamics are modified as follows:
\[\begin{cases}\dfrac{\mathrm{d}x_{1}}{\mathrm{d}t}=(a_{1}+a_{TF}) \left(\rho_{1}+(1-\rho_{1})\dfrac{x_{1}^{n}}{s_{1}^{n}+x_{1}^{n}}\right)+b_{1} \dfrac{s_{2}^{n}}{s_{2}^{n}+x_{2}^{n}}-k_{1}x_{1},\\ \dfrac{\mathrm{d}x_{2}}{\mathrm{d}t}=a_{2}\left(\rho_{2}+(1-\rho_{2})\dfrac{x _{2}^{n}}{s_{2}^{n}+x_{2}^{n}}\right)+b_{2}\dfrac{s_{1}^{n}}{s_{1}^{n}+x_{1}^ {n}}-k_{2}x_{2}.\end{cases} \tag{8}\]
Figure 9: Transition probability with asymmetry parameters. (A) The transition probability of differentiation SC-TA1 in each cell division. (B) The transition probability of SC-TA2 in each cell division. (C) The probability of dedifferentiation rate TA1-SC in each cell division. (D) The probability of dedifferentiation rate TA2-SC in each cell division. Other parameters were the same as in Table 2. The statistics were obtained from 30 model runs.
Here, \(a_{TF}\) represents the augmentation factor for the activation rate caused by the introduced transcription factor.
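The only change relative to equation (1) is the additive term \(a_{TF}\) in the self-activation of \(X_{1}\); a minimal sketch of the modified drift (reusing the `params` dictionary of the earlier sketches, with an illustrative value of \(a_{TF}\)) is:

```python
def drift_reprogram(x, a1, a2, p, a_TF=0.2):
    """Right-hand side of equation (8): a_TF augments only the self-activation of X1."""
    x1, x2 = x
    n, s, rho, b, k = p["n"], p["s"], p["rho"], p["b"], p["k"]
    dx1 = ((a1 + a_TF) * (rho + (1 - rho) * x1**n / (s**n + x1**n))
           + b * s**n / (s**n + x2**n) - k * x1)
    dx2 = (a2 * (rho + (1 - rho) * x2**n / (s**n + x2**n))
           + b * s**n / (s**n + x1**n) - k * x2)
    return dx1, dx2
```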
To simulate the dynamics of cell reprogramming, we fixed \(\sigma=0.05\), \(m_{2}=0.8\), and varied \(a_{TF}\) from 0 to 0.5, keeping other parameters the same as in Table 2. In model simulations, the first event is presumably a transition from TA cells to stem cells, and then the system evolves towards the steady-state distribution of cell types. We defined successful stem cell induction as the occurrence of 100 stem cells. First, we initialized the system with 500 differentiated cells (either 100% TA1 cells or 100% TA2 cells) and conducted model simulations for 10000h to examine the potential to induce stem cells. When the system was initialized with TA1 cells, the probability of successful induction at the end of the simulations showed a slight increase and then remained relatively low and stable with varying \(a_{TF}\) (Fig. 10A). Conversely, when initialized with TA2 cells, the probability of successful induction at the end of the simulations increased with \(a_{TF}\) and approached 100% when \(a_{TF}\) was large enough (Fig. 10A). Since the factor \(a_{TF}\) only affects the transcription of \(X_{1}\), the induction of stem cells from TA1 cells is mainly due to the basal dedifferentiation shown in Table 3. Thus, the extra induction probability from TA2 cells comes from the effect of \(a_{TF}\). Moreover, we calculated the timing of successful induction in each situation. For the system initialized with TA1 cells, the time required for successful stem cell induction is about 600h across the range of \(a_{TF}\) (Fig. 10B). However, when initialized with TA2 cells, the time of stem cell induction showed an obvious decrease with increasing \(a_{TF}\), dropping to about 400h when \(a_{TF}>0.4\) (Fig. 10B). We further show the population dynamics after the induction of \(a_{TF}\) (Figure 10C). The dynamics reveal that for cells initialized with 100% TA2 cells, stem cells begin to appear after approximately 300h due to the dedifferentiation of TA2 cells. Subsequently, stem cells differentiate into TA1 and TA2 cells, resulting in a cell population approaching homeostasis with all three cell types.
Furthermore, we examined the homeostasis cell numbers and the transition probabilities. We initialized 100 stem cells for statistical simplicity and varied \(a_{TF}\) from 0 to 0.2. We then conducted model simulations over a time span of 4000h and analyzed the results from 2000h to 4000h. Consequently, the number of TA1 cells increased with higher values of \(a_{TF}\), while the number of TA2 cells decreased (Fig. 11A). The ratios of TA1 cells and stem cells increased with \(a_{TF}\), whereas the ratio of TA2 cells decreased with increasing \(a_{TF}\) (Fig. 11B). Subsequently, we analyzed the transition probabilities for different values of \(a_{TF}\). The self-renewal probability at each cell division remained unaffected by variations in \(a_{TF}\), while changes in \(a_{TF}\) significantly influenced the probabilities of differentiation, transdifferentiation, and dedifferentiation. Specifically, increasing \(a_{TF}\) reduced the differentiation from SC to TA2 cells, promoted the dedifferentiation from TA2 cells to SC, and altered the transdifferentiation between TA1 and TA2 cells, resulting in an increase in the TA2-TA1 transition probability and a decrease in the TA1-TA2 transition probability (Fig. 11C-F).
Figure 10: Dynamics of cell reprogramming through the induction of transcription factors. (A) The probability of successful stem cell induction. (B) The time required for successful stem cell induction. (C) Time evolution of cell ratios starting from 100% TA2 cells. Other parameters were the same as in Table 2. The statistics were obtained from 600 model runs.
Figure 11: The effect of transcription factor introduction on cell number and cell state transition. (A) Numbers of different types of cells at homeostasis. (B) The cell ratios for different cell types over the period from 2000h to 4000h. (C)-(F)The cell transition probabilities. Other parameters were the same as in Table 2. The statistics were obtained from 30 model runs.
### Waddington landscape
To gain further insights into the influence of extrinsic noise and epigenetic state inheritance on cell fate decisions during tissue growth, we delve into the temporal evolution of Waddington landscape based on our simulation results. The numerical scheme described above yields the cell count \(N(t,\mathbf{x})\) at time \(t\) with state \(\mathbf{x}=(x_{1},x_{2})\). The total cell number is given by \(N(t)=\int N(t,\mathbf{x})\mathrm{d}\mathbf{x}\). Consequently, we defined \(f(t,\mathbf{x})=N(t,\mathbf{x})/N(t)\) as the frequency of cells with state \(\mathbf{x}\). This enables us to define the evolution of Waddington's epigenetic landscape as follows:
\[U(t,\mathbf{x})=-\log(1+f(t,\mathbf{x})), \tag{9}\]
where the introduction of the number 1 prevents issues arising from zero frequency.
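A minimal sketch of how Eq. (9) can be evaluated from simulated cell states at a single time point is given below; the binning choices and the synthetic population are our own assumptions, not part of the model. For figures like Fig. 12, the same quantity can be projected onto \(z=x_{1}-x_{2}\) before plotting.

```python
import numpy as np

def waddington_landscape(states, bins=60, value_range=((0, 3), (0, 3))):
    """U(t, x) = -log(1 + f(t, x)), with f(t, x) the fraction of cells near state x (Eq. 9).
    `states` is an (N, 2) array holding one (x1, x2) pair per cell at time t."""
    counts, _, _ = np.histogram2d(states[:, 0], states[:, 1],
                                  bins=bins, range=value_range)
    freq = counts / counts.sum()      # f(t, x) = N(t, x) / N(t)
    return -np.log(1.0 + freq)        # valleys mark frequently occupied states

# Toy usage: a synthetic population concentrated around two expression states.
rng = np.random.default_rng(1)
pop = np.vstack([rng.normal([2.0, 0.5], 0.15, size=(300, 2)),
                 rng.normal([0.5, 2.0], 0.15, size=(200, 2))])
U = waddington_landscape(pop)
print(U.shape, U.min())
```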
Figure 12 illustrates landscapes that vary with extrinsic noise strength (\(\sigma\)), epigenetic regulation parameter (\(m_{2}\)), and the introduction of an extra factor (\(a_{TF}\)) in cell reprogramming. As demonstrated in Figure 12A, the landscape remains largely unaffected by changes in the extrinsic noise strength. This observation aligns with the above discussion, which revealed that alterations in the extrinsic noise do not impact the homeostatic state of the system (Fig. 7). However, the landscapes exhibit notable changes as the parameter \(m_{2}\) increases from 0.8 to 0.9 (Fig. 12B). Elevated values of \(m_{2}\) lead to a higher proportion of stem cells in the stationary state, indicating the induction of dedifferentiation through alterations in epigenetic regulation. Moreover, the introduction of the extra factor \(a_{TF}\) in (8) disrupts the balance between TA1 and TA2 cells, resulting in a greater fraction of TA1 cells (Fig. 12C). These findings underscore the substantial role of varying epigenetic regulation and introducing an extra transcription factor in reshaping the Waddington landscape.
## 4 Discussion
The intricate regulation of stem cell differentiation and tissue development is a cornerstone challenge in the realms of developmental biology and regenerative medicine. Over time, various mechanisms have been postulated to induce crucial biological processes like cell differentiation, dedifferentiation, and transdifferentiation[56; 57; 58; 59]. These mechanisms include stochastic fluctuations, modification to gene regulation networks, the induction of external transcription factors, and the manipulation of small molecules. Yet, while these mechanisms offer valuable insights, they fall short of capturing the inherent progression of cell lineage and the complexity of cellular heterogeneity.
Recent research challenges the prevailing model that assumes changing gene expression within a bistable region is sufficient to alter cell fate[60]. The study by Hoppe et al. demonstrated that random fluctuations of PU.1 and GATA1 expression are insufficient to initiate the cell-fate decision between megakaryocyte/erythroid (MegE) and granulocyte/monocyte (GM) lineages. Long-term absolute protein levels in single differentiating HSCs and their progeny were
Figure 12: Waddington landscape. (A) Temporal evolution of the Waddington landscape with varying extrinsic noises. (B) Temporal evolution of the Waddington landscape with varying epigenetic regulation parameter \(m_{2}\). (C) Temporal evolution of the Waddington landscape with varying extra factor \(a_{TF}\) in cell reprogramming. The Waddington landscapes are visualized as heatmaps for the function \(U(t,z)\) with \(z=x_{1}-x_{2}\). Other parameters were the same as in Table 2.
quantified, which showed that multiple generations are required to induce changes in PU.1 and GATA1 protein levels, and these transcription factors are only executing and reinforcing lineage choice once made[60]. These observations pointed to the significant roles of cell division in guiding cell fate decisions. Additionally, the profound impact of epigenetic regulation on cellular heterogeneity and phenotype transitions has garnered considerable attention[27, 61, 62, 63, 64, 65].
In our pursuit to quantitatively dissect the dynamics of cell-type transitions driven by epigenetic modifications, we developed a comprehensive hybrid model for stem cell regeneration. This model integrates gene regulation networks, epigenetic state inheritance, and cell regeneration dynamics. The essence of our model lies in its ability to simulate the biological process of cell population growth and the establishment of homeostasis, characterized by a delicate equilibrium among diverse cell types. Our simulations unravel a mechanism where stochastic inheritance of epigenetic modifications leads to spontaneous switches in cellular phenotypes during cell cycling. We argue that the natural biological process of the cell cycle and stochastic inheritance during cell division can serve as the driving force of cell-type transition. The interplay between random epigenetic transition and the intrinsic dynamics of gene networks emerges as a linchpin in steering cell-type switches and consolidating cellular stability. Remarkably, our findings underscore how manipulations in epigenetic regulation can yield influence over the epigenetic landscape, thereby augmenting the potential for cell dedifferentiation and transdifferentiation.
The Waddington landscape serves as a core concept in understanding the mechanisms of stem cell differentiation and cellular plasticity[66, 67]. Traditionally, the stochastic dynamics of gene regulatory networks are described through Langevin equations, or their associated formulations such as the master equation or the Fokker-Planck equation. At the heart of this landscape lies the Waddington potential, \(U=-\ln(P_{ss})\), intricately linked to the stationary distribution \(P_{ss}\) of the master equation or Fokker-Planck equation[37, 68, 69]. This potential is pivotal in revealing the phenotypic outcomes tethered to the underlying regulatory network. However, without considering the cell cycle process, it falls short of encapsulating the dynamic processes of cell-type transitions during development and tissue growth.
In this context, our study charts a pioneering path by introducing a computational model framework that fuses the mechanics of cell regeneration with the ebbs and flows of epigenetic modifications during cell division. This model allows us to quantitatively map the relationship between alterations in the epigenetic landscape and corresponding phenotypic transitions. Our model simulations breathe life into the concept of temporal potential in a biological context, providing an innovative lens to explore the evolving dynamics of the Waddington landscape throughout tissue growth. With this endeavor, we not only unravel novel insights into the intricate mechanisms governing stem cell differentiation and cell reprogramming but also offer a promising path to enhance the field of regenerative medicine[53, 70].
In conclusion, our research sets forth a novel paradigm by integrating the interplay of gene networks, epigenetic state inheritance, and cell regeneration
dynamics. Through detailed computational simulations and biological insights, we shed light on the underlying forces steering cell fate decisions. The holistic understanding gained from our model has the potential to catalyze transformative breakthroughs in regenerative medicine, propelling us closer to unlocking the full regenerative potential of cells and tissues.
## Acknowledgements
This work is supported by the National Natural Science Foundation of China (No.11831015) and the Science and Technology Project (No.JAT200246) funded by the Education Department of Fujian Province, China.
|
2301.13501 | Auxiliary Learning as an Asymmetric Bargaining Game | Auxiliary learning is an effective method for enhancing the generalization
capabilities of trained models, particularly when dealing with small datasets.
However, this approach may present several difficulties: (i) optimizing
multiple objectives can be more challenging, and (ii) how to balance the
auxiliary tasks to best assist the main task is unclear. In this work, we
propose a novel approach, named AuxiNash, for balancing tasks in auxiliary
learning by formalizing the problem as generalized bargaining game with
asymmetric task bargaining power. Furthermore, we describe an efficient
procedure for learning the bargaining power of tasks based on their
contribution to the performance of the main task and derive theoretical
guarantees for its convergence. Finally, we evaluate AuxiNash on multiple
multi-task benchmarks and find that it consistently outperforms competing
methods. | Aviv Shamsian, Aviv Navon, Neta Glazer, Kenji Kawaguchi, Gal Chechik, Ethan Fetaya | 2023-01-31T09:41:39Z | http://arxiv.org/abs/2301.13501v2 | # Auxiliary Learning as an Asymmetric Bargaining Game
###### Abstract
Auxiliary learning is an effective method for enhancing the generalization capabilities of trained models, particularly when dealing with small datasets. However, this approach may present several difficulties: (i) optimizing multiple objectives can be more challenging, and (ii) how to balance the auxiliary tasks to best assist the main task is unclear. In this work, we propose a novel approach, named _AuxiNash_, for balancing tasks in auxiliary learning by formalizing the problem as generalized bargaining game with asymmetric task bargaining power. Furthermore, we describe an efficient procedure for learning the bargaining power of tasks based on their contribution to the performance of the main task and derive theoretical guarantees for its convergence. Finally, we evaluate AuxiNash on multiple multi-task benchmarks and find that it consistently outperforms competing methods.
Machine Learning, Bargaining Game, AuxiNash, Fetaya
## 1 Introduction
When training deep neural networks with limited labeled data, generalization can be improved by adding auxiliary tasks. In this approach, called _Auxiliary learning_ (AL), the auxiliary tasks are trained jointly with the main task, and their labels can provide a signal that is useful for the main task. AL is beneficial because auxiliary annotations are often easier to obtain than annotations for the main task. This is the case when the auxiliary tasks use self-supervision (Oliver et al., 2018; Hwang et al., 2020; Achituve et al., 2021), or their annotation process is faster. For example, learning semantic segmentation of an image may require careful and costly annotation, but can be improved if learned jointly with a depth prediction task, whose annotations can be obtained at scale (Standley et al., 2020).
Auxiliary learning has a large potential to improve learning in the low data regime, but it gives rise to two main challenges: Defining the joint optimization problem and performing the optimization efficiently. (1) First, given a main task at hand, it is not clear _which_ auxiliary tasks would benefit the main task and how tasks should be _combined_ into a joint optimization objective. For example, Standley et al. (2020) showed that depth estimation is a useful auxiliary task for semantic segmentation but not the opposite. In fact, adding semantic segmentation as an auxiliary task harmed depth estimation performance. This suggests that even close-related tasks may interfere with each other. (2) Second, training with auxiliary tasks involves optimizing multiple objectives simultaneously; While training with multiple tasks can potentially improve performance via better generalization, it often underperforms compared to single-task models.
Previous auxiliary learning research focused mainly on the first challenge: namely, weighting and combining auxiliary tasks (Lin et al., 2019). The second challenge, optimizing the main task in the presence of auxiliary tasks, has been less explored. Luckily, this problem can be viewed as a case of optimization in _Multi-Task Learning_ (MTL). In MTL, there is extensive research on controlling optimization such that every task would benefit from the others. Specifically, several studies proposed algorithms to aggregate gradients from multiple tasks into a coherent update direction (Yu et al., 2020; Liu et al., 2021; Navon et al., 2022). We see large potential in bringing MTL optimization ideas into auxiliary learning to address the optimization challenge.
Here we propose a novel approach named AuxiNash that takes inspiration from recent advances in MTL optimization as a cooperative bargaining game (Nash-MTL, Navon et al., 2022). The idea is to view a gradient update as a shared resource, view each task as a player in a game, and have players compete over making the joint gradient similar to their own task gradient. In Nash-MTL, tasks play a symmetric role, since no task is particularly favorable. This leads to a bargaining solution that is proportionally fair across tasks. In contrast, task symmetry no longer holds in auxiliary learning, where there is a clear distinction between the primary task and the auxiliary ones. As such, we propose to view auxiliary learning as an _asymmetric bargaining_
game. Specifically, we consider gradient aggregation as a cooperative bargaining game where each player represents a task with varying bargaining power. We formulate the gradient update using the asymmetric Nash bargaining solution, which takes into account varying task preferences. By generalizing Nash-MTL to asymmetric games with AuxiNash, we can efficiently direct the optimization solution towards various areas of the Pareto front.
Determining the task preferences that result in optimal performance is a challenging problem for several reasons. First, the relationship between tasks can change during the optimization process, making it difficult to know in advance what preferences to use. This means that the process of preference tuning needs to be automated during training. Second, using a grid search to find the optimal preferences can be computationally expensive, and the complexity of the search increases exponentially with the number of tasks. To overcome these limitations, we propose a method for efficiently differentiating through the optimization process and using this information to automatically optimize task preferences during training. This can improve the performance of the primary task, and make it more efficient to find the optimal preferences.
We theoretically analyze the convergence of AuxiNash and show that even if the preference changes during the optimization process we are still guaranteed to converge to a Pareto stationary point. Finally, we show empirically on several benchmarks that AuxiNash achieves superior results to previous auxiliary learning and multi-task learning approaches.
Contributions:This paper makes the following contributions: (1) We introduce AuxiNash - a novel approach for auxiliary learning based on principles from asymmetric bargaining games. (2) We describe an efficient method to dynamically learn task preferences during training. (3) We theoretically show that AuxiNash is guaranteed to converge to a Pareto stationary point. (4) We conduct extensive experiments to demonstrate the superiority of AuxiNash against multiple baselines from MTL and auxiliary learning.
## 2 Related Work
Auxiliary Learning.Learning with limited amount of training data is challenging since deep learning models tend to overfit to the training data and as a result can generalize poorly (Ying, 2019). One approach to overcome this limitation is using auxiliary learning (Chen et al., 2022; Kung et al., 2021). Auxiliary learning aims to improve the model performance on primary task by utilizing the information of related auxiliary tasks (Dery et al., 2022; Chen et al.). Most auxiliary learning approaches use a linear combination of the main and auxiliary losses to form a unified loss (Zhai et al., 2019; Wen et al., 2020). Fine-tuning the weight of each task loss may be challenging as the search space of the grid search grows exponentially with the number of tasks. To find the beneficial auxiliary tasks, recent studies utilized the auxiliary task gradients and measure their similarity with the main task gradients (Lin et al., 2019; Du et al., 2018; Shi et al., 2020). Navon et al. (2021) proposed to learn a non-linear network that combines all losses into a single coherent objective function.
Multi-task Learning. In multi-task learning (MTL) we aim to solve multiple tasks by sharing information between them (Caruana, 1997; Ruder, 2017), usually through a joint hidden representation (Zhang et al., 2014; Dai et al., 2016; Pinto and Gupta, 2017; Liu et al., 2019). Previous studies showed that optimizing a model using MTL helps boost performance while being computationally efficient (Sener and Koltun, 2018; Chen et al., 2018). However, MTL presents a number of optimization challenges, such as conflicting gradients (Wang et al., 2020; Yu et al., 2020) and flat regions in the loss landscape (Schaul et al., 2019). These challenges may result in performance degradation compared with single-task learning. Recent studies proposed novel architectures (Misra et al., 2016; Hashimoto et al., 2017; Liu et al., 2019; Chen et al., 2020) to improve MTL, while others focused on aggregating the gradients of the tasks in a way that is agnostic to the optimized model (Liu et al., 2021; Javaloy and Valera, 2021). Yu et al. (2020) proposed to overcome the conflicting-gradients problem by subtracting the normal projection of a conflicting task before forming an update direction. Most gradient-based methods aim to minimize the average loss function. Liu et al. (2021) suggested
Figure 1: _Illustrative example_: A regression problem in \(\mathbb{R}^{2}\) with two auxiliary tasks, one helpful and one harmful. AuxiNash succeeds in using the helpful auxiliary task and disregards the harmful one, as demonstrated by how it learns to weigh different tasks: The left panel shows the preference vector \(p\) during optimization. As a result, AuxiNash converges to a solution with large proximity to the optimal solution, far superior to that obtained from optimizing with the main tasks alone (right panel). See Section 6.1 for further details.
an approach that will decrease every task loss in addition to the average loss function. The closest work to our approach is Nash-MTL (Navon et al., 2022). The authors proposed a principled approach to dynamically weight the losses of different tasks by incorporating concepts from game theory.
Bi-level Optimization.Bi-Level Optimization (BLO) consists of two nested optimization problems (Liao et al., 2018; Liu et al., 2021; Vicol et al., 2022). The outer optimization problem is commonly referred to as the upper-level problem, while the inner optimization problem is referred to as the lower-level problem (Sinha et al., 2017). BLO is widely used in a variety of deep learning applications, spanning hyper-parameter optimization (Foo et al., 2007; MacKay et al., 2019), meta learning (Franceschi et al., 2018), reinforcement learning (Zhang et al., 2020; Yang et al., 2019), and multi-task learning (Liu et al., 2022; Navon et al., 2021). A common practice to derive the gradients of the upper-level task is using the implicit function theorem (IFT). However, applying IFT involves the calculation of an inverse-Hessian vector product which is infeasible for large deep learning models. Therefore, recent studies proposed diverse approaches to approximate the inverse-Hessian vector product. Luketina et al. (2016) proposed approximating the Hessian with the identity matrix, where other line of works used conjugate gradient (CG) to approximate the product (Foo et al., 2007; Pedregosa, 2016; Rajeswaran et al., 2019). We use a truncated Neumann series and efficient vector-Jacobian products, as it was empirically shown to be more stable than CG (Liao et al., 2018; Lorraine et al., 2020; Raghu et al., 2020).
## 3 Background
### Nash Bargaining Solution
We will first give a quick introduction to cooperative bargaining games. In a bargaining game, a set of \(K\) players jointly decide on an agreed-upon point in the set \(A\) of all agreement points. If failing to reach an agreement, the game default to the disagreement point \(D\). Each player \(i\in[K]:=\{1,...,K\}\) is equipped with a utility function \(u_{i}:A\cup\{D\}\rightarrow\mathbb{R}\), which they wish to maximize. Intuitively, each player has a different objective, and each tries to only maximize their own personal utility. However, we generally assume that there are points in the agreement set that are mutually beneficial to all players, compared to the disagreement point, and as such the players are incentivized to cooperate. The main question is on which point in the agreement set will they decide upon.
Denote by \(U=\{(u_{1}(x),...,u_{K}(x)):\:x\in A\}\subset\mathbb{R}^{K}\) the set of the utilities of all possible agreement points and \(d=(u_{1}(D),...,u_{K}(D))\). The set \(U\) is assumed to be convex and compact. Furthermore, we assume that there exists a point \(u\in U\) for which \(\forall i:u_{i}>d_{i}\). Nash (1953) showed that under these assumptions, there exists a unique solution to the bargaining game, which satisfies the following properties: Pareto optimality, symmetry, independence of irrelevant alternatives, and invariant to affine transformations (see Szep & Forgo, 1985, and the supplementary material for more details). This unique solution, referred to as the Nash Bargaining solution (NBS), is given by
\[u^{*}= \arg\max_{u\in U}\sum_{i}\log(u_{i}-d_{i}) \tag{1}\] \[s.t.\:\:\forall i:\:u_{i}>d_{i}\]
As shown in Navon et al. (2022), NBS properties are suitable for the multi-task learning setup, even if the invariance to affine transformations implies that gradient norms are ignored. The symmetry assumption, however, implies that each player is interchangeable which is not the case for auxiliary learning. Naturally, our main concern is the main task and the auxiliaries are there to support it, not compete with it. Thus, we wish to discard the symmetry assumption for the auxiliary learning setup.
### Generalized Bargaining Game
Kalai (1977) generalized the NBS to the asymmetric case. First, define a preference vector to control the relative trade-off between tasks \(p\in\mathbb{R}^{K}\) with \(p_{i}>0\) and \(\sum_{i}p_{i}=1\) (see Figure 2). Similar to the symmetric case, the Generalized Nash Bargaining Solution (GNBS) maximizes a weighted
Figure 2: _Task preferences_: By varying the preference vector \(p\), we show that AuxiNash can control the trade-off between tasks. Compared with Nash-MTL, AuxiNash achieves a wider range of diverse solutions, an important property for auxiliary learning. See Section 6.2 for further details.
product of utilities,
\[u^{*}= \arg\max_{u\in U}\sum_{i}p_{i}\log(u_{i}-d_{i}) \tag{2}\] \[s.t. \ \forall i:\ u_{i}>d_{i}\quad.\]
The symmetric case is a special case of GNBS with uniform preferences \(p_{i}=1/K,\ \forall i\in[K]\).
### Bargaining Game for Multi-task learning
Recently, Navon et al. (2022) formalized multi-task learning as a bargaining game as follows. Let \(\theta\in\mathbb{R}^{d}\) denote the parameters of a network \(f(\cdot;\theta)\). At each MTL optimization step, we search for an update direction \(\Delta\theta\). Define the agreement set \(U=\{\Delta\theta\ \ |\ \|\Delta\theta\|\leq 1\}\) as the set of all possible update directions. The disagreement point \(d\) is defined to be equal to zero, i.e., to stay with the current parameters and terminate the optimization process. Let \(g_{i}\) denote the gradient of \(\theta\) w.r.t. the loss of task \(i\) (for each \(i\in[K]\)). The utility function for task \(i\) is defined as \(u_{i}(\Delta\theta)=g_{i}^{\top}\Delta\theta\), i.e., the directional derivative in direction \(\Delta\theta\). Navon et al. (2022) assumed that the gradients \(g_{1},...,g_{K}\) are linearly independent if \(\theta\) is not Pareto stationary, and we adopt that assumption in our analysis. Under this assumption, Navon et al. (2022) show that the solution for the bargaining game at any non-Pareto stationary point \(\theta\) is given by \(\Delta\theta=\sum_{i}\alpha_{i}g_{i}\), where the weight vector \(\alpha\) satisfies
\[G^{\top}G\alpha=1/\alpha. \tag{3}\]
Here, \(G\) is the \(d\times K\) matrix whose \(i\)-th column is the \(i\)-th task gradient \(g_{i}\), and \(1/\alpha\) is the element-wise reciprocal.
## 4 Generalized Bargaining Game for Auxiliary Learning
In this section, we first extend the result from Navon et al. (2022) to the asymmetric case. Next, we describe a method to learn the preference vector \(p\).
### Generalized Bargaining Solution
We prove the following claim, which generalizes the claim of Navon et al. (2022) to asymmetric games.
**Claim 4.1**.: _Let \(p\in\mathbb{R}^{K}_{+}\) with \(\sum_{i}p_{i}=1\). The solution to the generalized bargaining problem \(\Delta\theta^{*}=\arg\max_{\Delta\theta\in U}\sum_{i}p_{i}\log(\Delta\theta^ {\top}g_{i})\) is given by (up to scaling) \(\sum_{i}\alpha_{i}g_{i}\) at any non-Pareto stationary point \(\theta\), where \(\alpha\in\mathbb{R}^{K}_{+}\) is the solution to \(G^{\top}G\alpha=p/\alpha\) where \(p/\alpha\) is the element-wise reciprocal._
Proof.: We define \(F(\Delta\theta)=\sum_{i}p_{i}\log(\Delta\theta^{\top}g_{i})\) and have \(\nabla F=\sum_{i=1}^{K}\frac{p_{i}}{\Delta\theta^{\top}g_{i}}g_{i}\). Note that for all \(\Delta\theta\) with \(u_{i}(\Delta\theta)>0\) for all \(i\in[K]\) the utilities are monotonically increasing in \(\|\Delta\theta\|\), hence the optimal solution lies on the boundary of \(U\), and \(\nabla F|_{\Delta\theta^{*}}\) is parallel to \(\Delta\theta^{*}\). This implies that \(\sum_{i=1}^{K}\frac{p_{i}}{\Delta\theta^{\top}g_{i}}g_{i}=\lambda\Delta\theta\) for some \(\lambda>0\). From the linear independence assumption, we have for the optimal solution \(\Delta\theta=\sum_{i}\alpha_{i}g_{i}\), thus \(\forall i,\Delta\theta^{\top}g_{i}=\frac{p_{i}}{\lambda\alpha_{i}}\). Setting \(\lambda=1\) (as we ignore scale), the solution to the bargaining game is reduced to finding \(\alpha\in\mathbb{R}^{K}_{+}\) for which \(\forall i,\ \Delta\theta^{\top}g_{i}=\sum_{j}\alpha_{j}g_{i}^{\top}g_{j}=p_{i}/\alpha_{i}\). Equivalently, the optimal solution is given by \(\alpha\in\mathbb{R}^{K}_{+}\) such that \(G^{\top}G\alpha=p/\alpha\), where the division \(p/\alpha\) is element-wise.
Given a preference vector \(p\), we solve \(G^{\top}G\alpha=p/\alpha\) by expressing it as the solution to an optimization problem. We first solve a convex relaxation which we follow by a concave-convex procedure (CCP) (Yuille and Rangarajan, 2003; Lipp and Boyd, 2016), similar to Navon et al. (2022) for solving \(G^{\top}G\alpha=p/\alpha\) w.r.t. \(\alpha\). See Appendix C for full details.
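The paper solves \(G^{\top}G\alpha=p/\alpha\) with a convex relaxation followed by a CCP (Appendix C). As a rough, generic alternative, the defining equation can also be attacked directly with a bounded nonlinear least-squares solve; the sketch below is ours and is not the authors' procedure.

```python
import numpy as np
from scipy.optimize import least_squares

def bargaining_alpha(G, p):
    """Find alpha > 0 with G^T G alpha = p / alpha (element-wise division).
    Simple least-squares sketch; the paper uses a convex relaxation + CCP instead."""
    M = G.T @ G
    alpha0 = np.sqrt(p / np.diag(M))        # init that ignores off-diagonal terms of M
    sol = least_squares(lambda a: M @ a - p / a,
                        x0=alpha0, bounds=(1e-8, np.inf))
    return sol.x

rng = np.random.default_rng(0)
G = rng.standard_normal((10, 3))            # three task gradients in R^10
p = np.array([0.5, 0.3, 0.2])               # preference vector, sums to 1
alpha = bargaining_alpha(G, p)
print(np.abs(G.T @ G @ alpha - p / alpha).max())   # near zero when the solve succeeds
```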
### Optimizing the Preference Vector
The derivation in the previous section allows us to learn using a known preference vector \(p\). Unfortunately, in most cases, the preference vector is not known in advance. One simple solution is to treat the preferences \(p_{i}\) as hyperparameters and set them via grid search. However, this approach has two significant limitations. First, as the number of hyperparameters increases, grid search becomes computationally expensive as it scales exponentially. Second, it is possible that the optimal preference vector would vary during optimization, hence using a fixed \(p\) would be sub-optimal.
To address these issues, we develop an approach for dynamically learning the task preference vector during training. This reduces the number of hyperparameters the user needs to tune to one (the preference update rate) and dynamically adjusts the preference to improve generalization. We do this by formulating the problem as a bi-level optimization problem, which we discuss next.
Let \(\mathcal{L}_{T}\) denote the training loss and \(\mathcal{L}_{V}\) denote the _generalization_ loss, given by the loss of the main task on unseen data, i.e., \(\mathcal{L}_{V}=\ell_{main}^{val}\). In the auxiliary learning setup, we wish to optimize \(p\) such that a network \(f(\cdot;\theta)\) optimized with \(\mathcal{L}_{T}(\cdot;p)\) would minimize \(\mathcal{L}_{V}\). Formally,
\[p^{*}=\arg\min_{p}\mathcal{L}_{V}(\theta^{*}(p)),\ \ \text{s.t.}\ \ \theta^{*}(p)=\arg\min_{ \theta}\mathcal{L}_{T}(\theta,p)\]
Using the chain rule to get the derivative of the outer problem, we get
\[\frac{\partial\mathcal{L}_{V}(p,\theta^{*}(p))}{\partial p}=\underbrace{\frac{\partial\mathcal{L}_{V}}{\partial p}}_{=0}+\frac{\partial\mathcal{L}_{V}}{\partial\theta}\frac{\partial\theta^{*}}{\partial p}=\frac{\partial\mathcal{L}_{V}}{\partial\theta}\frac{\partial\theta^{*}}{\partial\alpha(p)}\frac{\partial\alpha(p)}{\partial p}\]
As we can compute \(\frac{\partial\mathcal{L}_{V}}{\partial\theta}\) by simple backpropagation, the main challenge is to compute \(\frac{\partial\theta^{*}}{\partial\alpha(p)}\) and \(\frac{\partial\alpha(p)}{\partial p}\).
To compute \(\frac{\partial\theta^{*}}{\partial\alpha(p)}\) we can (indirectly) differentiate through the optimization process using the implicit function theorem (IFT) (Liao et al., 2018; Lorraine et al., 2020; Navon et al., 2021):
\[\frac{\partial\theta^{*}}{\partial\alpha(p)}=-\left[\frac{\partial^{2}\mathcal{ L}_{T}}{\partial\theta\partial\theta^{\top}}\right]^{-1}\frac{\partial^{2} \mathcal{L}_{T}}{\partial\theta\partial\alpha(p)^{\top}} \tag{4}\]
Since computing the inverse Hessian directly is intractable, we use the algorithm proposed by Lorraine et al. (2020) to efficiently estimate the inverse-Hessian vector product. This approach uses the Neumann approximation with an efficient vector-Jacobian product. Thus, we can efficiently approximate the first term, \(\frac{\partial\mathcal{L}_{V}}{\partial\theta}\frac{\partial\theta^{*}}{\partial\alpha(p)}\). We note that in practice, as is customary, we do not optimize till convergence but perform a few gradient updates from the previous value. For further details, see Vicol et al. (2022), who recently examined how this affects the bi-level optimization process.
For the second term, \(\frac{\partial\alpha(p)}{\partial p}\), we derive a simple analytical expression using the IFT in the following proposition:
**Proposition 4.2**.: _For any \((p,\alpha)\) satisfying \(G^{\top}G\alpha=p/\alpha\), there exists an open set \(U\subseteq\mathbb{R}^{K}\) containing \(p\) such that there exists a continuously differentiable function \(\hat{\alpha}:U\rightarrow\mathbb{R}^{K}\) satisfying all of the following properties: (1) \(\hat{\alpha}(p)=\alpha\), (2) \(G^{\top}G\hat{\alpha}(\bar{p})=\bar{p}/\hat{\alpha}(\bar{p})\) for all \(\bar{p}\in U\), and (3)_
\[\frac{\partial\hat{\alpha}(p)}{\partial p}=\left[G^{\top}G+\Lambda_{0}\right]^ {-1}\Lambda_{1}. \tag{5}\]
_Here \(\Lambda_{0},\Lambda_{1}\in\mathbb{R}^{K\times K}\) are the diagonal matrices defined by \((\Lambda_{0})_{ii}=p_{i}/\alpha_{i}^{2}\in\mathbb{R}\) and \((\Lambda_{1})_{ii}=1/\alpha_{i}\in\mathbb{R}\) for \(i\in[K]\)._
We refer the readers to Appendix B for the proof.
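In code, Eq. (5) is a single \(K\times K\) linear solve. The sketch below assumes \(\alpha\) has already been computed (e.g., with a solver like the one above); the function name is ours. A finite-difference check (perturb \(p\) slightly and re-solve for \(\alpha\)) is an easy way to sanity-check the formula.

```python
import numpy as np

def dalpha_dp(G, alpha, p):
    """Jacobian of the bargaining weights w.r.t. the preferences, Eq. (5):
    d alpha / d p = (G^T G + Lambda0)^{-1} Lambda1, with Lambda0 = diag(p / alpha^2)
    and Lambda1 = diag(1 / alpha)."""
    M = G.T @ G
    Lam0 = np.diag(p / alpha**2)
    Lam1 = np.diag(1.0 / alpha)
    return np.linalg.solve(M + Lam0, Lam1)    # K x K matrix
```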
Putting everything together, we obtain the following efficient approximation,
\[\begin{split}\frac{\partial\mathcal{L}_{V}(p,\theta^{*}(p))}{ \partial p}&=-\frac{\partial\mathcal{L}_{V}}{\partial\theta} \left[\frac{\partial^{2}\mathcal{L}_{T}}{\partial\theta\partial\theta^{ \top}}\right]^{-1}\frac{\partial^{2}\mathcal{L}_{T}}{\partial\theta\partial \alpha(p)^{\top}}\times\\ &\quad\quad\quad\quad\quad\quad\left[G^{\top}G+\Lambda_{0}\right] ^{-1}\Lambda_{1}\end{split} \tag{6}\]
We note that this approximation can be computed in a relatively efficient manner, with the cost of only several backpropagation operations to estimate the vector-Jacobian product (we use 3 in our experiments). We also note that the matrix \(\left(G^{\top}G+\Lambda_{0}\right)\) that we invert is of size \(K\times K\), where \(K\) is the number of tasks that is generally relatively small.
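The inverse-Hessian-vector product in Eq. (6) is the only expensive piece. A truncated Neumann series, as in Lorraine et al. (2020), approximates \(H^{-1}v\) using only Hessian-vector products; since \(H\) is symmetric, this also gives the row vector \(v^{\top}H^{-1}\). The sketch below works on an explicit small matrix for clarity; in a real model `hvp` would be an autodiff Hessian-vector product, and the scaling constant \(\gamma\) is an assumption that must keep \(\|I-\gamma H\|<1\).

```python
import numpy as np

def neumann_ihvp(hvp, v, num_terms=100, gamma=0.1):
    """Approximate H^{-1} v via H^{-1} = gamma * sum_j (I - gamma H)^j, truncated at num_terms.
    `hvp` maps a vector u to H u."""
    term = v.copy()
    total = v.copy()
    for _ in range(num_terms):
        term = term - gamma * hvp(term)   # multiply the running term by (I - gamma H)
        total = total + term
    return gamma * total

# Sanity check on a small symmetric positive-definite matrix.
H = np.array([[2.0, 0.3],
              [0.3, 1.5]])
v = np.array([1.0, -2.0])
print(neumann_ihvp(lambda u: H @ u, v, num_terms=300, gamma=0.4))
print(np.linalg.solve(H, v))              # the two outputs should agree closely
```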
In practice, we use a separate batch from the training set to estimate the generalization loss \(\mathcal{L}_{V}\). We further discuss this design choice and provide an empirical evaluation in Section 6.3. During the optimization process, we alternate between optimizing \(\theta\) and optimizing \(p\). Specifically, we update \(p\) once every \(N_{p}\) optimization steps over \(\theta\). We set \(N_{p}=25\) in our experiments. The AuxiNash algorithm is summarized in Alg. 1.
```
Input: \(\theta\) - initial parameter vector, \(p\) - initial preference vector, \(\{\ell_{i}\}_{i=1}^{K}\) - differentiable loss functions, \(\eta,\eta_{p}\) - learning rates
for \(T=1,...,N\) do
  for \(t=1,...,N_{p}\) do
    Compute task gradients \(g_{i}=\nabla_{\theta}\ell_{i}\)
    Set \(G\) the matrix with columns \(g_{i}\)
    Solve for \(\alpha\): \(G^{\top}G\alpha=p/\alpha\)
    Update the parameters \(\theta\leftarrow\theta-\eta G\alpha\)
  end for
  Evaluate \(\nabla_{p}\mathcal{L}_{V}\) using Eq. 6
  Update \(p\leftarrow p-\eta_{p}\nabla_{p}\mathcal{L}_{V}\)
end for
Return: \(\theta\).
```
**Algorithm 1** AuxiNash
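To make the inner loop of Algorithm 1 concrete, here is a self-contained toy run on two quadratic "tasks" in \(\mathbb{R}^{2}\) with a fixed preference vector (the preference-update step based on Eq. (6) is omitted). The task definitions and all constants are ours, chosen only to illustrate the update \(\theta\leftarrow\theta-\eta G\alpha\), and `bargaining_alpha` is the same sketch solver as above.

```python
import numpy as np
from scipy.optimize import least_squares

def bargaining_alpha(G, p):
    # Sketch solver for G^T G alpha = p / alpha with alpha > 0 (not the paper's CCP).
    M = G.T @ G
    sol = least_squares(lambda a: M @ a - p / a,
                        x0=np.sqrt(p / np.diag(M)), bounds=(1e-8, np.inf))
    return sol.x

# Two quadratic tasks l_i(theta) = 0.5 * ||theta - c_i||^2 with different optima.
centers = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]
p = np.array([0.7, 0.3])          # main task gets the larger bargaining power
theta = np.array([-1.0, -1.0])
eta = 0.05
for _ in range(400):
    g1, g2 = theta - centers[0], theta - centers[1]
    cos = g1 @ g2 / (np.linalg.norm(g1) * np.linalg.norm(g2) + 1e-12)
    if cos < -0.95:               # gradients nearly opposite: (near) Pareto stationary, stop
        break
    G = np.stack([g1, g2], axis=1)            # columns are the task gradients
    theta = theta - eta * G @ bargaining_alpha(G, p)
print(theta)   # approaches the segment between the two optima (this toy's Pareto front)
```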
## 5 Analysis
We analyze the convergence properties of our proposed method in nonconvex optimization. We adopt the following three assumptions from Navon et al. (2022):
**Assumption 5.1**.: We assume that for a sequence \(\{\theta^{(t)}\}_{t=1}^{\infty}\) generated by our algorithm, the set of the gradient vectors \(g_{1}^{(t)},...,g_{K}^{(t)}\) at any point on the sequence and at any partial limit are linearly independent unless that point is a Pareto stationary point.
Figure 3: Visualization of the update direction: We show the update direction (blue) obtained by AuxiNash on three gradients in \(\mathbb{R}^{3}\). We rescaled the update directions for better visibility, showing only the direction. We further show the size of the projection (red) of the update to each gradient direction (**black**). By varying the preference vector, we observe the change in the obtained update direction. Importantly, we note that the effect on the update direction is non-trivial, as \(p\) only affects the update implicitly through the bargaining solution \(\alpha\).
**Assumption 5.2**.: We assume that all loss functions are differentiable, bounded below and that all sub-level sets are bounded. The input domain is open and convex.
**Assumption 5.3**.: We assume that all the loss functions are \(L\)-smooth,
\[\|\nabla\ell_{i}(x)-\nabla\ell_{i}(y)\|\leq L\|x-y\|. \tag{7}\]
Since even single-task non-convex optimization might only admit convergence to a stationary point, the following theorem proves convergence to a Pareto stationary point when both \(\theta\) and \(p\) are optimized concurrently:
**Theorem 5.4**.: _Suppose that Assumptions 5.1, 5.2, and 5.3 hold. Let \(\{\theta^{(t)}\}_{t=1}^{\infty}\) be the sequence generated by the update rule \(\theta^{(t+1)}=\theta^{(t)}-\mu^{(t)}\Delta\theta^{(t)}\) where \(\Delta\theta^{(t)}=\sum_{i=1}^{K}\alpha_{i}^{(t)}g_{i}^{(t)}\) is the weighted Nash bargaining solution \((G^{(t)})^{\top}G^{(t)}\alpha^{(t)}=p^{(t)}/\alpha^{(t)}\) where \(p^{(t)}\) can be any discrete distribution. Set \(\mu^{(t)}=\frac{1}{K}\sum_{i=1}^{K}p_{i}^{(t)}(L\alpha_{i}^{(t)})^{-1}\). The sequence \(\{\theta^{(t)}\}_{t=1}^{\infty}\) has a subsequence that converges to a Pareto stationary point \(\theta^{*}\). Moreover all the loss functions \((\ell_{1}(\theta^{(t)}),...,\ell_{K}(\theta^{(t)}))\) converge to \((\ell_{1}(\theta^{*}),...,\ell_{K}(\theta^{*}))\)._
See full proof in Appendix B.
## 6 Experiments
In this section, we compare AuxiNash with different approaches from multi-task and auxiliary learning. We use variety of datasets and learning setups to demonstrate the superiority of AuxiNash. To encourage future research and reproducibility, we will make our source code publicly available. Additional experimental results and details are provided in Appendix D.
Baselines. We compare AuxiNash with natural baselines from recent auxiliary and multi-task learning works. The compared methods include (1) Single-task learning (STL), which trains a model using the main task only; (2) Linear scalarization (LS), which minimizes the sum of losses \(\sum_{k}\ell_{k}\); (3) GCS (Du et al., 2018), an auxiliary learning approach that uses gradient similarity between primary and auxiliary tasks; (4) OL-AUX (Lin et al., 2019), an auxiliary learning approach that adaptively changes the loss weight based on the gradient inner product w.r.t. the main task; (5) AuxiLearn (Navon et al., 2021), an auxiliary learning approach that dynamically learns non-linear combinations of different tasks; (6) PCGrad (Yu et al., 2020), an MTL method that removes gradient components that conflict with other tasks; (7) CAGrad (Liu et al., 2021), an MTL method that optimizes for the average loss while explicitly controlling the minimum decrease rate across tasks; and (8) Nash-MTL (Navon et al., 2022), an MTL approach that is equivalent to AuxiNash but with a fixed \(p_{i}=1/K\) weighting.
Evaluation. We report the common evaluation metrics for each task. Since MTL methods treat each task equally, and the metrics may vary in scale, we also report the overall relative multi-task performance \(\Delta\%\). \(\Delta\%\) is defined as the performance drop compared to the STL performance. Formally, \(\Delta\%=\frac{1}{K}\sum_{k=1}^{K}(-1)^{\delta_{k}}(M_{m,k}-M_{b,k})/M_{b,k}\). We denote \(M_{b,k}\) and \(M_{m,k}\) as the performance of STL and the compared method on task \(k\), respectively. \(\delta_{k}=0\) if a lower value is better for the metric \(M_{k}\) and \(1\) otherwise (Maninis et al., 2019). In all experiments, we report the mean value based on \(3\) random seeds.
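For reference, the relative multi-task performance metric can be computed as below; the function is our own small helper, with the result expressed in percent.

```python
def delta_percent(method_scores, stl_scores, lower_is_better):
    """Delta% = (100/K) * sum_k (-1)^{delta_k} (M_{m,k} - M_{b,k}) / M_{b,k},
    where delta_k = 0 if a lower value of metric k is better and 1 otherwise."""
    terms = []
    for m_m, m_b, low in zip(method_scores, stl_scores, lower_is_better):
        sign = 1.0 if low else -1.0       # (-1)^0 = +1 when lower is better
        terms.append(sign * (m_m - m_b) / m_b)
    return 100.0 * sum(terms) / len(terms)

# Example: two metrics, mIoU (higher is better) and an error (lower is better).
print(delta_percent([40.0, 0.52], [38.3, 0.55], [False, True]))  # negative = better than STL
```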
It is important to note that for MTL models, we present the results of a single model trained on all tasks. For auxiliary learning methods, we trained a unique model per task, treating it as the main task and using the remaining tasks as auxiliaries.
\begin{table}
\begin{tabular}{c c c c c c c c c c c} \hline \hline & \multicolumn{2}{c}{Segmentation} & \multicolumn{2}{c}{Depth} & \multicolumn{4}{c}{Surface Normal} & \multirow{2}{*}{\(\mathbf{\Delta\%\downarrow}\)} \\ \cline{3-3} \cline{5-10} & mIoU \(\uparrow\) & Pix Acc \(\uparrow\) & Abs Err \(\downarrow\) & Rel Err \(\downarrow\) & \multicolumn{2}{c}{Angle Distance \(\downarrow\)} & \multicolumn{2}{c}{Within \(t^{\circ}\uparrow\)} & \\ \cline{5-10} & & & & & Mean & Median & 11.25 & 22.5 & 30 & \\ \hline STL & \(38.30\) & \(63.76\) & \(0.6754\) & \(0.2780\) & \(25.01\) & \(19.21\) & \(30.14\) & \(57.20\) & \(69.15\) & \\ \hline LS & \(38.43\) & \(64.36\) & \(0.5472\) & \(0.2184\) & \(29.57\) & \(25.42\) & \(20.50\) & \(44.85\) & \(58.20\) & \(8.69\) \\ PCGrad & \(39.25\) & \(64.95\) & \(0.5389\) & \(0.2141\) & \(28.66\) & \(24.26\) & \(21.99\) & \(47.00\) & \(60.31\) & \(5.66\) \\ CAGrad & \(39.25\) & \(65.15\) & \(0.5385\) & \(0.2155\) & \(26.11\) & \(20.95\) & \(26.96\) & \(53.66\) & \(66.37\) & \(-1.46\) \\ Nash-MTL & \(39.83\) & \(66.00\) & \(0.5235\) & \(0.2075\) & \(25.32\) & \(19.87\) & \(28.86\) & \(55.87\) & \(68.27\) & \(-4.76\) \\ \hline GCS & \(38.96\) & \(64.35\) & \(0.5769\) & \(0.2293\) & \(29.57\) & \(25.53\) & \(20.64\) & \(44.68\) & \(57.99\) & \(9.54\) \\ OL-AUX & \(40.51\) & \(65.49\) & \(0.6652\) & \(0.2614\) & \(\mathbf{24.65}\) & \(\mathbf{18.72}\) & \(\mathbf{30.92}\) & \(\mathbf{58.37}\) & \(\mathbf{70.12}\) & \(-2.88\) \\ AuxiLearn & \(38.63\) & \(64.20\) & \(0.5415\) & \(0.2173\) & \(29.98\) & \(25.29\) & \(20.03\) & \(43.94\) & \(57.17\) & \(9.15\) \\ \hline AuxiNash (ours) & \(\mathbf{40.79}\) & \(\mathbf{66.79}\) & \(\mathbf{0.5092}\) & \(\mathbf{0.2042}\) & \(24.90\) & \(19.31\) & \(29.83\) & \(57.07\) & \(69.27\) & \(\mathbf{-6.80}\) \\ \hline \hline \end{tabular}
\end{table}
Table 1: _NYUv2._ Test performance for three tasks: semantic segmentation, depth estimation, and surface normal. Values are averages over 3 random seeds.
### Illustrative Example
We start with an illustrative example, showing that AuxiNash can utilize helpful auxiliaries while ignoring harmful ones.
We adopt a similar problem setup as in Navon et al. (2021) and consider a regression problem with parameters \(W^{T}=(w_{1},w_{2})\in\mathbb{R}^{2}\), fully shared among tasks. The optimal parameters for the main and helpful auxiliary tasks are \(W^{\star}\), while the optimal parameters for the harmful auxiliary are \(\tilde{W}\neq W^{\star}\). The main task is sampled from a Normal distribution \(N({W^{\star}}^{T}x,\sigma_{\text{main}})\), with \(\sigma_{\text{main}}>\sigma_{\text{h}}\) where \(\sigma_{\text{h}}\) denotes the standard deviation for the noise of the helpful auxiliary.
The change in the task preference throughout the optimization process is depicted in the left panel of Figure 1. AuxiNash identifies the helpful task and fully ignores the harmful one. In addition, the right panel of Figure 1 presents the main task's loss landscape, along with the optimal solution (\(W^{\star}\), marked \(\blacktriangle\)), the optimal training set solution of the main task alone (\(\blacksquare\)) and the solution obtained by AuxiNash (marked \(\bullet\)). While using the training data alone with no auxiliary information yields a solution that generalizes poorly, AuxiNash converges to a solution in close proximity to the optimal solution \(W^{\star}\).
### Controlling Task Preference
In this section, we wish to better understand the relationship between the preference \(p\) and the obtained solution. We note the preference vector only implicitly affects the optimization solution through the bargaining solution \(\alpha\).
Here, we show that controlling the preference vector can be used to steer the optimization outcome to different parts of the Pareto front, compared to the NashMTL baseline. We consider MTL setup with 2 tasks and use the Multi-MNIST (Sabour et al., 2017) dataset. In Multi-MNIST two images from the original MNIST dataset are merged into one by placing one at the top-left corner and the other at the bottom-right corner. The tasks are defined as image classification of the merged images. We run AuxiNash \(11\) times with varying preference vector values \(p\) and fix it throughout the training. For both tasks we report the classification accuracy. For Nash-MTL we run the experiments with different seed values. For both methods we train a variant of LeNet model for \(50\) epochs with Adam optimizer and \(1e-4\) as the learning rate.
Figure 2 shows the results. AuxiNash reaches a diverse set of solutions across the Pareto front while Nash-MTL solutions are all relatively similar due to its symmetry property.
### Analyzing the Effect of Auxiliary Set
One important question is on what data to evaluate the generalization loss \(\mathcal{L}_{V}\). It would seem intuitive that one would need a separate validation set for that since estimating \(\mathcal{L}_{V}\) on the training data may be biased. In practice, some previous works use a held-out auxiliary set (Navon et al., 2021), while others use a separate batch from the training set (Liu et al., 2019, 2022). While using an auxiliary set might be more intuitive, it requires reducing the available amount of training data which can be detrimental in the low-data regime we are interested in.
We empirically evaluate this using the NYUv2 (Silberman et al., 2012) and Cityscapes (Cordts et al., 2016) datasets. See Section 6.4 for more details. We choose semantic segmentation as the main task for both datasets. We compare the following methods (i) _STL_: single task learning using the main task only, (ii) _STL Partial_: STL using only \(90\%\) of the training data, (iii) _AuxiNash_: our proposed method where we optimize the preference vector using the entire training set, (iv) _AuxiNash Aux. Set_: our proposed method, where we optimize the preference vector using \(10\%\) of the data, allocated from the training set.
We report the mean-IoU metric (higher is better) along with the relative change from the performance of the STL method. The results suggest that the drawback of sacrificing some of the training data outweighs the benefit of using an auxiliary set. This result aligns with the observation in Liu et al. (2022).
\begin{table}
\begin{tabular}{c c c c c c c} \hline \hline & \multicolumn{2}{c}{Semantic Seg.} & \multicolumn{2}{c}{Part Seg.} & \multicolumn{2}{c}{Disparity} \\ \cline{2-7} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} \\ \cline{2-7} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} \\ \hline STL & \(48.64\) & \(91.01\) & \(53.60\) & \(97.62\) & \(1.108\) & \\ \hline LS & \(37.66\) & \(88.63\) & \(40.92\) & \(96.98\) & \(1.105\) & \(9.84\) \\ PCGrad & \(39.10\) & \(89.31\) & \(41.71\) & \(97.14\) & \(1.133\) & \(9.28\) \\ CAGrad & \(39.45\) & \(89.04\) & \(51.95\) & \(97.54\) & \(1.098\) & \(4.66\) \\ Nash-MTL & \(51.14\) & \(91.59\) & \(56.99\) & \(97.87\) & \(1.066\) & \(-3.23\) \\ \hline GCS & \(37.45\) & \(88.62\) & \(41.14\) & \(96.97\) & \(1.124\) & \(10.19\) \\ OL-AUX & \(27.63\) & \(89.34\) & \(51.12\) & \(97.52\) & \(1.397\) & \(15.16\) \\ AuxiLearn & \(36.18\) & \(88.24\) & \(40.51\) & \(96.95\) & \(1.141\) & \(11.3\) \\ \hline AuxiNash & \(\mathbf{52.52}\) & \(\mathbf{91.91}\) & \(\mathbf{58.53}\) & \(\mathbf{97.93}\) & \(\mathbf{1.027}\) & \(\mathbf{-5.15}\) \\ \hline \hline \end{tabular}
\end{table}
Table 2: Cityscapes. Test performance for three tasks: 19-class semantic segmentation, 10-class part segmentation, and disparity.
\begin{table}
\begin{tabular}{l c c c c} \hline \hline & \multicolumn{2}{c}{Cityscapes} & \multicolumn{2}{c}{NYUv2} \\ \cline{2-5} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} \\ \cline{2-5} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} \\ \cline{2-5} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} \\ \hline STL & \(48.64\) & \(38.30\) & \(-4.59\) \\ STL Partial & \(45.97\) & \(-5.48\) & \(36.54\) & \(-4.59\) \\ \hline AuxiNash & \(52.52\) & \(7.97\) & \(40.79\) & \(6.50\) \\ AuxiNash Aux. Set & \(51.81\) & \(6.51\) & \(38.94\) & \(1.67\) \\ \hline \hline \end{tabular}
\end{table}
Table 3: The effect of auxiliary set: We report the mean IoU, along with the \(\%\) change w.r.t STL performance.
### Scene Understanding
We follow the setup from Liu et al. (2019, 2022); Navon et al. (2022) and evaluate AuxiNash on the NYUv2 and Cityscapes datasets (Silberman et al., 2012; Cordts et al., 2016). The indoor scene NYUv2 dataset (Silberman et al., 2012) contains 3 tasks: 13 classes semantic segmentation, depth estimation, and surface normal prediction. The dataset consists of \(1449\) RGBD images captured from diverse indoor scenes.
We also use the Cityscapes dataset (Cordts et al., 2016) with 3 tasks (similar to Liu et al. (2022)): 19-class semantic segmentation, disparity (inverse depth) estimation, and 10-class part segmentation (de Geus et al., 2021). To speed up the training phase, all images and label maps were resized to \(128\times 256\).
For all methods, we train SegNet (Badrinarayanan et al., 2017), a fully convolutional model based on VGG16 architecture. We follow the training and evaluation procedure from Liu et al. (2019, 2022); Navon et al. (2022) and train the network for 200 epochs with Adam optimizer (Kingma & Ba, 2015). We set the learning rate to \(1e-4\) and halved it after \(100\) epochs.
The results are presented in Table 1 and Table 2. Observing the results, we can see our approach AuxiNash outperforms other approaches by a significant margin. It is also important to note that several methods achieve a positive \(\%\Delta\) score, meaning they performed worse than simply ignoring the auxiliary tasks and training on the main task alone. We believe this is due to the difficulties presented by optimizing with multiple objectives.
### Semi-supervised Learning with SSL Auxiliaries
In semi-supervised learning one generally trains a model with a small amount of labeled data, while utilizing self-supervised tasks as auxiliaries to be optimized using unlabeled training data.
We follow the setup from Shi et al. (2020) and evaluate AuxiNash on a Self-supervised Semi-supervised Learning setting (Zhai et al., 2019). We use the CIFAR-10 dataset to form 3 tasks. We set the supervised classification as the main task along with two self-supervised learning (SSL) tasks used as auxiliaries: (i) Rotation and (ii) Exemplar-MT. In Rotation, we randomly rotate each image by one of \([0^{\circ},90^{\circ},180^{\circ},270^{\circ}]\) and optimize the network to predict the angle. In Exemplar-MT we apply a combination of three transformations: horizontal flip, Gaussian noise, and cutout. Similarly to contrastive learning, the model is trained to extract invariant features by encouraging the original and augmented images to be close in their feature space. For the supervised task we randomly allocate samples from the training set. We repeat this experiment twice with \(5K\) and \(10K\) labeled training examples. The results are presented in Table 4. AuxiNash significantly outperforms most baselines.
### Audio Classification
We evaluate AuxiNash on the speech commands (SC) dataset (Warden, 2018), which consists of \(\sim 50\)K speech samples of specific keywords. The data contains 30 different keywords, and each speech sample is one second long. We use a subset of the SC containing audio samples for only the 10 numbering keywords (zero to nine). As a pre-processing step, we use a short-time Fourier transform (STFT) to extract a spectrogram for each example, which we then fed to a convolutional neural network (CNN). We evaluate AuxiNash on 10 one-vs-all binary classification tasks. We repeat the experiment with a training set of sizes 500 and 1000. The results are presented in Table 5.
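As a rough illustration of the preprocessing step described above, a log-magnitude spectrogram can be computed from a one-second clip with SciPy's STFT; the sampling rate and window parameters below are assumptions, not the paper's settings.

```python
import numpy as np
from scipy.signal import stft

fs = 16000                          # assumed sampling rate for the 1-second keyword clips
waveform = np.random.randn(fs)      # placeholder standing in for a real audio sample
freqs, times, Z = stft(waveform, fs=fs, nperseg=256, noverlap=128)
spectrogram = np.log1p(np.abs(Z))   # log-magnitude image fed to the CNN
print(spectrogram.shape)            # (frequency bins, time frames)
```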
\begin{table}
\begin{tabular}{l c c} \hline \hline & CIFAR10-SSL-5K & CIFAR10-SSL-10K \\ \hline STL & \(79.31\pm 0.31\) & \(83.75\pm 0.18\) \\ \hline LS & \(83.17\pm 0.54\) & \(86.16\pm 0.39\) \\ PCGrad & \(82.71\pm 0.16\) & \(86.17\pm 0.34\) \\ CAGrad & \(85.89\pm 0.63\) & \(87.82\pm 0.28\) \\ Nash-MTL & \(\mathbf{86.69\pm 0.14}\) & \(\mathbf{88.68\pm 0.14}\) \\ \hline GCS & \(83.09\pm 0.34\) & \(86.47\pm 0.56\) \\ OL-AUX & \(81.44\pm 1.06\) & \(85.49\pm 0.73\) \\ AuxiLearn & \(82.83\pm 0.57\) & \(85.52\pm 0.57\) \\ \hline AuxiNash (ours) & \(\mathbf{87.01\pm 0.52}\) & \(\mathbf{88.81\pm 0.34}\) \\ \hline \hline \end{tabular}
\end{table}
Table 4: _CIFAR10-SSL_. Test performance for classification with a varying number of labeled data. Values are averages over 3 random seeds.
\begin{table}
\begin{tabular}{l c c} \hline \hline & SC-500 & SC-1000 \\ \hline STL & \(95.8\pm 0.1\) & \(96.4\pm 0.1\) \\ \hline LS & \(95.7\pm 0.2\) & \(96.7\pm 0.1\) \\ PCGrad & \(95.7\pm 0.1\) & \(96.7\pm 0.1\) \\ CAGrad & \(95.7\pm 0.2\) & \(95.7\pm 0.1\) \\ Nash-MTL & \(95.7\pm 0.2\) & \(96.6\pm 0.3\) \\ \hline GCS & \(96.3\pm 0.1\) & \(96.9\pm 0.1\) \\ OL-AUX & \(96.2\pm 0.2\) & \(96.9\pm 0.1\) \\ AuxiLearn & \(96.0\pm 0.1\) & \(97.0\pm 0.1\) \\ \hline AuxiNash (ours) & \(\mathbf{96.4\pm 0.1}\) & \(\mathbf{97.2\pm 0.1}\) \\ \hline \hline \end{tabular}
\end{table}
Table 5: _Speech Commands_. Test accuracy for speech classification, for models trained with \(1000\) and \(500\) training examples.
## 7 Conclusion and Future Work
In this work, we formulate auxiliary learning as an asymmetric bargaining game and use game-theoretical tools to derive an efficient algorithm. We adapt and generalize recent advancements in multi-task learning to auxiliary learning and show how they can be automatically tuned to get a significant improvement in performance.
We evaluated AuxiNash on multiple datasets with different learning setups and show that it outperforms previous approaches by a significant margin. Across all experiments, it is noticeable that MTL methods perform better than auxiliary learning ones although the former treat equally the primary task and the auxiliary tasks. We suspect that this is caused by conflicting gradients and by the fact that gradient norms may vary significantly across tasks. These results emphasize the connection between auxiliary learning and multi-task optimization. In many examples, the benefit of the auxiliary task was diminished or even completely negated by poor optimization. Thus, we suggest that auxiliary learning research should be closely aligned with MTL optimization research to effectively utilize auxiliary tasks.
## 8 Acknowledgements
This study was funded by a grant to GC from the Israel Science Foundation (ISF 737/2018), and by an equipment grant to GC and Bar-Ilan University from the Israel Science Foundation (ISF 2332/18). AN and AS are supported by a grant from the Israeli higher-council of education, through the Bar-Ilan data science institute (BIU DSI).
|
2309.06048 | Determination of Lower Order Perturbations of a Polyharmonic Operator in
Two Dimensions | We study an inverse boundary value problem for a polyharmonic operator in two
dimensions. We show that the Cauchy data uniquely determine all the anisotropic
perturbations of orders at most $m-1$ and several perturbations of orders $m$
to $2m-2$ under some restriction. The uniqueness proof relies on the
$\bar{\partial}$-techniques and the method of stationary phase. | Rajat Bansal, Venkateswaran P. Krishnan, Rahul Raju Pattar | 2023-09-12T08:34:13Z | http://arxiv.org/abs/2309.06048v2 | # Determination of lower order perturbations of a polyharmonic operator in two dimensions
###### Abstract.
We study an inverse boundary value problem for a polyharmonic operator in two dimensions. We show that the Cauchy data uniquely determine all the anisotropic perturbations of orders at most \(m-1\) and several perturbations of orders \(m\) to \(2m-2\) under some restriction. The uniqueness proof relies on the \(\bar{\partial}\)-techniques and the method of stationary phase.
Key words and phrases:Inverse boundary value problem; Perturbed two-dimensional polyharmonic operator; Cauchy data; Uniqueness 2020 Mathematics Subject Classification: 35R30; 35J40
## 1. Introduction
Let \(\Omega\) be a bounded domain with smooth boundary in \(\mathbb{R}^{2}\). This paper aims to study the inverse boundary value problem in two dimensions for a polyharmonic operator of the form:
\[\mathcal{L}=\partial^{m}\bar{\partial}^{m}+\sum_{j,k=0}^{m-1}A_{j,k}\partial^ {j}\bar{\partial}^{k},\quad m\geq 2. \tag{1.1}\]
Note that \(x=(x_{1},x_{2})\in\Omega\subset\mathbb{R}^{2}\) is identified with \(z=x_{1}+ix_{2}\in\mathbb{C}\) and \(\partial=\frac{1}{2}\left(\partial_{x_{1}}-i\partial_{x_{2}}\right),\)\(\bar{\partial}=\frac{1}{2}\left(\partial_{x_{1}}+i\partial_{x_{2}}\right)\).
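Since \(\partial\bar{\partial}=\frac{1}{4}\Delta\), the principal part \(\partial^{m}\bar{\partial}^{m}\) of \(\mathcal{L}\) equals \(4^{-m}\Delta^{m}\), i.e., a constant multiple of the polyharmonic operator. This identity is elementary, but it can be checked symbolically; the snippet below is only such a sanity check on an arbitrary smooth test function (not part of the paper's argument).

```python
import sympy as sp

x1, x2 = sp.symbols('x1 x2', real=True)
u = sp.exp(x1) * sp.cos(x2) + x1**2 * x2          # arbitrary smooth test function

d    = lambda f: (sp.diff(f, x1) - sp.I * sp.diff(f, x2)) / 2   # Wirtinger derivative (partial)
dbar = lambda f: (sp.diff(f, x1) + sp.I * sp.diff(f, x2)) / 2   # conjugate derivative (partial-bar)

lhs = d(dbar(u))                                   # (partial)(partial-bar) u
rhs = (sp.diff(u, x1, 2) + sp.diff(u, x2, 2)) / 4  # (1/4) Laplacian of u
print(sp.simplify(lhs - rhs))                      # prints 0
```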
In this paper we uniquely determine the coefficients \(A_{j,k}\) from the set of Cauchy data
\[\mathcal{C}(\mathcal{L})=\left\{\left(u|_{\partial\Omega},\partial_{\nu}u|_{ \partial\Omega},\partial_{\nu}^{2}u|_{\partial\Omega}\ldots,\partial_{\nu}^{( 2m-1)}u|_{\partial\Omega}\right):u\in H^{2m}(\Omega),\mathcal{L}u=0\right\}, \tag{1.2}\]
where \(\nu\) is an outer unit normal to \(\partial\Omega\).
To the best of our knowledge, there are few prior works investigating inverse problems for lower order perturbation of polyharmonic operator (especially for \(m\geq 3\)) in two dimensions compared to higher dimensions. For \(m=2\), in [11] Ikehata proved a local uniqueness of a potential \(V\in L^{2}(\Omega)\) associated to the Calderon problem for \((\Delta^{2}+V)u=0\) with \(\|V\|_{L^{2}}\) small. In [11], the same author studied a local uniqueness theorem for the Calderon problem for a perturbed biharmonic operator related to Love-Kirchhoff plate theory. In [11], the author studied the relationship between two D-N maps in the linear theory
of elasticity in two dimensions and the linearized Calderon problem for anisotropic fourth order equation. In [15, 16], the authors have studied Navier-Stokes equation in two dimensions using the biharmonic operator.
In dimensions \(n\geq 3\), Krupchyk, Lassas, and Uhlmann [13] established that the Cauchy data for a polyharmonic operator uniquely determines first order perturbations. This work was extended by many authors; see, for instance, [11, 12, 13, 14]. Till now, the perturbations considered for the polyharmonic operator are of order at most \(m\) in \(n\geq 3\). Moreover, in [18], the authors considered the linearized Calderon inverse problem for polyharmonic operator and recovered several lower order coefficients up to order \(2m-1\) in \(n\geq 3\).
In this paper, we establish that the Cauchy data for a polyharmonic operator in two dimensions uniquely determines all anisotropic perturbations of order at most \(m-1\) and several perturbations of orders \(m\) to \(2m-2\) with some restrictions. This restriction is captured in the following representation of the operator \(\mathcal{L}\) as
\[(\partial\bar{\partial})^{m}+A_{m-1,m-1}(\partial\bar{\partial})^{m-1}+\sum_{ l=1}^{m-2}\left(\sum_{j+k=m-l-1}A_{j+l,k+l}\partial^{j}\bar{\partial}^{k} \right)(\partial\bar{\partial})^{l}+\sum_{l=0}^{m-1}\sum_{j+k=l}A_{j,k} \partial^{j}\bar{\partial}^{k}.\]
The constraint on the coefficients of orders \(m\) to \(2m-2\) is required for the techniques employed in this paper to work, mainly, to make the equation for the amplitude of complex geometric optics (CGO) solutions to be independent of the coefficients.
An early study of inverse boundary value problems for second-order operators was carried out by Calderon [13]. Sylvester and Uhlmann [14] obtained the first uniqueness result for the Calderon problem. In two dimensions, the Calderon problem was studied by Nachman [12], Brown-Uhlmann [1], and finally Astala-Päivärinta [1]. Nachman required two derivatives to convert the conductivity equation into the Schrodinger equation. The paper of Astala and Päivärinta solved Calderon's problem most generally for \(L^{\infty}\) conductivity.
Our approach relies on two main techniques - the \(\bar{\partial}\)-techniques and the method of stationary phase. These techniques were first used by Bukhgeim in his seminal work [1] to recover the zeroth order perturbation of the Laplacian in two dimensions that has led
to many developments in the study of two-dimensional inverse boundary value problems. However, his proof only gives uniqueness for potentials in the class \(W^{1,p}\), \(p>2\) as pointed out in Blasten's licentiate thesis [1]. The corresponding partial data problem was considered by Imanuvilov, Uhlmann and Yamamoto in [10] and Guillarmou and Tzou in [11]. The inverse problem for magnetic Schrodinger equation in \(n=2\) using Bukhgeim approach was studied by Lai in [14] and for partial data by Imanuvilov, Uhlmann and Yamamoto in [10] and by Tzou in [11]. We refer the interested reader to the survey article [11] for a detailed overview of results and techniques for inverse problems in two dimensions.
Now we state the main theorem of this paper.
**Theorem 1.1**.: Let \(\Omega\) be a bounded domain with smooth boundary in \(\mathbb{R}^{2}\). Let \(\mathcal{L}\) and \(\tilde{\mathcal{L}}\) be two operators of the form (1.1) with coefficients \(A_{j,k},\tilde{A}_{j,k}\in W^{j+k+1,p}(\Omega),\ p>2\), respectively. Assume that
\[\partial_{\nu}^{l}A_{j,k}=\partial_{\nu}^{l}\tilde{A}_{j,k}\text{ and }A_{0,k}= \tilde{A}_{0,k}\text{ on }\partial\Omega,\quad\text{ for }0\leq l\leq j-1,\ 0\leq j,k\leq m-1. \tag{1.3}\]
Then \(\mathcal{C}(\mathcal{L})=\mathcal{C}(\tilde{\mathcal{L}})\) implies that \(A_{j,k}=\tilde{A}_{j,k}\) on \(\Omega\) for \(0\leq j,k\leq m-1\).
We require the condition (1.3) on the coefficients to make the boundary terms _zero_ when we apply integration by parts on the integral identity and also to apply the method of stationary phase as explained in Section 3.
Alternatively, one can consider a Cauchy data set of the form
\[\mathcal{N}(\mathcal{L})=\left\{\big{(}u|_{\partial\Omega},\ldots,(-\Delta)^{ m-1}u|_{\partial\Omega},(\partial_{\nu}u)|_{\partial\Omega},\cdots\partial_{\nu}(- \Delta)^{m-1}u|_{\partial\Omega}\big{)}:u\in H^{2m}(\Omega),\mathcal{L}u=0 \right\},\]
as \(\mathcal{C}(\mathcal{L})\) can be obtained from \(\mathcal{N}(\mathcal{L})\) by an explicit description for the Laplacian in the boundary normal coordinates, see [13]. We have the following corollary.
**Corollary 1.2**.: With the hypothesis as in Theorem 1.1, \(\mathcal{N}(\mathcal{L})=\mathcal{N}(\tilde{\mathcal{L}})\) implies that \(A_{j,k}=\tilde{A}_{j,k}\) on \(\Omega\) for \(0\leq j,k\leq m-1\).
To prove Theorem 1.1, we need to construct so-called complex geometric optics (CGO) solutions. The next theorem gives the existence of such solutions in our setting. Let us fix some notation before stating the theorem. Let \(\Phi=i(z-z_{0})^{2}\) where \(z_{0}\in\Omega\) and \(dS\) be the surface measure on \(\partial\Omega\).
**Theorem 1.3**.: Let \(a\) be a smooth function such that \(\bar{\partial}^{m}a=0\) in \(\Omega\). If \(A_{j,k}\in W^{j+k+1,p}(\Omega)\) for some \(p>2\), then for all small \(h>0\) there exist solutions \(u\in H^{2m}(\Omega)\) to
\[\mathcal{L}u=\partial^{m}\bar{\partial}^{m}u+\sum_{j,k=0}^{m-1}A_{j,k} \partial^{j}\bar{\partial}^{k}u=0,\quad\text{ in }\Omega, \tag{1.4}\]
of the form
\[u=e^{\Phi/h}(a+r_{h}), \tag{1.5}\]
where the correction term \(r_{h}\) satisfies \(\|r_{h}\|_{H^{m}(\Omega)}=O(h^{\frac{1}{2}+\epsilon})\) for some \(\epsilon>0\).
## 2. CGO Solutions
In this section, we prove Theorem 1.3 as stated in the Introduction. We begin by writing
\[\mathcal{L}u=\partial^{m}\bar{\partial}^{m}u+\sum_{j,k=0}^{m-1}A_{j,k} \partial^{j}\bar{\partial}^{k}u=0\]
in the following form
\[\mathcal{L}u=\partial^{m}\bar{\partial}^{m}u+\sum_{j,k=0}^{m-1} \partial^{j}(A^{\prime}_{j,k}\bar{\partial}^{k}u)=0, \tag{2.1}\]
where we can define \(A^{\prime}_{j,k}\in W^{j+k+1,p}(\Omega)\) uniquely satisfying
\[A_{j,k}=\sum_{l=j}^{m-1}\binom{l}{j}\partial^{l-j}A^{\prime}_{l,k}. \tag{2.2}\]
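The identity (2.2) is just a regrouping via the Leibniz rule: for each fixed \(k\),
\[\sum_{l=0}^{m-1}\partial^{l}\left(A^{\prime}_{l,k}\bar{\partial}^{k}u\right)=\sum_{l=0}^{m-1}\sum_{j=0}^{l}\binom{l}{j}\left(\partial^{l-j}A^{\prime}_{l,k}\right)\partial^{j}\bar{\partial}^{k}u=\sum_{j=0}^{m-1}\Bigl(\sum_{l=j}^{m-1}\binom{l}{j}\partial^{l-j}A^{\prime}_{l,k}\Bigr)\partial^{j}\bar{\partial}^{k}u,\]
and comparing with \(\sum_{j=0}^{m-1}A_{j,k}\partial^{j}\bar{\partial}^{k}u\) gives (2.2). In particular \(A^{\prime}_{m-1,k}=A_{m-1,k}\), and the remaining \(A^{\prime}_{j,k}\) are determined recursively in decreasing order of \(j\).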
Substituting \(u=e^{\Phi/h}f\) in (2.1), we have
\[\partial^{m}\left(e^{\Phi/h}\bar{\partial}^{m}f\right)+\sum_{j,k=0}^{m-1} \partial^{j}\left(e^{\Phi/h}A^{\prime}_{j,k}\bar{\partial}^{k}f\right)=0.\]
Now, we write \(G=e^{\Phi/h}\bar{\partial}^{m}f\) and the above expression takes the form
\[\begin{split}\bar{\partial}^{m}f&=e^{-\Phi/h}G\\ \partial^{m}G&=-\sum_{j,k=0}^{m-1}\partial^{j}\left( e^{\Phi/h}A^{\prime}_{j,k}\bar{\partial}^{k}f\right)\end{split} \tag{2.3}\]
The problem here is that \(|e^{\pm\Phi/h}|\) grows too fast when \(h\to 0\). This can be solved by choosing \(G=e^{\bar{\Phi}/h}g\) to get
\[\bar{\partial}^{m}f=e^{(\bar{\Phi}-\Phi)/h}g \tag{2.4}\]
\[\partial^{m}g=-\sum_{j,k=0}^{m-1}\partial^{j}\left(e^{(\Phi-\bar{\Phi})/h}A^{ \prime}_{j,k}\bar{\partial}^{k}f\right) \tag{2.5}\]
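The gain in (2.4)–(2.5) is that the conjugated weights are now unimodular: since \(\Phi-\bar{\Phi}=2i\operatorname{Im}\Phi\) is purely imaginary,
\[\bigl|e^{\pm(\Phi-\bar{\Phi})/h}\bigr|=\bigl|e^{\pm 2i\operatorname{Im}\Phi/h}\bigr|=1\qquad\text{for every }h>0,\]
so, in contrast to \(e^{\pm\Phi/h}\), these factors remain bounded uniformly as \(h\to 0\).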
For (2.4), we take the solution
\[f=a+\bar{\partial}^{-m}\left(e^{(\bar{\Phi}-\Phi)/h}g\right),\quad\text{ where }\bar{\partial}^{m}a=0 \tag{2.6}\]
and for (2.5) we choose the solution
\[g=-\sum_{j,k=0}^{m-1}\partial^{j-m}\left(e^{(\Phi-\bar{\Phi})/h}A^{\prime}_{j, k}\bar{\partial}^{k}f\right). \tag{2.7}\]
By combining these two, we get an integral equation for \(g\) of the form
\[g+\sum_{j,k=0}^{m-1}\partial^{j-m}\left(e^{(\Phi-\bar{\Phi})/h}A^{\prime}_{j, k}\bar{\partial}^{k-m}\left(e^{(\bar{\Phi}-\Phi)/h}g\right)\right)=-\sum_{j,k=0}^ {m-1}\partial^{j-m}\left(e^{(\Phi-\bar{\Phi})/h}A^{\prime}_{j,k}\bar{\partial} ^{k}a\right)\!. \tag{2.8}\]
The above expression for \(g\) can be written in the form
\[(I-\mathcal{S}_{h})g=w, \tag{2.9}\]
where
\[\mathcal{S}_{h}(v) =-\sum_{j,k=0}^{m-1}\partial^{j-m}\left(e^{(\Phi-\bar{\Phi})/h}A^ {\prime}_{j,k}\bar{\partial}^{k-m}\left(e^{(\bar{\Phi}-\Phi)/h}v\right)\right) \tag{2.10}\] \[w =-\sum_{j,k=0}^{m-1}\partial^{j-m}\left(e^{(\Phi-\bar{\Phi})/h}A^ {\prime}_{j,k}\bar{\partial}^{k}a\right)\!.\]
The existence of a CGO solution of the form (1.5) to the equation (1.4) depends on the solvability of (2.9). To this end, we estimate the norm of \(\mathcal{S}_{h}\) for which we need the following crucial operator bound from [11, Lemma 2.3] and [11, Lemma 5.4].
**Lemma 2.1**.: Let \(q\in(1,\infty)\) and \(p>2\), then there exists \(C>0\) independent of \(h\) such that for all \(\omega\in W^{1,p}(\Omega)\)
\[\|\partial^{-1}(e^{(\Phi-\bar{\Phi})/h}\omega)\|_{L^{q}(\Omega)} \leq Ch^{2/3}\|\omega\|_{W^{1,p}(\Omega)} \text{ if }1<q<2,\] \[\|\partial^{-1}(e^{(\Phi-\bar{\Phi})/h}\omega)\|_{L^{q}(\Omega)} \leq Ch^{1/q}\|\omega\|_{W^{1,p}(\Omega)} \text{ if }2\leq q\leq p.\]
There exist \(\epsilon>0\) and \(C>0\) such that for all \(\omega\in W^{1,p}(\Omega)\)
\[\|\partial^{-1}(e^{(\Phi-\bar{\Phi})/h}\omega)\|_{L^{2}(\Omega)}\leq Ch^{\frac {1}{2}+\epsilon}\|\omega\|_{W^{1,p}(\Omega)}.\]
**Lemma 2.2**.: For any \(1<r\leq p\), the operator \(\mathcal{S}_{h}\) is bounded on \(L^{r}(\Omega)\) and satisfies \(\left\|\mathcal{S}_{h}\right\|_{L^{r}\to L^{r}}=O(h^{1/r})\) for \(r>2\) and \(\left\|\mathcal{S}_{h}\right\|_{L^{2}\to L^{2}}=O(h^{\frac{1}{2}-\epsilon})\) for any \(0<\epsilon<1/2\) small.
Proof.: Firstly, for \(2<r\leq p\), we obtain
\[\left\|S_{h}(v)\right\|_{L^{r}(\Omega)} \leq\sum_{j,k=0}^{m-1}\left\|\partial^{j-m}\left(e^{(\Phi-\bar{ \Phi})/h}A^{\prime}_{j,k}\bar{\partial}^{k-m}\left(e^{(\bar{\Phi}-\Phi)/h}v \right)\right)\right\|_{L^{r}(\Omega)}\] \[\leq C\sum_{j,k=0}^{m-1}\left\|\partial^{j-m+1}\left(e^{(\Phi- \bar{\Phi})/h}A^{\prime}_{j,k}\bar{\partial}^{k-m}\left(e^{(\bar{\Phi}-\Phi)/h }v\right)\right)\right\|_{L^{r}(\Omega)}\] \[\leq C\sum_{j,k=0}^{m-1}\left\|\partial^{-1}\left(e^{(\Phi-\bar {\Phi})/h}A^{\prime}_{j,k}\bar{\partial}^{k-m}\left(e^{(\bar{\Phi}-\Phi)/h}v \right)\right)\right\|_{L^{r}(\Omega)}\] \[\leq Ch^{\frac{1}{r}}\sum_{j,k=0}^{m-1}\left\|A^{\prime}_{j,k} \bar{\partial}^{k-m}\left(e^{(\bar{\Phi}-\Phi)/h}v\right)\right\|_{W^{1,r}( \Omega)}\] \[\leq Ch^{\frac{1}{r}}\sum_{k=0}^{m-1}\left\|\bar{\partial}^{k-m} \left(e^{(\bar{\Phi}-\Phi)/h}v\right)\right\|_{W^{1,r}(\Omega)}\] \[\leq Ch^{\frac{1}{r}}\|v\|_{L^{r}(\Omega)}.\]
Further, for \(1<r<2\),
\[\left\|S_{h}(v)\right\|_{L^{r}(\Omega)} \leq\sum_{j,k=0}^{m-1}\left\|\partial^{j-m}\left(e^{(\Phi-\bar{ \Phi})/h}A^{\prime}_{j,k}\bar{\partial}^{k-m}\left(e^{(\bar{\Phi}-\Phi)/h}v \right)\right)\right\|_{L^{r}(\Omega)}\] \[\leq C\sum_{j,k=0}^{m-1}\left\|\partial^{-1}\left(e^{(\Phi-\bar{ \Phi})/h}A^{\prime}_{j,k}\bar{\partial}^{k-m}\left(e^{(\bar{\Phi}-\Phi)/h}v \right)\right)\right\|_{L^{r}(\Omega)}\] \[\leq C\sum_{j,k=0}^{m-1}\left\|A^{\prime}_{j,k}\bar{\partial}^{k- m}\left(e^{(\bar{\Phi}-\Phi)/h}v\right)\right\|_{L^{r}(\Omega)}\] \[\leq C\sum_{k=0}^{m-1}\left\|\bar{\partial}^{k-m}\left(e^{(\bar{ \Phi}-\Phi)/h}v\right)\right\|_{L^{r}(\Omega)}\] \[\leq C\|v\|_{L^{r}(\Omega)}.\]
For all \(\varepsilon>0\) small, interpolating between \(r=1+\varepsilon\) and \(r=2+\varepsilon\) gives the desired result for \(r=2\).
**Proposition 2.3**.: For all sufficiently small \(h>0\), there exists a solution \(g\in H^{m}(\Omega)\) to the equation
\[(I-\mathcal{S}_{h})g=w,\]
where \(\mathcal{S}_{h}\) and \(w\) are defined in (2.10), and which satisfies \(\|g\|_{L^{2}}=O(h^{\frac{1}{2}+\epsilon}).\)
Proof.: In view of Lemma 2.2, equation (2.9) can be solved by a Neumann series: for small \(h>0\) we set
\[g=\sum_{j=0}^{\infty}\mathcal{S}_{h}^{j}w\]
as an element of \(L^{2}(\Omega)\). Indeed, \(\|w\|_{L^{2}(\Omega)}=O(h^{\frac{1}{2}+\epsilon})\) by Lemma 2.1 and \(\|\mathcal{S}_{h}\|_{L^{2}\to L^{2}}=O(h^{\frac{1}{2}-\epsilon})\) by Lemma 2.2, so we obtain \(\left\|\mathcal{S}_{h}^{j}w\right\|_{L^{2}}=O\left(h^{(\frac{1}{2}-\epsilon)j}h^{\frac{1}{2}+\epsilon}\right)\), which implies \(\|g\|_{L^{2}(\Omega)}=O(h^{\frac{1}{2}+\epsilon}).\)
Now that \(g\in L^{2}(\Omega)\) and \(A^{\prime}_{j,k}\in W^{j+k+1,p}(\Omega)\), we use a bootstrapping argument on the expression (2.8) to conclude that \(g\in H^{m}(\Omega)\).
Proof of Theorem 1.3.: Choose \(g\) as in Proposition 2.3 and let
\[r_{h}=\bar{\partial}^{-m}\left(e^{(\bar{\Phi}-\Phi)/h}g\right)\]
as observed in (2.6). Clearly, \(r_{h}\in H^{2m}(\Omega)\) and \(\|r_{h}\|_{H^{m}(\Omega)}=O(h^{\frac{1}{2}+\varepsilon})\). Then we see that \(u=e^{\Phi/h}(a+r_{h})\in H^{2m}(\Omega)\) where \(\bar{\partial}^{m}a=0\) solves \(\mathcal{L}u=0.\) This proves Theorem 1.3.
The adjoint operator \(\mathcal{L}^{*}\) has a form similar to that of the operator \(\mathcal{L}\). Hence, by following similar arguments, we can show that the adjoint equation admits the same type of CGO solutions as in Theorem 1.3.
**Remark 2.4**.: One can define an integral equation for \(f\) instead of \(g\) by substituting (2.7) in (2.6) and following the above procedure one can obtain that \(\|r_{h}\|_{H^{m}(\Omega)}=O(h^{\frac{1}{2}-\varepsilon}).\)
**Remark 2.5**.: From the techniques used in the proof of Theorem 1.3 one can even construct CGO solutions to the equation
\[\mathcal{P}u=\mathcal{L}u+\sum_{j=0}^{m-1}A_{j,m}\partial^{j}\bar{\partial}^{ m}u=\partial^{m}\bar{\partial}^{m}u+\sum_{j,k=0}^{m-1}A_{j,k}\partial^{j}\bar{ \partial}^{k}u+\sum_{j=0}^{m-1}A_{j,m}\partial^{j}\bar{\partial}^{m}u=0\]
of the form (1.5). But the difficulty lies in the construction of CGO solutions to the adjoint \(\mathcal{P}^{*}\), since the transport equation defining the amplitude of the CGO solution to \(\mathcal{P}^{*}v=0\) will depend on the coefficients \(A_{j,m}\).
## 3. Proof of uniqueness of coefficients
In this section, we prove Theorem 1.1. We start by deriving an integral identity. Let \(u_{1},v\in H^{2m}(\Omega)\) be such that
\[\mathcal{L}u_{1}=0,\quad\tilde{\mathcal{L}}^{*}v=0. \tag{3.1}\]
By assuming \(\mathcal{C}(\mathcal{L})=\mathcal{C}(\tilde{\mathcal{L}})\) there exists \(u_{2}\in H^{2m}(\Omega)\) satisfying
\[\left.\begin{aligned} \tilde{\mathcal{L}}u_{2}& =0\\ u_{2}|_{\partial\Omega}&=u_{1}|_{\partial\Omega},\\ (\partial_{\nu}u_{2})|_{\partial\Omega}&=(\partial_{ \nu}u_{1})|_{\partial\Omega},\\ \vdots&\vdots\\ (\partial_{\nu}^{(2m-1)}u_{2})|_{\partial\Omega}&=( \partial_{\nu}^{(2m-1)}u_{1})|_{\partial\Omega}.\end{aligned}\right\} \tag{3.2}\]
Note that
\[\tilde{\mathcal{L}}(u_{1}-u_{2})=\sum_{j,k=0}^{m-1}(\tilde{A}_{j,k}-A_{j,k}) \partial^{j}\bar{\partial}^{k}u_{1}.\]
Now we use integration by parts and (3.2) to obtain the following integral identity
\[0 =\int_{\Omega}\left(u_{1}-u_{2}\right)\overline{\tilde{\mathcal{L }}^{*}v}\,dx\] \[=\int_{\Omega}\tilde{\mathcal{L}}(u_{1}-u_{2})\bar{v}\,dx\] \[=\int_{\Omega}\left[\sum_{j,k=0}^{m-1}(\tilde{A}_{j,k}-A_{j,k}) \partial^{j}\bar{\partial}^{k}u_{1}\right]\bar{v}\,dx.\]
By our assumption (1.3), we get the following integral identity
\[\sum_{j,k=0}^{m-1}\left((-1)^{j}\int_{\Omega}\left(\tilde{A}^{\prime}_{j,k}-{A ^{\prime}}_{j,k}\right)\bar{\partial}^{k}u_{1}\partial^{j}\bar{v}\,dx\right)=0. \tag{3.3}\]
where \(\mathcal{L}(u_{1})=0\) and \(\tilde{\mathcal{L}}^{*}(v)=0\).
By using Theorem 1.3 we consider \(u_{1}\) and \(v\) of the form
\[\begin{split} u_{1}&=e^{\Phi/h}(a+r_{h}),\text{ where }\ \bar{\partial}^{m}a=0,\\ v&=e^{-\Phi/h}(b+s_{h}),\text{ where }\ \bar{\partial}^{m }b=0,\end{split} \tag{3.4}\]
with \(r_{h}\) and \(s_{h}\) satisfy \(\|r_{h}\|_{H^{m}(\Omega)}=O(h^{\frac{1}{2}+\epsilon})\) and \(\|s_{h}\|_{H^{m}(\Omega)}=O(h^{\frac{1}{2}+\epsilon})\) for some \(\epsilon>0\).
By using \(u_{1}\) and \(v\), the integral identity takes the form
\[0=\sum_{j,k=0}^{m-1}(-1)^{j}\int_{\Omega}\left[e^{(\Phi-\bar{ \Phi})/h}(\tilde{A}^{\prime}_{j,k}-A^{\prime}_{j,k})\bar{\partial}^{k}a\partial ^{j}\bar{b}\right]\\ +\sum_{j,k=0}^{m-1}(-1)^{j}\int_{\Omega}\left[e^{(\Phi-\bar{\Phi} )/h}(\tilde{A}^{\prime}_{j,k}-A^{\prime}_{j,k})(\bar{\partial}^{k}a\partial^{ j}\bar{s}_{h}+\bar{\partial}^{k}r_{h}\partial^{j}\bar{b}+\bar{\partial}^{k}r_{h} \partial^{j}\bar{s}_{h})\right]\]
By using the assumption (1.3) and the method of stationary phase, and writing \(\psi=\operatorname{Im}\Phi\) so that \(e^{(\Phi-\bar{\Phi})/h}=e^{2i\psi/h}\), one can obtain that
\[\sum_{j,k=0}^{m-1}\left((-1)^{j}\int_{\Omega}e^{2i\psi/h}(\tilde{ A}^{\prime}_{j,k}-A^{\prime}_{j,k})(\bar{\partial}^{k}a)(\partial^{j}\bar{b})\right)\\ =\sum_{j,k=0}^{m-1}C_{j,k}(z_{0})he^{2i\psi(z_{0})/h}(\tilde{A}^{ \prime}_{j,k}(z_{0})-A^{\prime}_{j,k}(z_{0}))\bar{\partial}^{k}a(z_{0}) \partial^{j}\bar{b}(z_{0})+o(h). \tag{3.5}\]
where \(C_{j,k}(z_{0})\neq 0\) for all \(0\leq j,k\leq m-1\).
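The phase appearing here can be computed explicitly: writing \(z-z_{0}=x+iy\),
\[\psi(z)=\operatorname{Im}\Phi(z)=\operatorname{Im}\bigl(i(z-z_{0})^{2}\bigr)=\operatorname{Re}\bigl((z-z_{0})^{2}\bigr)=x^{2}-y^{2},\]
so \(\nabla\psi=(2x,-2y)\) vanishes only at \(z_{0}\), where the Hessian is \(\operatorname{diag}(2,-2)\) and hence non-degenerate. This unique non-degenerate critical point is what produces the factor \(h\) and the point evaluation at \(z_{0}\) in (3.5).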
Next, we use the fact that \(\|r_{h}\|_{H^{m}(\Omega)}=O(h^{\frac{1}{2}+\epsilon})\), \(\|s_{h}\|_{H^{m}(\Omega)}=O(h^{\frac{1}{2}+\epsilon})\), for some \(\epsilon>0\) and obtain the following estimate
\[\sum_{j,k=0}^{m-1}(-1)^{j}\int_{\Omega}\left[e^{(\Phi-\bar{\Phi})/h}(\tilde{A} ^{\prime}_{j,k}-A^{\prime}_{j,k})\bar{\partial}^{k}r_{h}\partial^{j}\bar{s}_{ h}\right]=O(h^{1+2\epsilon}). \tag{3.6}\]
Let \(\tilde{r}_{h}\) and \(\tilde{s}_{h}\) be such that
\[r_{h} =\bar{\partial}^{-m}\left(e^{(\bar{\Phi}-\Phi)/h}\tilde{r}_{h} \right),\] \[s_{h} =\bar{\partial}^{-m}\left(e^{(\bar{\Phi}-\Phi)/h}\tilde{s}_{h}\right)\]
and they satisfy the equation (2.8) in place of \(g\). Using this we have the following estimate
\[\sum_{j,k=0}^{m-1}(-1)^{j}\int_{\Omega}\left[e^{(\Phi-\bar{\Phi})/h} (\tilde{A}^{\prime}_{j,k}-A^{\prime}_{j,k})\bar{\partial}^{k}r_{h}\partial^{j} \bar{b}\right]\] \[\quad=\sum_{j,k=0}^{m-1}(-1)^{j}\int_{\Omega}\left[e^{(\Phi-\bar{ \Phi})/h}(\tilde{A}^{\prime}_{j,k}-A^{\prime}_{j,k})\bar{\partial}^{k-m}\left(e ^{(\bar{\Phi}-\Phi)/h}\tilde{r}_{h}\right)\partial^{j}\bar{b}\right]\] \[\quad=\sum_{j,k=0}^{m-1}(-1)^{j}\int_{\Omega}\left[\bar{\partial }^{k-m}\left(e^{(\Phi-\bar{\Phi})/h}(\tilde{A}^{\prime}_{j,k}-A^{\prime}_{j,k} )\partial^{j}\bar{b}\right)e^{(\bar{\Phi}-\Phi)/h}\tilde{r}_{h}\right]\] \[\quad\leq Ch^{\frac{1}{2}+\epsilon}\sum_{j,k=0}^{m-1}\left\|( \tilde{A}^{\prime}_{j,k}-A^{\prime}_{j,k})\partial^{j}\bar{b}\right\|_{W^{1,p }}\lVert\tilde{r}_{h}\rVert_{L^{2}},\]
where we have used Fubini's theorem in the third equality while the last inequality is obtained by applying Lemma 2.1. Now, we apply Proposition 2.3 to obtain
\[\sum_{j,k=0}^{m-1}(-1)^{j}\int_{\Omega}\left[e^{(\Phi-\bar{\Phi})/h}(\tilde{A} ^{\prime}_{j,k}-A^{\prime}_{j,k})\bar{\partial}^{k}r_{h}\partial^{j}\bar{b} \right]=O(h^{1+2\epsilon}). \tag{3.7}\]
Similarly, we obtain
\[\sum_{j,k=0}^{m-1}(-1)^{j}\int_{\Omega}\left[e^{(\Phi-\bar{\Phi})/h}(\tilde{A} ^{\prime}_{j,k}-A^{\prime}_{j,k})\bar{\partial}^{k}a\partial^{j}\bar{s}_{h} \right]=O(h^{1+2\epsilon}). \tag{3.8}\]
**Proof of Theorem 1.1.** Using the estimates (3.5) - (3.8) and matching the asymptotics as \(h\to 0\), we obtain
\[0=\sum_{j,k=0}^{m-1}(-1)^{j}(\tilde{A}^{\prime}_{j,k}(z_{0})-A^{\prime}_{j,k} (z_{0}))\bar{\partial}^{k}a(z_{0})\partial^{j}\bar{b}(z_{0}). \tag{3.9}\]
We now show that \(A^{\prime}_{0,0}=\tilde{A}^{\prime}_{0,0}\). To this end, let us choose \(a=b=1\). With this choice we obtain
\[\tilde{A}^{\prime}_{0,0}(z_{0})=A^{\prime}_{0,0}(z_{0}).\]
Since for any \(z_{0}\in\Omega\) we can choose \(\Phi\) with a unique critical point at \(z_{0}\), we have
\[\tilde{A}^{\prime}_{0,0}=A^{\prime}_{0,0}\quad\text{in }\Omega.\]
Next to show that \(A^{\prime}_{0,1}=\tilde{A}^{\prime}_{0,1}\), we rewrite (3.9) by setting the term \(\tilde{A}^{\prime}_{0,0}-A^{\prime}_{0,0}=0.\) Then, we choose \(a=\bar{z},b=1\) to obtain
\[A^{\prime}_{0,1}=\tilde{A}^{\prime}_{0,1},\quad\text{in }\Omega.\]
Similarly, one can show \(A^{\prime}_{j,k}=\tilde{A}^{\prime}_{j,k}\) by induction on \(j+k\), choosing
\[a=\frac{\bar{z}^{k}}{k!}\ \ \text{and}\ b=\frac{\bar{z}^{j}}{j!}.\]
With this choice, \(\bar{\partial}^{k}a(z_{0})=\partial^{j}\bar{b}(z_{0})=1\), while every other non-vanishing product \(\bar{\partial}^{k^{\prime}}a(z_{0})\,\partial^{j^{\prime}}\bar{b}(z_{0})\) in (3.9) corresponds to indices with \(j^{\prime}+k^{\prime}<j+k\), whose coefficient differences have already been shown to vanish. Applying the above procedure we therefore obtain
\[\tilde{A}^{\prime}_{j,k}=A^{\prime}_{j,k}\ \text{in}\ \Omega\quad\text{for all}\ \ 0\leq j,k\leq m-1.\]
From (2.2), we readily obtain that
\[\tilde{A}_{j,k}=A_{j,k}\ \text{in}\ \Omega\quad\text{for all}\ \ 0\leq j,k\leq m-1.\]
This proves Theorem 1.1.
## Acknowledgements
VPK would like to thank the Isaac Newton Institute for Mathematical Sciences, Cambridge, UK, for support and hospitality during _Rich and Nonlinear Tomography - a multidisciplinary approach_ in 2023 where part of this work was done (supported by EPSRC Grant Number EP/R014604/1). The authors thank Manas Kar for suggesting this problem and Masaru Ikehata for pointing out the references [11, 12, 13].
|
2310.18315 | Systematic Analysis of COVID-19 Ontologies | This comprehensive study conducts an in-depth analysis of existing COVID-19
ontologies, scrutinizing their objectives, classifications, design
methodologies, and domain focal points. The study is conducted through a
dual-stage approach, commencing with a systematic review of relevant literature
and followed by an ontological assessment utilizing a parametric methodology.
Through this meticulous process, twenty-four COVID-19 Ontologies (CovOs) are
selected and examined. The findings highlight the scope, intended purpose,
granularity of ontology, modularity, formalism, vocabulary reuse, and extent of
domain coverage. The analysis reveals varying levels of formality in ontology
development, a prevalent preference for utilizing OWL as the representational
language, and diverse approaches to constructing class hierarchies within the
models. Noteworthy is the recurrent reuse of ontologies like OBO models (CIDO,
GO, etc.) alongside CODO. The METHONTOLOGY approach emerges as a favored design
methodology, often coupled with application-based or data-centric evaluation
methods. Our study provides valuable insights for the scientific community and
COVID-19 ontology developers, supplemented by comprehensive ontology metrics.
By meticulously evaluating and documenting COVID-19 information-driven
ontological models, this research offers a comparative cross-domain
perspective, shedding light on knowledge representation variations. The present
study significantly enhances understanding of CovOs, serving as a consolidated
resource for comparative analysis and future development, while also
pinpointing research gaps and domain emphases, thereby guiding the trajectory
of future ontological advancements. | Debanjali Bain, Biswanath Dutta | 2023-09-15T18:17:01Z | http://arxiv.org/abs/2310.18315v1 | Bain, D. & Dutta, B (2023). Systematic Analysis of COVID-19 Ontologies. In: Garoufallou, Emmanuel and Vlachidis, Andreas (eds), _Metadata and Semantic Research (MTSR 2023): 17th International Conference on Metadata and Semantics Research_, Milan, Italy, October 23-27, 2023. Proceedings. Communications in Computer and Information Science (CCIS), Vol. XXX, pp. XXX. Cham (Switzerland): Springer Nature. (Accepted).
###### Abstract
This comprehensive study conducts an in-depth analysis of existing COVID-19 ontologies, scrutinizing their objectives, classifications, design methodologies, and domain focal points. The study is conducted through a dual-stage approach, commencing with a systematic review of relevant literature and followed by an ontological assessment utilizing a parametric methodology. Through this meticulous process, twenty-four COVID-19 Ontologies (CovOs) are selected and examined. The findings highlight the scope, intended purpose, granularity of ontology, modularity, formalism, vocabulary reuse, and extent of domain coverage. The analysis reveals varying levels of formality in ontology development, a prevalent preference for utilizing OWL as the representational language, and diverse approaches to constructing class hierarchies within the models. Noteworthy is the recurrent reuse of ontologies like OBO models (CIDO, GO, etc.) alongside CODO. The METHONTOLOGY approach emerges as a favored design methodology, often coupled with application-based or data-centric evaluation methods. Our study provides valuable insights for the scientific community and COVID-19 ontology developers, supplemented by comprehensive ontology metrics. By meticulously evaluating and documenting COVID-19 information-driven ontological models, this research offers a comparative cross-domain perspective, shedding light on knowledge representation variations. The present study significantly enhances understanding of CovOs, serving as a consolidated resource for comparative analysis and future development, while also pinpointing research gaps and domain emphases, thereby guiding the trajectory of future ontological advancements.
Keywords: COVID-19 ontologies, systematic literature review, ontological review, domain analysis, comparative study, ontology-driven models, knowledge representation.
## 1 Introduction
The emergence of the COVID-19 pandemic in December 2019 marked a pivotal and unprecedented moment in contemporary history, originating in Wuhan, China, and its rapid and relentless global spread resulting in significant loss of life, with the World Health Organization (WHO) reporting 769 million confirmed cases and 6.95 million documented deaths as of August 9, 2023 [1]. In a concerted response to this devastating crisis, a global immunization effort has administered an incredible number of 13.49 billion doses of vaccine by August 5, 2023, underscoring the collective resolve to mitigate the impact of the pandemic. As humanity grapples with this multi-faceted challenge, researchers from various disciplines and domains have come together to address various dimensions of the pandemic. This collaborative effort has led to an unprecedented influx of data and datasets, curated by governmental, non-governmental, and individual entities, underscoring an acute and urgent need for effective information management systems that can leverage and make sense of this vast amount of information. In this rapidly changing landscape, ontology-based systems have emerged as a beacon of hope, characterized by their semantic models and sophisticated data processing tools that promise to seamlessly integrate, analyze and visualize the complex web of data related to COVID-19. By providing the ability to glean insights from complex datasets, these systems provide a strategic advantage in decision-making processes, resource allocation, and disease prevention strategies. The potential of ontology-based systems to revolutionize the approach to pandemic management is undeniable. Against this backdrop, the present study embarks on a systematic exploration and
evaluation of COVID-19 ontologies that have been proposed by the research community to address the multifaceted and ever-evolving challenges posed by the COVID-19 pandemic. Recognizing the vital role of ontologies as structured models capable of effectively representing information needs and formalizing complex processes, particularly in the field of medical knowledge representation and data sharing [2], this study seeks to unravel the intricacies of these ontology and their potential to reshape our understanding of the pandemic. The growing popularity of ontology-driven systems, as evidenced by their exponential growth in research [3], further highlights their critical importance in addressing the complex and multifaceted challenges that the COVID-19 pandemic continues to present. Building on notable existing studies, this study aspires to fill an important gap in the literature by undertaking a comprehensive comparative analysis that delves into key parameters such as scope, intended purpose, ontology granularity, modularity, formalism, vocabulary reuse, and domain coverage (discussed further in section 2 and section 3). In doing so, this study aims to provide a consolidated and indispensable resource for researchers, practitioners, and ontology developers who are actively engaged in the pursuit of effective ontology-driven COVID-19 information representation.
The main objective of this study is multifaceted. This involves the rigorous identification and comprehensive analysis of existing COVID-19 ontologies (CovOs) in the vast expanse of scientific literature. This comprehensive undertaking involves a detailed examination of their unique attributes, diverse design methodologies, and specific scope, complemented by an incisive examination of ontology granularity and coverage in the complex COVID-19 domain. Through these multi-faceted goals, this research strives to provide valuable insights that transcend disciplinary boundaries, serving the needs of the scientific community, ontology developers, and decision-makers. The contributions of this study are both profound and impactful. Beyond the extensive review of the CovOs literature, based on a concise list of parameters derived from [3], this study introduces a new parameter, "Ontology Coverage/Domain", which further enriches the structured analysis of the selected ontology. Furthermore, this study sheds light on potential research gaps in the area of COVID-19, identifying areas where additional ontologies may be needed to comprehensively address the complex landscape of the pandemic. This critical insight is poised to guide future research efforts, fostering a more holistic and informed approach to the development and deployment of ontology-based systems. The rest of the paper is organized as follows: section 2 discusses the related works in the literature and section 3 describes the methodology formulated for CovOs analysis. Section 4 unveils the findings of the study. Section 5 engages in a discourse on the discussion and limitations of the current study. Finally, section 5 provides a conclusive summary of the current study.
## 2 Related Work
Ontology, as discussed above, represents an explicit specification of a shared conceptualization. While fewer studies have formally reviewed ontologies created for capturing and reasoning COVID-19 information, it is imperative to conduct a comprehensive examination of existing works. This section delves into some of the prominent studies that have analyzed and assessed ontologies in this context.
Gao and Wang (2023) contribute an article [4] that delves into epidemic management data models from an ontological perspective, with a focus on enhancing data interoperability. The study evaluates and synthesizes various pertinent vocabularies and ontologies, including EPO [5], GeMInA [6], CIDO [7], IDO [8], CODO [9], COVIDCRFRAPID [10], and OPM [11]. Facets such as disease, person, organism, epidemiology, organization, medical personnel, medical activity, medical resource, infection transmission, statistics, and city are identified across these ontologies. While these existing models capture crucial aspects of epidemic scenarios, the study reveals persisting gaps, accentuating the necessity for further refinement and expansion. Bayoudhia et al. (2021) present an article [12] that offers an overview of biomedical ontologies for representing pandemics and infectious diseases. The authors emphasize ontologies' pivotal role in capturing and sharing knowledge related to various disease aspects, epidemiology, clinical features, and biology. The paper reviews ontologies developed for specific diseases, including malaria (IDOMAL) [13], dengue fever (IDODEN) [14], schistosomiasisomiasis (IDOSCHISTO) [15], COVID-19 Ontology [16], COVID-19 Surveillance Ontology [17], CODO, and CIDO. These ontologies leverage existing resources such as SNOMED CT, FOAF, and other ontologies like IDO Core and CheBEL. The authors discuss the use of tools like Protege [18], DL reasoning, and SPARQL [19] queries for ontology development, evaluation, and utilization to advance disease understanding, control, and treatment endeavors. Ahmad et al. (2021) contribute an in-depth exploration of ontologies and tool support in the realm of COVID-19 analytics [20].
The study addresses challenges associated with the pandemic and emphasizes ontology-based solutions. Notable ontologies discussed encompass CODO, CIDO, IDO, and COVID-19 surveillance ontology. Yousefianzadeh et al. (2020) delve into the role of ontologies in managing the influx of COVID-19-related data in the medical domain [21]. The focus centers on ontologies stored in the BioPortal database, encompassing COVID-19-related ontologies such as CIDO, COVID-19 Ontology, IDO-COVID-19 [22], COVID-19 Surveillance Ontology, and CODO. These ontologies function as semantic tools to standardize and represent intricate and heterogeneous textual data linked to COVID-19. The article highlights the pivotal role of ontologies in supporting healthcare decision-making, data analysis, and knowledge dissemination during the COVID-19 pandemic. In the landscape of ontology evaluation, the notable studies mentioned above have yielded valuable insights, but a significant gap persists. As the field of COVID-19 research continues to rapidly evolve, a thorough study that encompasses a substantial amount of existing ontologies suitable for representing COVID-19 data remains conspicuously lacking. This gap is further exacerbated by the lack of comprehensive comparative analyzes that systematically explore critical parameters, including scope, intended purpose, ontology granularity, modularity, levels of formalism, vocabulary reuse, and extent of domain coverage. This obvious void in current scientific discourse underscores the fundamental motivation driving current research: to effectively fill this gap by performing systematic analysis, and facilitating a comprehensive comparison of CovOs. In undertaking this endeavor, our research aims to address the pressing need for a holistic and insightful assessment of the dynamic and evolving landscape of ontology-driven systems in the context of the COVID-19 pandemic. Table 1 provides a concise overview of the related studies.
## 3 Methodology: CovOs Review and Analysis
The methodology employed for the review and analysis of CovOs is structured and systematic, drawing inspiration from [23] and tailored to the specific objectives of this study. The step-by-step approach is illustrated in Figure 1, followed by a comprehensive breakdown of each phase.
\begin{table}
\begin{tabular}{c c c c} \hline
**Study Authors** & **Main Focus** & **Ontologies/Vocabularies for Comparative Analysis** & **Key Contributions and Findings** \\ \hline Gao and Wang (2023) & Epidemic management data models & EPO, GeMInA, CIDO, IDO, CODO, COVIDCRFRAPID, OPM & Evaluated ontologies, identified recurring patterns, highlighted gaps. \\ Bayoudhia et al. (2021) & Biomedical ontologies for pandemics and infectious diseases & IDOMAL, IDODEN, IDOSCHISTO, COVID-19 Ontology, COVID-19 Surveillance Ontology, CODO, CIDO & Reviewed disease-specific ontologies, their reused resources, and supporting tools. \\ Ahmad et al. (2021) & Ontologies and tool support for COVID-19 analytics & CODO, CIDO, IDO, COVID-19 Surveillance Ontology & Discussed ontology-based solutions to pandemic challenges. \\ Yousefianzadeh et al. (2020) & COVID-19 ontologies in BioPortal & CIDO, COVID-19 Ontology, IDO-COVID-19, COVID-19 Surveillance Ontology, CODO & Highlighted ontologies' role in decision-making, data analysis, and knowledge dissemination. \\ \hline \end{tabular}
\end{table}
Table 1: Overview of the related studies.
## Phase I: Preliminary Investigation
_Step 1: Defining the Research Questions_: The foundation of the methodology is laid by formulating pertinent research questions aligned with the study's objectives. These questions encompass the identification and analysis of existing ontological models for COVID-19-related information representation, along with an exploration of knowledge representation variations across domains. The framed research questions (RQ1-RQ5) are as follows: RQ1: What are the existing ontology-based models for representing COVID-19-related information? RQ2: How is COVID-19-related information represented using ontologies? RQ3: What are the domain coverages of existing COVID-19 ontologies? RQ4: What are the knowledge representation formalism languages used for creating COVID-19 ontologies? RQ5: How are the COVID-19 ontologies modeled with a focus on granularity, scope, modularity, level of formalism, and vocabulary (re)use?
_Step 2: Generation of Search Terminology:_ Derived from the research questions, a set of pertinent search terms is constructed. These terms encapsulate the thematic essence of the research questions and aid in the subsequent literature search. The search terminologies (T1-T8) include: T1: COVID-19 ontology, T2: Coronavirus ontology, T3: COVID-19 knowledge representation, T4: COVID-19 model, T5: Epidemic ontology, T6: Infectious disease ontology, T7: Pandemic knowledge representation, T8: Disease ontology for COVID-19.
_Step 3: Development of Search String:_ Search strings are formulated using combinations of the derived terminology (Step 2). Example search strings include S1: "ontology-based model for COVID-19", S2: "COVID-19" AND "ontology", S3: "coronavirus information" AND "ontology", S4: "ontology model for COVID-19", S5: "epidemic ontology" AND "COVID-19", S6: "infectious disease ontology" AND "COVID-19", S7: "pandemic knowledge representation", AND "COVID-19", S8: "disease ontology" AND "COVID-19".
_Step 4: Selection of Information Repositories:_ Databases are chosen based on availability, reputation, and subject coverage. Databases include IEEE Xplore ([https://ieeexplore.ieee.org/Xplore](https://ieeexplore.ieee.org/Xplore)), Scopus ([https://www.scopus.com](https://www.scopus.com)). ScienceDirect ([https://www.sciencedirect.com/](https://www.sciencedirect.com/)), Taylor and Francis ([https://taylorandfrancis.com/](https://taylorandfrancis.com/)), and Google Scholar ([https://scholar.google.com/](https://scholar.google.com/)). Search strings are modified according to the databases.
## Phase II: Information Retrieval and Refinement
_Step 5: Query Execution and Retrieval:_ A comprehensive literature search is conducted using the formulated queries from Step 3 across the chosen information repositories. The search yields a pool of potential resources, which is subsequently refined, yielding 3437 documents. Duplicates are removed, resulting in 283 works.
_Step 6: Inclusion and Exclusion Criteria Application_: A systematic approach is employed to narrow down the resources based on inclusion and exclusion criteria, as detailed in Table 2. Criteria cover publication status, ontology description, and relevance to research questions, language, and function of ontology.
_Step 7: Literature Selection:_ Applying the inclusion-exclusion criteria narrows down the literature to 38 works.
**Phase III: Comprehensive Analysis and Model Selection**
_Step 8: In-depth Examination and Bibliographic Study_: The selected literature is reviewed and analyzed, including a thorough examination of their bibliographies. This step aids in identifying relevant works that contribute to the research context.
_Step 9: Model Availability and Final Selection:_ The focal point of this step is to assess the presence of CovOs in the selected literature. Works with available CovOs are prioritized for final inclusion, ensuring the suitability of the literature for subsequent comprehensive comparative analysis. Among the 38 works meeting inclusion criteria, 24 are selected based on CovOs availability and successful identification. This process entails extracting fundamental details (illustrated in Table 3) such as sponsored agencies, project names, ontology design patterns, utilized operations, illustrative classes, properties, and ontology metrics that characterize the ontologies. The summary of the chosen models is presented in Table 4, offering a guide for the upcoming analysis and discussions.
\begin{table}
\begin{tabular}{l l l} \hline \hline
**Categories** & **Inclusion Criteria** & **Exclusion Criteria** \\ \hline Publication status & Literature published in journals and conferences & Unpublished literature, non-journal sources \\ Description availability & Explicit ontology description & No ontology description \\ Relevance & Addresses research questions & Does not address research questions \\ Language & English language literature & Non-English language literature \\ Function of ontology & Models COVID-19 information & Non-COVID-19 information \\ \hline \hline \end{tabular}
\end{table}
Table 2: Overview of Inclusion and Exclusion Criteria
**Phase IV: Ontology Review and Comparative Analysis**
_Step 10: Parameter Selection and Categorization:_ We extracted key comparison parameters from [3], organized them into categories, and introduced an additional parameter, "Ontology coverage/domain", to enhance the structured analysis of selected ontological models. Our study omits certain parameters like the "Focused phase" mentioned in [3] while incorporating "Ontology coverage/domain" to pinpoint potential research gaps and emphasize specific domains needing further exploration in comparative CovOs analysis. An outline of the crucial parameters for the comparative analysis can be found in Table 5.
_Step 11: Detailed Review and Comparative Analysis:_ With the parameters defined (as outlined in Table 5), the subsequent phase involves a comprehensive evaluation of the selected ontologies. To facilitate this process, various spreadsheet tools can be utilized to organize the parameters systematically. In this study, data collection was executed using Microsoft Excel, and the collected information was structured within Table 6.
_Step 12: Findings and Interpretations:_ Upon the completion of parameter measurement and tabulation, the focus shifts to in-depth analysis and uncovering significant findings. The assessment of the CovOs is succeeded by a detailed exploration of the insights and outcomes, subsequently discussed in depth (as elaborated in sections 4 and 5), offering insights into the intricacies and implications of COVID-19 information representation.
\begin{table}
\begin{tabular}{l l l l l} \hline \hline
Model No. & Model [Ref. No.] & Acronym & Design Pattern; Operations Used & Ontology Metrics \\ \hline M1 & Infectious Disease Ontology [24, 25] & IDO & Modular; Integration, Extension & Classes: 362; OP: 43; DP: 0 \\ M2 & Virus Infectious Disease Ontology [26] & VIDO & Modular; Integration, Extension & Classes: 432; OP: 32; DP: 0 \\ M3 & International Classification of Diseases Ontology [27] & ICDO & Modular; Integration, Extension & Classes: 1313; OP: 233; DP: 1; Individuals: 4 \\ M4 & Coronavirus Infectious Disease Ontology [7] & CIDO & Modular; Integration & Classes: 8775; OP: 363; DP: 18; Individuals: 457 \\ M5 & CIDO-COVID-19: Ontology of COVID-19 [28] & CIDO-COVID-19 & Modular; Integration & Classes: 10386; OP: 375; DP: 25 \\ M6 & The COVID-19 Infectious Disease Ontology [8] & IDO-COVID-19 & Modular; Integration & Classes: 486; OP: 43; DP: 0; Individuals: 23 \\ M7 & COVID-19 Surveillance Ontology [17] & COVID19 & Non-modular; Integration & Classes: 32; OP: 0; DP: 0 \\ M8 & COVID-19 Ontology [16] & COVID-19 & Modular; Integration & OP: 9; Individuals: 0 \\ \hline \hline \end{tabular}
\end{table}
Table 3: Basic details of the selected CovOs (excerpt, models M1-M8).
## 4 Findings
In Table 6, we present an overview of the selected twenty-four CovOs (M1 to M24). The table demonstrates the varied purposes of CovOs (M1 to M24), spanning from deepening insights into virus-related domains and the biomedical facets of COVID-19, to supporting primary care surveillance, elucidating drug associations, facilitating data integration, and providing comprehensive data representation. These models aim to provide structured knowledge on infectious illnesses, expand existing ontologies, aid medical decision-making, standardize terminology, annotate literature, depict COVID-19 progression, characterize genetic aspects, and enable data publishing. By addressing different aspects of COVID-19 and related domains, these CovOs contribute to improved research, analysis, and decision support in the fight against the pandemic. The ontology types of the overall models vary based on their intended purposes and applications. For instance, models M1 to M3 serve as general ontologies, providing foundational concepts for infectious illnesses, virus-specific terms, and structured biomedical representations of disease classifications. Models M4 to M9 are domain ontologies, focusing on comprehensive biomedical representations of coronavirus infectious disease, drug associations, and patient data for improved medical insights. Models M10 and M11 are application ontologies, designed to structure patient data and analyze global COVID-19 responses, while M12 acts as an application ontology for biomedical literature annotation. Models M13 to M24 fall under the category of domain ontologies, capturing various aspects of the COVID-19 pandemic, such as regional knowledge, virus progression, genome characterization, and replication processes. These domain ontologies contribute to a comprehensive understanding of COVID-19 and support a range of research and practical applications. The ontology vocabulary reused across the different models provides a foundation for their knowledge representation and interoperability. M1 and M2 utilize OBO and DC. In M3, FOAF, OBO, BFO, OGMS, UBERON, and PATO contribute to structured biomedical representations. M4 and M5 reuse OBO models, FOAF, and SNOMED CT for enhanced COVID-19 understanding. M6 leverages OBO and DC, while for M7 we have not found any mentioned methodology. M8 integrates OBO, FOAF, and SKOS for enriched domain knowledge, and M9 adopts DC term and ATC for bibliographic resources. M10 involves DC, OBO models, and GEO-Ont. M11 benefits from CODO, SNOMED, FOAF, and SCHEMA. M12 incorporates
OBO models. M13 and M14 rely on CODO and CIDO, while M14 further integrates Basic Formal Ontology, and various OBO models. M15 utilizes Schema, OBO, CODO, FOAF, SCHEMA, and IBO. M16 and M17 integrate Schema, FOAF, CODO, SNOMED CT, and OBO. M18 involves CIDO, COVID-19 Ontology, and GO. M19 and M22 are unspecified (NF). M20 and M21 leverage SKOS for structured vocabularies. M23 adopts OBO, and M24 combines FOAF, ORD, SCHEMA, SNOMED CT, and OBO for comprehensive data representation. _The design methodologies_ employed across these models guide the systematic creation of ontologies, ensuring their effectiveness and relevance. Models M1, M2, M4, and M5 follow the OBO methodological principles, promoting consistency and compatibility within biological and biomedical ontologies. M3 adopts the eXtensible Ontology Development (XOD) strategy for flexible ontology creation, while M7 utilizes METHONTOLOGY to establish surveillance systems. M9 incorporates temporal information using the LOT approach. M11 employs the NeOn methodology for systematic ontology construction, and M13, M14, and M15 utilize METHONTOLOGY and Diligent approaches to enhance knowledge representation. M16, M17, and M24 continue with METHONTOLOGY and YAMO methods to facilitate comprehensive data analysis and publication. Some models are not specified (NF), and each methodology contributes to the ontologies' structure, usability, and relevance. _Class hierarchy development_ varies across models: M1-M6, M8, M9, M12, M14, M15 M18, M22, and M23 follow top-down; M7, M10, M13, M16, M17, M19-M21, and M24 adopt bottom-up; and M11 use combination approaches. _Representation language_ signifies the medium for depicting concepts and relationships, such as OWL and RDFS. This parameter highlights the prevalent language choice for representation. Our analysis indicates that OWL was predominantly used in the ontologies developed, as demonstrated in Table 6. _Level of Formality_ pertains to the extent of rigor in ontology development, such as Semi-formal for M1-M6, M13-M15 and Informal for M7-M12, M16-M23 models. M24 adopts a Formal approach, showcasing the diverse formalities employed in constructing the ontologies. The _Ontology Editor_ used varies across the models, with Protege being the predominant choice for M1-M23 and M24, highlighting its widespread adoption. M6 employs an unspecified editor, while M20 and M21 utilize the sheet2rdf GitHub workflow. _Ontology Evaluation methods_ vary across models. Evaluation serves as a foundation for designing new ontologies and enables updates when necessary. Various evaluation approaches exist, including comparing with golden standards, application-based assessment, data-based comparisons, expert or human evaluations, and task-based and criteria-based evaluations. Depending on factors like standard ontology availability, expertise, data, and application, relevant evaluation methods are chosen. M1 and M2 employ HerriT, Pellet reasoner, Mace4 model checker, and Prover9. M3 uses Hermit reasoner, user feedback, and application-based evaluation. M4 and M5 rely on HerriT, Pellet reasoner, Mace4 model checker, and Prover9. M6 applies application-specific validation, while M7 involves expert evaluation and external consensus exercise. M9 uses OOPS! and data-driven evaluation. M11 and M12 utilize SPARQL and Lucene Elasticsearch engine metadata parsing, respectively. M13 ensures consistency through Pellet reasoner and expert/non-expert validation. M14 employs Herrit27, ELK28, and SPARQL. 
M15 uses SPARQL queries and the OOPS! Pitfall Scanner. M16, M17, and M23 utilize SPARQL queries. M24 combines reasoner-based and SPARQL query evaluations. M8, M10, M18, M19, M20, M21, M22, and M23 are not found (NF). The _ontology coverage/domain_ defines the breadth of ontological representation across various disease-related dimensions. Among the 24 CovOs (M1-M24) and the identified 23 distinct domains (D1-D23), each model is associated with specific domain numbers based on its coverage. For instance, Models M1, M2, and M5 encompass domains such as etiology, epidemiology, pathogenesis, and virology. Models M6, M7, and M8 emphasize diagnosis, prevention, and therapy. Model M14 is tailored toward procedures and resources, while M15 and M16 delve into immunology and ethics, respectively. M17 explores demographics, race, and ethnicity, and M20 investigates the influence of weather. Models M21 and M22 pertain to laboratory tests and locations, while M23 is primarily centered on statistics and data analysis. Lastly, Model M24 spans domains including etiology, epidemiology, pathogenesis, virology, diagnosis, prevention, therapy, procedures, resources, immunology, ethics, demographics, race and ethnicity, weather influence, laboratory tests, locations, and statistics and data analysis.
This extensive range of disease-related domains encompassed by these ontologies fosters a comprehensive understanding and analysis of various aspects pertaining to disease and health.
## 5 Discussion and Limitations
### Comprehensive Exploration and Insights
This section presents a comprehensive exploration of the research questions posed in section 3 (RQ1-RQ5) in the context of the 24 CovOs (M1 to M24). Collectively, these models offer a panoramic insight into the landscape of ontology-based approaches for representing COVID-19-related information, showcasing the diversity in ontological representations during the pandemic (RQ1). The analysis delves into the methodologies, vocabularies, and representation languages employed to depict COVID-19-related information, providing a detailed understanding of how ontologies capture this intricate domain (RQ2). Moreover, the models reveal an expansive domain coverage, spanning various disease-related aspects such as etiology, epidemiology, virology, primary care surveillance, government responses, and regional contexts, effectively addressing a wide spectrum of knowledge domains (RQ3). For example, models like M1, M2, and M5 span domains such as etiology, epidemiology, pathogenesis, and virology. M7 focuses on primary care surveillance, M11 on global government responses, and M13 on a specific regional context. Other models delve into immunology, genetics, patient data, and clinical aspects. Thus, the ontology models comprehensively cover various domains related to COVID-19. Exploring the world of knowledge representation formalism within the ontology models reveals a prominent pattern - the widespread utilization of the OWL language. This formalism preference for OWL is evident in several models, including M1, M5, M13, M15, M16, M17, and M24. The prevalence of OWL across these models highlights its significant contribution to shaping the landscape of ontology-based COVID-19 modeling (RQ4). The attributes of the models, encompassing granularity, scope, modularity, formalism, vocabulary reuse, and domain coverage, are examined, providing a comprehensive view of their characteristics and contributions (RQ5). Models like M24 exhibit a high level of formalism, while others, such as M6 and M7, adopt more informal approaches. Granularity varies across models, with some focusing on specific aspects like virus progression (M13) or clinical findings (M14). Vocabulary reuse is a common practice, promoting interoperability. As a whole, these CovOs, spanning from M1 to M24, collectively fulfill the outlined research questions, enhancing their role in advancing research, analysis, and decision-making in the pandemic landscape.
[Table 6 (layout lost in extraction): overview of the twenty-four selected CovOs (M1-M24), one row per model, with columns Model No., Purpose, Ontology Type, Ontology Vocabularies reused, Design Methodology, Class hierarchy development, Representation Language, Level of Formality, Ontology Editor, Evaluation, and Ontology Coverage/Domain.]
### Limitations and Scope
It is important to acknowledge certain limitations inherent to this study. The inclusion criteria for the selection of CovOs are guided by specific research objectives, potentially resulting in the omission of other pertinent models. Moreover, our analysis is confined to a comprehensive assessment of existing CovOs based on the specified parameters outlined in Section 3. The scope of this research is centered on the representation of COVID-19-related information within ontologies, and as such, the analysis may not encompass all facets or variables within the broader domain of ontological research. Despite these limitations, the study provides valuable insights into the domain coverage, methodologies, and characteristics of CovOs relevant to COVID-19, thereby assisting researchers in advancing comprehension and effective utilization of ontologies within the pandemic context.
## 6 Conclusion
This study has undertaken a comprehensive exploration of ontology-based approaches for representing COVID-19-related information, offering valuable insights into the diverse landscape of ontological representations. Through the analysis, the study has not only addressed the framed research questions but has also contributed to the advancement of knowledge in several key aspects. The study has successfully identified and examined twenty-four CovOs (M1 to M24) spanning a wide range of domains, from etiology and epidemiology to virology, diagnosis, prevention, and more. This extensive coverage has provided an aerial perspective of the ontological landscape, fulfilling the contribution statement of this work. Furthermore, the study's detailed analysis of methodologies, representation languages, class hierarchies, and ontology evaluation methods has enriched our understanding of the characteristics and attributes of these ontological models. By comparing and contrasting the different models, the study has highlighted both distinct features and commonalities, thereby offering insights into best practices and potential areas for improvement. In fulfilling its contribution statement, this study has not only provided a comprehensive overview of CovOs but has also identified potential gaps in research. The emphasis on certain domains across several models, in contrast to the limited representation of others, highlights the need for additional ontological development in specific areas. This work lays the groundwork for future research endeavors by guiding researchers toward domains that require additional focus and refinement. Building upon the insights gained from this study, future research endeavors could delve into several promising directions. Firstly, an extended exploration of the identified research gaps and unaddressed domains within existing COVID-19 ontologies (CovOs) could provide a foundation for developing specialized ontologies to fill these gaps and enhance the comprehensiveness of COVID-19 data representation. Furthermore, the refinement and expansion of the parameter set used for ontology comparison, potentially incorporating emerging criteria and advanced evaluation methodologies, could contribute to a more nuanced and comprehensive analysis of CovOs. As the pandemic landscape continues to evolve, ontologies will play a crucial role in advancing research, analysis, and decision-making, and this study provides a solid foundation for researchers to build upon further to enhance our understanding of COVID-19 and its multifaceted dimensions.
|
2309.04887 | SortedAP: Rethinking evaluation metrics for instance segmentation | Designing metrics for evaluating instance segmentation revolves around
comprehensively considering object detection and segmentation accuracy.
However, other important properties, such as sensitivity, continuity, and
equality, are overlooked in the current study. In this paper, we reveal that
most existing metrics have a limited resolution of segmentation quality. They
are only conditionally sensitive to the change of masks or false predictions.
For certain metrics, the score can change drastically in a narrow range which
could provide a misleading indication of the quality gap between results.
Therefore, we propose a new metric called sortedAP, which strictly decreases
with both object- and pixel-level imperfections and has an uninterrupted
penalization scale over the entire domain. We provide the evaluation toolkit
and experiment code at https://www.github.com/looooongChen/sortedAP. | Long Chen, Yuli Wu, Johannes Stegmaier, Dorit Merhof | 2023-09-09T22:50:35Z | http://arxiv.org/abs/2309.04887v1 | # SortedAP: Rethinking evaluation metrics for instance segmentation
###### Abstract
Designing metrics for evaluating instance segmentation revolves around comprehensively considering object detection and segmentation accuracy. However, other important properties, such as sensitivity, continuity, and equality, are overlooked in the current study. In this paper, we reveal that most existing metrics have a limited resolution of segmentation quality. They are only conditionally sensitive to the change of masks or false predictions. For certain metrics, the score can change drastically in a narrow range which could provide a misleading indication of the quality gap between results. Therefore, we propose a new metric called sortedAP, which strictly decreases with both object- and pixel-level imperfections and has an uninterrupted penalization scale over the entire domain. We provide the evaluation toolkit and experiment code at [https://www.github.com/looooongChen/sortedAP](https://www.github.com/looooongChen/sortedAP).
## 1 Introduction
Recently, considerable work has been conducted in instance segmentation due to its wide scope of application [9, 19, 4, 3], such as autonomous driving [5], medical diagnosis [13] and agricultural phenotyping [18, 2]. In the field of bioimage computing, segmenting instances of animals [16], cells [6], and subcellular structures [1, 8] is also common and infrastructural processing for further analysis and study. Instance segmentation not only localizes the object of interest but also delineates the exact boundary, which can be seen as performing object detection and semantic segmentation concurrently.
Correspondingly, a qualified evaluation metric should consider three fundamental types of imperfections: missed ground truth objects (false negative), falsely predicted objects (false positive), and segmentation inaccuracy. Existing metrics all incorporate the three error types above, but are not discussed with respect to properties, including sensitivity, continuity, and equality.
**Sensitivity.** An ideal metric should be sensitive to all occurrences of imperfections of all types. Any additional errors are supposed to lead monotonically to a worse score, not ignored or obscured by the occurrence of other errors. A metric that monotonically decreases with any errors will enable a more accurate comparison.
**Continuity.** The penalization scale of a metric should be relatively consistent locally across the score domain. Intuitively, gradually and evenly changing segmentations should correspond to a smoothly changing metric score as well. Abrupt changes are not desired.
**Equality.** Without any assumed importance of different objects, all objects should have an equal influence on the metric score. A common case of inequality is that the score is biased towards larger objects. Although larger objects may be prioritized in some applications, as a general metric, the metric should treat all objects equally. Analysis with respect to object size can be easily performed by evaluating different size groups using a metric of equal property.
Although all metrics discussed in this paper implement a penalization of false positive, false negative, and segmentation inaccuracy, the majority of metrics, even very widely used ones, such as the mean Average Precision (mAP) [1], are only conditionally sensitive to errors. This violates the sensitivity property, as some differences in segmentation results are not reflected in the score. For match-based approaches, such as Average Precision (AP) [1] and Panoptic Quality (PQ) [11], the score will change abruptly at the match threshold. There is actually a paradox in choosing thresholds, which is discussed in Section 3.
To address the gap, we propose a new metric called the sorted Average Precision (sortedAP). Unlike mAP [1], which queries the AP score at a sequence of fixed intersection over union (IoU) thresholds, sortedAP detects every exact IoU value at which the AP score drops. This is achieved through our proposed _Unique Matching_ approach and sorting all possible matches according to the IoU values (Section 4). The Unique Matching method explicitly preserves the one-to-one relationship between two sets of instances. This also allows the use of IoU thresholds smaller than 0.5, or under object overlap, in all match-based metrics.
## 2 Related work: A review
This section provides an overview of proposed evaluation metrics in the literature. We use the notion \(\mathcal{G}=\{g_{1},g_{2},\ldots,g_{M}\}\) and \(\mathcal{P}=\{p_{1},p_{2},\ldots,p_{N}\}\) to represent the set of ground truth and predicted objects in the following context. The capitalized symbols \(\mathcal{G}\) and \(\mathcal{P}\) can represent a set, or the number of elements in the set, for notation simplicity.
### Overlap-based metrics
The Dice coefficient (Dice) and the Intersection over Union (IoU) are the most commonly used metrics to measure the similarity between two binary masks. The IoU, also known as Jaccard Index (JI), is defined as the ratio of the intersection area to the union area between two masks:
\[IoU(p,g)=\frac{|p\cap g|}{|p\cup g|}. \tag{1}\]
Instead of the union, Dice use accumulated area:
\[Dice(p,g)=\frac{2\cdot|p\cap g|}{|p|+|g|}. \tag{2}\]
Although they have slightly different definitions, both metrics utilize the same fact that the intersection area is maximized when two masks are identical. Furthermore, the two metrics are directly related in values:
\[Dice(p,g)=\frac{2\cdot IoU(p,g)}{1+IoU(p,g)}. \tag{3}\]
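For readers who want to reproduce these quantities, a minimal NumPy sketch is given below. It is only an illustration and not the authors' released toolkit; the convention of returning 1.0 for two empty masks is our own assumption.

```python
import numpy as np

def iou(p, g):
    """Intersection over union of two binary masks, Eq. (1)."""
    p, g = np.asarray(p, dtype=bool), np.asarray(g, dtype=bool)
    union = np.logical_or(p, g).sum()
    if union == 0:                      # both masks empty (our own convention)
        return 1.0
    return np.logical_and(p, g).sum() / union

def dice(p, g):
    """Dice coefficient of two binary masks, Eq. (2)."""
    p, g = np.asarray(p, dtype=bool), np.asarray(g, dtype=bool)
    total = p.sum() + g.sum()
    if total == 0:                      # both masks empty (our own convention)
        return 1.0
    return 2.0 * np.logical_and(p, g).sum() / total
```

The identity of Eq. (3) can be checked numerically, e.g. `dice(p, g)` agrees with `2 * iou(p, g) / (1 + iou(p, g))` up to floating-point error.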
**Aggregated Jaccard Index (AJI).** The AJI [13] extends the Jaccard Index to instance segmentation by accumulating the object-level intersection and union area, which is computed between each ground truth object and the prediction yielding the maximum IoU. The area of predicted objects without any matched ground truth objects is also aggregated to the union area as the penalization to false positives.
**Symmetric Best Dice (SBD).** SBD [18] is based on an asymmetric score Best Dice (BD). For each object in one set, BD finds the maximal Dice with any object in the other set (the reference set) for averaging.
\[BD(\mathcal{P},\mathcal{G})=\frac{1}{N}\sum_{i=1}^{N}\max_{j=1:M}Dice(p_{i},g _{j}), \tag{4}\]
The BD does not fully penalize all errors, since unmatched objects in the reference set are excluded and have no impact on the score. Therefore, the SBD computes BD using both sets under comparison as the reference and takes the worse score as the final score:
\[SBD(\mathcal{P},\mathcal{G})=min\{BD(\mathcal{P},\mathcal{G}),BD(\mathcal{G },\mathcal{P})\}. \tag{5}\]
### Match-based metrics
Another category of metrics is based on object-level detection errors at one or multiple segmentation quality thresholds. A matching criterion \(t\), typically an IoU value, is defined as a prerequisite. Each ground truth object searches for a successful match in the predicted objects, or vice versa. Based on the match results, all objects can be grouped into one of the three categories: true positives (\(TP_{t}\)), false positives (\(FP_{t}\)), and false negatives (\(FN_{t}\)).
Fundamentally, the match between predicted objects and ground truth objects should satisfy a one-to-one relationship. This ensures that the number of true positives is equal to the number of ground truth objects that have a successful match. We will discuss how to explicitly maintain this relationship in Section 4.1.
**Average precision (AP).** The term AP can refer to different evaluation metrics in the literature. For ease of discussion, we refer to them as the P-R AP [7] and the point AP [1]. Despite being based on different perspectives, both metrics are defined in terms of precision and recall:
\[Pre_{t}=\frac{TP_{t}}{TP_{t}+FP_{t}},\;Rec_{t}=\frac{TP_{t}}{TP_{t}+FN_{t}}. \tag{6}\]
The P-R AP was first proposed for the evaluation of object detection tasks [7, 14]. As a summary of the Precision-Recall curve (P-R curve), it evaluates a model from a more comprehensive view by considering the precision performance over the entire recall domain. Although very widely used, the P-R AP suffers from certain deficiencies, as pointed out by recent works. Firstly, the definition requires a confidence score for each prediction, while not all approaches naturally score the outputs. For example, most bottom-up approaches do not directly deliver object-level confidence scores as most detection-based pipelines do. In terms of discrimination capability, P-R AP does not really distinguish between different shapes of P-R curves [17]. The neglect of low-confidence duplicates (hedged prediction) is another important deficiency of P-R AP [10].
In comparison, the point AP is oriented towards the end result and corresponds to a point on the P-R curve that achieves a certain precision-recall trade-off. In this case, all predictions are treated equally regardless of scoring. The point AP is formulated as follows:
\[AP_{t}=\frac{TP_{t}}{TP_{t}+FP_{t}+FN_{t}}. \tag{7}\]
The point AP relates to the P-R curve according to the following equation:
\[AP_{t}=\frac{1}{Pre_{t}+Rec_{t}+1}. \tag{8}\]
While the P-R AP favors precision improvements at any recall level, the point AP only focuses on the single point of best precision-recall trade-off. From the user's perspective, higher precision in the extreme recall range is of limited practical significance. Therefore, point AP obligates the processing pipeline to screen predictions, including determining the optimal cutoff confidence. In the following context, we refer to the point AP when using the term AP.
**Mean Average Precision (mAP).** The AP score is based on the matching results under a certain IoU threshold \(t\). Segmentation imperfections better than the matching criterion will not be further penalized. Similarly, objects worse than the threshold are viewed as equally bad.
To compensate for the neglect of segmentation imperfections, the mean Average Precision (mAP) [1] averages a series of AP scores over progressively higher IoU thresholds:
\[mAP=\frac{1}{N}\sum_{t\in T}\frac{TP_{t}}{TP_{t}+FP_{t}+FN_{t}}, \tag{9}\]
where \(T=\{t_{1},t_{2},\ldots,t_{N}\}\). A typical choice for the threshold range is from 0.5 to 0.95, with a step size of 0.05.
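As a hedged sketch of how Eqs. (7) and (9) can be evaluated: assume a fixed one-to-one matching has already been established and `match_ious` holds the IoU of every matched prediction-ground-truth pair; keeping the matching fixed across thresholds is an assumption of this illustration, and the function names are ours.

```python
import numpy as np

def point_ap(match_ious, num_pred, num_gt, t):
    """Point AP at IoU threshold t, Eq. (7): TP / (TP + FP + FN)."""
    tp = int(np.sum(np.asarray(match_ious) >= t))
    fp = num_pred - tp
    fn = num_gt - tp
    return tp / (tp + fp + fn)

def mean_ap(match_ious, num_pred, num_gt,
            thresholds=np.arange(0.5, 1.0, 0.05)):
    """mAP, Eq. (9): average of point AP over a fixed grid of thresholds."""
    return float(np.mean([point_ap(match_ious, num_pred, num_gt, t)
                          for t in thresholds]))
```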
It is worth mentioning that when referring to mAP, it generally means the averaging of multiple AP scores, rather than scores under different matching thresholds specifically. For example, the PASCAL dataset [7] computes P-R AP scores of different semantic classes for averaging. The COCO challenge [14] considers both the semantic categories and varying matching thresholds. In this work, we only discuss averaging across matching thresholds, as it is directly relevant to metric design.
**Panoptic Quality (PQ).** The PQ is defined as the multiplication of the Recognition Quality (RQ)
\[RQ=\frac{2\cdot TP_{t=0.5}}{2\cdot TP_{t=0.5}+FP_{t=0.5}+FN_{t=0.5}} \tag{10}\]
and the Segmentation Quality (SQ)
\[SQ=\frac{\sum_{(p,g)\in TP_{T=0.5}}IoU(p,g)}{|TP_{t=0.5}|}, \tag{11}\]
where \((p,g)\) indicates a matched prediction and ground truth pair. The RQ measures the detection accuracy as the AP and they are related as
\[RQ=\frac{2\cdot AP_{t=0.5}}{1+AP_{t=0.5}}. \tag{12}\]
The SQ term is basically the mean IoU of all true positive pairs, explicitly modeling the segmentation quality of objects above the match threshold.
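In the same spirit, PQ of Eqs. (10)-(11) can be sketched from the matched IoUs; again this is only an illustration, with our own conventions for the degenerate empty cases.

```python
def panoptic_quality(match_ious, num_pred, num_gt, t=0.5):
    """PQ = SQ * RQ at threshold t, following Eqs. (10)-(11)."""
    tp_ious = [iou for iou in match_ious if iou >= t]
    tp = len(tp_ious)
    fp, fn = num_pred - tp, num_gt - tp
    rq = 2 * tp / (2 * tp + fp + fn) if (tp + fp + fn) > 0 else 1.0
    sq = sum(tp_ious) / tp if tp > 0 else 0.0
    return sq * rq
```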
## 3 An analysis of deficiencies
### Sensitivity to errors
While existing metrics account for all three types of errors, few of them are sensitive to all occurrences of errors.
**Exempted error.** SBD takes the worse of the two BD scores, computed once with the ground truth and once with the prediction as the reference. Besides the segmentation inaccuracy, each direction only penalizes false positives or false negatives, respectively. As illustrated in Figure 1(a) and Figure 1(c), predictions with and without an additional false positive have the same SBD score. Although the false prediction decreases \(\text{BD}(\mathcal{P},\mathcal{G})\), the impact on SBD is exempted by the lower \(\text{BD}(\mathcal{G},\mathcal{P})\).
**Resolution of segmentation difference.** As stated previously, the mAP score reflects the segmentation quality by computing AP scores at varying IoU thresholds, with a certain step size. Despite having a good practical utility with an appropriate step size, mAP is only definitely sensitive to IoU changes greater than the step size. A smaller difference in IoU may or may not result in score changes, depending on whether the change crosses a predefined IoU threshold or not. From Figure 1(a) to Figure 1(b), all IoUs decrease by 0.08. However, the mAP score remains unchanged in the case of step size 0.1 (Figure 2).
### Match thresholds and score continuity
Match-based metrics use hard thresholds to determine true and false positives. As a result, objects can abruptly transition from true positives to false positives, even if they are only slightly different in IoU. PQ and mAP introduce a continuous or quasi-continuous measure of the segmentation, but only in the domain above the minimum IoU threshold. A discontinuous change always occurs at the lower IoU threshold. An example is shown in Figure 1, where increasing the IoU of only one prediction from 0.49 to 0.51 leads to a PQ change of 17.64%, from 0.4229 to 0.4975 (Table 1).
**Threshold dilemma.** A discontinuous score is not completely unacceptable. The IoU threshold can be set low enough so that two useful results (away from the low IoU range) will not be assigned drastically different scores. However, a single AP or PQ score reported with a low match threshold becomes less informative. PQ makes the compromise at the IoU of 0.5. The mAP only alleviates the amplitude of abrupt changes by dividing them into multiple levels (Figure 3(c) and Figure 3(f)).
\begin{table}
\begin{tabular}{l|c c c c c} \hline \hline Metrics & AJI & SBD & PQ & mAP & sortedAP \\ \hline Case-1 & **.5125** & **.4925** & **.4229** & **.3778** &.4261 \\ Case-2 &.4587 &.4325 &.3771 & **.3778** &.3839 \\ Case-3 &.6252 & **.4925** &.4933 &.4722 &.5283 \\ Case-4 &.5159 &.4975 & **.4975** &.4000 &.4288 \\ Case-5 & **.5125** &.3940 &.3700 &.3148 &.3572 \\ \hline \hline \end{tabular}
\end{table}
Table 1: Scores of different metrics for the examples shown in Figure 1. In each column, the pair of cases marked in bold demonstrates the deficiency of a metric.
### Equality of object-level errors
Without specific assumptions, objects should be treated equally. A missed small object is supposed to place the same impact on the score as a larger object. Object segmentation accuracy should also be measured relative to their size, rather than the absolute area. Match-based approaches satisfy this property by constructing the metric using the object counts and object-level IoU. SBD takes the average of object Dice and is therefore also area-independent. In contrast, AJI does not have a notion of objects. For instance, the scenario of having two false positives in Figure 1(e) yields the same AJI score as the scenario of having one larger false positive in Figure 1(a). Accumulating absolute area will also bias the score towards the quality of larger objects.
## 4 Sorted Average Precision (sortedAP)
### Unique Matching
For match-based metrics, each ground truth object can match at most one prediction, and vice versa. This rule ensures that the number of true positives is consistent with the number of ground truth objects that have a successful match. In the greedy match used by mAP and PQ, the one-to-one relationship is implicitly maintained by using matching IoUs larger than 0.5. This is because, under the non-overlapping assumption, no two objects can match with the same object while both having IoUs larger than 0.5 [11].
We propose using the Hungarian algorithm [12] to determine true positive matches. This involves the following steps: constructing the cost matrix, padding the cost matrix to square, solving the maximal assignment problem using the Hungarian algorithm, and removing matches of zero cost. The implementation details are depicted in Algorithm 1.
The Hungarian matching algorithm not only maintains the one-to-one match relationship but also maximizes the accumulated IoUs of true positive matches. The Unique Matching as a plug-in extension can be applied to both AP and PQ, making them applicable with low match thresholds and object overlap.
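Since the pseudocode of Algorithm 1 is not reproduced above, the following sketch shows one possible way to realize the Unique Matching idea with SciPy. Note that `scipy.optimize.linear_sum_assignment` accepts rectangular cost matrices directly, so the explicit padding step mentioned in the text is not needed here; the small fuzzy threshold plays the role of removing zero-cost matches. The function name and interface are our own choices, not the published toolkit.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def unique_matching(pairwise_ious, fuzz=1e-6):
    """One-to-one matching that maximizes the summed IoU.

    pairwise_ious: array of shape (num_pred, num_gt) with IoU values.
    Returns a list of (pred_idx, gt_idx, iou) for matches with IoU >= fuzz.
    """
    pairwise_ious = np.asarray(pairwise_ious, dtype=float)
    rows, cols = linear_sum_assignment(pairwise_ious, maximize=True)
    return [(int(r), int(c), float(pairwise_ious[r, c]))
            for r, c in zip(rows, cols) if pairwise_ious[r, c] >= fuzz]
```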
### AP scores over the entire IoU domain
To avoid the drastic score change (Section 3.2), we propose to summarize the AP scores over the entire IoU threshold domain as a metric, instead of a single AP score or scores covering only part of the domain. By using our proposed Unique Matching approach, the mAP can be straightforwardly extended to the entire IoU domain, such as using a threshold collection of \(\{0.1,0.2,...,0.9\}\). However, querying AP scores at fixed IoU values can ignore small segmentation changes, noted as the limited resolution in Section 3.1.
Figure 1: Examples to illustrate the deficiencies of evaluation metrics. (a) Case-1 is the base example. (b) All IoUs get worse in Case-2, but the mAP score remains unchanged. (c) Case-3 contains one less false positive, but the SBD score is the same as Case-1. (d) In Case-4, only one object segmentation improves by 0.02 in IoU, but the PQ score increases by 17.64%. (e) Two false positives are present in Case-5, while only one exists in Case-1. The AJI score penalizes them equally due to the smaller size of objects in Case-5.
Figure 2: Computation of mAP and sortedAP on Case-1 and Case-2 in Figure 1. The mAP estimates the AP curve by querying AP values at fixed IoUs, while sortedAP identifies the exact IoU value where the AP curve drops.
We propose sorted Average Precision (sortedAP) as a new metric that is sensitive to all segmentation changes. The concept of sortedAP involves identifying all IoU values at which the AP score drops, instead of querying AP scores at fixed IoUs as the mAP. The AP score can only change at the IoUs of each object where the object transitions from true positive to false positive. Raising the matching threshold from 0 to 1 will turn all matches into non-matches one by one in the ascending order of IoU. In consequence, one non-match will diminish a true positive and introduce a false negative. Considering the sum of true and false positives is constant, we rewrite the AP score as:
\[AP_{t}=\frac{TP_{t}}{TP_{t}+FP_{t}+FN_{t}}=\frac{TP_{t}}{P+FN_{t}}. \tag{13}\]
We let \(TP_{0}\) and \(FN_{0}\) be true positives and false negatives of the maximal possible match between two sets. This can be obtained by the Unique Matching (Section 4.1) with a tiny but non-zero fuzzy threshold. All possible AP scores can then be computed by:
\[AP_{t_{k}}=\frac{TP_{0}-k}{P+FN_{0}+k},\ k=1,2,...,TP_{0}, \tag{14}\]
where \(t_{k}\) is the k-th lowest IoU of all matches. As shown in Figure 2(b), any segmentation differences will be reflected by the positions of turning points. The sortedAP is defined as the area under the AP curve and can be computed by Algorithm 2. In the computation of sortedAP, the Unique Matching runs only once, while it has to be performed multiple times for different IoUs in mAP.
```
Input: ground truth \(\mathcal{G}=\{g_{1},g_{2},\ldots,g_{M}\}\), prediction \(\mathcal{P}=\{p_{1},p_{2},\ldots,p_{N}\}\)
Output: the sortedAP score \(s\in\mathbb{R}\)
Match: run Unique Matching with a fuzzy threshold \(10^{-6}\)
Count: true positives \(TP_{0}\) and false negatives \(FN_{0}\)
Sort: arrange IoUs of all matches in increasing order \([IoU_{1},IoU_{2},\ldots,IoU_{TP_{0}}]\)
Initialize: \(AP_{prev}\leftarrow\frac{TP_{0}}{P+FN_{0}}\), \(t_{prev}\leftarrow IoU_{1}\)
Initialize: \(s\leftarrow t_{prev}\cdot AP_{prev}\)
for \(k\) from 1 to \(TP_{0}\) do
    \(AP_{k}\leftarrow\frac{TP_{0}-k}{P+FN_{0}+k}\), \(t_{k}\leftarrow IoU_{k}\)
    \(s\leftarrow s+\frac{1}{2}\cdot(t_{k}-t_{prev})\cdot(AP_{k}+AP_{prev})\)
    \(AP_{prev}\leftarrow AP_{k}\), \(t_{prev}\leftarrow t_{k}\)
end for
```
**Algorithm 2** Sorted Average Precision
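A compact Python rendering of Algorithm 2 is sketched below; it is only our reading of the pseudocode (the initial AP value follows Eqs. (13)-(14)), and the repository linked in the abstract remains the authoritative implementation.

```python
import numpy as np

def sorted_ap(match_ious, num_pred, num_gt):
    """Area under the AP-versus-IoU-threshold curve (Algorithm 2)."""
    ious = np.sort(np.asarray(match_ious, dtype=float))
    tp0 = len(ious)                      # TP_0: matches found by Unique Matching
    fn0 = num_gt - tp0                   # FN_0
    if tp0 == 0:
        return 0.0
    ap_prev = tp0 / (num_pred + fn0)     # AP before any match is broken
    t_prev = ious[0]
    area = t_prev * ap_prev              # constant AP on [0, IoU_1]
    for k in range(1, tp0 + 1):
        ap_k = (tp0 - k) / (num_pred + fn0 + k)
        t_k = ious[k - 1]
        area += 0.5 * (t_k - t_prev) * (ap_k + ap_prev)
        ap_prev, t_prev = ap_k, t_k
    return area
```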
## 5 Experiments and results
We also simulate imperfect results on the basis of ground truth segmentation from real datasets, in order to observe the behavior of different metrics. We choose the CVPPP dataset [18] and the CervicalCell dataset [15], containing clustered instances. We perform experiments per image because the effects, such as abrupt changes, would be obscured when averaged over a large population. We design three experiments based on the fact that introducing errors gradually and evenly will result in a smooth decrease in the evaluation score.
**Incremental falses.** This experiment starts with two identical sets of objects and alternately introduces new objects into each set. At each step, we randomly duplicate an object and place it in a position where it does not overlap with any existing objects. This ensures that the newly introduced object is always a false positive or false negative. In our experiment, we add two objects to one set, then switch to the other set and repeat the process. The experiment only concerns detection errors, as objects are either perfectly matched or not matched at all.
**Object erosion.** At each step, morphological erosion is performed on a random object with a 3\(\times\)3 structuring element. Consequently, the segmentation quality will steadily deteriorate. But we do not completely remove any objects. Metric scores are reported between the continuously eroded masks and the original set.
**Pixel removal.** Similar to the object erosion experiment, we construct a sequence of increasingly degraded results by deteriorating the segmentation quality of objects. However, instead of handling one object per step, we randomly remove a fixed portion of pixels from all objects at each step. This process simulates a situation where the segmentation of most objects is at a similar quality level. The deficiencies are more pronounced in this experiment.
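The two degradation procedures could be simulated along the following lines; this is our own sketch (a label image with background label 0 is assumed), not the authors' experiment code.

```python
import numpy as np
from scipy.ndimage import binary_erosion

def erode_random_object(labels, rng):
    """Object-erosion step: erode one random instance with a 3x3 element."""
    ids = [i for i in np.unique(labels) if i != 0]
    target = rng.choice(ids)
    mask = labels == target
    eroded = binary_erosion(mask, structure=np.ones((3, 3), dtype=bool))
    if eroded.any():                     # never remove an object completely
        labels = np.where(mask & ~eroded, 0, labels)
    return labels

def remove_random_pixels(labels, fraction, rng):
    """Pixel-removal step: drop a fixed fraction of pixels from every object."""
    out = labels.copy()
    for i in np.unique(labels):
        if i == 0:
            continue
        idx = np.argwhere(out == i)
        drop = idx[rng.choice(len(idx), size=int(fraction * len(idx)),
                              replace=False)]
        out[tuple(drop.T)] = 0
    return out
```

Repeatedly applying either function to a ground-truth label image, e.g. with `np.random.default_rng(0)`, yields the sequence of increasingly degraded results against which the metrics are compared.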
The experiments conducted on the CVPPP and CervicalCell datasets yielded similar results. In the incremental false experiment (Figure 3(a) and Figure 3(d)), objects are either a perfect match with an IoU of 1 or not matched at all. Thus, segmentation inaccuracy does not play any role. The AP, mAP, and sortedAP all degrade to the same score in this case. All match-based metrics decrease smoothly as expected. In contrast, the AJI fluctuates depending on the size of introduced objects. The SBD score does not decrease in a strictly monotonic manner but instead exhibits periodic plateaus. This is an instance of the error exemption (Section 3.1). In the alternating introduction of false matches into two sets, errors introduced earlier can obscure subsequent ones.
In the object erosion and pixel removal experiment, the segmentation quality gets worse step by step. The AP and mAP also show plateaus but for a different reason from SBD in the incremental false experiment. This is due to AP's insensitivity to segmentation differences above or below the match threshold. Using multiple thresholds by mAP only improves sensitivity up to the scale of the threshold interval. PQ explicitly considers segmentation quality in the IoU range above the threshold. However, it faces a common issue of abrupt change at the match threshold as AP and mAP. Combining the two factors above, mAP exhibits a step-wise change, which is more noticeable in the pixel removal experiment (Figure 3(c) and Figure 3(f)). PQ scores will not be completely flat, but can drastically drop in a narrow IoU range. In comparison, our proposed sortedAP maintains sensitivity and continuity in all cases where other metrics fail.
Figure 3: Comparison of different metric scores on simulated imperfect segmentation results. Three experiments (Incremental Falses, Object Erosion, and Pixel Removal) create increasingly degraded results from the ground truth of real datasets (CVPPP and CervicalCell). Since errors are gradually and evenly introduced, the evaluation score is supposed to smoothly decrease in response. In panels (a) and (d), the curves of AP, mAP, and sortedAP are identical, shown in mixed dark blue.
## 6 Conclusion
In this paper, we have analyzed existing evaluation metrics for instance segmentation from the perspective of sensitivity, continuity, and equality. Although some metrics are widely used in practice, we have found that no metric strictly satisfies all the properties under discussion. To address this gap, we propose the sortedAP, which is sensitive to any small segmentation changes, continuous over the entire IoU domain, and treats objects equally. The proposed Unique Matching approach can also be applied to AP, mAP, and PQ, allowing its use under object overlap and match IoU thresholds smaller than 0.5.
|
2309.15752 | The Möbius game and other Bell tests for relativity | We derive multiparty games that, if the winning chance exceeds a certain
limit, prove the incompatibility of the parties' causal relations with any
partial order. This, in turn, means that the parties exert a back-action on the
causal relations; the causal relations are dynamical. The games turn out to be
representable by directed graphs, for instance by an orientation of the
M\"obius ladder. We discuss these games as device-independent tests of
spacetime's dynamical nature in general relativity. To do so, we design a
relativistic setting where, in the Minkowski spacetime, the winning chance is
bound to the limits. In contrast, we find otherwise tame processes with
classical control of causal order that win the games deterministically. These
suggest a violation of the bounds in gravitational implementations. We obtain
these games by uncovering a "pairwise central symmetry" of the correlations in
question. This symmetry allows us to recycle the facets of the acyclic subgraph
polytope studied by Gr\"otschel, J\"unger, and Reinelt in the mid-80s for
combinatorial optimization. In addition, we derive multiparty games in a
scenario where the polytope dimension grows only linearly in the number of
parties. Here, exceeding the limits not only proves the dynamical nature of the
causal relations, but also that the correlations are incompatible with any
global causal order. | Eleftherios-Ermis Tselentis, Ämin Baumeler | 2023-09-27T16:08:13Z | http://arxiv.org/abs/2309.15752v1 | # The Mobius game and other Bell tests for relativity
###### Abstract
We derive multiparty games that, if the winning chance exceeds a certain limit, prove the incompatibility of the parties' causal relations with any partial order. This, in turn, means that the parties exert a back-action on the causal relations; the causal relations are _dynamical_. The games turn out to be representable by directed graphs, for instance by an orientation of the _Mobius ladder_. We discuss these games as device-independent tests of spacetime's dynamical nature in general relativity. To do so, we design a relativistic setting where, in the Minkowski spacetime, the winning chance is bound to the limits. In contrast, we find otherwise tame processes with classical control of causal order that win the games deterministically. These suggest a violation of the bounds in gravitational implementations. We obtain these games by uncovering a "pairwise central symmetry" of the correlations in question. This symmetry allows us to recycle the facets of the _acyclic subgraph polytope_ studied by Grotschel, Junger, and Reinelt in the mid-80s for combinatorial optimization. In addition, we derive multiparty games in a scenario where the polytope dimension grows only linearly in the number of parties. Here, exceeding the limits not only proves the dynamical nature of the causal relations, but also that the correlations are incompatible with _any global causal order_.
## I Introduction
Bell [1], in his seminal work, showed that quantum correlations are causally inexplicable [2]. Such _nonlocal_ correlations arise as the spontaneous spacelike separated creation of fresh albeit strongly correlated data [3]. Any tentative causal explanation of these correlations--without regressing to the formalism of quantum theory [4]--necessitates an infinite speed of communication [5; 6; 7] and infinite precision [8]; a clash with relativity. In order to show his result, Bell employed a _device-independent technique_. He derived limits on correlations--expressed via inequalities--from assumptions on the observed data only, without invoking any specifics of any theory. If experimental observations exceed these limits _i.e.,_ a Bell inequality is violated, then one is forced to reject one of the assumptions. This is known as _Bell test_. Bell inequalities are commonly expressed as collaborative multiparty games with a bound on the winning probability. Bell's discovery is not only considered as one of the most fascinating in quantum theory, but also substantially boosted the development of quantum information [9; 10]: The informational abstraction and simplification of quantum mechanics as a theory of qubits. In this work, we extend Bell's reasoning to _relativity,_ and obtain Bell tests to certify the _dynamical nature of causal relations_. In addition, this might--similar to the quantum case--aid in the development of a theory of "gravity information."
### Dynamical and indefinite causal order
Quantum nonlocal correlations, as mentioned above, indicate the central role of _causality_ in unifying quantum theory with relativity. As pointed out by Hardy [11], there is a complementary aspect that hints at the same. On the one hand, general relativity is a _deterministic_ theory equipped with a _dynamical_ spacetime: The causal relations among events depend on the mass-energy distribution. On the other, quantum theory is a _probabilistic_ theory equipped with a _static_ spacetime. Therefore, it is expected that a theory of quantum gravity features _indefinite causal order, e.g.,_ by extending quantum superposition to causal relations [12; 13; 14; 15; 16].
As an example, indefinite causal order arises from the celebrated _quantum switch_ [14] (see Fig. 1). Here, the causal relation between two experiments, Bob's and Charlie's, is coherently controlled by Alice's experiment in their common causal past. More specifically, Alice may perform an experiment such that Bob's experiment is performed _before_ Charlie's, or another such that Bob's experiment is performed _after_ Charlie's. The causal relation between Bob's and Charlie's experiment is _dynamical._ If Alice, however, performs both experiments in superposition, then _quantum indefiniteness_ is injected into the causal relations, and dynamical causal order is turned into quantum indefinite causal order. It is well-known [17; 18; 19] that no Bell test can certify the quantum indefinite causal order as exhibited by the quantum switch and its generalizations [19]--unless additional assumptions are invoked [20; 21; 16]. This is natural for the following reason. In stark contrast to the Bell scenario [1] where nonsignaling correlations are established, here, the limits of communication are of relevance. But in unrestricted settings, all quantum communication can always be simulated classically.
Figure 1: The quantum switch: Depending on whether Alice prepares a control qubit in the state \(|0\rangle\) or \(|1\rangle\), a target system \(|\psi\rangle\) traverses Bob's laboratory before Charlie's or _vice versa_. If Alice prepares the control qubit in a superposition state, then the trajectory of the target system is entangled with the control qubit: The causal relation between Bob and Charlie is _indefinite._
### The Mobius game
In this work, we devise Bell tests for dynamical causal order. Events obey a non-dynamical, or _static,_ causal order whenever the parties' actions, which constitute the events, do not alter the causal relations. In other words, events obey static causal order whenever the causal relations form a _partial order._ A violation of the presented inequalities, that bound the correlations to respect a static causal order, implies then that the parties _altered_ the causal relations. These inequalities are--as we will see later--also violated by the quantum switch. While, as mentioned above, the quantum indefinite nature of the causal relations in the quantum switch cannot be certified, the _dynamical part indeed can._
These inequalities are the facets of the partial-order-correlation polytope. It turns out that the minimum of a _single_ output bit suffices to obtain such Bell test. In fact, the dimension of the polytope grows only quadratically as \(2n(n-1)\) in the number of parties \(n\). We find that a projection of the polytope is the polytope of directed acyclic graphs (DAGs). The latter has been studied by Grotschel, Junger, and Reinelt [22] in the mid-80s in the context of discrete optimization. Moreover, we find that we can lift the facet-defining inequalities of this DAG polytope to obtain the relevant Bell games. The key insight for this step is that the polytope of interest is what we call "pairwise centrally symmetric;" a symmetry that reflects the possibility of the parties to relabel the output bit.
An example of such a Bell test is represented by a directed Mobius ladder (see Fig. 2). As a preview, the Mobius game is the following. Each vertex on the graph represents a party. A referee picks at random an arc \((s,r)\) from the graph, and a bit \(x\). Then, the referee broadcasts the chosen arc to all parties, and the bit \(x\) to the "sender" \(s\) only. The game is won whenever the "receiver" \(r\) outputs \(x\). If the winning probability exceeds \(11/12\), then the parties' causal arrangement is _incompatible_ with any partial order or mixtures thereof, and we must conclude that the parties' actions _influenced_ the causal relations. In addition to the Mobius game, we find other games represented by directed graphs. The Mobius game, however, has a clear advantage. The bound of \(11/12\) holds for any finite number of parties involved. In contrast, the bounds of the other games approach one for an increasing number of parties. This favorable situation renders the tests reliable, and shows that the feature of dynamical causal order is non-vanishing, as is the case for nonlocality [24; 25] and noncausality [26].
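To make the operational content of such games concrete, the brute-force sketch below (our own illustration, not taken from the paper) computes the best winning chance achievable when the parties are arranged in any fixed causal order: an arc \((s,r)\) is then won with certainty if \(s\) precedes \(r\), and with probability \(1/2\) otherwise, since the receiver can only guess the bit. The directed 3-cycle used as an example is an assumption for illustration; it is not the Mobius ladder of Fig. 2, whose exact orientation is not spelled out here.

```python
from itertools import permutations

def best_static_order_chance(vertices, arcs):
    """Maximal winning probability of the guessing game over all total orders.

    Brute force over permutations, so only suitable for small game graphs.
    """
    best = 0.0
    for order in permutations(vertices):
        pos = {v: i for i, v in enumerate(order)}
        forward = sum(1 for s, r in arcs if pos[s] < pos[r])
        best = max(best, (forward + 0.5 * (len(arcs) - forward)) / len(arcs))
    return best

# Assumed example graph: the directed 3-cycle. At most 2 of its 3 arcs can
# point forward in any total order, giving (2 + 0.5) / 3 = 5/6.
print(best_static_order_chance([0, 1, 2], [(0, 1), (1, 2), (2, 0)]))
```

Maximizing over total orders suffices here, since any partial order can be extended to a total order without destroying existing precedences, and convex mixtures of orders cannot beat the best deterministic one.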
### Relativity
We apply our results to relativity. Here, we find an arrangement of events, defined by the _crossing of worldlines,_ such that the derived inequalities are never violated within special relativity. If the games were played in a gravitational setting, however, the dynamical spacetime of general relativity may be certified. In this work, we do not carry out general-relativistic calculations, but support our claim via the description of classically well-behaved processes that win these games deterministically. Moreover, these processes are conservative in the following sense: An event can only influence its causal future, including the causal relations among the events within its causal future. Such processes were recently investigated in detail due to their affinity for physical implementations [19]. If a displacement of matter in general relativity alters the causal relations of events within the future lightcone, then these processes may be implemented in general relativity, and the back-action of matter to the spacetime structure can be certified.
### Outline
We start with a description of our notation, and necessary basics on convex polytopes and graphs. Thereafter, we provide the definition and derive properties of what we call _pairwise centrally symmetric polytopes_. This symmetry refines central symmetry, and is our key-method for the projection and lifting of the polytopes (in particular, see Theorem 4). After that, we describe the correlations of interest, and compute the facet-defining Bell games (see Theorem 9). This is followed by a shorter part where the scenario is further simplified. While the
Figure 2: A directed Möbius ladder [23]. This graph represents a Bell game for ten or more parties with which dynamical causal order is detected.
resulting polytope grows only linearly as \(2n\) in the number of parties \(n\), we show that the derived Bell game cannot distinguish between partial-order and _causal correlations_ (see Theorem 11). The latter describes correlations where the parties may alter the causal relations of all parties in their future. Then, we introduce the framework of _processes and causal models,_ and show how these games are won deterministically therein (see Theorem 14). This is followed by a discussion of the results in the context of special and general relativity. We end this work with conclusions and a series of open questions.
## II Notation, polytopes, and graphs
We use \([n]\) for the set \(\{0,1,\ldots,n-1\}\), and \([n]_{\sharp}^{2}\) for the set of all _distinct_ pairs over \([n]\), _i.e., \([n]\times[n]\setminus\{(k,k)\}_{k\in[n]}\)_. Lowercase letters will usually be used to express values, calligraphic letters sets, and bold letters matrices and vectors. We simplify the common notation of \(P_{A|X}\) for a conditional probability distribution of \(A\) given \(X\), and \(P_{A|X}(a|x)\) for the conditional probability that \(A\) takes value \(a\) with \(X\) having value \(x\), by only referring to the probability-density function, _e.g., \(p(a|x)\)_. We may always index the entries of a \(d\)-dimensional vector \(\mathbf{v}\) with the natural numbers in \([d]\), _i.e., \(\mathbf{v}=(v_{i})_{i\in[d]}\)_. The vector \(\mathbf{0}\) is the all-zero vector, \(\mathbf{e}_{i}\) is \(\mathbf{0}\) with a one in dimension \(i\), and \(\mathbf{1}\) is the all-one vector. The dimension of these vectors is always understood from the context. The symbol \(\oplus\) is used for element-wise addition modulo two. For a collection \(\{\Delta_{k}\}_{k\in[n]}\) of objects labeled by \([n]\) and a set \(\mathcal{S}\subseteq[n]\), we use \(\Delta_{\mathcal{S}}\) to denote the natural composition of the elements \(\{\Delta_{k}\}_{k\in\mathcal{S}}\), _e.g.,_ the expression \((a_{k})_{k\in\mathcal{S}}\in\bigtimes_{k\in\mathcal{S}}\mathcal{A}_{k}\) may equally be written as \(a_{\mathcal{S}}\in\mathcal{A}_{\mathcal{S}}\). We use the underlined symbol \(\underline{\Delta}\) whenever the set of labels is \([n]\), and we may write \(\underline{a}\in\underline{\mathcal{A}}\). Also, we use the subscript \(\setminus\mathcal{T}\), where \(\mathcal{T}\subseteq[n]\) is a set, for the natural composition over \([n]\setminus\mathcal{T}\). We further extend this notation in relation to a partial order \(\preceq_{\sigma}\) over \([n]\): The expression \(\Delta_{\preceq_{\sigma}k}\) for some \(k\in[n]\) denotes the natural composition over the elements in \(\{i\in[n]\mid i\preceq_{\sigma}k\}\), and similarly in the non-reflexive case \(\prec_{\sigma}\).
In this work, we employ the theory of convex polytopes [27]. A convex polytope \(\mathcal{P}\subseteq\mathbb{R}^{d}\) is the convex hull of a finite set of vectors \(\mathcal{S}\subseteq\mathbb{R}^{d}\), denoted by \(\mathcal{P}=\operatorname{conv}(\mathcal{S})\). This is known as its \(V\)-representation. Equivalently, \(\mathcal{P}\) is the intersection of a finite set of halfspaces \(\mathcal{P}=\{\mathbf{x}\in\mathbb{R}^{d}\mid\mathbf{A}\mathbf{x}\leq\mathbf{z}\}\) for a set of \(m\) inequalities given by \(\mathbf{A}\in\mathbb{R}^{m\times d},\mathbf{z}\in\mathbb{R}^{m}\). This is known as the \(H\)-representation. Since we always consider _convex_ polytopes, we sometimes omit the term _convex_ in the remaining of this work. The dimension \(\dim(\mathcal{P})\) of \(\mathcal{P}\) is the dimension of its affine hull \(\operatorname{aff}(\mathcal{P})\). The polytope \(\mathcal{P}\in\mathbb{R}^{d}\) is _full-dimensional_ if it is \(d\)-dimensional, _i.e.,_ if its affine hull is the ambient space. If \(\mathcal{P}\) is full-dimensional, then there exists a _unique_ (up to multiplication) \(H\)-representation. A vector \(\mathbf{p}\in\mathcal{P}\) is _extremal_ if \(\mathbf{p}\not\in\operatorname{conv}(\mathcal{P}\setminus\{\mathbf{p}\})\). The set of extremal vectors of \(\mathcal{P}\) is \(\operatorname{ext}(\mathcal{P})\). If \(\operatorname{ext}(\mathcal{P})\subseteq\{0,1\}^{d}\), then \(\mathcal{P}\) is called a \(0/1\)_polytope_. A linear inequality \(\mathbf{w}\cdot\mathbf{x}\leq c\), with \(\mathbf{w}\in\mathbb{R}^{d}\) and \(c\in\mathbb{R}\), is _valid for \(\mathcal{P}\)_ whenever \(\forall\mathbf{p}\in\mathcal{P}:\mathbf{w}\cdot\mathbf{p}\leq c\). We represent inequalities as pairs \((\mathbf{w},c)\). An inequality \((\mathbf{w},c)\) is _trivial_ if \(\mathbf{w}\) contains at most a single nonzero entry, and _non-negative_ if all entries of \(\mathbf{w}\) are non-negative. If \((\mathbf{w},c)\) is valid for \(\mathcal{P}\), then \(\mathcal{F}:=\mathcal{P}\cap\{\mathbf{p}\in\mathbb{R}^{d}\mid\mathbf{w}\cdot\mathbf{p}=c\}\) is a _face_ of \(\mathcal{P}\). Faces of dimension \(\dim(\mathcal{P})-1\) are called _faces,_ and inequalities that give rise to facets are called _facet defining_. If \(\mathbf{c}+\mathbf{\delta}\in\mathcal{P}\Leftrightarrow\mathbf{c}-\mathbf{\delta}\in\mathcal{P}\) for some center \(\mathbf{c}\in\mathcal{P}\), then \(P\) is called _centrally symmetric_.
We also make extensive use of graph theory [28]. A graph \(G=(\mathcal{V},\mathcal{E})\) consists of a finite and non-empty set of vertices \(\mathcal{V}\) and a set of edges \(\mathcal{E}\subseteq\{\{u,v\}\mid u,v\in\mathcal{V}\}\). For two graphs \(G\) and \(G^{\prime}=(\mathcal{V}^{\prime},\mathcal{E}^{\prime})\), the _Cartesian product_\(G\times G^{\prime}\) is the graph with the vertices \(\mathcal{V}\times\mathcal{V}^{\prime}\), and \(\{(u,u^{\prime}),(v,v^{\prime})\}\) is an edge if and only if either \(u=v\) and \((u^{\prime},v^{\prime})\in\mathcal{E}^{\prime}\), or \(u^{\prime}=v^{\prime}\) and \((u,v)\in\mathcal{E}\). Each region bounded by the edges of a planar graph, _i.e.,_ a graph drawn on the plane, is called a _face_. A face surrounded by edges is called _internal face_. A _directed_ graph (_digraph_ for short) \(D=(\mathcal{V},\mathcal{A})\) is a graph where the edges have a direction, _i.e.,_\(\mathcal{V}\) is a finite and non-empty set of vertices, and \(\mathcal{A}\subseteq\mathcal{V}\times\mathcal{V}\) is a set of ordered pairs called _arcs_. The _order_\(\operatorname{ord}(G)\) of a graph (digraph) is the cardinality \(|\mathcal{V}|\) of its vertex set. A _bipartite_ graph (digraph) is two-colorable, _i.e.,_ each vertex can be colored with one out of two colors such that adjacent vertices have different colors. A digraph \(D\) is an _orientation_ of a graph \(G\) if there exists an ordering of the elements in each edge \(o(\mathcal{E})\), such that \(\mathcal{A}=o(\mathcal{E})\). In a digraph, a _directed path_ is a sequence of connected arcs \(((v_{0},v_{1}),(v_{1},v_{2}),\dots)\) where no vertex is revisited, and a _directed cycle_ is a directed path where the last vertex coincides with the first. A directed cycle with \(k\) arcs is a \(k\)_-cycle_. A digraph \(D\) is _simple_ if it does not contain self-loops, _i.e.,_\(\forall v\in\mathcal{V}:(v,v)\not\in\mathcal{A}\). A simple digraph is a _directed acyclic graph_ (DAG for short), if it does not contain any directed cycles. The set of all DAGs of order \(n\) is \(\operatorname{DAG}_{n}\), and the set of all DAGs is DAG. If \(D^{\prime}=(\mathcal{V}^{\prime},\mathcal{A}^{\prime})\) with \(\mathcal{V}^{\prime}\subseteq\mathcal{V}\) and \(\mathcal{A}^{\prime}\subseteq\mathcal{A}\), then \(D^{\prime}\) is a _subdigraph_ of \(D\), and we write \(D^{\prime}\subseteq D\). A simple digraph \(D=(\mathcal{V},\mathcal{A})\) of order \(n\) may be represented by its _adjacency vector_\(\mathbf{\alpha}(D):=(\alpha_{u,v})_{(u,v)\in[n]_{\sharp}^{2}}\subseteq\{0,1\}^{n(n-1)}\), where \(\alpha_{u,v}\) is one if \((u,v)\in\mathcal{A}\), and zero otherwise.
Basic graphs are the _line graph_\(L_{n}\) with vertices \([n]\) and edges \(\{\{i,i+1\}\mid i\in[n-1]\}\), and the _complete bipartite graph_\(K_{m,n}\) with vertices \(\{0\}\times[m]\cup\{1\}\times[n]\) and edges \(\{\{(0,u),(1,v)\}\mid u\in[m],v\in[n]\}\). Basic digraphs are the _complete digraph_\(K_{n}^{\mathrm{di}}\) with vertices \([n]\) and arcs \([n]_{\neq}^{2}\), and the \(k\)_-cycle digraph_\(C_{k}\) with vertices \([k]\) and arcs \(\{(i,i+1)\mid i\in[k-1]\}\cup\{(k-1,0)\}\).
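To keep these conventions concrete, the following short Python sketch (ours, not part of the original text; all function names are illustrative) builds the complete digraph \(K_{n}^{\mathrm{di}}\) and the \(k\)-cycle digraph and computes the adjacency vector \(\mathbf{\alpha}(D)\) defined above.

```python
from itertools import permutations

def complete_digraph(n):
    """Arc set of K_n^di: all ordered pairs of distinct vertices in [n] = {0, ..., n-1}."""
    return set(permutations(range(n), 2))

def cycle_digraph(k):
    """Arc set of the k-cycle digraph C_k on the vertices {0, ..., k-1}."""
    return {(i, (i + 1) % k) for i in range(k)}

def adjacency_vector(arcs, n):
    """Adjacency vector alpha(D), indexed by the ordered pairs (u, v) with u != v."""
    arcs = set(arcs)
    return tuple(int(p in arcs) for p in sorted(permutations(range(n), 2)))

# The 3-cycle digraph, embedded among n = 4 vertices, as a 0/1 vector of length 4*3 = 12.
print(adjacency_vector(cycle_digraph(3), 4))
```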
## III Pairwise central symmetry
As key-method to establish our results, we introduce and study pairwise centrally symmetric \(0/1\) polytopes. Pairwise central symmetry is a specific form of central
symmetry in which the dimensions are paired, and the polytope has a center per pair. For instance, if we pair the first two dimensions, then there exists a vector \(\mathbf{c}\) in the polytope such that \(\mathbf{c}+(x,y,\mathbf{0})\) is in the polytope if and only if \(\mathbf{c}-(x,y,\mathbf{0})\) is. These polytopes contain a "redundancy" as the manifestation of relabelings.
**Definition 1** (Pairwise central symmetry).: A 0/1 polytope \(\mathcal{P}\subseteq\mathbb{R}^{2d}\) is _pairwise centrally symmetric_ if and only if there exists a permutation of the dimensions such that
\[\begin{split}\mathbf{p}&=(\mathbf{p}_{0},\mathbf{p}_{1})\in \operatorname{ext}(\mathcal{P})\implies\\ \forall i&\in[d]:(\mathbf{p}_{0}\oplus\mathbf{e}_{i},\mathbf{p}_ {1}\oplus\mathbf{e}_{i})\in\operatorname{ext}(\mathcal{P})\,.\end{split} \tag{1}\]
By a _permutation_ of the dimensions we mean the following. For each \(2d\)-dimensional vector \(\mathbf{p}=(p_{i})_{i\in[2d]}\), we write the entries of \(\mathbf{p}\) in a specific order \((p_{v_{0}},p_{v_{1}},\ldots,p_{v_{2d-1}})\). The \(d\)-dimensional constituents \(\mathbf{p}_{0}\) and \(\mathbf{p}_{1}\) are then \((p_{v_{0}},p_{v_{1}},\ldots,p_{v_{d-1}})\) and \((p_{v_{d}},p_{v_{d+1}},\ldots,p_{v_{2d-1}})\), respectively.
In the following, whenever we refer to a pairwise centrally symmetric polytope, we understand that the dimensions are permuted in the described way.
We start by showing two basic facts of such polytopes.
**Lemma 1** (Extension).: _The 0/1 polytope \(\mathcal{P}\subseteq\mathbb{R}^{2d}\) is pairwise centrally symmetric if and only if_
\[\begin{split}\mathbf{p}&=(\mathbf{p}_{0},\mathbf{p}_{1})\in \operatorname{ext}(\mathcal{P})\Longleftrightarrow\\ \forall\mathbf{z}&\in\{0,1\}^{d}:(\mathbf{p}_{0}\oplus\mathbf{z},\mathbf{p}_{1}\oplus\mathbf{z})\in\operatorname{ext}(\mathcal{P})\,.\end{split} \tag{2}\]
Proof.: Iteratively apply Eq. (1).
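The closure property of Lemma 1 is easy to test numerically whenever \(\operatorname{ext}(\mathcal{P})\) is given as a finite list of 0/1 vectors. The sketch below is our own illustration (the splitting into \((\mathbf{p}_{0},\mathbf{p}_{1})\) is assumed to already follow the permutation of dimensions described above); it simply checks Eq. (2) by brute force.

```python
from itertools import product

def xor(u, v):
    return tuple(a ^ b for a, b in zip(u, v))

def is_pairwise_centrally_symmetric(vertices, d):
    """Brute-force check of Eq. (2): the vertex set must be closed under (p0 XOR z, p1 XOR z)."""
    verts = set(vertices)
    return all(xor(p[:d], z) + xor(p[d:], z) in verts
               for p in verts for z in product((0, 1), repeat=d))

# Tiny example with d = 1: the full square is pairwise centrally symmetric,
# while dropping one vertex breaks the closure property.
print(is_pairwise_centrally_symmetric({(0, 0), (0, 1), (1, 0), (1, 1)}, 1))  # True
print(is_pairwise_centrally_symmetric({(0, 0), (0, 1), (1, 1)}, 1))          # False
```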
**Lemma 2** (Central symmetry).: _If \(\mathcal{P}\subseteq\mathbb{R}^{2d}\) is a pairwise centrally symmetric 0/1 polytope, then it is also centrally symmetric._
Proof.: Let \(\mathbf{c}:=1/2\cdot\mathbf{1}\) be the center, \(\mathbf{\delta}\in\{\pm 1/2\}^{2d}\), and define the vectors \(\mathbf{p}^{\pm}:=\mathbf{c}\pm\mathbf{\delta}\in\{0,1\}^{2d}\). Now, we have the identity \(\mathbf{p}^{+}\oplus\mathbf{1}=\mathbf{p}^{-}\): Flipping all bits is equivalent to subtracting instead of adding \(\mathbf{\delta}\) to the center. Finally, by Lemma 1 we have \(\mathbf{p}^{+}\in\operatorname{ext}(\mathcal{P})\Leftrightarrow\mathbf{p}^{+}\oplus \mathbf{1}\in\operatorname{ext}(\mathcal{P})\).
Pairwise centrally symmetric 0/1 polytopes have a natural projection.
**Definition 2** (Projection).: Let \(\mathcal{P}\subseteq\mathbb{R}^{2d}\) be a pairwise centrally symmetric 0/1 polytope. We define the corresponding _projection map_\(\pi_{d}\) as
\[\pi_{d}:\{0,1\}^{2d} \to\{0,1\}^{d} \tag{3}\] \[(\mathbf{v}_{0},\mathbf{v}_{1}) \mapsto\mathbf{v}_{0}\oplus\mathbf{v}_{1}\,. \tag{4}\]
The _projected polytope_ is \(\mathcal{Q}:=\operatorname{conv}(\pi_{d}(\operatorname{ext}(\mathcal{P})))\).
This projection cancels the multitude of the extremal vectors which arises from the arbitrary vectors \(\mathbf{z}\) in Eq. (2).1 In fact, all the "information" of \(\mathcal{P}\) is contained in the projection \(\mathcal{Q}\):
Footnote 1: Pairwise central symmetry can also be understood through the equivalence relation \(\mathbf{p}\sim\mathbf{p}^{\prime}\) defined by \(\mathbf{p}_{0}\oplus\mathbf{p}_{1}=\mathbf{p}^{\prime}_{0}\oplus\mathbf{p}^{\prime}_{1}\). The extremal vectors of the natural projection are then representatives of the equivalence classes.
**Lemma 3** (Lifting polytope).: _Define the parametrized lifting map \(\lambda_{\mathbf{z}}\) for \(\mathbf{z}\in\{0,1\}^{d}\) as_
\[\lambda_{\mathbf{z}}:\{0,1\}^{d} \to\{0,1\}^{2d} \tag{5}\] \[\mathbf{q} \mapsto(\mathbf{q}\oplus\mathbf{z},\mathbf{z})\,. \tag{6}\]
_If \(\mathcal{P}\subseteq\mathbb{R}^{2d}\) is a pairwise centrally symmetric 0/1 polytope, and \(\mathcal{Q}\) the corresponding projected polytope, then \(\mathcal{P}\) is recovered from \(\mathcal{Q}\) via_
\[\operatorname{ext}(\mathcal{P})=\bigcup_{\mathbf{z}\in\{0,1\}^{d}}\lambda_{\mathbf{z}} (\operatorname{ext}(\mathcal{Q}))\,. \tag{7}\]
Proof.: By Def. 2, the extremal vectors of \(\mathcal{Q}\) are \(\pi_{d}(\operatorname{ext}(\mathcal{P}))\). To prove the \(\subseteq\) direction, let \(\mathbf{p}=(\mathbf{p}_{0},\mathbf{p}_{1})\) be an element in \(\operatorname{ext}(\mathcal{P})\). Now, by taking \(\mathbf{z}=\mathbf{p}_{1}\), we get that the right-hand side contains \(\lambda_{\mathbf{z}}(\pi_{d}(\mathbf{p}))=\lambda_{\mathbf{p}_{1}}(\mathbf{p}_{0}\oplus\mathbf{p}_{1 })=\mathbf{p}\). For the reverse direction \(\supseteq\), let \(\mathbf{r}=(\mathbf{r}_{0},\mathbf{r}_{1})\) be an element of the right-hand side. This vector arises from some \(\mathbf{p}=(\mathbf{p}_{0},\mathbf{p}_{1})\in\mathcal{P}\) and some \(\mathbf{z}\) through the identities \(\mathbf{r}_{0}=\mathbf{p}_{0}\oplus\mathbf{p}_{1}\oplus\mathbf{z}\) and \(\mathbf{r}_{1}=\mathbf{z}\). Due to Lemma 1 and by setting \(\mathbf{y}:=\mathbf{p}_{1}\oplus\mathbf{z}\), the vector \(\mathbf{r}\) is also an element of the left-hand side because \((\mathbf{r}_{0}\oplus\mathbf{y},\mathbf{r}_{1}\oplus\mathbf{y})\) is.
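Definition 2 and Lemma 3 translate directly into code. Below is a minimal numerical sketch (ours; \(\operatorname{ext}(\mathcal{P})\) is again represented as a set of 0/1 tuples rather than an actual polytope object) of the projection \(\pi_{d}\), the lifting \(\lambda_{\mathbf{z}}\), and the reconstruction of Eq. (7).

```python
from itertools import product

def xor(u, v):
    return tuple(a ^ b for a, b in zip(u, v))

def project(vertices, d):
    """pi_d of Def. 2: (v0, v1) -> v0 XOR v1, applied to every extremal vector."""
    return {xor(v[:d], v[d:]) for v in vertices}

def lift(q, z):
    """lambda_z of Lemma 3: q -> (q XOR z, z)."""
    return xor(q, z) + tuple(z)

def lift_all(projected, d):
    """Right-hand side of Eq. (7)."""
    return {lift(q, z) for q in projected for z in product((0, 1), repeat=d)}

# Build a pairwise centrally symmetric vertex set for d = 2 from two representatives,
# project it, and recover it again via Eq. (7).
d, representatives = 2, [(0, 0), (1, 0)]
P = {lift(q, z) for q in representatives for z in product((0, 1), repeat=d)}
Q = project(P, d)
assert lift_all(Q, d) == P and Q == set(representatives)
```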
Our main result on pairwise centrally symmetric polytopes is that the facets of the projected polytope can be _recycled:_
**Theorem 4** (Lifting facets).: _Let \(\mathcal{P}\subseteq\mathbb{R}^{2d}\) be a pairwise centrally symmetric 0/1 polytope, and \(\mathcal{Q}\) the corresponding projected polytope. If \(\mathcal{P}\) and \(\mathcal{Q}\) are full-dimensional, and if \((\mathbf{w},c)\) is a non-negative and non-trivial facet-defining inequality of \(\mathcal{Q}\), then \(((\mathbf{w},-\mathbf{w}),c)\) is a non-trivial facet-defining inequality of \(\mathcal{P}\)._
Proof.: First, we show that the lifted inequality is _valid_ for \(\mathcal{P}\). Thanks to Lemma 3, each vector \(\mathbf{p}\in\operatorname{ext}(\mathcal{P})\) can be expressed as \(\lambda_{\mathbf{z}}(\mathbf{q})\) for some \(\mathbf{q}\in\mathcal{Q}\) and some \(\mathbf{z}\), thus
\[(\mathbf{w},-\mathbf{w})\cdot(\mathbf{q}\oplus\mathbf{z},\mathbf{z}) =\sum_{i\in[d]}w_{i}(q_{i}\oplus z_{i}-z_{i}) \tag{8}\] \[=\sum_{i\in[d]:z_{i}=0}w_{i}q_{i}+\Delta\,. \tag{9}\]
Since \((\mathbf{w},c)\) is non-negative, we have \(\Delta\leq 0\), and validity is inherited from the validity of \((\mathbf{w},c)\) for \(\mathcal{Q}\).
In the following, we find \(2d\) affinely independent vectors in \(\operatorname{ext}(\mathcal{P})\) that saturate the lifted inequality. These vectors define a \((2d-1)\) dimensional face of \(\mathcal{P}\), a facet
of \(\mathcal{P}\). First, note that \((\mathbf{w},c)\) is facet defining for \(\mathcal{Q}\). Therefore, there exists a family \(\mathcal{T}=\{\mathbf{q}_{i}\}_{i\in[d]}\) of \(d\) affinely independent vectors that saturate the inequality. These vectors lifted with the parameter \(\mathbf{0}\) generate the family \(\mathcal{S}:=\{\lambda_{\mathbf{0}}(\mathbf{q}_{i})\}_{i\in[d]}=\{(\mathbf{q}_{i},\mathbf{0})\}_{i\in[d]}\) of \(d\) affinely independent vectors in \(\mathcal{P}\). These vectors trivially saturate the lifted inequality: For all \(i\in[d]\), we have
\[(\mathbf{w},-\mathbf{w})\cdot(\mathbf{q}_{i},\mathbf{0})=\mathbf{w}\cdot\mathbf{q}_{i}=c\,. \tag{10}\]
Now, we need \(d\) such vectors in addition. These are obtained by an appropriate lifting of the extremal vectors. Towards that, we use the implication which holds for all vectors \(\mathbf{q}=(q_{i})_{i\in[d]}\in\mathbb{R}^{d}\)
\[q_{j}=0\implies\mathbf{w}\cdot\mathbf{q}=(\mathbf{w},-\mathbf{w})\cdot(\lambda_{\mathbf{e}_{j}}( \mathbf{q}))\,; \tag{11}\]
if some \(\mathbf{q}\) has a zero entry in dimension \(j\), then lifting that vector with parameter \(\mathbf{e}_{j}\) yields a vector that produces the same value for the lifted inequality as \(\mathbf{q}\) for the original. This implication is immediate:
\[(\mathbf{w},-\mathbf{w})\cdot(\mathbf{q}\oplus\mathbf{e}_{j},\mathbf{e}_{j})=\sum_{i\in[d]\setminus \{j\}}w_{i}q_{i}=\mathbf{w}\cdot\mathbf{q}\,. \tag{12}\]
This means that whenever \(\mathbf{q}\in\operatorname{ext}(\mathcal{Q})\) saturates \((\mathbf{w},c)\) and has a zero entry in dimension \(j\), then \(\lambda_{\mathbf{e}_{j}}(\mathbf{q})\) saturates \(((\mathbf{w},-\mathbf{w}),c)\). Suppose now there exists a sequence of the vectors in \(\mathcal{T}\), such that the \(k\)th vector has a zero in dimension \(k\), _i.e.,_ suppose there exists a function \(f:[d]\to[d]\), such that \((\mathbf{q}_{f(k)})_{k}=0\). Then, the additional \(d\) vectors we are looking for are easily obtained:
\[\mathcal{S}^{\prime}:=\{\lambda_{\mathbf{e}_{k}}(\mathbf{q}_{f(k)})\}_{k\in[d]}=\{( \mathbf{q}_{f(k)}\oplus\mathbf{e}_{k},\mathbf{e}_{k})\}_{k\in[d]}\,. \tag{13}\]
These vectors saturate the lifted inequality due to Eq. (11), and are affinely independent (also with respect to \(\mathcal{S}\)), because each of the vectors contributes to an otherwise untouched dimension.
What remains to show is that such a function \(f\) exists for \(\mathcal{T}\). Towards a contradiction, suppose no such \(f\) exists. This means that there exists a dimension \(\ell\) such that \(\forall\mathbf{t}\in\mathcal{T}:t_{\ell}=1\). Without loss of generality, we take \(\ell\) to be the first dimension, _i.e.,_\(\ell=0\), and we have \(\forall\mathbf{v}\in\operatorname{aff}(\mathcal{T}):v_{0}=1\). Since \(\operatorname{aff}(\mathcal{T})\) is \((d-1)\)-dimensional and the first dimension is fixed to take value \(1\), the vector \((1,\mathbf{0})\) and every vector from \(\{(1,\mathbf{e}_{k})\}_{k\in[d-1]}\) is in \(\operatorname{aff}(\mathcal{T})\). Moreover, note that \((\mathbf{w},c)\) is saturated by any affine combination of the vectors in \(\mathcal{T}\). From this we get that for all \(k\in[d-1]\):
\[c=\mathbf{w}\cdot(1,\mathbf{0})=\mathbf{w}\cdot(1,\mathbf{e}_{k})\,, \tag{14}\]
and the inequality \((\mathbf{w},c)\) must be trivial.
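The validity part of Theorem 4 can also be checked numerically for a concrete projected polytope: lift every extremal vector of \(\mathcal{Q}\) with every parameter \(\mathbf{z}\) and evaluate \(((\mathbf{w},-\mathbf{w}),c)\). The following sketch is ours; the faulty-cube example merely anticipates the polytope that appears in Sec. IV.

```python
from itertools import product

def lifted_inequality_holds(Q_vertices, w, c, d):
    """Check that ((w, -w), c) is valid on all vectors (q XOR z, z) lifted from ext(Q)."""
    for q in Q_vertices:
        for z in product((0, 1), repeat=d):
            p0 = tuple(a ^ b for a, b in zip(q, z))
            if sum(wi * (a - b) for wi, a, b in zip(w, p0, z)) > c:
                return False
    return True

# Example: the faulty 3-cube (all 0/1 vectors except the all-ones vector) with the
# non-negative, non-trivial facet inequality w = (1, 1, 1), c = 2.
faulty_cube = [v for v in product((0, 1), repeat=3) if v != (1, 1, 1)]
print(lifted_inequality_holds(faulty_cube, (1, 1, 1), 2, 3))   # True
```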
## IV Bell Games
Consider the following general scenario. For \(n\) parties \([n]\), each party \(k\in[n]\) receives an input \(x_{k}\in\mathcal{X}_{k}\), for some input space \(\mathcal{X}_{k}\), and _immediately_ produces an output \(a_{k}\in\mathcal{A}_{k}\), for some output space \(\mathcal{A}_{k}\). The realization of \(a_{k}\) is the physical event \(E_{k}\). An input-output behavior of these parties is described by the conditional probability distribution \(p(\underline{a}|\underline{x})\). We assume that the causal order of the parties is static, _i.e.,_ the events \(\{E_{k}\}_{k\in[n]}\) form a partial order \(\preceq_{\sigma}\). This means that if \(j\not\preceq_{\sigma}i\), then the output \(a_{i}\) is _independent_ of \(x_{j}\). In general, the partial order among the events may depend on some initial randomness, _e.g.,_ a coin flip. This leads to the following definition.2
Footnote 2: In fact, this definition is equivalent in asking \(p(\underline{a}|\underline{x})\) to decompose with total orders instead of partial orders.
**Definition 3** (Partial-order correlations).: The \(n\)-party correlations \(p(\underline{a}|\underline{x})\) are _partial-order correlations_ if and only if there exists a probability distribution \(p(\sigma)\) over partial orders \(\preceq_{\sigma}\) and a family of conditional probability distributions \(\{p_{k,\sigma}(a_{k}|a_{\prec_{\sigma}k},x_{\preceq_{\sigma}k})\}_{k,\sigma}\), such that
\[p(\underline{a}|\underline{x})=\sum_{\sigma}p(\sigma)\prod_{k\in[n]}p_{k,\sigma}( a_{k}|a_{\prec_{\sigma}k},x_{\preceq_{\sigma}k})\,. \tag{15}\]
### Single-output scenario
We simplify the general scenario to the minimal case of a single bit of output. The party who produces this single bit of output, however, is selected by the input.
**Definition 4** (Single-output scenario).: For \(n\geq 2\) parties \([n]\), the input space of party \(k\in[n]\) is \(\mathcal{X}_{k}:=[n]_{\neq}^{2}\times\mathcal{Z}_{k}\), where the space \(\mathcal{Z}_{k}\) depends on the input to the \([n]_{\neq}^{2}\) part. Similarly, the output space \(\mathcal{A}_{k}\) depends on the input to the \([n]_{\neq}^{2}\) part. For \((s,r)\in[n]_{\neq}^{2}\),
\[\mathcal{Z}_{k}=\begin{cases}[2]&\text{if }k=s,\\ \emptyset&\text{otherwise},\end{cases}\quad\mathcal{A}_{k}=\begin{cases}[2]& \text{if }k=r,\\ \emptyset&\text{otherwise}.\end{cases} \tag{16}\]
The input \((s,r)\in[n]_{\neq}^{2}\) is _shared_ with all parties.
Note that the above definition could also be formulated with fixed input spaces and a special symbol to denote "no input." Also, one could define the output space to be the binary space [2] for _all_ parties, and then marginalize over the non-relevant outputs. Such a definition, however, would be redundant. Also, note that the _shared input_ space \([n]_{\neq}^{2}\) has an intuitive role: If the shared input is \((s,r)\), then party \(s\) has the additional binary input space and party \(r\) has the binary output space; party \(s\) is the "sender" and party \(r\) is the "receiver." It is important to note, however, that the parties without output are thought of as producing a trivial output. This renders the above-mentioned events \(\{E_{k}\}_{k\in[n]}\) well-defined.
In this single-output scenario, the input-output behavior is some conditional probability distribution \(p(a|s,r,x)\)
where the pair \((s,r)\) is an input to every party, \(x\) is the binary input to party \(s\), and \(a\) is the binary output of party \(r\). Such correlations \(p(a|s,r,x)\), according to Def. 3, are partial-order correlations whenever there exists a distribution \(p(\sigma)\) over all partial orders, and a family of conditional probability distributions \(\{p_{r,\sigma}^{\preceq}(a|s,r,x),p_{r,\sigma}^{\not\preceq}(a|s,r)\}_{r,\sigma}\), such that they decompose as
\[\begin{split} p(a|s,r,x)&=\sum_{\sigma:s\preceq_{\sigma}r}p(\sigma)p_{r,\sigma}^{\preceq}(a|s,r,x)\\ &+\quad\sum_{\sigma:s\not\preceq_{\sigma}r}p(\sigma)p_{r,\sigma}^{\not\preceq}(a|s,r)\,;\end{split} \tag{17}\]
the output \(a\) of the "receiver" \(r\) may depend on the bit \(x\) only if the "sender" \(s\) is before \(r\).
**Definition 5** (Single-output partial-order correlations).: For \(n\geq 2\) parties, the set \(\mathcal{C}_{n}\) is the set of all conditional probability distributions \(p(a|s,r,x)\) that decompose as in Eq. (17).
### Geometric representation
In order to derive the limits on partial-order correlations, we represent the attainable correlations geometrically. The facet-defining inequalities of the resulting polytope correspond to the respective Bell games (see Fig. 3). The geometric representation is obtained by expressing the conditional probability distributions from the set \(\mathcal{C}_{n}\) as vectors. Thanks to total probability, the characteristic vector of some \(p\in\mathcal{C}_{n}\) is the \(2n(n-1)\)-dimensional vector of the probability that \(a=0\) for each input, _i.e.,_ each \(p\in\mathcal{C}_{n}\) is represented by \(\mathbf{\chi}(p)\) with
\[\mathbf{\chi}:\mathcal{C}_{n} \rightarrow\mathbb{R}^{2n(n-1)}\] (18) \[p \mapsto(p(0|s,r,x))_{(s,r,x)\in[n]_{\neq}^{2}\times[2]}\,. \tag{19}\]
For the reverse direction \(\supseteq\), let \(D=([n],\mathcal{A})\) be a DAG. In the following, we construct partial-order correlations \(p\in\mathcal{C}_{n}\), such that the corresponding vector \(\mathbf{q}\) is \(\mathbf{\alpha}(D)\). First, we find a partial order \(\preceq_{\sigma}\), such that \(D\) is compatible with \(\preceq_{\sigma}\) in the following sense: if \((u,v)\in\mathcal{A}\), then \(u\preceq_{\sigma}v\). Such a partial order can, for instance, be obtained from the transitive closure of the arcs. Now, the strategies of the parties are: On input \((u,v)\), the "receiver" \(v\) outputs \(a=0\) whenever \((u,v)\not\in\mathcal{A}\), and \(a=x\) otherwise. In the former case, the "receiver" \(v\) outputs \(a=0\)_even if_\(u\preceq_{\sigma}v\)--the receiver simply _ignores_ any potential information on \(x\). In the latter case, the arc \((u,v)\) implies that the "sender" \(u\) is in the causal past of \(v\), and therefore, \(u\) might communicate the value of \(x\) to \(v\). These strategies yield the vector \(\mathbf{p}=(p_{u,v,x})_{u,v,x}\) with
\[p_{u,v,x}=\begin{cases}1&(u,v)\not\in\mathcal{A}\,,\\ 1&(u,v)\in\mathcal{A}\wedge x=0\,,\\ 0&(u,v)\in\mathcal{A}\wedge x=1\,.\end{cases} \tag{21}\]
Therefore, the parity \(p_{u,v,0}\oplus p_{u,v,1}\) is one if \((u,v)\in\mathcal{A}\), and zero otherwise: \(\mathbf{q}=\pi_{n(n-1)}(\mathbf{p})=\mathbf{\alpha}(D)\).
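For small \(n\), the extremal vectors of \(\mathcal{Q}_{n}\) can be enumerated directly, which gives a quick sanity check of the claim \(\operatorname{ext}(\mathcal{Q}_{n})=\mathbf{\alpha}(\operatorname{DAG}_{n})\). The sketch below (ours) lists all DAGs on three labelled vertices by brute force.

```python
from itertools import permutations, product

PAIRS = sorted(permutations(range(3), 2))   # the index set [3] without equal pairs, 6 ordered pairs

def is_dag(arcs):
    """A digraph on [3] is acyclic iff some total order of the vertices extends all arcs."""
    return any(all(order.index(u) < order.index(v) for (u, v) in arcs)
               for order in permutations(range(3)))

ext_Q3 = [vec for vec in product((0, 1), repeat=len(PAIRS))
          if is_dag([p for p, bit in zip(PAIRS, vec) if bit])]
print(len(ext_Q3))   # 25 labelled DAGs on 3 vertices, i.e. 25 extremal vectors of Q_3
```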
### The acyclic-subdigraph polytope
The (unweighted) acyclic-subdigraph problem in combinatorial optimization is the following: Given a digraph \(D=(\mathcal{V},\mathcal{A})\), find a DAG subdigraph \(D^{\prime}\) of \(D\) which maximizes the number of arcs. The dual of this problem is known as the minimum-feedback-arc-set problem. Algorithms to these problems are of practical relevance in _e.g.,_ voting, ranking, and task scheduling [29, p. 57-61]. One can associate to every instance of such a problem a polytope with the solutions as extremal points. The representation of the polytope in terms of linear inequalities allows for the application of linear-programming techniques. The polytope for the acyclic-subdigraph problem is as follows:
**Definition 6** (Acyclic-subdigraph polytope [22]).: Given a digraph \(D=(\mathcal{V},\mathcal{A})\), the acyclic-subdigraph polytope is
\[\mathrm{AC}(D):=\mathrm{conv}(\{\mathbf{\alpha}(D^{\prime})\mid\mathrm{DAG}\ni D ^{\prime}\subseteq D\})\,. \tag{22}\]
Thus, the polytope \(\mathcal{Q}_{n}\) of our interest is \(\mathrm{AC}(K_{n}^{\mathrm{di}})\), and we have
\[\mathrm{ext}(\mathcal{Q}_{n})=\mathrm{ext}(\mathrm{AC}(K_{n}^{\mathrm{di}}))= \mathbf{\alpha}(\mathrm{DAG}_{n})\,. \tag{23}\]
Grötschel, Jünger, and Reinelt [22] derive a series of facet-defining inequalities for \(\mathrm{AC}(D)\), and hence, also for \(\mathcal{Q}_{n}\). Before we restate these inequalities, we define two classes of digraphs. Note that the graphs in both classes are bipartite.
**Definition 7** (\(k\)-fence and \(k\)-Mobius digraph).: The _\(k\)-fence digraph_ for \(k\geq 2\) (see Fig. 4) is the orientation of \(K_{k,k}\) where \(\{(0,i),(1,i)\}\) is oriented as \(((0,i),(1,i))\), and all remaining edges are oriented as \(((1,i),(0,j))\), _i.e.,_ it has the vertex set \(\{0,1\}\times[k]\), and the arcs
\[\left\{((0,i),(1,i)),((1,i),(0,j))\,\Big{|}\,(i,j)\in[k]_{\neq}^{2}\right\}. \tag{24}\]
The _\(k\)-Mobius digraph_ (see Fig. 5) is defined for odd \(k\geq 3\) only, and it is the orientation of the \(k\)-ladder \(L_{2}\times L_{k}\) such that \(\{(0,0),(1,0)\}\) is oriented as \(((0,0),(1,0))\), the arcs adjacent to every internal face form a four-cycle, and has the "crossing" arcs \(\{((1,0),(0,k-1)),((1,k-1),(0,0))\}\) in addition, _i.e.,_ it has the vertex set \(\{0,1\}\times[k]\) and the arcs
\[\left\{\big{(}(x,i+\gamma y),(y,i+\gamma(x\oplus 1))\big{)}\, \Big{|}\right.\] \[\left.i\in[k],i\;\mathrm{odd},x,y\in\{0,1\},\gamma\in\{\pm 1\} \right\}\cup \tag{25}\] \[\left\{((1,0),(0,k-1)),((1,k-1),(0,0))\right\}.\]
Note that the \(2\)-fence digraph is isomorphic to the \(4\)-cycle digraph, and that the \(3\)-fence digraph is isomorphic to the \(3\)-Mobius digraph. Also, the \(k\)-Mobius digraph is an orientation of the \(2k\)-Mobius ladder defined in Ref. [23].
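Definition 7 is straightforward to instantiate. A small sketch for the \(k\)-fence digraph (ours, with the vertex labels \((0,i)\), \((1,i)\) as in the definition; the \(k\)-Möbius digraph can be generated analogously from Eq. (25)) is given below.

```python
def fence_arcs(k):
    """Arc set of the k-fence digraph on the vertices {0,1} x [k] (Def. 7)."""
    vertical = {((0, i), (1, i)) for i in range(k)}
    back = {((1, i), (0, j)) for i in range(k) for j in range(k) if i != j}
    return vertical | back

# The k-fence has k vertical arcs and k(k-1) "back" arcs, i.e. k^2 arcs in total,
# which is the |A| entering the fence bound of Theorem 9 below.
print(len(fence_arcs(5)))   # 25
```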
For the following theorem, we need the notion of _trivial embedding_. Let \(D=(\mathcal{V}\subseteq[n],\mathcal{A})\) be some digraph. We can always embed \(D\) in a digraph \(D^{\uparrow n}:=([n],\mathcal{A})\) of order \(n\) by extending the set of vertices from \(\mathcal{V}\) to \([n]\), and by keeping the set of arcs \(\mathcal{A}\).
**Theorem 8** (Facet-defining inequalities [22]).: _For \(n\geq 2\), let \(D=(\mathcal{V}\subseteq[n],\mathcal{A})\) be some digraph._
_If \(D\) is isomorphic to the \(k\)-cycle digraph \(C_{k}\), then the \(k\)-cycle inequality \((\mathbf{\alpha}(D^{\uparrow n}),k-1)\) is facet-defining for \(\mathcal{Q}_{n}\)._
_If \(D\) is isomorphic to the \(k\)-fence digraph, then the \(k\)-fence inequality \((\mathbf{\alpha}(D^{\uparrow n}),k^{2}-k+1)\) is facet-defining for \(\mathcal{Q}_{n}\)._
Figure 4: The \(5\)-fence digraph with its two-coloring.
Figure 5: The \(5\)-Möbius digraph with its two-coloring.
_If \(D\) is isomorphic to the \(k\)-Möbius digraph, then the \(k\)-Möbius inequality \((\mathbf{\alpha}(D^{\uparrow n}),(5k-1)/2)\) is facet-defining for \(\mathcal{Q}_{n}\)._
Many variations of these facet-defining inequalities are known (see Ref. [30] for a list). While Theorem 4 holds for all facet-defining inequalities of the polytope \(\mathcal{Q}_{n}\), we only focus on these simple forms.
We briefly discuss these inequalities. For the digraph \(D=([3],\{(0,1),(1,0),(2,0),(2,1)\})\) (see Fig. 6(a)), the vector \(\mathbf{\alpha}(D)\) is _not_ in the polytope \(\mathcal{Q}_{3}\) (obviously, \(D\) is not a DAG), because the 2-cycle inequality is violated:
\[\mathbf{\alpha}(C_{2}^{\uparrow 3})\cdot\mathbf{\alpha}(D)=2>1\,. \tag{26}\]
The scalar product in the evaluation of the inequality corresponds to counting the number of the arcs simultaneously present in both digraphs. For another illustration, let \(\mathbf{v}\) be the vector that corresponds to the uniform mixture of two digraphs \((\mathbf{\alpha}(D_{0})+\mathbf{\alpha}(D_{1}))/2\), where \(D_{0}\) is isomorphic to the 3-fence digraph, shown in Fig. 6(b), and \(D_{1}\) is the same digraph with the vertical arcs \(\{(0,3),(1,4),(2,5)\}\) missing.3 First, note that \(\mathbf{v}\) violates the 3-fence inequality:
Footnote 3: The vector \(\mathbf{v}\) can be understood as the adjacency vector of a weighted digraph.
\[\mathbf{\alpha}(D_{0})\cdot\mathbf{v}=7.5>7\,. \tag{27}\]
However, the vector \(\mathbf{v}\) does _not_ violate any \(k\)-cycle inequality. For instance, consider digraph \(\hat{C}\) with the arcs \(\{(0,3),(3,1),(1,4),(4,0)\}\) and vertices \(\{0,1,\ldots,5\}\). When we evaluate the corresponding 4-cycle inequality, we obtain
\[\mathbf{\alpha}(\hat{C})\cdot\mathbf{v}=3\leq 3\,; \tag{28}\]
the inequality is satisfied. This illustrates that the \(k\)-cycle inequalities are _insufficient_ to single out the DAG polytope \(\mathcal{Q}_{n}\). For \(0/1\) vectors \(\mathbf{v}\in\{0,1\}^{n(n-1)}\) the \(k\)-cycle inequalities are--by definition--sufficient to detect whether \(\mathbf{v}\) corresponds to the adjacency vector of a DAG or not. This insufficiency for non-\(0/1\) vectors means that the DAG polytope has additional structure that emerges with increasing order (see also Refs. [22; 29]).
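The worked example of Eqs. (26)–(28) can be reproduced with a few lines of code (ours; the vertex labelling 0–5 with the verticals (0,3), (1,4), (2,5) follows the text).

```python
from itertools import permutations

def alpha(arcs, n):
    """Adjacency vector over the ordered pairs of distinct vertices in [n]."""
    arcs = set(arcs)
    return [int(p in arcs) for p in sorted(permutations(range(n), 2))]

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

# Eq. (26): D = ([3], {(0,1),(1,0),(2,0),(2,1)}) violates the 2-cycle inequality.
C2 = [(0, 1), (1, 0)]
D = [(0, 1), (1, 0), (2, 0), (2, 1)]
print(dot(alpha(C2, 3), alpha(D, 3)))          # 2 > 1

# Eqs. (27)-(28): v = (alpha(D0) + alpha(D1)) / 2 with D0 a 3-fence on {0,...,5}
# (verticals (0,3),(1,4),(2,5)) and D1 the same digraph without the verticals.
verticals = [(0, 3), (1, 4), (2, 5)]
back = [(d, u) for d in (3, 4, 5) for u in (0, 1, 2) if d - 3 != u]
D0, D1 = verticals + back, back
v = [(a + b) / 2 for a, b in zip(alpha(D0, 6), alpha(D1, 6))]
print(dot(alpha(D0, 6), v))                    # 7.5 > 7: the 3-fence inequality is violated
print(dot(alpha([(0, 3), (3, 1), (1, 4), (4, 0)], 6), v))   # 3.0 <= 3: the 4-cycle inequality holds
```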
### Partial-order inequalities
We introduce the concept of a digraph game:
**Definition 8** (Digraph game).: For \(n\) parties \([n]\) and a digraph \(D=(\mathcal{V}\subseteq[n],\mathcal{A})\), the _digraph game_\(\Gamma(n,D)\) is as follows. First, a referee picks at random an arc \((s,r)\) from \(\mathcal{A}\), and a bit \(x\in\{0,1\}\). Then, the referee announces \((s,r)\) to all parties \([n]\), and in addition, distributes \(x\) to party \(s\). We say that the \(n\) parties _win_ the game \(\Gamma(n,D)\) whenever the output of party \(r\) equals \(x\), and denote this event by \(\mathcal{W}(\Gamma(n,D))\).
We may now combine all previous lemmas with Theorem 8, and obtain Bell games with which dynamical causal relations among the parties are detected.
**Theorem 9** (Bell games).: _Consider \(n\) parties \([n]\) and a digraph \(D=(\mathcal{V}\subseteq[n],\mathcal{A})\). If \((\mathbf{\alpha}(D^{\uparrow n}),c)\) is a facet-defining inequality of \(\mathcal{Q}_{n}\), then the maximum winning probability of the game \(\Gamma(n,D)\) with partial-order correlations is bounded by_
\[\max_{p\in\mathcal{C}_{n}}\Pr[\mathcal{W}(\Gamma(n,D))]\leq\frac{1}{2}+\frac{ c}{2|\mathcal{A}|}\,. \tag{29}\]
_In particular, if \(D\) is isomorphic to the \(k\)-cycle digraph, then the bound is \(1-\frac{1}{2k}\), if \(D\) is isomorphic to the \(k\)-fence digraph, then the bound is \(1-\frac{k-1}{2k^{2}}\), and if \(D\) is isomorphic to the \(k\)-Mobius digraph, then the bound is \(1-\frac{k+1}{12k}\). Each of these inequalities is facet defining for the partial-order-correlations polytope \(\mathcal{P}_{n}\)._
Proof.: The set of single-output correlations among \(n\) partially ordered parties forms a polytope \(\mathcal{P}_{n}\) (Lemma 5) which is pairwise centrally symmetric (Lemma 6). By applying the natural projection (Def. 2), we obtain the polytope \(\mathcal{Q}_{n}\) where all extremal vectors are the adjacency vectors of DAGs (Lemma 7). From Theorem 8, we get the facet-defining inequalities for \(\mathcal{Q}_{n}\). All these inequalities are non-trivial and non-negative, and the lifting theorem (Theorem 4) is applicable. Therefore, in the respective cases where \(D^{\uparrow n}\) is the trivial embedding of a \(k\)-cycle, \(k\)-fence, or \(k\)-Mobius digraph, we have as facet-defining inequalities for \(\mathcal{P}_{n}\) the \(k\)-cycle inequality \(((\mathbf{\alpha}(D^{\uparrow n}),-\mathbf{\alpha}(D^{\uparrow n})),k-1)\), the \(k\)-fence inequality \(((\mathbf{\alpha}(D^{\uparrow n}),-\mathbf{\alpha}(D^{\uparrow n})),k^{2}-k+1)\), and the \(k\)-Mobius inequality \(((\mathbf{\alpha}(D^{\uparrow n}),-\mathbf{\alpha}(D^{\uparrow n})),(5k-1)/2)\).
Now, let \(p\in\mathcal{C}_{n}\) be some single-output partial-order correlations, set \(\mathbf{p}\) as its characteristic vector, and let \(((\mathbf{\alpha}(D^{\uparrow n}),-\mathbf{\alpha}(D^{\uparrow n})),c)\) be a facet-defining inequality from above. In evaluating the inequality with respect
Figure 6: (a) The digraph \(D\) on top of the 2-cycle digraph \(C_{2}\). (b) A weighted digraph (dotted arcs have weight \(1/2\)) on top of the 3-fence digraph.
to \(\mathbf{p}\), we get
\[(\mathbf{\alpha}(D^{\uparrow n}),-\mathbf{\alpha}(D^{\uparrow n}))\cdot\mathbf{p} \tag{30}\] \[=\sum_{(s,r)\in\mathcal{A}}p(0|s,r,0)-\sum_{(s,r)\in\mathcal{A}}p(0 |s,r,1)\] (31) \[=\sum_{(s,r)\in\mathcal{A}}\left(p(0|s,r,0)-(1-p(1|s,r,1))\right)\] (32) \[=\sum_{\begin{subarray}{c}x\in\{0,1\}\\ (s,r)\in\mathcal{A}\end{subarray}}p(x|s,r,x)-|\mathcal{A}|\leq c\,. \tag{33}\]
By moving \(|\mathcal{A}|\) to the right side, and by multiplying the inequality with the uniform probability for the referee announcing \((s,r)\) and \(x\), we get
\[\Pr[a=x]\leq\frac{|\mathcal{A}|+c}{2|\mathcal{A}|}\,. \tag{34}\]
Finally, we obtain the stated bounds by noting that the \(k\)-cycle digraph has \(|\mathcal{A}|=k\), the \(k\)-fence digraph has \(|\mathcal{A}|=k^{2}\), and the \(k\)-Mobius digraph has \(|\mathcal{A}|=3k\).
Note that for increasing \(k\), the bounds for the \(k\)-cycle and the \(k\)-fence games approach one; they may be won with partial-order correlations. In contrast, the winning chance of the \(k\)-Mobius game under the same conditions never exceeds \(11/12\). This elevates the \(k\)-Mobius game as a preferred and robust test for dynamical causal order, and shows that dynamical causal order might be unlimited: That feature does not vanish for large numbers of events. The same quality has been observed for nonlocal correlations [24; 25] and non-causal correlations [26].
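The three bounds of Theorem 9 follow from Eq. (29) by inserting \(c\) and \(|\mathcal{A}|\); the following sketch (ours) tabulates them and makes the limiting behaviour explicit.

```python
from fractions import Fraction

def game_bound(c, num_arcs):
    """Right-hand side of Eq. (29): 1/2 + c / (2 |A|)."""
    return Fraction(1, 2) + Fraction(c) / (2 * num_arcs)

def cycle_bound(k):    return game_bound(k - 1, k)                       # 1 - 1/(2k)
def fence_bound(k):    return game_bound(k * k - k + 1, k * k)           # 1 - (k-1)/(2k^2)
def moebius_bound(k):  return game_bound(Fraction(5 * k - 1, 2), 3 * k)  # 1 - (k+1)/(12k)

for k in (3, 5, 21, 1001):   # odd k, as required for the Moebius digraph
    print(k, float(cycle_bound(k)), float(fence_bound(k)), float(moebius_bound(k)))
# The cycle and fence bounds tend to 1, whereas the Moebius bound stays below 11/12.
```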
### Partial-order and causal inequalities
Causal inequalities, similar to partial-order inequalities, limit the correlations among \(n\) parties under the assumption of a global, _possibly dynamical,_ causal order. Here, in contrast with partial-order inequalities, a party may influence the causal relations of the parties in its causal future. Clearly, partial-order correlations are a subset of the causal ones. Causal inequalities have been extensively studied in the context of indefinite causal order (see, _e.g.,_ Refs. [13; 17; 18; 26; 31; 32; 33]). While we are mainly concerned about partial-order inequalities, we nevertheless introduce causal correlations and the respective inequalities. The reason is that they coincide with the former for an even simpler scenario than studied above. Moreover, we recover a Bell game that was studied in Ref. [34].
First, we introduce causal correlations in the general setting.
**Definition 9** (Causal correlations).: The \(n\)-party correlations \(p(\underline{a}|\underline{x})\), where \(x_{k}\) is the input to party \(k\), and \(a_{k}\) is the output from party \(k\), are _causal_ if and only if they decompose as
\[p(\underline{a}|\underline{x})=\sum_{i\in[n]}p(i)p(a_{i}|x_{i})p^{i}_{a_{i},x _{i}}(a_{\setminus\{i\}}|x_{\setminus\{i\}})\,, \tag{35}\]
where \(p(i)\) is a probability distribution over \([n]\), and \(p^{i}_{a_{i},x_{i}}(a_{\setminus\{i\}}|x_{\setminus\{i\}})\) are \((n-1)\)-party causal correlations.
In this recursive definition, party \(i\) acts "first" (hence, its output may only depend on its input). The remaining parties have access to \(i,a_{i},x_{i}\), but again, there exists a party that acts "first." The selection of that party may depend on \(i\), \(a_{i}\), and \(x_{i}\).
In the single-output scenario (Def. 4) studied above, the input \((s,r)\) specifies the output-providing party \(r\) and the party \(s\) who gets the additional input \(x\). Instead, one can neglect \(s\), and provide \(x\) to _every_ party, except--to make it non-trivial--to party \(r\).
**Definition 10** (All-to-one scenario).: For \(n\geq 2\) parties \([n]\), the input space of party \(k\in[n]\) is \(\mathcal{X}_{k}:=[n]\times\mathcal{Z}_{k}\), where the space \(\mathcal{Z}_{k}\) depends on the input to the \([n]\) part. Similarly, the output space \(\mathcal{A}_{k}\) depends on the input to the \([n]\) part. For \(r\in[n]\),
\[\mathcal{Z}_{k}=\begin{cases}[2]&\text{if }k\neq r,\\ \emptyset&\text{otherwise},\end{cases}\quad\mathcal{A}_{k}=\begin{cases}[2]& \text{if }k=r,\\ \emptyset&\text{otherwise}.\end{cases} \tag{36}\]
The input to the \([n]\) part is _shared_ with all parties, and the input to the \(\mathcal{Z}_{k}\) part is _shared_ with all parties but \(r\).
As above, one can define the set of partial-order correlations \(\mathcal{C}_{n}^{\text{all-to-1}}\) for this simplified scenario and study the polytope \(\mathcal{P}_{n}^{\text{all-to-1}}:=\mathbf{\chi}^{\prime}(\mathcal{C}_{n}^{\text{ all-to-1}})\in\mathbb{R}^{2n}\), and likewise for causal correlations. Here, the characteristic vector of some conditional probability distribution \(p\in\mathcal{C}_{n}^{\text{all-to-1}}\) is obtained from
\[\mathbf{\chi}^{\prime}:\mathcal{C}_{n}^{\text{all-to-1}} \to\mathbb{R}^{2n} \tag{37}\] \[p \mapsto(p(0|r,x))_{(r,x)\in[n]\times[2]}\,. \tag{38}\]
Note that in both cases, partial-order and causal, the output \(a\) of party \(r\)_must be_ independent of \(x\) only if \(r\) is first in the partial, respectively causal order. Hence, we arrive at the following statement, which is trivial to prove.
**Lemma 10** (All-to-one partial-order and causal correlations).: _For \(n\) parties \([n]\), the correlations \(p(a|r,x)\), where \(a\) is the output of party \(r\), \(r\) is the input to every party, and \(x\) is an input to the parties \([n]\setminus\{r\}\), are partial-order correlations if and only if there exists a distribution \(p(k)\) over all parties, and a family of conditional probability distributions \(\{p^{\mathcal{Z}}_{r}(a|r),p^{\mathcal{Z}}_{r}(a|r,x)\}_{r}\), such that_
\[p(a|r,x)=p(r)p^{\mathcal{Z}}_{r}(a|r)+(1-p(r))p^{\mathcal{Z}}_{r}(a|r,x)\,. \tag{39}\]
_These correlations are causal correlations if and only if the same decomposition exists._
We directly present our result. In the proof, we again exploit the property of pairwise central symmetry. The projected polytope, as shown in the proof, turns out to be the _faulty hypercube_.
**Theorem 11** (All-to-one Bell game).: _For \(n\geq 2\) parties \([n]\), a referee picks uniformly at random a "receiver" \(r\in[n]\), and a bit \(x\in\{0,1\}\), and distributes \(r\) to every party, and \(x\) to every party but \(r\). The inequality_
\[\Pr[a=x]\leq 1-\frac{1}{2n} \tag{40}\]
_is facet-defining for \(\mathcal{P}_{n}^{\text{all-to-1}}\)._
Proof.: The set \(\mathcal{P}_{n}^{\text{all-to-1}}\) is a full-dimensional convex \(0/1\) polytope in \(\mathbb{R}^{2n}\), where a vector \(\mathbf{p}=(p_{r,x})_{(r,x)\in[n]\times\{0,1\}}\) is a list of the probability that party \(r\) outputs \(a=0\) on input \(r\) to all parties and input \(x\) to the parties \([n]\setminus\{r\}\). This polytope is pairwise centrally symmetric, where \(p_{0}=(p_{r,0})_{r\in[n]}\), and \(p_{1}=(p_{r,1})_{r\in[n]}\), for the same reason as \(\mathcal{P}_{n}\) is: Each party might relabel the output. We apply the projection map (Def. 2) to obtain the polytope \(\mathcal{Q}_{n}^{\text{all-to-1}}:=\text{conv}(\pi_{n}(\text{ext}(\mathcal{P }_{n}^{\text{all-to-1}})))\subseteq\mathbb{R}^{n}\). As it turns out, this polytope is the faulty \(n\)-cube
\[\text{ext}(\mathcal{Q}_{n}^{\text{all-to-1}})=\{0,1\}^{n}\setminus\mathbf{1}\,. \tag{41}\]
To see this, let \(\mathbf{q}=(q_{r})_{r\in[n]}=\pi_{n}(\mathbf{p})\) be an element of the left-hand side. The entries of this vector are \(q_{r}=p_{r,0}\oplus p_{r,1}\). The entry \(q_{r}\) is zero if the output \(a\) of party \(r\) does _not_ depend on \(x\), and is one otherwise. Since \(\mathbf{q}\) arises from a partial ordering of the parties, we have that there always must exist a party \(r^{-}\) that is minimal with respect to that partial order. The entry \(q_{r^{-}}\) must be zero, and therefore \(\mathbf{q}\neq\mathbf{1}\). For the converse, let \(\mathbf{v}=(v_{r})_{r\in[n]}\) be a vertex of the faulty \(n\)-cube, and suppose that for \(r_{0}\in[n]\) we have \(v_{r_{0}}=0\). Now, take the partial order \(\preceq_{\sigma}\) where \(r_{0}\) is minimal, _i.e.,_ for all \(r^{\prime}\in[n]\setminus\{r_{0}\}\), the relation \(r_{0}\preceq_{\sigma}r^{\prime}\) holds. From this, we can construct the following strategy. If the selected party \(r\) to make the guess is \(r_{0}\), then party \(r_{0}\) outputs \(a=0\). In the alternative case, where \(r\neq r_{0}\), party \(r_{0}\) receives \(x\) from the referee and forwards \(x\) to party \(r\). Finally, party \(r\) outputs \(a=v_{r}x\).
The faulty hypercube has a single non-trivial facet, which is defined by the inequality \((\mathbf{1},n-1)\). This facet-defining inequality is non-negative, and therefore, we apply our facet-lifting theorem, and obtain the facet-defining inequality \(((\mathbf{1},\mathbf{-1}),n-1)\) for the polytope \(\mathcal{P}_{n}^{\text{all-to-1}}\). At last, this inequality
\[(\mathbf{1},\mathbf{-1})\cdot(\mathbf{p}_{0},\mathbf{p}_{1}) =\sum_{r\in[n]}p(0|r,0)-\sum_{r\in[n]}p(0|r,1) \tag{42}\] \[=\sum_{\begin{subarray}{c}x\in\{0,1\}\\ r\in[n]\end{subarray}}p(x|r,x)-n\] (43) \[\leq n-1 \tag{44}\]
is turned into the Bell game as stated.
## V Causal models
We present strategies with which the presented games are won. In contrast to the device-independent approach that we followed until now, we give a physical description of the parties, how they are interlinked, and their actions. This description is given in terms of causal models. We start with a brief introduction to this framework. The interested reader may consult Refs. [34, 35] for details.
### Framework
Consider a setup with \(n\) parties \([n]\). Each party \(k\in[n]\) is composed of a past boundary and a future boundary (see Fig. 7(a)). Party \(k\) receives a physical system on the past boundary. The physical system is in a state from the set \(\mathcal{I}_{k}\). After receiving that system, party \(k\) carries out an experiment on that system, and releases a system to the future boundary. The released system is in a state from the set \(\mathcal{O}_{k}\). In a classical-deterministic world, the experiment carried out by party \(k\) is a function \(\mu_{k}:\mathcal{I}_{k}\to\mathcal{O}_{k}\). Party \(k\), additionally, may choose a specific experiment based on some experimental setting \(x_{k}\in\mathcal{X}_{k}\). The experiment may also produce some experimental result \(a_{k}\in\mathcal{A}_{k}\). Thus, in this general setting, the experiment of party \(k\) is some function \(\mu_{k}:\mathcal{X}_{k}\times\mathcal{I}_{k}\to\mathcal{A}_{k}\times\mathcal{O}_{k}\).
The \(n\) parties are interlinked through the environment (see Fig. 7(b)). The environment takes the physical systems on the future boundaries of the parties, and provides systems to their past boundaries. Thus, the environment is a function \(\omega:\underline{\mathcal{O}}\to\underline{\mathcal{I}}\). If we do not request the parties to be situated at some specific locations, but instead assume that each party carries out their experiment exactly once, and that the parties may only communicate through the environment, we arrive at the following description of a process.
**Theorem 12** (Process [36]).: _The function \(\omega:\underline{\mathcal{O}}\to\underline{\mathcal{I}}\) is a classical-deterministic process if and only if_
\[\begin{split}\forall(\mu_{k})_{k\in[n]}&\in\prod_{k\in[n]}(\mathcal{I}_{k}\to\mathcal{O}_{k}),\,\exists!\underline{i}\in\underline{\mathcal{I}}:\\ &\underline{i}=\omega(\underline{\mu}(\underline{i}))\,,\end{split} \tag{45}\]
_where \(\exists!\) is the uniqueness quantifier._
This theorem follows from the process-matrix framework [13], and states that irrespective of the experiment
Figure 7: (a) Party \(k\) receives a system, performs an experiment \(\mu_{k}\) with experimental setting \(x_{k}\), observes the experimental result \(a_{k}\), and releases a system back to the environment. (b) Three parties interlinked by the environment (some process \(\omega\)).
carried out by the parties, the state of the input system is well-defined. For a process \(\omega\) and a choice of experiments, the observed statistics are
\[p(\underline{a}|\underline{x})=\sum_{\underline{i}\in\underline{\mathcal{I}},\,\underline{o}\in\underline{\mathcal{O}}}[\omega(\underline{o})=\underline{i}][(\underline{a},\underline{o})=\underline{\mu}(\underline{x},\underline{i})]\,, \tag{46}\]
where we use \([i=j]\) for the Kronecker delta \(\delta_{i,j}\). Satisfying the law of total probability, this expression returns one for experiments without settings and results.
This notion of process is neatly combined with the notion of causal models. A causal model is a pair: The causal structure, which is a digraph \(M=(\mathcal{W},\mathcal{S})\) with parties \(\mathcal{W}\), and the model parameters, which are a family of functions \(\{\omega_{k}:\mathcal{O}_{\mathrm{Pa}_{M}(k)}\to\mathcal{I}_{k}\}_{k\in[n]}\). Here, \(\mathrm{Pa}_{M}(k):=\{p\mid(p,k)\in\mathcal{S}\}\subseteq\mathcal{W}\) denotes the set of parents of the vertex \(k\in\mathcal{W}\) with respect to the digraph \(M\). By combining processes with causal models, we arrive at the following, where the connections among the parties are _faithfully represented_ by the causal structure and where the model parameters are _consistent_ in the sense that the laws of probability are conserved.
**Definition 11** (Faithful and consistent causal model [34]).: The causal structure \(M\) and the model parameters \(\{\omega_{k}:\mathcal{O}_{\mathrm{Pa}_{M}(k)}\to\mathcal{I}_{k}\}_{k\in[n]}\) form a _faithful and consistent causal model_ if and only if for each vertex \(k\in\mathcal{W}\), the model parameter \(\omega_{k}\) depends on every argument, _i.e.,_
\[\forall p\in\mathrm{Pa}_{M}(k),\exists o\in\mathcal{O}_{\mathrm{Pa}_{M}(k)\setminus\{p\}},\,o_{0},o_{1}\in\mathcal{O}_{p}: \tag{47}\] \[\omega_{k}(o,o_{0})\neq\omega_{k}(o,o_{1})\,, \tag{48}\]
and the function \(\omega=(\omega_{k})_{k\in[n]}\) is a classical-deterministic process.
Given a causal structure \(M\), we build a causal model by defining some natural model parameters, the _veto model parameters_, where we use \(\mathrm{Ch}_{M}(k):=\{v\mid(k,v)\in\mathcal{S}\}\subseteq\mathcal{W}\) to denote the set of children of the vertex \(k\in\mathcal{W}\) with respect to the digraph \(M\).
**Definition 12** (Veto model parameters [34]).: The _veto model parameters_ for the causal structure \(M=(\mathcal{W},\mathcal{S})\) are
\[\mathcal{I}_{k}:=\{0,1\}\,,\qquad\mathcal{O}_{k}:=\mathrm{Ch}_{M}(k)\cup\{\bot\} \tag{49}\] \[\omega_{k}:\mathcal{O}_{\mathrm{Pa}_{M}(k)}\to\mathcal{I}_{k}\] (50) \[(o_{\ell})_{\ell\in\mathrm{Pa}_{M}(k)}\mapsto\prod_{\ell\in\mathrm{Pa}_{M}(k)}[k=o_{\ell}]\,. \tag{51}\]
These model parameters implement the following functionality, which justifies the name. Each party \(k\) may specify one of its children on its future boundary, or "nobody" expressed with the bottom symbol \(\bot\). If all parents of some party \(k\) specified "\(k\)," then party \(k\) receives a one on the past boundary, and a zero otherwise.
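The veto model parameters amount to one line per party: party \(k\) reads a 1 exactly when every parent voted for \(k\). A generic sketch (ours; the bottom symbol is encoded as None, and the function names are illustrative) is shown below.

```python
def parents_of(arcs):
    """Pa_M(k) for every vertex appearing in the causal structure M = (W, S)."""
    vertices = {v for arc in arcs for v in arc}
    return {k: {p for (p, c) in arcs if c == k} for k in vertices}

def veto_inputs(arcs, votes):
    """Eq. (51): party k receives 1 iff all parents voted for k (None encodes the bottom symbol)."""
    return {k: int(all(votes[p] == k for p in pa)) for k, pa in parents_of(arcs).items()}

# Example: the 2-cycle with the common parent 2 (the classical-switch structure below).
arcs = {(0, 1), (1, 0), (2, 0), (2, 1)}
print(veto_inputs(arcs, {0: 1, 1: None, 2: 0}))   # party 2 (no parents) reads 1, parties 0 and 1 read 0
```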
In the following theorem, we combine some known results on causal models concerning a specific class of digraphs. On the one hand, when a causal structure from that class is amended with the veto model parameters, the resulting causal model is always _faithful and consistent_. On the other hand, these causal models always give rise to causal correlations only.
**Theorem 13** (Consistency and causal correlations [34]).: _If \(M=(\mathcal{W},\mathcal{S})\) is a chordless siblings-on-cycles digraph, i.e., for each directed cycle \(C=(s_{0},s_{1},\dots)\) in \(M\) that traverses the vertices \(\mathcal{W}_{C}\subseteq\mathcal{W}\), (chordless) the arc set \(\mathcal{S}\) contains no chord \((u,v)\in\mathcal{W}_{C}^{2}\setminus C\), and (siblings-on-cycles) the arc set \(\mathcal{S}\) contains two arcs \((p,u_{0}),(p,u_{1})\) with \(u_{0},u_{1}\in\mathcal{W}_{C}\), then the causal model with causal structure \(M\) and the veto model parameters is a consistent and faithful causal model, and for all experiments \(\{\mu_{k}\}_{k\in\mathcal{W}}\), the correlations \(p(\underline{a}|\underline{x})\) are causal._
### Illustration: The classical switch
The present framework and this theorem are illustrated by the classical switch. Consider the 3-party scenario, and let the causal structure \(M\) be the digraph \(D\) of Fig. 6(a). In this case, the veto model parameters are
\[\omega_{0}:\{0,\bot\}\times\{0,1,\bot\} \to\{0,1\} \tag{52}\] \[(o_{1},o_{2}) \mapsto[0=o_{1}][0=o_{2}]\] (53) \[\omega_{1}:\{1,\bot\}\times\{0,1,\bot\} \to\{0,1\}\] (54) \[(o_{0},o_{2}) \mapsto[1=o_{0}][1=o_{2}]\,. \tag{55}\]
Since party 2 has no parents, the model parameter \(\omega_{2}\) is simply the constant 1. If party 2 implements an experiment such that the system on the future boundary of party 2 is in the state 0, then the above functions--partially evaluated--become
\[\omega_{0}(o_{1},o_{2}=0) =[0=o_{1}] \tag{56}\] \[\omega_{1}(o_{0},o_{2}=0) =0\,; \tag{57}\]
Party 0 may now receive a signal from party 1. If, however, the experiment of party 2 produces the state 1, then
\[\omega_{0}(o_{1},o_{2}=1) =0 \tag{58}\] \[\omega_{1}(o_{0},o_{2}=1) =[1=o_{0}]\,, \tag{59}\]
and party 1 may receive a signal from party 0--just as in the quantum switch (_cf._ Fig. 1). Now, note that \(M\) is a chordless siblings-on-cycles graph. The only directed cycle is \(((0,1),(1,0))\). This cycle is chordless, and there exists \(p=2\) such that \((p,0),(p,1)\) are arcs. The above theorem thus tells us that, no matter what experiments are carried out by the parties \(\{0,1,2\}\), only causal correlations (see Def. 9) are attainable. This example also illustrates the influence of the common parent \(p=2\): No matter what experiment \(p\) carries out, the information flow around the directed cycle is effectively interrupted. In fact, this is what ensures consistency. Were the information flow along the cycle not interrupted, then a party may influence its own past [37], which yields a disagreement with Eq. (45).
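The process condition of Eq. (45) can be verified exhaustively for the classical switch: for every choice of deterministic experiments there is exactly one input vector that is a fixed point. The following sketch is our own illustration (output alphabets are encoded as tuples, with '_' standing for the bottom symbol).

```python
from itertools import product

def omega(o0, o1, o2):
    """Veto parameters for the classical-switch structure, cf. Eqs. (52)-(55)."""
    return (int(o1 == 0 and o2 == 0),   # party 0 reads 1 iff both parents voted for 0
            int(o0 == 1 and o2 == 1),   # party 1 reads 1 iff both parents voted for 1
            1)                          # party 2 has no parents

O0, O1, O2 = (1, '_'), (0, '_'), (0, 1, '_')    # O_k = Ch_M(k) plus the bottom symbol '_'

def fixed_points(mu0, mu1, mu2):
    """All input vectors i with i = omega(mu(i)); the experiments are lookup tables over {0,1}."""
    return [i for i in product((0, 1), repeat=3)
            if omega(mu0[i[0]], mu1[i[1]], mu2[i[2]]) == i]

print(all(len(fixed_points(m0, m1, m2)) == 1          # Eq. (45): a unique fixed point, always
          for m0 in product(O0, repeat=2)
          for m1 in product(O1, repeat=2)
          for m2 in product(O2, repeat=2)))           # True
```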
### Game-winning strategies
We present causal models and experiments with which the digraph Bell games (Theorem 9) are won deterministically. The respective causal structures are particularly simple.
**Definition 13** (Game-winning causal models).: The _game-winning causal model_ consists of the following causal structure with the veto model parameters. For the \(k\)-cycle game \(\Gamma(n,D)\) with \(D=(\mathcal{V}\subseteq[n],\mathcal{A})\) and \(k<n\), the causal structure is the digraph \(([n],\mathcal{A}\cup\{(p,u_{0}),(p,y_{0})\})\), where \(p\in[n]\setminus\mathcal{V}\) and \((u_{0},y_{0})\in\mathcal{A}\) (see Fig. 8(a)). For the \(k\)-fence game \(\Gamma(n,D)\) as well as for the \(k\)-Möbius game with \(D=(\mathcal{V}\subseteq[n],\mathcal{A})\) and \(2k<n\), the causal structure is the digraph with vertices \([n]\) and arcs
\[\begin{split}\{(u_{i},u_{i+1}),(y_{i},y_{i+1})\mid i\in[k-1] \}\\ \cup\{(u_{k-1},y_{0}),(y_{k-1},u_{0}),(p,u_{0}),(p,y_{0})\} \,,\end{split} \tag{60}\]
where \(\mathcal{U}=\{u_{i}\}_{i\in[k]}\), \(\mathcal{Y}=\{y_{i}\}_{i\in[k]}\) correspond to the two-coloring bipartition of \(D\), and \(p\in[n]\setminus\mathcal{V}\) (see Fig. 8(b)).
Thanks to Theorem 13, the game-winning causal model is consistent and faithful. Next, we specify the experiments to be carried out by the parties.
**Definition 14** (Game-winning experiments).: For the \(k\)-cycle, \(k\)-fence, or \(k\)-Mobius game \(\Gamma(n,D=(\mathcal{V},\mathcal{A}))\) and the respective game-winning causal model \(M=([n],\mathcal{S})\), the game-winning experiments are the following. Whenever the referee announces \((s,r)\in\mathcal{A}\) to all parties \([n]\) and \(x\in\{0,1\}\) to party \(s\), then party \(s\) implements
\[\mu_{s}:\mathcal{A}\times\{0,1\}\times\mathcal{I}_{s} \rightarrow\mathcal{O}_{s} \tag{61}\] \[(s,r,x=0,i_{s}) \mapsto\bot\] (62) \[(s,r,x=1,i_{s}) \mapsto c\,, \tag{63}\]
where \(\mathrm{Ch}_{M}(s)=\{c\}\), _i.e.,_ party \(s\) "votes" for its unique child if and only if \(x=1\), party \(r\) (who produces the output \(a\)) implements
\[\mu_{r}:\mathcal{A}\times\mathcal{I}_{r} \rightarrow\{0,1\}\times\mathcal{O}_{r} \tag{64}\] \[(s,r,i_{r}) \mapsto(i_{r},\bot)\,, \tag{65}\]
_i.e.,_ party \(r\) produces \(a\) according the state on the past boundary, party \(p\) (who has a controlling role) implements
\[\mu_{p}:\mathcal{A}\times\mathcal{I}_{p} \rightarrow\mathcal{O}_{p} \tag{66}\] \[(s,r,i_{p}) \mapsto u_{0} \tag{67}\]
whenever the directed path from \(s\) to \(r\) in the causal structure \(M\) traverses or ends in \(u_{0}\), and
\[(s,r,i_{p}) \mapsto y_{0} \tag{68}\]
otherwise, and each other party \(k\in\mathcal{V}\setminus\{s,r,p\}\) implement
\[\mu_{k}:\mathcal{A}\times\mathcal{I}_{k} \rightarrow\mathcal{O}_{k} \tag{69}\] \[(s,r,i_{k}=0) \mapsto\bot\] (70) \[(s,r,i_{k}=1) \mapsto c\,, \tag{71}\]
where \(\mathrm{Ch}_{M}(k)=\{c\}\), _i.e.,_ party \(k\) forwards the signal.
**Theorem 14** (Tame game-winning causal models).: _Let \(\Gamma(n,D)\) be a \(k\)-cycle game with \(k<n\), a \(k\)-fence game with \(2k<n\), or a \(k\)-Mobius game with \(2k<n\). If the parties implement the game-winning experiments on the respective game-winning causal model, then the parties win the game \(\Gamma(n,D)\) with certainty, i.e., \(\Pr[\mathcal{W}(\Gamma(n,D))]=1\), and yet for all experiments the causal models yields causal correlations only._
Proof.: The latter part follows from Theorem 13. For the former part, suppose the referee announces \((s,r)\in\mathcal{A}\) to all parties \([n]\), and \(x\in\{0,1\}\) to party \(s\), and let \(\pi\) be the directed path from \(s\) to \(r\) on the game-winning causal structure \(M\). First, note that \(\pi\) is unique, and in the case of the \(k\)-cycle game, \(\pi\) consists of the single arc \((s,r)\). In the case of the \(k\)-fence or \(k\)-Mobius game, \(s\) has a different color than \(r\). Thus, in all cases, the directed path \(\pi\) traverses or ends in at most one of \(u_{0}\) and \(y_{0}\). Thanks to the experiment of party \(p\), each party \(k\) on the path \(\pi\), and \(r\), may receive a signal from \(\mathrm{Pa}_{M}(k)\setminus\{p\}\). Now, party \(s\) votes for its unique child \(c\) only if \(x=1\). If \(c=r\), then according to \(\mu_{c}\), party \(r\) outputs \(a=x\). Else, party \(c\) forwards the signal to its unique child, _etc._
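As a concrete check of Theorem 14, the \(k\)-cycle case can be simulated end to end: build the game-winning causal structure of Def. 13, run the experiments of Def. 14, solve Eq. (45) for the unique input vector, and read off party \(r\)'s output. The sketch below is ours; the vertex labels and the choice \(u_{0}=0\), \(y_{0}=1\) are illustrative.

```python
from itertools import product

def cycle_game_output(k, s, r, x):
    """Output of party r in the game-winning causal model for the k-cycle game (n = k + 1).

    Cycle vertices 0..k-1 with arcs (i, i+1 mod k); party p = k is the common parent of
    u0 = 0 and y0 = 1 (Def. 13).  The experiments follow Def. 14 (None = bottom symbol)."""
    n, p, u0, y0 = k + 1, k, 0, 1
    parents = {v: {(v - 1) % k} for v in range(k)}
    parents[u0].add(p); parents[y0].add(p); parents[p] = set()

    def vote(party, i_party):
        if party == p:
            return u0 if r == u0 else y0          # p "cuts" the cycle away from the path s -> r
        if party == s:
            return (s + 1) % k if x == 1 else None
        if party == r:
            return None                           # r only reads its past boundary
        return (party + 1) % k if i_party == 1 else None   # all other parties forward the signal

    for i in product((0, 1), repeat=n):           # the unique fixed point of Eq. (45)
        votes = {v: vote(v, i[v]) for v in range(n)}
        if all(i[v] == int(all(votes[q] == v for q in parents[v])) for v in range(n)):
            return i[r]

k = 4
print(all(cycle_game_output(k, s, (s + 1) % k, x) == x
          for s in range(k) for x in (0, 1)))     # True: the game is won with certainty
```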
## VI Discussion: Relativistic setting
We exploit the digraph Bell games presented in Theorem 9 to distinguish between special and general relativity. In other words, we show that a violation of the above inequalities may certify the dynamical spacetime structure in general relativity. To arrive at this, we first present a setting that, if embodied in special relativity, will never lead to any violation of the inequalities. Crucial for this endeavor is the notion of _event_. It is central to understand that the Bell games presented provide limits on the correlations where the _events_ form a partial order. The quantum switch, mentioned in the introduction, has been demonstrated experimentally with quantum-optics tabletop experiments (see Ref. [38] for a review). Clearly,
Figure 8: Causal structure to win (a) the \(k\)-cycle game for \(k<n\), and (b) the \(k\)-fence and the \(k\)-Möbius game for \(2k<n\). Remaining isolated vertices are omitted.
no significant gravitational effects entered these experiments. An event, there, is understood as the reception and emission of a signal. So, with that notion of event, our program fails. Instead, we propose to use the commonly accepted notion of event in relativity defined as the crossing of worldlines.
### Special relativity
In the following, we describe a setup where the relevant events in special relativity will _always_ form a partial order. Consider the three-agent setup schematically represented in Fig. 9. This setup is straightforwardly generalizable to any number of agents. In this \(1+1\) Minkowski spacetime, the agents \(A,B,C\) are initially spacelike separated, and so are the respective referees \(R_{X}\) for \(X\in\{A,B,C,0,1\}\). In this setup, the referee \(R_{0}\) carries the inputs to the parties, and the referees \(R_{1},R_{X}\) for \(X\in\{A,B,C\}\), who travel close to the speed of light, ensure that the outputs of the parties are produced in time. For simplicity, we can imagine \(R_{0}\) to emit a light signal that carries the inputs.4 Also, we can imagine \(R_{1}\) and \(R_{X}\), for \(X\in\{A,B,C\}\), to emit light signals that switch off any detection device of agent \(X\).
Footnote 4: Clearly, the information encoded in this light signal must be encrypted in a way such that each agent cannot obtain the input to any other agent.
By this setup, the input to each party \(X\in\{A,B,C\}\) is only available in the intersection of the future lightcones of the initial location of \(X\) and \(R_{0}\). If \(R_{1}\) and \(R_{X}\) meet without having received any output from party \(X\), then the game is aborted. This means that each party \(X\in\{A,B,C\}\) may only receive the input and produce the output in the spacetime region \(S_{X}\), _i.e.,_ the event \(E_{X}\) is in
\[S_{X}:=J^{+}(R_{0})\cap J^{+}(X)\setminus\left(J^{+}(R_{1})\cup J^{+}(R_{X})\right)\,, \tag{72}\]
where \(J^{+}(e)\) denotes the future lightcone of the event \(e\). The spacetime regions \(\{S_{X}\}_{X\in\{A,B,C\}}\) form a partial order. Thus, the correlations \(p(\underline{a}|\underline{x})\) attainable in this relativistic setting are partial-order correlations; the presented Bell inequalities, by definition, cannot be violated.
### General relativity
As shown above, the games presented are deterministically won with _causal_ correlations. This suggests that these games may also be won within general relativity (and likely with globally hyperbolic spacetime structures). For instance, consider the two-cycle game defined by the digraph \(D=(\{B,C\},\{(B,C),(C,B)\})\), played with the three parties \(\{A,B,C\}\) arranged as in Fig. 9. In case the referee announces \((B,C)\) and some \(x\in\{0,1\}\) to \(B\), then trivially, \(C\) may output \(x\). Instead, if the referee announces \((C,B)\) and some \(x\in\{0,1\}\) to \(C\), then in the special relativistic setting, \(B\) may at best guess \(x\) with half probability. In a general relativistic setting, however, agent \(A\) may alter the spacetime structure within its future lightcone: Depending on the announced arc \((B,C)\) or \((C,B)\), agent \(A\) may displace a heavy mass which, in the latter case, results in an inversion of the causal relations between the spacetime regions \(S_{B}\) and \(S_{C}\).
Note that an _inversion_ of the causal relations is not necessary. As an alternative, the initial configuration could be such that the spacetime regions \(S_{B}\) and \(S_{C}\) are spacelike separated and within the future lightcone of \(A\). In this case, agent \(A\) must, depending on the announced arc by the referee, tilt the spacetime structure such that \(S_{B}\) and \(S_{C}\) are time-like related in the desired way.
## VII Conclusions and open questions
We derived families of collaborative multiparty games that, if the winning chance exceeds the described limit, prove the parties' influence on the causal relations. These games are formulated with directed graphs, and are rather simple: A referee picks an arc \((s,r)\), and asks party \(s\) to communicate a random bit to party \(r\). The purpose of these games is to detect the dynamical spacetime structure present in general relativity and absent in special relativity. This, however, can also be phrased purely within the theory of general relativity. First, note that the spacetime structure in general relativity is said to be dynamical because matter exerts a back-action on it. Thus, one can regard these Bell tests also as tests for the displacement, and therefore also for the presence, of matter. Moreover, these games serve as device-independent test for _indefinite causal order,_ as exhibited by the quantum switch. It
Figure 9: The spacetime regions \(S_{A},S_{B},S_{C}\) form a partial order. Agent \(A\) may only obtain the input \(x\) and produce the output \(a\) in the region \(S_{A}\), and similarly for the agents \(B\) and \(C\). Therefore, the correlations \(p(a,b,c|x,y,z)\) decompose as a partial order, and no violation of the presented games can be observed.
is known that the causal structure \(M\) of any unitary quantum process with indefinite causal order is cyclic [35]. A candidate game to detect the dynamical component of the indefinite causal order is then simply obtained from the cyclic part of \(M\).
The game-winning strategies for the \(k\)-cycle, the \(k\)-fence, and the \(k\)-Mobius games require more players than nodes in the graph. It is possible, however, to design a game-winning causal model for the excluded case, as well as for the game in the "all-to-one" scenario (which is _nota bene_ a causal game): The Svetlichny-inspired causal model from Ref. [26] has as causal structure the complete directed graph \(K_{n}^{\text{di}}\) among all parties. By using that causal model, it is therefore trivial to communicate a bit from _any_ party to _any other_. The correlations that arise there, however, are incompatible with any global ordering of the parties, dynamical or not.
Our work raises a series of open questions, with the central one: What is the precise general relativistic description of the game-winning causal models? The detection of dynamical causal structure can be understood as the detection of the curvature's change in the spacetime manifold. Thus, an immediate follow-up question is whether and to what extent these games detect _gravitational waves._ Speculatively, it might be possible to reinterpret the data collected at the LIGO experiments (see, _e.g.,_ Ref. [39]) as a violation of an inequality presented.
Understanding the informational content of general relativity might be beneficial for future research, especially when merging quantum theory with general relativity. Towards that, not only answers to the above questions but also a mathematical formalization of the "dynamics" of causal structures might be helpful. How do the experiments implemented in local regions alter the causal structure?
It is suspected that enumerating all facet-defining inequalities for the acyclic subdigraph polytope is infeasible: The more nodes are involved, the more structure enters the polytope, and novel facets emerge [22]. In fact, deciding whether a vector is a member of the acyclic subdigraph polytope is NP-complete [40]. This raises two questions. Firstly, inspired by the Mobius inequality, one might wonder whether an orientation of the _Klein-bottle graph_ would give rise to a facet-defining inequality or not. After all, the Klein bottle is a generalization of the Mobius strip. Secondly, the computational difficulty mentioned above suggests that deciding whether some \(n\)-party correlations are compatible with a partial order or not is NP-complete as well. Note that this holds for local correlations in the context of quantum nonlocality [41].
Finally, it is known [42] that the _axiom of choice_ in set theory yields an advantage for the following game played among infinitely many players situated on a line: Each player carries a hat with a random color, red or black, sees only the players in front, and must guess their own hat color. This game, in fact, forces us to revise the concept of nonsignaling in physical theories [43]. Now, the bounds on the games presented here are indifferent to whether the players' actions form a partial or a _total order:_ We can equivalently assume the same configuration as for the "hats" game. Does the axiom of choice, then, also allow for an advantage in the "infinite" versions of the games presented?
**Acknowledgments.** We thank Luca Apadula, Flavio Del Santo, Andrea Di Biagio, and Stefan Wolf for helpful discussions. EET thanks Alexandra Elbakyan for providing access to the scientific literature. EET is supported by the Austrian Science Fund (FWF) through ZK3 (Zukunftskolleg). AB is supported by the Swiss National Science Foundation (SNF) through project 182452 and project 214808.
|
2309.13323 | Light correcting light with nonlinear optics | Structured light, where complex optical fields are tailored in all their
degrees of freedom, has become highly topical of late, advanced by a
sophisticated toolkit comprising both linear and nonlinear optics. Removing
undesired structure from light is far less developed, leveraging mostly on
inverting the distortion, e.g., with adaptive optics or the inverse
transmission matrix of a complex channel, both requiring that the distortion is
fully characterised through appropriate measurement. Here we show that
distortions in spatially structured light can be corrected through difference
frequency generation in a nonlinear crystal without any need for the distortion
to be known. We demonstrate the versatility of our approach by using a wide
range of aberrations and structured light modes, including higher-order orbital
angular momentum (OAM) beams, showing excellent recovery of the original
undistorted field. To highlight the efficacy of this process, we deploy the
system in a prepare-and-measure communications link with OAM, showing minimal
crosstalk even when the transmission channel is highly aberrated, and outline
how the approach could be extended to alternative experimental modalities and
nonlinear processes. Our demonstration of light correcting light without the
need for measurement opens a new approach to measurement-free error correction
for classical and quantum structured light, with direct applications in
imaging, sensing and communication | Sachleen Singh, Bereneice Sephton, Wagner Tavares Buono, Vincenzo D'Ambrosio, Thomas Konrad, Andrew Forbes | 2023-09-23T09:57:05Z | http://arxiv.org/abs/2309.13323v1 | # Light correcting light with nonlinear optics
###### Abstract
Structured light, where complex optical fields are tailored in all their degrees of freedom, has become highly topical of late, advanced by a sophisticated toolkit comprising both linear and nonlinear optics. Removing undesired structure from light is far less developed, leveraging mostly on inverting the distortion, e.g., with adaptive optics or the inverse transmission matrix of a complex channel, both requiring that the distortion is fully characterised through appropriate measurement. Here we show that distortions in spatially structured light can be corrected through difference frequency generation in a nonlinear crystal without any need for the distortion to be known. We demonstrate the versatility of our approach by using a wide range of aberrations and structured light modes, including higher-order orbital angular momentum (OAM) beams, showing excellent recovery of the original undistorted field. To highlight the efficacy of this process, we deploy the system in a prepare-and-measure communications link with OAM, showing minimal crosstalk even when the transmission channel is highly aberrated, and outline how the approach could be extended to alternative experimental modalities and nonlinear processes. Our demonstration of light correcting light without the need for measurement opens a new approach to measurement-free error correction for classical and quantum structured light, with direct applications in imaging, sensing and communication.
## 1 Introduction
Light, and with it, the transverse tailoring of phase and amplitude to create the so-called structured light [1], presents a large field of active research with wide ranging applications [2], from optical trapping [3] to communication [4]. The toolkit has become highly versatile covering generation, control and detection schemes that include liquid crystals [5, 6], digital micromirror devices [7] and metasurfaces [8]. Beyond linear optics, structured light control with nonlinear optics has become topical of late [9], shifting the focus of attention from wavelength change and efficiency to spatial modal creation, control and detection. This has led to a re-invention of the field with a modern twist, ushering in new selection rules [10, 11, 12] and processes [13, 14, 15, 16] while fostering wide reaching applications, including spatial mode creation [17, 18, 19] and detection [20, 21], image processing [22, 23, 24, 25] and filtering [26], holography [27, 28, 29], enhanced interferometry [30], high-dimensional teleportation [31, 32], as well as the development of modern nonlinear materials [33, 34, 35, 36].
Unfortunately the spatial structure of light becomes distorted in complex channels [37, 38, 39, 40], arresting its full potential. Although phase conjugation of structured light is possible by nonlinear optics [41], it does not correct the distortion but rather produces the negative of it, requiring a time reversal step [42]. To mitigate these drawbacks, a measurement based approach to structured light correction is now ubiquitous, for example, using adaptive optics [43, 44, 45, 46, 47] and wavefront shaping [48], inversion of the transmission matrix of complex channels [49, 50, 51], and finding invariances that remain distortion-free [52, 53, 54].
Here we show that light can correct light without the need for any measurement. We exploit parametric wave mixing by difference frequency generation in a nonlinear crystal to restore the information encoded into the structure of light, even after it has passed through a highly aberrating channel. In order to achieve this, two input beams, one with information encoded into its structure and the other as a probe, are passed through the same aberrating channel followed by difference frequency generation in a nonlinear crystal, returning only the desired information. This is due to the nature of the parametric wave mixing process which outputs the product of one of the input modes with the conjugate of the other. We demonstrate the versatility of our approach by using a wide range of aberrations and structured light modes, from Gaussian beams to orbital angular momentum (OAM) beams and their superpositions, showing excellent recovery of the original undistorted field. To highlight the efficacy of this, we consider the crosstalk matrix of a 15 dimensional OAM
alphabet across a noisy channel comprising an arbitrary aberration, showing very good recovery of the information. We outline how our approach can be used across multiple wavelengths that could be close or far apart, offering a new approach to measurement-free error correction for classical and quantum structured light.
## 2 Concept
With difference frequency generation (DFG), two electric fields (\(\mathbf{E}_{1}\) and \(\mathbf{E}_{2}\)) mix in a second order nonlinear crystal to generate a third beam (\(\mathbf{E}_{\text{G}}\)). Here, each field possesses a transverse spatial structure \(M_{n}(r,\phi)\) and polarisation, indicated by the unit vector \(\hat{\mathbf{e}}_{n}\),
\[\mathbf{E}_{n}=M_{n}(r,\phi)\hat{\mathbf{e}}_{n} \tag{1}\]
where \(n=\{1,2,\text{G}\}\) refers to the first, second and generated beams, while \((r,\phi)\) are the radial and azimuthal coordinates in the transverse spatial plane. Coherent amplification of the generated field occurs along the crystal length when the
Figure 1: (a) Concept of correcting aberrated states by using light to correct light. The product of an input beam (middle mode) with another containing the same phase aberration (exponential term) cancels the identical distortion present in the structure carried by a second input beam (left mode) to restore the unaberrated state (right mode) in the difference-frequency beam generated from nonlinear wave mixing. Beams are shown in the far-field for conceptual clarity. (b) Experimental setup used to apply and correct distortions on structured modes with difference frequency generation. SLM, spatial light modulator; HWP, half-wave plates; I, apertures; DM, dichroic mirror; NLC, nonlinear crystal; F\({}_{1}\), shortpass and F\({}_{2}\) longpass wavelength filters; L\({}_{1}\) (18 mm), L\({}_{2}\) (200 mm), L\({}_{3}\) (300 mm), L\({}_{4}\) (75 mm), L\({}_{5}\) (500 mm), L\({}_{6}\) (100 mm), L\({}_{7}\) (750 mm) and L\({}_{8}\) (100 mm) are lenses.
phase-matching conditions are satisfied. This applies a constraint between the wave-vectors and interacting fields [55], ensuring conservation of energy and momentum in the process. For DFG in the paraxial regime, the generated field is given by the difference of the input angular frequencies, \(\omega_{1}-\omega_{2}=\omega_{\mathrm{G}}\), and of the wavevectors, \(\mathbf{k}_{1}-\mathbf{k}_{2}=\mathbf{k}_{\mathrm{G}}\), for the transverse components of the interacting fields. A sufficiently large bandwidth for phase-matching of the longitudinal component in the thin-crystal limit causes the spatial profile of the generated field to be reduced to the product of the two input fields [56]. Following from the conservation rules, the output field then holds the combined information of the input fields such that the spatial structure of the generated field is proportional to that of the first input and the complex conjugate of the second input,
\[M_{\mathrm{G}}=\eta M_{1}M_{2}^{*}, \tag{2}\]
where \(\eta\) is a constant related to the efficiency of the process and \(*\) indicates complex conjugation.
By considering the complex form of the spatial structures at the beam waist (neglecting propagation terms for simplicity), \(M_{n}=A_{n}(r,\phi)e^{i\Phi_{n}(r,\phi)}\) where \(A(r,\phi)\) is the amplitude and \(\Phi(r,\phi)\) the phase, the effect of DFG is to conjugate the phase distribution of the second beam and add it to the phase distribution of the first, \(\Phi_{\mathrm{G}}=\Phi_{1}+(-\Phi_{2})\). Where the second beam's phase is uniform or null, the generated beam phase is simply that which is carried by the first beam (\(\Phi_{\mathrm{G}}=\Phi_{1}\)). As a result, the generated beam will contain any desired structure (\(\Phi_{\mathrm{info}}\)) that the first beam contains. This, however, is true for any additional distortions (\(\Phi_{\mathrm{Ab}}\)) experienced by the beam as well. For such an event, the generated beam will then have a phase of \(\Phi_{\mathrm{G}}=\Phi_{\mathrm{info}}+\Phi_{\mathrm{Ab}}\), such that the modal information or purity is degraded and seen in distortion of the intensity profile upon propagation. One may now consider the case where the contribution of the second beam can be exploited. Without loss of generality, we consider an example where we seek to restore Laguerre-Gaussian (LG\({}_{\ell}\)) modes of zero radial index (\(p=0\)) and arbitrary \(\ell\) from an aberrated state. Figure 1 (a) illustrates this concept. Notably, these structured modes hold orbital angular momentum (OAM) as a degree of freedom and are characterised by the integer parameter \(\ell\), which yields \(\ell\hbar\) OAM per photon and \(\ell\) twists in the phase-front per wavelength (red to blue transitions in the rightmost phase inset). To correct the aberration, one need only see that by using the same aberration phase on the second beam, the original helical phase can be restored. The distortions of the LG mode amplitude (depicted alongside the phase terms) are then corrected to reveal the characteristic doughnut intensity distribution. Here, due to the naturally occurring phase conjugation in the crystal, the initial disturbance, e.g., \(M_{1}=\mathrm{LG}_{(\ell=2)}e^{i\Phi_{\mathrm{Ab}}}\), also present in the second beam (using a Gaussian profile to conserve the structure of the first beam), e.g., \(M_{2}=\mathrm{LG}_{(\ell=0)}e^{i\Phi_{\mathrm{Ab}}}\), cancels the distortion in the generated beam, \(\Phi_{\mathrm{G}}=(\Phi_{\mathrm{info}}+\Phi_{\mathrm{Ab}})-\Phi_{\mathrm{Ab}}=\Phi_{\mathrm{info}}\), while preserving the initial phase and amplitude. Cancellation of the unwanted disturbance, such as turbulence, and successful transfer of the desired structure carried by the first beam is therefore achieved by using the structure of one light beam to correct that of the other. Note that while we have outlined the concept in the near-field of the aberration, far-field scenarios of phase and amplitude coupling can simply be reversed by a lens before entering the crystal.
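As a numerical sketch of this cancellation in the thin-crystal picture of Eq. (2) (grid, waist and the particular aberration below are illustrative choices, not the experimental parameters), the product \(M_{1}M_{2}^{*}\) can be checked to carry the clean helical phase whenever signal and probe share the same distortion:

```python
import numpy as np

N, L, w0 = 512, 4e-3, 0.5e-3                    # samples, window [m], waist [m]
x = np.linspace(-L/2, L/2, N)
X, Y = np.meshgrid(x, x)
R, PHI = np.hypot(X, Y), np.arctan2(Y, X)

def lg(ell, w=w0):
    """Laguerre-Gaussian mode with p = 0 at the waist (unnormalised)."""
    return (np.sqrt(2)*R/w)**abs(ell) * np.exp(-(R/w)**2) * np.exp(1j*ell*PHI)

aberration = np.exp(1j*np.pi*np.cos(2*PHI))      # a purely azimuthal distortion

M1 = lg(2) * aberration                          # signal: LG_{l=2} after the channel
M2 = lg(0) * aberration                          # probe: Gaussian through the same channel
MG = M1 * np.conj(M2)                            # thin-crystal DFG output, Eq. (2)

mask = np.abs(MG) > 1e-3 * np.abs(MG).max()
residual = np.angle(MG * np.conj(lg(2)))         # deviation from the clean l = 2 helical phase
print(np.abs(residual[mask]).max())              # ~0: the aberration cancels
```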
## 3 Experimental Results
To demonstrate the principle of using a second beam in DFG to correct phase aberrations on an initial input beam, we implemented the experimental setup as shown in Fig. 1 (b). Here, two continuous wave lasers of wavelengths 532 nm (VIS) and 1550 nm (IR) were collimated and expanded onto liquid crystal spatial light modulators (SLM\({}_{1}\), SLM\({}_{2}\)), before demagnification and imaging onto a type-0 nonlinear crystal (NLC, periodically-poled KTP) with a \(4f\)-lens system (L\({}_{5}\), L\({}_{6}\) and L\({}_{7}\), L\({}_{8}\)). Complex amplitude modulation [57] was used to encode the desired states of each input beam, which we will refer to as probe and signal beams to clarify their roles in the correction process. Apertures (I\({}_{1}\), I\({}_{2}\)) in the Fourier plane spatially filtered the 1st order modulated light from the SLMs, respectively forming the signal and probe input modes. Half-waveplates (HWP\({}_{1}\), HWP\({}_{2}\)) in each arm then respectively adjusted the polarisation for phase-matching, and a dichroic mirror (DM) was used to collinearly combine the beams before the NLC. Short- and long-pass wavelength filters (F\({}_{1}\), F\({}_{2}\)) placed after the crystal isolated the DFG beam. The generated beam was then focused onto a camera by a lens in a \(2f\) configuration (L\({}_{9}\)), detecting the Fourier plane of the DFG modes.
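For reference, one simple way to realise complex amplitude modulation on a phase-only SLM is to scale the depth of a blazed grating by the desired amplitude, so that the target field appears in the first diffraction order; the sketch below shows this simplified scheme only (the encoding actually used follows Ref. [57] and may differ, and the first-order amplitude response of this variant is only approximately proportional to the target amplitude).

```python
import numpy as np

def scaled_grating_hologram(amplitude, phase, period_px=8):
    """Approximate complex-amplitude encoding on a phase-only SLM: the desired
    phase plus a blazed carrier is displayed with its modulation depth scaled
    by the normalised target amplitude (simplified; exact schemes exist)."""
    ny, nx = amplitude.shape
    carrier = 2*np.pi*np.arange(nx)[None, :]/period_px   # blazed grating along x
    a = amplitude/amplitude.max()
    return np.mod(phase + carrier, 2*np.pi) * a          # phase map in [0, 2*pi)
```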
We now experimentally realise this concept with the results shown in Fig. 2. Here, three azimuthally-varying phase aberrations, \(\Phi_{\text{Ab}}=\pi\cos\,(n\phi)\) where \(n=\{1,2,3\}\) (shown in the top insets), were applied to the IR Gaussian signal beam. The Gaussian structure and flat phase of the VIS probe beam were retained for the process. As expected, the aberrations on the signal distort the generated mode in the far-field, as seen in the top row. By employing the light correcting light
approach with DFG, implemented by now applying the same aberrational phase to the VIS probe beam, we find the initial structure is corrected and confirmed with unaberrated Gaussian distributions in the bottom row.
In Fig. 3 we next explore aberrations having both radial and azimuthal dependence, while also expanding the encoded states to higher-order modes. We note that any spatial modes may be used and chose LG modes due to their extensive applications from communications to metrology [58, 59]. The Zernike basis (\(Z_{m,n}\)) [60] with azimuthal frequency, \(n\), and radial order, \(m\), is used to simulate the unwanted distortions, forming a natural basis for optical aberrations [61, 62]. Modes representing astigmatism (\(Z_{2,2},Z_{2,-2}\)) and trefoil (\(Z_{3,3},Z_{3,-3}\)) were then chosen from the Zernike family and applied with the same aberrational strength. The expected doughnut intensity distributions of these LG states (first three panels) show good agreement with the unaberrated DFG intensities (NA, top-right of each modal set). After the structured signal beam encounters each aberration, however, significant deviations in the DFG intensity profiles (Ab.) are observed, obscuring the modes and related information. Applying the same phase distortion to the Gaussian probe shows successful restoration of the modal structure in the DFG beam by cancellation of the aberrational phase (Cor.). Applicability to states with more modal complexity is further demonstrated by constructing modes from a superposition of LG states (\(\frac{1}{\sqrt{2}}\) (LG\({}_{\ell}\) + LG\({}_{-\ell}\))), giving 0 to \(\pi\) wedge phase steps with petal intensity structures. This is shown in the last two panels where \(\ell\) = {2,3}, respectively.
Greater aberrational complexity is also introduced by taking three-mode, \(\Phi_{\text{Ab}}=5Z_{2,2}+5Z_{2,-2}+10Z_{3,3}\), and four-mode, \(\Phi_{\text{Ab}}=10Z_{2,2}-10Z_{2,-2}-10Z_{3,3}+10Z_{3,-3}\), superpositions of the Zernike basis states. This is shown in Fig. 4 (a) and (b), respectively, where the signal beam was also encoded with LG modes of \(\ell=[1:6]\). Here, deleterious distortions turn the encoded doughnuts (top insets) into intermittent linear structures (bottom rows, Ab.). With the same phase distortion on the probe beam, we again find the output structure returns to the ring profile. While the modes are excellently restored, a reduction in the correction efficacy appears as the \(\ell\) value increases. This can be attributed to an increase in the generated beam size of \(w_{\ell}=w_{0}\sqrt{(|\ell|+1)}\), where \(w_{\ell}\) is the OAM beam waist and \(w_{0}\) the waist of the fundamental Gaussian mode. As a result of the increasing size, greater interaction with the optical elements occurs, leading to the modes obtaining additional peripheral aberrations not encoded in, and hence not accounted for by, the probe profile. In Fig. 4 (c), the same four-mode aberration is applied to the previous petal superpositions where \(\ell\in[1,6]\). The aberrating phases on the signal beam similarly destroy the DFG structure, such that they can no longer be identified in comparison to the expected distributions (top insets in (c)). Excellent agreement then occurs when the probe is used to correct for the distortion. While a small decrease in correction efficacy is also seen as \(\ell\) increases, favourable restoration is still seen up to the largest state.
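For concreteness, the purely azimuthal Zernike terms used here (astigmatism and trefoil, whose radial polynomial reduces to \(r^{m}\)) and the four-mode superposition of Fig. 4 (b) can be generated as in the short sketch below (grid and radial normalisation are illustrative):

```python
import numpy as np

def zernike_azimuthal(m, n, R, PHI):
    """Zernike term with radial order m and azimuthal frequency n, restricted to
    the purely azimuthal case m = |n| used here, where the radial part is R**m."""
    assert m == abs(n)
    return R**m * (np.cos(n*PHI) if n >= 0 else np.sin(-n*PHI))

N = 256
x = np.linspace(-1, 1, N)
X, Y = np.meshgrid(x, x)
R, PHI = np.hypot(X, Y), np.arctan2(Y, X)

# Four-mode aberration of Fig. 4 (b): 10 Z_{2,2} - 10 Z_{2,-2} - 10 Z_{3,3} + 10 Z_{3,-3}
phi_ab = (10*zernike_azimuthal(2, 2, R, PHI) - 10*zernike_azimuthal(2, -2, R, PHI)
          - 10*zernike_azimuthal(3, 3, R, PHI) + 10*zernike_azimuthal(3, -3, R, PHI))
probe_phase = np.exp(1j*phi_ab)   # the identical phase encoded on the probe for correction
```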
Figure 2: Three azimuthal aberrations (top insets) were applied to a Gaussian signal beam, resulting in measured intensity distortions in the far-field (Aberrated row). Application of the nonlinear correction process with a probe beam results in the recovery of the initial Gaussian beam, as evident in the measured far-field intensities in the bottom row (Corrected). All intensities are normalized to 1.
Figure 3: Experimental correction of astigmatism and trefoil aberrations for five different spatial states (column-wise): LG modes where \(\ell\) = {1,2,3} and petal modes formed from LG superpositions (\(\frac{1}{\sqrt{2}}\) (LG\({}_{\ell}\) + LG\({}_{-\ell}\))) where \(\ell\) = {2,3}. The correction has been applied for both vertical and oblique combinations of aberrations. Further, every such combination has been corrected for both positive and negative strength coefficients. The applied phase distortion is shown in the left panel. Every experimental picture shows results for the corrected (Cor.) mode with the corresponding aberrated (Ab.) and unaberrated (NA) modes as insets. The expected simulated intensity and phase profiles are shown in the top row.
In the practical application of our measurement-free approach, a necessary condition is that both probe and signal beams incur identical phase distortions for proper cancellation. One may thus consider the wavelength-dependence of the phase accumulated by two beams of differing wavelengths (e.g. \(\lambda_{1}\) and \(\lambda_{2}\)) traversing a distorting medium with refractive index \(n_{\lambda}\), such as glass with thickness \(d\) varying across the beam profile. An unwanted dynamic phase of \(\varphi=\frac{2\pi n_{\lambda}d}{\lambda}\) is subsequently imparted at each point across the spatial profiles, such that the phases for one wavelength can be related to the other as \(\varphi_{\lambda_{2}}=\alpha\varphi_{\lambda_{1}}\), where \(\alpha=\frac{\lambda_{1}n_{\lambda_{2}}}{\lambda_{2}n_{\lambda_{1}}}\). The disparity is thus reduced to a constant value, fixed by the chosen wavelengths and the traversed medium. Accordingly, one need only account for a difference in strength for the same aberrational distribution.
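As a small worked example of this relation (the index values below are approximate values for fused silica, and the assignment of \(\lambda_{1}\) to the IR signal and \(\lambda_{2}\) to the VIS probe is illustrative):

```python
lam1, lam2 = 1550e-9, 532e-9     # illustrative: IR signal and VIS probe wavelengths [m]
n1, n2 = 1.444, 1.461            # approximate fused-silica indices at lam1, lam2

alpha = (lam1*n2)/(lam2*n1)      # phi_{lam2} = alpha * phi_{lam1}
print(alpha)                     # ~2.95: the shorter wavelength accumulates ~3x the phase
```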
Figure 4: Correction for superpositions of astigmatism and trefoil with arbitrarily chosen strengths. Left column insets show the aberrating phases acting on spatial modes (top insets). Experimentally aberrated (Ab.) and corrected (Cor.) far-field intensities for LG beams increasing columnwise in OAM from \(\ell=[1,6]\) are given for aberrations with (a) three and (b) four mode superpositions. (c) Experimental results with the same OAM range for the LG superpositions (\(\frac{1}{\sqrt{2}}\) (LG\({}_{\ell}\) + LG\({}_{-\ell}\))) are given for the same four mode aberration.
Interestingly, we show that after undergoing distortion, resizing one beam relative to the other changes the effective aberrational strength it presents when overlapped with the unsized beam. To do so, a mismatch of \(\beta=\frac{w_{2}}{w_{1}}=1.4663\) between the probe (\(w_{2}\)) and signal (\(w_{1}\)) beam waists was introduced when demagnifying onto the crystal in Fig. 1. Using the similarity (\(S=\frac{\left[\sum_{x,y}\sqrt{I_{NA}(x,y)I_{Cor}(x,y)}\right]^{2}}{\sum_{x,y}I_{NA}(x,y)\sum_{x,y}I_{Cor}(x,y)}\)) between the measured unaberrated (\(I_{NA}(x,y)\)) and probe-corrected (\(I_{Cor}(x,y)\)) DFG intensity distributions, we quantify the correction efficacy for a range of aberration strengths encoded on the probe, while the signal remained fixed. \(S=1\) (\(S<1\)) indicates perfect correction (presence of uncorrected aberrations). For generality, three cases varying the Zernike order of the aberration, the magnitude of the aberration on the signal, and the spatial mode were tested and are shown in Fig. 5. In each case, we find that the same aberrational strengths do not cancel the distortion in the generated beam. More specifically, in Fig. 5 (a), we find an astigmatic \(\text{LG}_{\ell=1}\) signal beam with a coefficient (strength) of \(C_{2}=10\) requires a coefficient of \(C_{1}=20.8\) on the probe to cancel the distortion due to weakening of the relative
Figure 5: Resizing the beam changes the relative strength of the aberrations for (a) \(\ell=1\) with astigmatism (order 2) and coefficient of 10 (calculated mismatch = 1.443), (b) \(\ell=1\) with trefoil (order 3) and coefficient of 15 (calculated mismatch = 1.431), (c) \(\ell=2\) with astigmatism (order 2) and coefficient of 10 (calculated mismatch = 1.466). Rightmost insets show the (i) unaberrated downconverted mode, (ii) aberrated downconverted mode (no correction) and (iii) aberrating Zernike mode phase distribution.
strength from the beam enlargement. For qualitative comparison, insets (i), (ii) and (iii) give the unaberrated DFG beam, aberrated DFG beam and aberrating mode, while the optimally corrected DFG beam is shown as the inset in the plot. Next, both the aberration type (trefoil) and strength (\(C_{2}=15\)) were altered in Fig. 5 (b), giving optimal correction with \(C_{1}=43.9\), while in Fig. 5 (c), an astigmatic LG\({}_{\ell=2}\) with the same strength as in (a) requires approximately the same probe coefficient (\(C_{1}=21.4\)).
Here the Zernike radial orders (e.g. astigmatism with order 2 and trefoil with order 3) do not scale linearly with \(r\), but with \(r\) raised to the power of the order (\(m\)), owing to the exponentiation of the radial term in the function. One can then derive the relation \(\frac{C_{1}}{C_{2}}=\beta^{m}\), dictating the relative strength change resulting from resizing the probe. From this, we find \(\beta=\)1.443, 1.431 and 1.466 for the three cases in Fig. 5, which agree well with the experimental demagnification ratio (1.466). This confirms that resizing the input modes relative to each other allows one to employ a corrective strength to compensate for the variation in wavelength traversing a distorting medium. Intuitively, such resizing alters the phase gradient seen by the other beam and thus can be used to perfectly correct for the primary (major) aberration in any system of interest, as illustrated in Fig. 6 (a). As an example, phase insets (Ab.) show astigmatism aberrations incurred by wavelengths that have a factor of \(\alpha=2\) difference between them. By splitting, resizing and recombining the aberrated beams correctly, a perfectly corrective overlap is formed in the crystal (insets above NLC show the resized beam phases which match perfectly) as part of the nonlinear detection system. One may be cognisant that many aberration correction strategies are also limited in compensating for all the aberrations present, so that correction is restricted to the prominent contributors [47]. Such corrections, however, still provide a significant improvement in the systems, facilitating satisfactory performance. Accordingly, employing our resizing approach to facilitate measurement-free correction of the prominent aberration indicates a promising route for optical systems using spatial modes that undergo distortions, such as in communications or metrology. One may additionally utilise the wide range of nonlinear crystals [63, 64, 65] to compensate for the disparity by making the input wavelengths closer, but at
Figure 6: Using nonlinear optics so that light corrects light is versatile in implementation where (a) resizing the input beams of different wavelengths corrects the primary aberration, (b) using a second nonlinear process allows the same aberrations to be incurred by the signal and probe beams and (c) the principle of phase conjugation can be extended to any crystal with an even-ordered non-linear susceptibility. DM, dichroic mirror; L, lens; NLC, nonlinear crystal; PBS, polarising beamsplitter.
the expense of detection at much longer (mid- or infrared) wavelengths.
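A short sketch of the similarity measure and of the resizing relation, checked against the three cases reported above (variable names and layout are ours; the small spread in \(\beta\) mirrors the values quoted in the text):

```python
import numpy as np

def similarity(I_na, I_cor):
    """Similarity S between unaberrated and corrected DFG intensity images."""
    return np.sum(np.sqrt(I_na*I_cor))**2 / (np.sum(I_na)*np.sum(I_cor))

# Resizing relation C1/C2 = beta**m for the three cases of Fig. 5:
cases = [  # (Zernike order m, signal coefficient C2, optimal probe coefficient C1)
    (2, 10.0, 20.8),   # astigmatism on LG_{l=1}
    (3, 15.0, 43.9),   # trefoil on LG_{l=1}
    (2, 10.0, 21.4),   # astigmatism on LG_{l=2}
]
for m, C2, C1 in cases:
    print(f"m={m}: beta = {(C1/C2)**(1/m):.3f}")  # 1.442, 1.430, 1.463 (cf. 1.443, 1.431, 1.466)
```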
Alternatively, additional strategies can be engineered to improve the efficacy of the measurement-free approach, each implementing the core of our approach. Here, one can move from resizing the beams in the detection box to a two-step nonlinear system such that the probe is the same wavelength as the signal and thus incurs identical distortions (Fig. 6 (b)). On the detection side, after separation with a property such as polarisation, the aberrated probe can undergo an initial nonlinear process to create a secondary probe at the desired wavelength for DFG (retaining the aberration by using another unstructured pump). The new probe is then recombined with the signal using a dichroic mirror for the DFG process in a second crystal. Furthermore, as illustrated in Fig. 6 (c), one need not be restricted to utilising phase conjugation from second-order nonlinear processes (\(\chi^{2}\)). It can be shown (see Appendix) that our core principle of phase cancellation holds for parametric wave mixing processes of even order (\(\chi^{2n}\)) in higher-order difference wave mixing, keeping in mind that the encoded phases of interest (\(\Phi_{\text{info}}\)) are generated with a related factor (\(\Phi_{G}=2n\Phi_{\text{info}}\)). Furthermore, one may add the phase-conjugating crystals in a cascaded configuration (see Appendix), making it possible to further extend our scheme to wavelength combinations and processes not possible in the straightforward approach.
Lastly, we demonstrate a prepare-and-measure system that allows us to retrieve the correct encoded modes despite the presence of distortions. Here the conjugating nature allows not only the phase distortions to be eliminated, but also the phase of equivalent spatial modes. For instance, only when the same OAM mode is encoded on the signal and probe does the DFG beam contain a flat phase. This forms a Gaussian intensity distribution in the far-field, which results in the presence of an on-axis intensity. Such matching of input modes (from the orthogonality relation of LG modes) allows the process to also be used as a spatial mode detector. For continuity, we chose the same previous four-state Zernike superposition to
Figure 7: The probe field is used as a detector for OAM modes of \(\ell=[-7,7]\), both in the case where the beam size expands as dictated by the OAM value (a-c) and in the case where a size-adjustment of \(\frac{w}{\sqrt{(\ell+1)}}\) is included (d-f) to mitigate the OAM-dependent expansion of the generated modes. Detection cross-talk matrices of the system are shown (a,d) without applied aberrations, (b,e) with the 4-mode Zernike aberration and (c,f) with the aberrations corrected. Each row is normalised to its maximum value.
be the aberration and show how such a detection system performs without aberration, with an aberrated signal beam, and with a corrected detector mode (probe). We do so both in the case where the OAM beam profiles expand naturally with \(\ell\) and in the case where this expansion is mitigated by encoding a size-adjustment of \(\frac{w}{\sqrt{(\ell+1)}}\) on each OAM mode. In the first case, the ground-truth detection matrix (a) (before aberrations) shows some higher-mode cross-talk but largely detects the correct encoded OAM. However, with aberrational effects added to the signal, one is not able to distinguish the modes sent, as seen with cross-talk extending to adjacent modes and forming a cross-diagonal pattern. Applying the correction to the detection beam retrieves the detection diagonal, although it begins to degrade as higher-order modes are used. This can be attributed to the enlarged sizes of both the detection and signal beams, causing additional aberrations and mismatch to accumulate throughout the optical system for both beams, detracting from the encoded and corrected aberrations. Confirmation of this may be observed in the case where the expansion of the beams was mitigated (d-f), where the ground-truth detection matrix (d) already demonstrates an improvement in the system. The aberrational effects in (e) are additionally mitigated, but a clear distortion of the information being sent is still present, with adjacent modes detected along with the modes being sent. Application of the correction on the detector mode then fixes the aberrational effects to yield the detection of the correct modes (f), in close agreement with what was observed for the unaberrated case in (d).
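The detection principle can be sketched numerically in the same thin-crystal picture (toy grid, mode set and aberration chosen purely for illustration): the on-axis far-field amplitude of \(M_{\text{signal}}M_{\text{probe}}^{*}\) is appreciable only when the encoded and detected OAM match, and encoding the same distortion on the probe restores a near-diagonal crosstalk matrix.

```python
import numpy as np

N = 256
x = np.linspace(-4, 4, N)
X, Y = np.meshgrid(x, x)
R, PHI = np.hypot(X, Y), np.arctan2(Y, X)

def lg(ell, w=1.0):
    return (np.sqrt(2)*R/w)**abs(ell) * np.exp(-(R/w)**2) * np.exp(1j*ell*PHI)

aberration = np.exp(1j*5*R**2*np.cos(2*PHI))            # toy astigmatism-like distortion

ells = range(-3, 4)
crosstalk = np.zeros((len(ells), len(ells)))
for i, l_sig in enumerate(ells):                        # encoded OAM (signal)
    for j, l_det in enumerate(ells):                    # detected OAM (probe)
        dfg = (lg(l_sig)*aberration) * np.conj(lg(l_det)*aberration)
        crosstalk[i, j] = np.abs(dfg.sum())**2          # on-axis far-field intensity
crosstalk /= crosstalk.max(axis=1, keepdims=True)       # row-normalised, cf. Fig. 7
print(np.round(crosstalk, 2))                           # ~identity: correct modes recovered
# Dropping 'aberration' from the probe factor reproduces the aberrated cross-talk instead.
```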
## 4 Conclusion
In conclusion, we demonstrated the ability to use light as a mechanism to correct aberrated light modes through difference frequency generation. An advantage of the DFG mixing process is that the correction need only be in the form of identical aberrations, providing, in principle, the opportunity to naturally cancel any incurred distortions when both input beams (one carrying a desired structure and the other used for conversion of the structure) experience the same aberration. We demonstrated this principle for a wide array of both spatial modes and aberrations, starting from azimuthally-varying aberrations and extending to radial variations and linear combinations thereof. In doing so, excellent restoration of the spatial states was found for Laguerre-Gaussian modes with a range of OAM and symmetric superpositions thereof, forming exemplary complex structures that hold utility across a wide range of applications. We also showed that this technique can be extended to other higher-order nonlinear phenomena and cascaded nonlinear effects, enabling improved wavelength manipulation.
Furthermore, we considered the practical aspects of different wavelengths traversing the same aberrating medium, in the event that measurement-free correction is desired, and found that a relative resizing of the inputs can mitigate the disparity in the strength of the aberrations incurred. In doing so, while not all aberrations can be corrected, the primary contribution can be cancelled in the DFG mode according to the order one wishes to compensate for. We also considered the ability to use similar wavelengths in the process, where the wavelength-dependent disparity can be reduced at the cost of needing to detect DFG light in the IR region. Finally, by employing an identically aberrated detector beam, we were able to restore the ability to detect the encoded modes. Here, good agreement was found between the detected and encoded modes that were left to scale in size with the OAM charge. Compensating for the scaling in the modal set then further improved the restoration of the encoded states. Application in such a system would then be useful for retrieving information through noisy channels. Notably, the probing mechanism's reliance on light itself renders it advantageous under rapidly varying distortions, such as atmospheric turbulence, making this technique a valuable tool for various applications from optical communications to imaging and sensing. Furthermore, for a non-degenerate wavelength setup as used here, one is afforded the ability to detect in the visible range when working with information carried by structured light in the difficult-to-detect near infrared region.
Funding.The authors acknowledge the funding from the Department of Science and Innovation (DSI) as well as the National Research Foundation (NRF) in South Africa. Support from the Italian Ministry of Research (MUR) through the PRIN 2017 project "Interacting photons in polariton circuits" (INPhoPOL) and the PNRR project PE0000023-NQSTI is acknowledged. We also acknowledge support from the Italian Space Agency (ASI) through the "High dimensional quantum information" (HDQI) project.
The authors would like to thank Dr Isaac Nape and Dr Paola Concha Obando for useful discussions.
Disclosures.The authors declare no conflicts of interest.
Data availability.Data underlying the results presented in this paper may be obtained from the authors upon reasonable request. |
2309.03130 | MyoDex: A Generalizable Prior for Dexterous Manipulation | Human dexterity is a hallmark of motor control. Our hands can rapidly
synthesize new behaviors despite the complexity (multi-articular and
multi-joints, with 23 joints controlled by more than 40 muscles) of
musculoskeletal sensory-motor circuits. In this work, we take inspiration from
how human dexterity builds on a diversity of prior experiences, instead of
being acquired through a single task. Motivated by this observation, we set out
to develop agents that can build upon their previous experience to quickly
acquire new (previously unattainable) behaviors. Specifically, our approach
leverages multi-task learning to implicitly capture task-agnostic behavioral
priors (MyoDex) for human-like dexterity, using a physiologically realistic
human hand model - MyoHand. We demonstrate MyoDex's effectiveness in few-shot
generalization as well as positive transfer to a large repertoire of unseen
dexterous manipulation tasks. Agents leveraging MyoDex can solve approximately
3x more tasks, and 4x faster in comparison to a distillation baseline. While
prior work has synthesized single musculoskeletal control behaviors, MyoDex is
the first generalizable manipulation prior that catalyzes the learning of
dexterous physiological control across a large variety of contact-rich
behaviors. We also demonstrate the effectiveness of our paradigms beyond
musculoskeletal control towards the acquisition of dexterity in 24 DoF Adroit
Hand. Website: https://sites.google.com/view/myodex | Vittorio Caggiano, Sudeep Dasari, Vikash Kumar | 2023-09-06T16:10:49Z | http://arxiv.org/abs/2309.03130v1 | # _MyoDex_: A Generalizable Prior for Dexterous Manipulation
###### Abstract
Human dexterity is a hallmark of motor control. Our hands can rapidly synthesize new behaviors despite the complexity (multi-articular and multi-joints, with 23 joints controlled by more than 40 muscles) of musculoskeletal sensory-motor circuits. In this work, we take inspiration from how human dexterity builds on a diversity of prior experiences, instead of being acquired through a single task. Motivated by this observation, we set out to develop agents that can build upon their previous experience to quickly acquire new (previously unattainable) behaviors. Specifically, our approach leverages multi-task learning to implicitly capture task-agnostic behavioral priors (_MyoDex_) for human-like dexterity, using a physiologically realistic human hand model - MyoHand. We demonstrate _MyoDex_'s effectiveness in few-shot generalization as well as positive transfer to a large repertoire of unseen dexterous manipulation tasks. Agents leveraging _MyoDex_ can solve approximately 3x more tasks, and 4x faster in comparison to a distillation baseline. While prior work has synthesized single musculoskeletal control behaviors, _MyoDex_ is the first _generalizable_ manipulation prior that catalyzes the learning of dexterous physiological control across a large variety of contact-rich behaviors. We also demonstrate the effectiveness of our paradigms beyond musculoskeletal control towards the acquisition of dexterity in 24 DoF Adroit Hand.
study reveals a tradeoff between the generality and specialization of the _MyoDex_ prior. The final system is configured to maximize _generalization_ and _transfer_ instead of zero-shot out-of-the-box performance. **(4)** We demonstrate the generality of our approach by applying it to learn behaviors in other high-dimensional systems, such as multi-finger robotic hands. We construct _AdroitDex_ (equivalent to _MyoDex_ for the _AdroitHand_ (Kumar, 2016)), which achieves 5x better sample efficiency over SOTA in the TCDM benchmark (Dasari et al., 2023).
## 2 Related Works
Dexterous manipulation has been approached independently by the biomechanics field, which studies the synthesis of movements of the overactuated musculoskeletal system, and by roboticists looking to develop, mostly via data-driven methods, skilled dexterous robots and a-priori representations for generalizable skill learning. Here, we discuss both lines of work.
**Over-redundant biomechanic actuation.** Musculoskeletal models (McFarland et al., 2021; Lee et al., 2015; Saul et al., 2015; Delp et al., 2007; Seth et al., 2018) have been developed to simulate kinematic information of the muscles and physiological joints. Nevertheless, intensive computational needs and restricted contact forces have prevented the study of complex hand-object interactions and otherwise limited their use mostly to optimization methods. Recently, a new hand and wrist model - _MyoHand_ (Caggiano et al., 2022; Wang et al., 2022) - overcomes some limitations of alternative biomechanical hand models: it allows contact-rich interactions and is suitable for computationally intensive data-driven explorations. Indeed, it has been shown that MyoHand can be trained to solve individual in-hand tasks on very simple geometries (ball, pen, cube) (Caggiano et al., 2022). Here, we leverage and extend the MyoHand model to perform hand-object maneuvers on a large variety of complex realistic objects.
**Behavioral synthesis.** Data-driven approaches have consistently used Reinforcement Learning (RL) on joint-based control to solve complex dexterous manipulation in robotics (Rajeswaran et al., 2018; Kumar et al., 2016; Nagabandi et al., 2019; Chen et al., 2021). In order to yield more naturalistic movements, different methods have leveraged motion capture data (Merel et al., 2017, 2019; Hasenclever et al., 2020). By means of those approaches, it has been possible to learn complex movements and athletic skills such as high jumps (Yin et al., 2021), boxing and fencing (Won et al., 2021) or playing basketball (Liu and Hodgins, 2018).
In contrast to joint-based control, in biomechanical models, machine learning has been applied to muscle actuators to control movements and produce more naturalistic behaviors. This is a fundamentally different problem than robotic control as the overactuated control space of biomechanical systems leads to ineffective explorations (Schumacher et al., 2022). Direct optimization (Wang et al., 2012; Geijtenbeek et al., 2013; Al Borno et al., 2020; Ruckert and d'Avella, 2013) and deep reinforcement learning (Jiang et al., 2019; Joos et al., 2020; Schumacher et al., 2022; Ikkala et al., 2022; Caggiano et al., 2022; Wang et al., 2022; Song et al., 2020; Park et al., 2022) have been used to synthesize walking and running, reaching movements, in-hand manipulations, biped
Figure 1: Contact rich manipulation behaviors acquired by _MyoDex_ with a physiological _MyoHand_
Figure 2: **MyoHand - Musculoskeletal Hand model (Caggiano et al., 2022).** On the left, a rendering of the musculoskeletal structure illustrating bone – in gray – and muscle – in red. On the right, a skin-like surface for soft contacts is overlaid on the musculoskeletal model.
locomotion and other highly stylistic movements (Lee et al., 2018, 2019). Nevertheless, complex dexterous hand-object manipulations beyond in-hand object rotation (Caggiano et al., 2022; Berg et al., 2023) have not been accomplished so far.
**Manipulation priors.** Previous attempts have tried to solve complex tasks by building priors but this approach has been limited to games and robotics. The idea of efficiently representing and utilizing previously acquired skills has been explored in robotics by looking into features across different manipulation skills e.g. Associative Skill Memories (Pastor et al., 2012) and meta-level priors (Kroemer and Sukhatme, 2016). Another approach has been to extract movement primitives (Rueckert et al., 2015) to identify a lower-dimensionality set of fundamental control variables that can be reused in a probabilistic framework to develop more robust movements.
Multi-task learning, where a model is trained on multiple tasks simultaneously (Caruana, 1997), has also been shown to improve the model's ability to extract features that generalize well (Zhang and Yeung, 2014; Dai et al., 2016; Liu et al., 2019). Multi-task reinforcement learning (RL) has been used to propose representation-based methods for exploration and generalization in games and robotics (Goyal et al., 2019; Hausman et al., 2018). However, training on multiple tasks can lead to negative transfer, where performance on one task is negatively impacted by training on another (Sun et al., 2020). Nevertheless, it has been argued that in (over)redundant control such as the physiological one, multi-task learning might facilitate learning of generalizable solutions (Caruana, 1997). In this work, in addition to showing that nimble contact-rich manipulation using a detailed physiological hand with musculoskeletal dynamics is possible, we present evidence that a generalizable physiological representation acquired via multi-task reinforcement learning - _MyoDex_ - can be used as a prior to facilitate both learning and generalization across complex contact-rich dexterous tasks.
## 3 Overactuated Physiological Dexterity
Human hand dexterity builds on the fundamental characteristics of physiological actuation: muscles are multi-articular, spanning multiple joints; muscle dynamics are third order; muscles can only pull; and effectors make intermittent contact with objects. To further our understanding of physiological dexterity, we embed the same control challenges - by controlling a physiologically accurate musculoskeletal model of the hand (see Sec. 3.1) - in complex manipulation tasks (see Sec. 3.2).
### MyoHand: A Physiologically Accurate Hand Model
In order to simulate a physiologically accurate hand, we used a complex musculoskeletal hand model comprising 29 bones, 23 joints, and 39 muscle-tendon units (Wang et al., 2022) - the MyoHand model - implemented in the MuJoCo physics simulator (Todorov et al., 2012) (see Figure 2). This hand model has previously been shown to solve a few dexterous _in-hand_ manipulation tasks (Caggiano et al., 2022), which makes it a good candidate for our study seeking generalization in dexterous manipulation.
We extended the MyoHand model to include translations and rotations at the level of the shoulder. We limited the translation on the frontal (range between \([-0.07,\ 0.03]\)) and longitudinal (range between \([-0.05,\ 0.05]\)) axes to support natural shoulder and wrist rotation (the elbow is considered maximally extended, i.e. a straight arm). For the rest of the paper we refer to the whole system as _MyoHand_.
### Dexterous Behaviors Studied
In this study, we need a large variability of manipulations to explore the generality of our method against a wide range of solutions, hence it was important to include 1) objects with different shapes, and 2) complexity in terms of desired behaviors requiring simultaneous effective coordination of finger, wrist, as well as arm movements.
Our task set _MyoDM_ (inspired by TCDM benchmarks (Dasari et al., 2023)) is implemented in the MuJoCo physics engine (Todorov et al., 2012) and consists of 33 objects and 57 different behaviors. Every task setup (see Figure 3) consists of a tabletop environment, an object from the ContactDB dataset (Brahmbhatt et al., 2019), and the MyoHand.
Dexterous manipulation is often posed as a problem of achieving the final desired configuration of an object. In addition to the final posture, in this study, we are also interested in capturing the detailed temporal aspect of the entire manipulation behavior. Tasks like drinking, playing, or cyclic movement like hammering, sweeping, etc., that are hard to capture simply as goal-reaching, can be handled by our formulation (Sec. 4) and are well represented in the _MyoDM_.
The tasks considered in _MyoDM_ entail a diverse variety of object-manipulation behaviors (relocations + reorientations) requiring synchronized coordination of arm, wrist, and in-hand movements to achieve desired object motions involving simultaneous translation and rotation (average \(\pm\) std, \(28^{\circ}\pm 21^{\circ}\)). The range of motion of the shoulder with a fixed elbow alone is not sufficient to produce the entire range of desired object rotations without involving in-hand and wrist maneuvers. The angle between the palm and object ranges upwards of \(20^{\circ}\) in our final acquired behaviors. The wrist is one of the most complex joints to control because it is affected simultaneously by the balanced activation of more than 20 muscles whose activations also control finger movements. Careful maneuvering of objects within the hand requires simultaneous synchronization of numerous antagonistic finger muscle pairs, failing which leads to loss of object controllability, highlighting the complexities of controlling a physiological musculoskeletal hand during these manipulations.
## 4 Learning Controllers for Physiological Hands
In this section, we discuss our approach to build agents that can learn contact-rich manipulation behaviors and generalize across tasks.
### Problem formulation
A manipulation task can be formulated as a Markov Decision Process (MDP) (Sutton and Barto, 2018) and solved via Reinforcement Learning (RL). In RL paradigms, the Markov decision process is defined as a tuple \(\mathcal{M}=(\mathcal{S},\mathcal{A},\mathcal{T},\mathcal{R},\rho,\gamma)\), where \(\mathcal{S}\subseteq\mathbb{R}^{n}\) and \(\mathcal{A}\subseteq\mathbb{R}^{m}\) represent the continuous state and action spaces respectively. The unknown transition dynamics are described by \(s^{\prime}\sim\mathcal{T}(\cdot|s,a)\). \(\mathcal{R}:\mathcal{S}\rightarrow[0,R_{\max}]\) denotes the reward function, \(\gamma\in[0,1)\) denotes the discount factor, and \(\rho\) the initial state distribution. In RL, a policy is a mapping from states to a probability distribution over actions, i.e. \(\pi:\mathcal{S}\to P(\mathcal{A})\), which is parameterized by \(\theta\). The goal of the agent is to learn a policy \(\pi_{\theta}(a|s)=\operatorname{argmax}_{\theta}[J(\pi,\mathcal{M})]\), where \(J=\max_{\theta}\mathbb{E}_{s_{0}\sim\rho(s),a\sim\pi_{\theta}(a_{t}|s_{t})}[\sum_{t}R(s_{t},a_{t})]\).
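To make this formulation concrete, the following sketch estimates the objective \(J\) by Monte-Carlo rollouts of a goal-conditioned policy. It is an illustrative sketch, not the authors' code: the `env` and `policy` objects, their Gym-style `reset`/`step` interfaces, and the discounting choice are assumptions made for the example.

```python
import numpy as np

def estimate_return(env, policy, goal_traj, episodes=10, discount=0.99):
    """Monte-Carlo estimate of J = E[sum_t discount^t R(s_t, a_t)] for a
    goal-conditioned policy pi_theta(a | s_t, X_hat). Interfaces are illustrative."""
    returns = []
    for _ in range(episodes):
        s, done, t, ret = env.reset(), False, 0, 0.0
        while not done:
            a = policy.sample(s, goal_traj)   # a ~ pi_theta(a | s_t, X_hat_object)
            s, r, done, _ = env.step(a)       # unknown dynamics T(.|s, a), reward R
            ret += (discount ** t) * r
            t += 1
        returns.append(ret)
    return float(np.mean(returns))
```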
Figure 4: **Learning Frameworks. Left - Single Task Framework: policies were obtained by training policies to solve the individual tasks. Right - Multi-task framework: A single policy (_MyoDex_) was obtained by learning all tasks at once.**
Figure 3: **Task setup and a subset of _object-hand_ pair from our task-set. Every task setup consisted of a tabletop environment, an object, and the MyoHand. The MyoHand was shaped with a compatible posture and positioned near an object (i.e. pre-grasp posture).**
### Learning Single-Task Controllers
**Single task agents.** The single-task agents are tasked with picking a series of actions (\([a_{0},a_{1},...,a_{T}]\)), in response to the evolving states (\([s_{0},s_{1},...,s_{T}]\)), to achieve their corresponding object's desired behavior \(\hat{X}_{object}=[\hat{x}_{0},...,\hat{x}_{T}]\).
We adopt a standard RL algorithm _PPO_(Schulman et al., 2017) to acquire a goal-conditioned policy \(\pi_{\theta}(a_{t}|s_{t},\hat{X}_{object})\) as our single task agents. Details on state, actions, rewards, etc are provided in Section 5. Owing to the third-order non-linear actuation dynamics and high dimensionality of the search space, direct optimization on \(\mathcal{M}\) leads to no meaningful behaviors.
Pre-grasps implicitly incorporate information pertaining to the object and its associated affordance with respect to the desired task (Jeannerod, 1988; Santello et al., 2002). We adopted (Dasari et al., 2023)'s approach of leveraging pre-grasps for dexterous manipulation with the robotic Adroit hand (Kumar, 2016) and extend it to the MyoHand. The approach uses the hand pose directly preceding the initiation of contact with an object, i.e. a proxy for the pre-grasp, to guide search in the high-dimensional space in which dexterous behaviors evolve. This approach yields a set of single-task expert agents \(\pi_{i}\) with \(i\in I\) where \(I\) is the set of tasks (see Figure 4-left).
### Framework for Multi-Task Physiological Learning
**Multi-task agent.** Ideally, an agent would be able to solve multiple tasks using a goal-conditioning variable. Thus, we additionally train a single agent to solve a subset of tasks in parallel (see Figure 4-right). This approach proceeds in a similar fashion as the single-task learner, but agent's experiences are sampled from the multiple tasks in _parallel_. All other details of the agent \(\pi_{\theta}^{\#}(a_{t}|s_{t},\hat{X}_{object})\) (e.g. hyperparameters, algorithm, etc.) stay the same.
Similar to the single-task agents, we encode manipulation behaviors in terms of goal-conditioned policies \(\pi_{\theta}(a_{t}|s_{t},\hat{X}_{object})\) and employ a standard implementation of PPO (Schulman et al., 2017) from Stable-Baselines (Raffin et al., 2021) together with the pre-grasp-informed formulation from (Dasari et al., 2023) to guide the search for our multi-task agents as well. See Section A.4.2 for details. The hyperparameters were kept the same for all tasks (see Appendix Table A.1).
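As a rough illustration of this setup, the snippet below sketches how a single policy could be trained on several tasks in parallel with Stable-Baselines3 PPO. It is not the authors' code: the environment ids are hypothetical placeholders, all tasks are assumed to share the same observation/action spaces, and the hyperparameters are library defaults rather than the values of Table A.1.

```python
import gym
from stable_baselines3 import PPO
from stable_baselines3.common.vec_env import DummyVecEnv

# Hypothetical MyoDM task ids; real names would come from the task-suite registration.
TASK_IDS = ["MyoDM-airplane-fly-v0", "MyoDM-hammer-strike-v0", "MyoDM-cup-drink-v0"]

# One shared policy draws experience from all tasks in parallel (Figure 4, right).
# All environments must expose identical observation and action spaces.
vec_env = DummyVecEnv([lambda tid=tid: gym.make(tid) for tid in TASK_IDS])

model = PPO("MlpPolicy", vec_env, verbose=1)     # placeholder hyperparameters
model.learn(total_timesteps=1_000_000)
model.save("myodex_prior_sketch")
```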
## 5 Task Details
Next, we provide the details required to instantiate our _MyoDM_ task suite.
**State Space.** The state vector \(s_{t}=\{\phi_{t},\dot{\phi}_{t},\psi_{t},\dot{\psi}_{t},\tau_{t}\}\) consisted of \(\phi\), a 29-dimensional vector of the 23 hand and 6 arm joint angles, its velocity \(\dot{\phi}\), and the object pose \(\psi\) and velocity \(\dot{\psi}\). In addition, a positional encoding \(\tau\) (Vaswani et al., 2017), used to mark the current simulation timestep, was appended to the end of the state vector. This was needed for learning tasks with cyclic motions such as hammering.
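The sketch below shows one way such a state vector could be assembled. It is a minimal illustration under stated assumptions: the sinusoidal form, encoding dimension, and horizon are our choices, not values specified by the paper.

```python
import numpy as np

def positional_encoding(t, dim=8, max_t=200):
    """Sinusoidal timestep encoding in the spirit of Vaswani et al. (2017);
    dim and max_t are illustrative, not the paper's exact values."""
    i = np.arange(dim // 2)
    angles = t / (max_t ** (2 * i / dim))
    return np.concatenate([np.sin(angles), np.cos(angles)])

def build_state(phi, dphi, obj_pose, obj_vel, t):
    """s_t = {phi, dphi, psi, dpsi, tau}: joint angles and velocities,
    object pose and velocity, plus a positional encoding of the timestep."""
    return np.concatenate([phi, dphi, obj_pose, obj_vel, positional_encoding(t)])
```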
**Action Space.** The action space \(a_{t}\) was a 45-dimensional vector that consists of continuous activations for 39 muscles of the wrist and fingers (to contract muscles), together with 3D translation (to allow for displacement in space), and 3D rotation of the shoulder (to allow for a natural range of arm movements).
**Reward Function.** The manipulation tasks we consider involved approaching the object and manipulating it in free air after lifting it off a horizontal surface. The hand interacts with the object adjusting its positions and orientation (\(X=[x_{0},...,x_{T}]\)) for a fixed time horizon. Similar to (Dasari et al., 2023), this is translated into an optimization problem where we are searching for a policy that is conditioned on desired object trajectory \(\hat{X}=[\hat{x}_{0},...,\hat{x}_{T}]\) and optimized using the following reward function:
\[R(x_{t},\hat{x}_{t}):=\lambda_{1}exp\{-\alpha\|x_{t}^{(p)}-\hat{ x}_{t}^{(p)}\|_{2}-\\ \beta|\angle x_{t}^{(o)}-\hat{x}_{t}^{(o)}|\}+\lambda_{2}\mathbb{1 }\left\{lifted\right\}-\lambda_{3}\left\|\overline{m}_{t}\right\|_{2} \tag{1}\]
where \(\angle\) is the quaternion angle between the two orientations, \(\hat{x}_{t}^{(p)}\) is the desired object position, \(\hat{x}_{t}^{(o)}\) is the desired object orientation, \(\mathbb{1}\left\{lifted\right\}\) encourages object lifting, and \(\overline{m}_{t}\) is the overall muscle effort.
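A per-timestep implementation of Eq. (1) could look like the sketch below. The weighting coefficients \(\lambda_{1,2,3}\), \(\alpha\), \(\beta\) are placeholders (the paper's values are not reproduced here), and the quaternion-angle helper is a standard formula rather than the authors' exact routine.

```python
import numpy as np

def quat_angle(q1, q2):
    """Angle between two unit quaternions (standard formula)."""
    return 2.0 * np.arccos(np.clip(abs(np.dot(q1, q2)), -1.0, 1.0))

def reward(obj_pos, obj_quat, des_pos, des_quat, lifted, muscle_act,
           lam1=1.0, lam2=1.0, lam3=0.05, alpha=5.0, beta=0.5):
    """Reward of Eq. (1): exponential tracking term, lifting bonus, effort penalty.
    Coefficient values are illustrative placeholders."""
    tracking = lam1 * np.exp(-alpha * np.linalg.norm(obj_pos - des_pos)
                             - beta * quat_angle(obj_quat, des_quat))
    return tracking + lam2 * float(lifted) - lam3 * np.linalg.norm(muscle_act)
```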
**Progress metrics.** To effectively capture the temporal behaviors, we treat dexterous manipulation as a task of realizing desired object trajectories (\(\hat{X}\)). To capture temporal progress, similar to (Dasari et al., 2023), we use three metrics to measure task performance. The _success metric_ \(S(\hat{X})\) reports the fraction of time steps where the object error is below a \(\epsilon=1cm\) threshold. It is defined as: \(S(\hat{X})=\frac{1}{T}\sum_{t=0}^{T}\mathbb{1}\left\{\left\|x_{t}^{(p)}-\hat{x}_{t}^{(p)}\right\|_{2}<\epsilon\right\}\). The _object error metric_ \(E(\hat{X})\) calculates the average Euclidean distance between the object's center-of-mass position and the desired position from the desired trajectory: \(E(\hat{X})=\frac{1}{T}\sum_{t=0}^{T}\left\|x_{t}^{(p)}-\hat{x}_{t}^{(p)}\right\|_{2}\). In addition, we also used the _object orientation metric_: \(O(\hat{X})=\frac{1}{T}\sum_{t=0}^{T}\angle(x_{t}^{(o)},\hat{x}_{t}^{(o)})\)1.
Footnote 1: For interpretability, we often omit orientations because center-of-mass error and orientation error were highly correlated in practice i.e. Pearson-correlation \(>0.785\)
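These two position-based metrics reduce to a few lines of array arithmetic; a minimal sketch (our own, with the realized and desired object positions assumed to be \((T,3)\) arrays) is given below.

```python
import numpy as np

def progress_metrics(obj_pos, des_pos, eps=0.01):
    """Success S(X_hat) and object-error E(X_hat) from Sec. 5."""
    dist = np.linalg.norm(obj_pos - des_pos, axis=1)   # per-step center-of-mass error
    success = float(np.mean(dist < eps))               # fraction of steps within 1 cm
    error = float(np.mean(dist))                       # average Euclidean distance
    return success, error
```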
## 6 Results
First, we study if we can solve the _MyoDM_ task set, one task at a time (see Sec. 6.1). Next, we illustrate that our _MyoDex_ representation can be used as _a prior_ for accelerating
learning novel, out-of-domain tasks (see Sec. 6.2). Finally, we present a series of ablation studies to understand various design choices of our approach (see Sec. 6.4).
### Learning Expert Solutions for Single-Task Setting
We begin by asking: is it possible to learn a series of complex dexterous manipulation behaviors (see Sec. 3.2) using a MyoHand? Our single-task learning framework is applied to solve a set of 57 _MyoDM_ tasks independently, without any object- or task-specific tuning (see Table A.1). The resulting "expert policies" were able to properly manipulate only a subset of those objects, moving and rotating them to follow the target trajectory (see Figure 1 for a sequence of snapshots). This was quantified using 2 metrics (Sec. 5): a Success Metric and an Error Metric. Our single-task framework achieves an average success rate of \(66\%\), solving 32 out of 57 tasks (see Fig. 5 and experts in Fig. A.3), and an average (Euclidean distance) error of \(0.021\). We encourage readers to check our project website for videos and further qualitative analysis of the learned behaviors.
### Accelerating Out-of-Domain Learning via _MyoDex_
The single-task framework was not able to solve all tasks in our task set, even individually, which further establishes the complexity of behavior acquisition with the high-DoF MyoHand and the difficulty of our _MyoDM_ task set. Furthermore, it creates controllers that can only function within a specific scenario/task. Next, we will demonstrate that by simultaneously training on multiple tasks during the reinforcement learning loop we can obtain a _MyoDex_ prior that overcomes single-task limitations. _MyoDex_ is a prior that can be fine-tuned to solve a larger number of tasks. In addition, a single multi-task policy based on training _MyoDex_ to convergence can solve multiple tasks.
To build the _MyoDex_ prior, we consider a subset of 14 _MyoDM_ tasks with a large variability of objects and movements (see Sec. 6.4.3 for the effects of task choice) and we trained one policy to solve the whole set of tasks at once. We stopped the training at \(12.5k\) iterations (at the beginning of the error plateau - see Figure A.1). At this iteration, we tested potential _zero-shot_ generalization capabilities with the MyoHand positioned near a novel object with a compatible posture and conditioned on a new task trajectory. While the policy was not able to solve these new tasks zero-shot (success rate \(\leq 10\%\)), we do observe (see Fig. 6) that the hand can successfully grasp and lift the unseen objects. This leads us to believe that the _MyoDex_ representation can be used as a _prior_ for accelerating transfer learning.
However, this is not the only way to accomplish a general multi-task representation. An established baseline is a student-teacher distillation (see Sec. A.1), which trains a single student policy to imitate the 14 expert policies (from prior experiments) via behavior cloning.
We fine-tune both the _MyoDex_ and the student policy on the remaining out-of-domain set of 43 _MyoDM_ tasks (using single-task RL) for additional iterations. Figure 7 presents learning curves for the fine-tuned models based on _MyoDex_, the fine-tuned student baselines, and the (from scratch) single-task expert policies in terms of success rate and error, respectively. Note how the _MyoDex_-based policy is able to learn the tasks significantly faster than either the baseline or the single-task policies. Among the solved out-of-domain tasks, _MyoDex_-based policies were about \(4\)x faster than the student-based policy (\(1.9k\) vs \(7.7k\)), and approximately \(3\)x faster than the single-task expert policy (\(1.9k\) vs \(5.7k\), Table 1). Additionally, it achieves a _higher overall task performance in comparison to the single-task experts_, which plateau at a significantly lower success rate, likely due to exploration challenges. Table 1 shows this trend in more detail. The _MyoDex_ representation allows solving more tasks (\(37\) vs \(22\), see Table 1 and Table A.2) and achieving a higher overall success rate (\(0.89\) vs \(0.69\)) than the single-task expert, which in turn outperforms the student baseline. This leads us to conclude that the _MyoDex_ representation can act as a generalizable prior for learning dexterous manipulation policies on a musculoskeletal MyoHand. It is both able
Figure 5: **Distribution of single-task solutions.** Distribution of maximum success rates for single-task solutions on 57 different tasks. Only 32 out of 57 tasks, i.e. \(56\)%, were solved with a success rate above \(80\%\). Training performed over \(12.5k\) iterations.
Figure 6: **Zero-shot generalization.**_MyoDex_ successfully initiated manipulations on new objects and trajectories. Hand rendering includes skin-like contact surfaces (see Fig. 2)
to substantially accelerate the learning of new tasks and to achieve a _stronger_ transfer to new tasks.
### Multi-Task Learning with _MyoDex_
Additionally, _MyoDex_ can also be used to recover one single policy that can solve multiple tasks. We compared the results of the _MyoDex_ training at convergence against the student policy (from the distillation of experts) on the same set of 14 tasks. See a summary of the results in Figure A.2. The converged _MyoDex_-based policy's success rate improves by \(>2\)x over the student policy. We present an explanation in Section 8 of why distilling from experts that have acquired incompatible behaviors in an over-redundant musculoskeletal system fails at learning multi-task policies. Indeed, expert policies found local solutions that do not help to learn other tasks, e.g. experts used as priors do not help fine-tuning on other tasks (see Fig. A.5). In contrast, our multi-task framework avoids this pitfall, since it simultaneously learns one policy without any implicit bias, and can reach levels similar to those reached by individual experts in isolation.
### _MyoDex_ Ablation Study
The previous set of experiments demonstrated that _MyoDex_ contains generalizable priors for dexterous manipulation. The following ablation study investigates how changing the number of pre-training iterations as well as the number of pre-training tasks affects _MyoDex_'s capabilities.
#### 6.4.1 Effects of iterations on the _MyoDex_ representation
In our experiment, the multi-task policy at \(12.5k\) iterations is defined as the _MyoDex_ prior. At this number of iterations, the policy was able to achieve a \(\sim 35\%\) success rate (see Fig. A.1). This solution provided both few-shot learning (task solved within the first environment iteration) on some tasks and successful fine-tuning on most of the _MyoDM_ set of 57 tasks. Here, in order to probe the sensitivity of the _MyoDex_ prior to the stage of learning at which the representation is extracted, we compared _MyoDex_ against representations obtained at earlier (i.e. \(2.5k\) and \(7.5k\)) and later (i.e. \(37.5k\)) stages of learning. Figure 8 shows the results of fine-tuning on all 57 tasks for the 4 different representations. Early representations are slower but, with enough iterations, they are able to solve almost all tasks (\(98\%\) (56 / 57) and \(91\%\) (52 / 57) respectively for the representations at \(2.5k\) and \(7.5k\)). Conversely, later representations show few-shot learning (10 tasks) but are able to learn only a reduced number of tasks (\(61\%\) (35 / 57)). Hence, _MyoDex_ trained at \(12.5k\) iterations strikes a balance, facilitating fast initial learning (including few-shots) while being general enough to support a diverse collection of out-of-domain tasks (see Figure 8).
Another way to look at the emergence of generalizable solutions over the iterations is to look at muscle synergies, as they express the amount of muscle co-contraction shared across tasks. In our study, we utilized the concept of Variance Accounted For (VAF, see Sec. A.4) to quantify the number of synergies needed to reconstruct the muscle activations required to solve the task. A higher VAF achieved with fewer muscle synergies indicates that fewer combinations of muscle co-contractions suffice to generate the needed muscle activations. Our findings indicate that
\begin{table}
\begin{tabular}{|l|c|c|c|} \hline
**Based on** & **Solved** & **Success** & **Iter. to solve** \\ \hline Expert & \(51\%\) (22/43) & \(0.69\pm 0.30\) & \(5.7k\pm 1.5k\) \\ Student & \(30\%\) (13/43) & \(0.54\pm 0.35\) & \(7.7k\pm 1.9k\) \\ _MyoDex_ & \(86\%\) (37/43) & \(0.89\pm 0.25\) & \(1.9k\pm 2.1k\)_ \\ \hline \end{tabular}
\end{table}
Table 1: _MyoDex_ transfer statistics on unseen (43) tasks – _Solved_ indicates the percentage (ratio) of solved tasks (success \(\geq 80\%\)). _Success_ indicates the success metric stats on all 43 tasks at \(12.5k\) iterations. _Iter. to solve_ indicates the stats on min iterations required by the solved task to achieve \(\geq 80\%\) success. Values are expressed as average \(\pm\) std.
Figure 7: **Fine-tuning on 43 Out-of-domain tasks. Metrics until \(5k\) iterations of the fine tuning of 43 out-of-domain tasks. Convergence is assumed at \(12.5k\) iterations. Left - Success Metric. Right - Error Metric. Continuous lines show average and shaded areas the standard deviation of success and error metrics. The dashed line represents the value at convergence i.e. \(12.5k\) iterations.**
early on in the training process (i.e., around 2.5k iterations, see Figure A.8), a substantial number of synergies (more than 12) is needed to achieve a high level of signal reconstruction. This suggests that while the policy is capable of discovering some solutions in the muscle space, synergies are not able to cover all the variability of the signal. Indeed, this representation helps to overcome some local minima hence it is particularly well-suited for facilitating transfer to new tasks.
Around 12.5k iterations, we observed a peak in the capacity of fewer synergies to account for most of the signal (see Figure A.8). At this point we have identified solutions in the muscle space that are highly reusable across multiple tasks.
However, at 37.5k iterations, we found that a greater number of synergies were required to explain most of the original signals. This indicates that specialized co-contractions are emerging to address specific task demands. While these synergies are effective at solving similar tasks with few or zero shots, their specialization may limit their ability to tackle dissimilar tasks.
Overall, our results suggest that our representation of synergies is a powerful tool for facilitating transfer learning, especially in the early stages of training when more generalized solutions are required. As training progresses, the emergence of specialized co-contractions enables efficient learning and transfer to similar tasks. Still, with even more training, specialized solutions are developed.
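For reference, the VAF-versus-number-of-synergies curve used in this analysis can be computed along the following lines. This is our own sketch based on a common NMF convention for muscle synergies; the paper's exact recipe is given in its Appendix A.4 and may differ in detail.

```python
import numpy as np
from sklearn.decomposition import NMF

def vaf_curve(activations, max_synergies=20):
    """VAF as a function of the number of synergies.
    `activations`: (timesteps x muscles) array of non-negative muscle activations."""
    vafs = []
    for k in range(1, max_synergies + 1):
        nmf = NMF(n_components=k, init="nndsvda", max_iter=500)
        weights = nmf.fit_transform(activations)   # synergy activations over time
        recon = weights @ nmf.components_          # rank-k reconstruction
        vafs.append(1.0 - np.sum((activations - recon) ** 2) / np.sum(activations ** 2))
    return np.array(vafs)
```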
#### 6.4.2 Effect of the number of environments on _MyoDex_ training
In the above experiment, we showed the _MyoDex_ representation based on 14 environments. An analysis of the effect of environment diversity on multi-task learning illustrates that the use of 14 environments represents a balance between training the multi-task policy effectively and the transfer/generalization ability it possesses. We compared _MyoDex_ trained on 6, 14, and 18 environments at \(12.5k\) iterations and tested on a set of 39 new environments. _MyoDex_ based on 6 and 18 environments leads to lower performance with respect to 14 environments, both in terms of success rate and the number of solved environments (see Table 2).
#### 6.4.3 How Training Tasks Affect _MyoDex_
The choice of objects and tasks to train _MyoDex_ can significantly impact the effectiveness of the representation. We study the effect of pre-training task distribution on the effectiveness of _MyoDex_ priors.
\begin{table}
\begin{tabular}{|l|c|c|} \hline
**Based on** & **Success** & **Solved** \\ \hline _MyoDex6_ & \(0.78\pm 0.32\) & \(72\%\)\((28/39)\) \\ _MyoDex14_ & \(\mathbf{0.92\pm 0.21}\) & \(95\%\)\(\mathbf{(37/39)}\) \\ _MyoDex18_ & \(0.91\pm 0.2\) & \(87\%\)\((34/39)\) \\ \hline \end{tabular}
\end{table}
Table 2: **Fine-tuning statistics based on different _MyoDex_ priors._**_MyoDex_ trained with different environments as priors and fine-tuned on 39 environments. Results reported in terms of average and standard deviation of success and percentage of solved tasks i.e. \(\geq 80\%\).
Figure 8: **Fine-tuning from representations obtained at different iterations.** Fine-tuning from representations obtained earlier i.e. \(2.5k\) and \(7.5k\) iterations, _MyoDex_ i.e. \(12.5k\) iterations, and later i.e. \(37.5k\) iterations. Earlier representations show no few-shot generalization but better coverage with 56 out of 57 tasks solved, while later representations show few-shot generalizations but have less coverage with 35 out of 57 tasks solved. The continuous line represents the average and the shaded area is the standard deviation of the success metrics. The dashed line represents the value at convergence i.e. 12.5k iterations.
Figure 9: **Pre-Training task distribution.** The distribution of our task collection in terms of its variability (standard deviation - STD). Represented on each axis are the STD of the absolute positional (X-axis) and rotational (Y-axis) displacements from the respective initial object poses in the desired object trajectories in our task set. In circles are all the 57 tasks involved in our study. In pink [Orig.(diverse)] are the original tasks used for training _MyoDex_. In blue [Altern.1(diverse)] is a new task set we use for training an alternate instance of the _MyoDex_ prior used in ablation studies.
We selected two new task sets. First, a _diverse_ task collection - _MyoDex Alt Diverse_ (Figure 9 in blue) - with attributes similar to the original dataset (in pink). Second, a _homogenous_ task collection - _MyoDex Alt Homogenous_ (Figure 9 in red) - with tasks with little motion variance (e.g. mostly lifting). We found that _MyoDex Alt Diverse_ - trained on the alternative diverse tasks - was able to improve performance over time, while _MyoDex Alt Homogenous_ - trained on the alternative homogenous tasks - had its performance plateau early on during training (see Figure A.7). Indeed, when used for transfer to new tasks, _MyoDex Alt Diverse_ is able to match the original _MyoDex_ performance, while _MyoDex Alt Homogenous_ does not (see Figure 10). This shows that the variety of manipulations/tasks in pre-training is fundamental to achieving high performance on a larger set of downstream tasks, and that _MyoDex_ is not sensitive to the specific choice of tasks.
### Extension to other high-dimensional systems
To further investigate the applicability of our approach to other high-dimensional systems, we set out to build a generalizable representation for the robotic _Adroit Hand_ (Rajeswaran et al., 2018) commonly studied in robot learning. Adroit is a 24 degree-of-freedom (DoF) modified shadow hand with 4 extra DoF at the distal joint. Following the approach behind _MyoDex_, a general manipulation prior - _AdroitDex_ - was obtained. We use the same 14 tasks that we used for training _MyoDex_. In Figure 11 we show the performance of _AdroitDex_ on 34 unseen tasks of the TCDM benchmark (Dasari et al., 2023). _AdroitDex_ achieves a success rate of \(74.5\%\) in about 10M iteration steps, which is approximately 5x faster than the PGDM baseline (Dasari et al., 2023), which needed 50M iteration steps to achieve the same result (see Figure 11).
## 7 Conclusion
In this manuscript, we learn skilled dexterous manipulation of complex objects on a musculoskeletal model of the human hand. In addition, by means of joint multi-task learning, we showed that it is possible to extract generalizable representations (_MyoDex_) which allow faster fine-tuning on out-of-domain tasks and multi-task solutions. Ultimately, this study provides a strong basis for how physiologically realistic hand manipulations can be obtained by pure exploration via Reinforcement Learning, i.e. without the need for motion capture data to imitate specific behaviors.
## 8 Discussion on the role of Synergies
Why does _MyoDex_ help the overactuated musculoskeletal system to solve multiple tasks? If we look at the coordination of muscle activations - muscle synergies (see Appendix A.4) - we notice that _MyoDex_ shows a larger number of similar activations (see Figure A.4) compared to the expert/distilled policies. This is because each expert solution finds one mode/solution to solve a task that does not incorporate information from other tasks. Naive distillation propagates this effect to the student policy. In contrast, _MyoDex_ learns to coordinate muscle contractions: indeed, fewer muscle synergies seem to explain most of the behavior (see Figure A.8, at 12.5K iterations). All in all, those observations are in line with the neuroscience literature, where muscle synergies have been suggested as the physiological substrate to obtain faster and more effective skill transfer (Yang et al., 2019; Cheung et al., 2020; Dominici et al., 2011; Berger et al., 2013).
Figure 11: **Fine-tuning a generalizable representation on Adroit subtasks:** _AdroitDex_. A general representation of manipulation on the same 14 tasks used for training _MyoDex_ was fine-tuned on 34 unseen tasks of the TCDM benchmark (Dasari et al., 2023). Curves show the average (continuous) and std (shaded area). _AdroitDex_ beats the previously reported SOTA on TCDM benchmarks while being 5x more sample efficient.
Figure 10: **Effect of pre-training task distribution on _MyoDex_ performance.** _MyoDex Alt Diverse_ (trained on tasks of similar diversity – in blue) is able to better match the original _MyoDex_ performance in comparison to _MyoDex Alt Homogenous_ (trained on a homogenous task collection).
## 9 Limitations and Future work
While we demonstrated that _MyoDex_ can produce realistic behavior without human data, one important limitation is understanding and matching the results with physiological data. Indeed, our exploration method via RL produced only one of the many possible ways, in a very high-dimensional space, in which a human hand could hypothetically grasp and manipulate an object. For example, there are several valid ways to hold a cup, e.g. by using the thumb and one or multiple fingers. Although our investigation points us in the right direction regarding the physiological feasibility of the result, these findings have yet to be properly validated with clinical data and user studies. Future work will need to consider the ability to synthesize new motor behaviors while simultaneously providing muscle validation.
|
2309.15034 | Measurement-induced phase transition in a single-body tight-binding
model | We study the statistical properties of a single free quantum particle
evolving coherently on a discrete lattice in ${\rm d}$ spatial dimensions where
every lattice site is additionally subject to continuous measurement of the
occupation number. Our numerical results indicate that the system undergoes a
Measurement-induced Phase Transition (MiPT) for ${\rm d}>1$ from a
$\textit{delocalized}$ to a $\textit{localized}$ phase as the measurement
strength $\gamma$ is increased beyond a critical value $\gamma_{c}$. In the
language of surface growth, the delocalized phase corresponds to a
$\textit{smooth}$ phase while the localized phase corresponds to a
$\textit{rough}$ phase. We support our numerical results with perturbative
renormalization group (RG) computations which are in qualitative agreement at
one-loop order. | Tony Jin, David G. Martin | 2023-09-26T16:03:09Z | http://arxiv.org/abs/2309.15034v2 | # Measurement-induced phase transition in a single-body tight-binding model
###### Abstract
We study the statistical properties of a single free quantum particle evolving coherently on a discrete lattice in \(d\) spatial dimensions where every lattice site is additionally subject to continuous measurement of the occupation number. Using perturbative renormalization group (RG) analysis, we show that the system undergoes a Measurement-induced Phase Transition (MiPT) for \(d>2\) from a _delocalized_ to a _localized_ phase as the measurement strength \(\gamma\) is increased beyond a critical value \(\gamma_{\rm c}\). In the language of surface growth, the delocalized phase corresponds to a _smooth_ phase while the localized phase corresponds to a _rough_ phase. We support our analytical computations with numerical simulations, which are in qualitative and quantitative agreement with the theory.
Recently, it has been discovered that quantum chaotic systems subject to continuous or projective measurements can undergo a phase transition characterized by a change of the scaling properties of the entanglement entropy with time or system size, a phenomenon now referred to as Measurement-induced Phase Transition (MiPT) [1]. MiPT constitutes a fascinating problem at the crossroads of statistical physics and quantum information. As such, it has attracted a tremendous amount of interest in recent years [2; 3; 4; 5; 6; 7; 8; 9; 10; 11; 12; 13]. MiPTs are often characterized by a transition from an _area law_ phase, _i.e._ a phase where the entanglement entropy (EE) of a subsystem does not scale with its size, to a _volume law_ phase, _i.e._ a phase where the EE scales with the system size. Such a scaling transition occurs upon increasing the strength of the measurement and is observed in various systems, such as 1d interacting chaotic many-body systems.
However, surprisingly, the existence of a MiPT between two non-trivially entangled phases for free or Gaussian fermions undergoing measurements remains an actively debated question. While the original study of entanglement in 1d free fermions [14] showed no signs of a phase transition, more recent numerical and theoretical investigations showed either the existence of a phase where the EE scales as \(\log L\)[15; 16; 17] or \((\log L)^{2}\)[18] while another recent study [19] argued that the observed transitions are in fact sharp crossovers.
In this work, we provide additional insights on this subject by studying the simpler, yet non-trivial, single-body problem of a particle evolving coherently on a discrete lattice in \(d\) spatial dimensions, where every lattice site is subject to independent, continuous measurements of its occupation number (see Fig. 1). Combining perturbative renormalization group (RG) methods and numerical simulations, we show that, while we do not find evidence of a transition in \(d=1\), there exists a phase transition from a _smooth/delocalized_ phase to a _rough/localized_ phase when \(d>2\). Interestingly, this shows that many-body effects are not necessary to observe a MiPT and corroborates the result obtained in [20] for a classical random walker undergoing continuous measurements.
_Model_ We consider a single quantum particle on a square lattice of \(N^{d}\) sites with periodic boundary conditions. Let \(\{\left|\mathbf{j}\right\rangle\}_{\mathbf{j}\in[1,N]^{d}}\) denote the position basis. The dynamics is described by the unitary tight-binding Hamiltonian \(H:=-\tau\sum_{\mathbf{j}}\sum_{\{\left|\mathbf{e}\right|=1\}}\left|\mathbf{j}\right\rangle\left\langle\mathbf{j}+\mathbf{e}\right|\) where \(\{\left|\mathbf{e}\right|=1\}\) is the set of vectors of norm \(1\). In addition, each site undergoes
Figure 1: Our model consists of a single quantum walker on a \(d\)-dimensional square lattice of \(N^{d}\) sites undergoing a unitary tight-binding evolution in addition to independent continuous measurements of the occupation number at every lattice site. In \(d>2\), we observe a phase transition from a smooth/delocalized phase to a rough/localized phase upon increasing the measurement strength \(\gamma\) beyond a critical value \(\gamma_{\rm c}\). The snapshots show a typical density profile in each of these phases for \(d=3\), where one of the spatial directions has been projected out. **Parameters:** \(\tau=1.5\), \(N=41\), \(dt=0.01\). For the smooth phase \(\gamma=0.9\) and for the rough phase \(\gamma=3\).
continuous measurements of strength \(\gamma\) of the local occupation \(\hat{n}_{\mathbf{j}}:=\left|\mathbf{j}\right\rangle\left\langle\mathbf{j}\right|\) resulting in the stochastic differential equation (SDE) [21]:
\[d\left|\psi\right\rangle=-iH\left|\psi\right\rangle dt \tag{1}\] \[+\sum_{\mathbf{j}}\left(-\frac{\gamma}{2}(\hat{n}_{\mathbf{j}}-\langle \hat{n}_{\mathbf{j}}\rangle_{t})^{2}dt+\sqrt{\gamma}(\hat{n}_{\mathbf{j}}-\langle\hat{n }_{\mathbf{j}}\rangle_{t})dB_{t}^{\mathbf{j}}\right)\left|\psi\right\rangle\,\]
where \(\langle\bullet\rangle_{t}:=\text{tr}(\rho_{t}\bullet)\). In (1), the \(\{B_{t}^{\mathbf{j}}\}_{j\in[1,N]^{d}}\) are \(N^{d}\) independent Brownian processes with average \(\mathbb{E}[dB_{t}^{\mathbf{j}}]=0\) and Ito rules \(dB_{t}^{\mathbf{j}}dB_{t^{\prime}}^{\mathbf{k}}=\mathbf{1}_{0}(t-t^{\prime})\delta_{\mathbf{j },\mathbf{k}}dt\) where \(\mathbf{1}_{0}\) is the indicator function. This model was originally introduced in [22] for the free fermionic case and has been subsequently studied in [14; 15; 16] in the context of MiPTs - see also [23; 24] for applications to transport and thermal engines.
In terms of the basis elements \(\psi_{\mathbf{j}}\) defined as \(\left|\psi\right\rangle=\sum_{\mathbf{j}}\psi_{\mathbf{j}}\left|\mathbf{j}\right\rangle\), Eq.(1) can be written as
\[d\psi_{\mathbf{j}}= i\tau\sum_{\{\left|\mathbf{e}\right|=1\}}\psi_{\mathbf{j}+\mathbf{e}}dt-\frac {\gamma}{2}\psi_{\mathbf{j}}\big{(}1-2|\psi_{\mathbf{j}}|^{2}+\sum_{\mathbf{m}}|\psi_{\bm {m}}|^{4}\big{)}dt+\sqrt{\gamma}\psi_{\mathbf{j}}\big{(}dB_{t}^{\mathbf{j}}-\sum_{\bm {m}}|\psi_{\mathbf{m}}|^{2}dB_{t}^{\mathbf{m}}\big{)}. \tag{2}\]
Throughout the rest of the manuscript, we will fix the initial condition of the system to be \(\psi_{\mathbf{j}}(t=0)=N^{-d/2}\) for all \(\mathbf{j}\). Note that by construction, \(\sum_{\mathbf{j}}|\psi_{\mathbf{j}}|^{2}\) is preserved on every realization of the noise.
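For readers who want to reproduce the dynamics, Eq. (2) can be integrated with a simple Euler scheme, since the measured operators are diagonal in the position basis. The sketch below is our own, for \(d=1\); the lattice size, time step and the explicit renormalisation after each step are our choices, the latter compensating the \(O(dt)\) norm drift of the Euler discretisation.

```python
import numpy as np

def simulate_qsd_1d(N=64, tau=1.0, gamma=1.0, dt=0.01, steps=5000, seed=0):
    """Euler integration of Eq. (2) in d=1 with periodic boundary conditions."""
    rng = np.random.default_rng(seed)
    psi = np.full(N, N ** -0.5, dtype=complex)        # uniform initial condition
    for _ in range(steps):
        n_exp = np.abs(psi) ** 2                       # <n_j>_t
        hop = np.roll(psi, 1) + np.roll(psi, -1)       # tight-binding neighbours
        dB = rng.normal(0.0, np.sqrt(dt), N)           # independent Wiener increments
        drift = (1j * tau * hop
                 - 0.5 * gamma * psi * (1.0 - 2.0 * n_exp + np.sum(n_exp ** 2)))
        noise = np.sqrt(gamma) * psi * (dB - np.sum(n_exp * dB))
        psi = psi + drift * dt + noise
        psi /= np.linalg.norm(psi)                     # re-impose normalisation
    return psi
```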
Even though (2) describes the dynamics of a single particle, getting an exact solution of such an SDE is in general a formidable task. One way to make progress is to restrict it to what we will refer to as the _delocalized_ phase, _i.e._ to assume that \(|\psi_{\mathbf{j}}|\) is of order \(N^{-d/2}\). Under this assumption, keeping the leading order in \(N^{-1}\) in (2) gives the simpler expression
\[d\psi_{\mathbf{j}}=\bigg{(}i\tau\sum_{\{\left|\mathbf{e}\right|=1\}}\psi_{\mathbf{j}+\mathbf{ e}}-\frac{\gamma}{2}\psi_{\mathbf{j}}\bigg{)}dt+\sqrt{\gamma}\psi_{\mathbf{j}}dB_{t}^{ \mathbf{j}} \tag{3}\]
which is now _local_. We further take the continuous limit by introducing the lattice spacing \(b\) and the continuous quantities \(\vec{r}=\mathbf{j}b\), \(\varphi(\vec{r}=\mathbf{j}b)=b^{-d/2}\psi_{\mathbf{j}}\), \(d\eta(\vec{r},t):=b^{-d/2}dB_{t}^{\mathbf{j}}\), \(D:=b^{2}\tau\), \(\lambda:=\gamma b^{d}\). Up to a global phase, (3) then becomes
\[d\varphi=\bigg{(}iD\nabla^{2}\varphi-\frac{\gamma}{2}\varphi\bigg{)}\,dt+\sqrt {\lambda}\varphi d\eta. \tag{4}\]
It is important to note that the noise becomes multiplicative in (4). This allows us to draw an analogy between (4) and the Stochastic Heat Equation (SHE), thereby relating (3) to KPZ physics [25; 26]. Such an analogy was already fruitfully exploited in [20], where it led to an intuitive understanding of a MiPT in a classical context. The difference with this previous study is that we deal with an _imaginary_ diffusion term \(D\) as well as a real "mass" term \(\gamma/2\). Even though these differences lead to quantitative modifications with respect to the SHE, we will see that one of its main feature, namely the existence of a phase transition with lower critical dimension 2, remains present in (4).
We expect (3) to be valid as long as \(|\psi_{\mathbf{j}}|\) remains close to the homogeneous profile of order \(N^{-d/2}\). Such an assumption is verified when the renormalization flow is directed towards the delocalized phase: in this case, \(|\psi_{\mathbf{j}}|\) is indeed driven closer to the homogeneous profile. Conversely, if the renormalization flow is directed towards the localized phase, \(|\psi_{\mathbf{j}}|\) is driven away from the homogeneous profile and we do not expect (3) to remain valid at long times. However, although (3) does not describe the strongly localized regime, it still allows us to make quantitative assertions concerning the boundary region between the two phases.
_Martin-Siggia-Rose (MSR) action_ We now proceed to derive the MSR action [27; 28]\(Z\) associated to (4). The details of the derivation are presented in the SM [29]. Let the superscript \(a\) denote the auxiliary fields. We have that
\[Z=\int\mathcal{D}[\varphi,\bar{\varphi},\varphi^{a},\bar{\varphi}^{a}]e^{iS_{0} +iS_{\nu}}, \tag{5}\]
where the bar denotes complex conjugation, and \(S_{0}\), \(S_{\nu}\) are respectively the quadratic and quartic part of the action:
\[S_{0}=\int d^{d}\vec{r}dt(\bar{\varphi},\bar{\varphi}^{a})\mathbf{G }_{0}^{-1}\begin{pmatrix}\varphi\\ \varphi^{a}\end{pmatrix}, \tag{6}\] \[\mathbf{G}_{0}^{-1}=\frac{1}{2}\begin{pmatrix}0&-\frac{\gamma}{2}+ \partial_{t}-iD\nabla^{2}\\ -\frac{\gamma}{2}-\partial_{t}+iD\nabla^{2}&0\end{pmatrix},\] (7) \[S_{\nu}=\frac{i}{8}\int d^{d}\vec{r}dt\big{(}\lambda^{\text{I}} \left(\bar{\varphi}^{a}\right)^{2}\varphi^{2}+\lambda^{\text{II}}\bar{\varphi}^{a }\bar{\varphi}\varphi^{a}\varphi+\text{c.c}\big{)}. \tag{8}\]
In (8), we introduced the labels I, II for the interacting terms, as they will behave differently under renormalization. For the microscopic theory (3), we have \(\lambda^{\text{I}}=\lambda^{\text{II}}=\lambda=\gamma b^{d}\).
Inverting (7) yields the free propagator in momentum \(\vec{q}\) and frequency \(\omega\):
\[G_{0}^{R}(\vec{q},\omega)=\frac{2i}{Dq^{2}-\omega-i\frac{\gamma}{2}} \tag{9}\]
where the \(R\) label refers to the retarded propagator. The advanced propagator \(A\) is obtained by complex conjugation \(G_{0}^{A}(\vec{q},\omega)=\bar{G}_{0}^{R}(\vec{q},\omega)\).
_Renormalization flow_ We proceed to the one-loop perturbative renormalization group (RG) analysis of (4). We employ standard momentum-shell analysis [28]. Let \(\Lambda\) be the microscopic momentum cut-off of the theory. The critical exponents associated to \(t\), \(\varphi\), \(\bar{\varphi}\) are named respectively \(z\), \(\chi\) and \(\bar{\chi}\). The flow is parametrized by \(l\).
At the one-loop level, there are no diagrams renormalizing the part of the action proportional to \(\partial_{t}\) and \(\nabla^{2}\). Imposing the stationarity of the corresponding prefactors under the flow gives
\[\chi+\bar{\chi}+d=0,\quad z=2. \tag{10}\]
The one-loop contributions to the renormalization flow of the "mass" term \(\gamma\) and the interactions \(\lambda^{\mathrm{I},\mathrm{II}}\) are depicted in Fig. 2. Computing their contribution to the renormalization flow leads to [29]:
\[\begin{split}\frac{d\gamma}{dl}=& 2\gamma-\mathrm{sgn}\left( \gamma_{R}\right)K_{d}\left(2\lambda^{\mathrm{I}}+\lambda^{\mathrm{II}}\right),\\ \frac{d\lambda^{\mathrm{I}}}{dl}=&(2-d)\lambda^{ \mathrm{I}}+\mathrm{sgn}\left(\gamma_{R}\right)K_{d}\frac{\left(\lambda^{ \mathrm{I}}\right)^{2}}{\gamma+2iD\Lambda^{2}},\\ \frac{d\lambda^{\mathrm{II}}}{dl}=&(2-d)\lambda^{ \mathrm{II}}+\mathrm{sgn}\left(\gamma_{R}\right)K_{d}\frac{\left(\lambda^{ \mathrm{II}}\right)^{2}}{\gamma_{R}}.\end{split} \tag{11}\]
where \(\mathrm{sgn}\) is the sign function, \(\gamma_{R}:=\Re(\gamma)\) and we introduced \(K_{d}:=\frac{\Lambda^{d}}{\Gamma(d/2)2^{d-1}\pi^{d/2}}\) with \(\Gamma\) the Euler function.
The qualitative properties of the phase diagram can be understood by considering the simpler case of \(\lambda^{\mathrm{I}}=0\), \(\gamma=\gamma_{R}\) and restricting the study to the domain \(\gamma\geq 0\). In this case, the flow equations can be integrated exactly to yield
\[\gamma=\gamma_{0}e^{(2-\frac{d}{2})l}\sqrt{\frac{1+ce^{dl}}{1+c}},\quad\lambda =\lambda_{0}e^{(2-\frac{d}{2})l}\sqrt{\frac{1+c}{1+ce^{dl}}}, \tag{12}\]
where \(\gamma_{0}:=\gamma(l=0)\), \(\lambda_{0}:=\lambda(l=0)\) and \(c:=\frac{d\gamma_{0}}{2K_{d}\lambda_{0}}-1\). The asymptotic behavior of these equations depends on the sign of \(c\), i.e. on the dimension and the initial ratio \(\gamma_{0}/\lambda_{0}\). For \(c<0\) we see that \(\gamma\to 0\) and \(\lambda\to\infty\) as \(l\to\frac{1}{d}\log(-c^{-1})\). For \(c>0\), the asymptotic behavior at large \(l\) is given by \(\gamma\propto e^{2l}\), \(\lambda\propto e^{(2-d)l}\). For \(c=0\), a quick inspection of (12) shows that the flow remains on the line of fixed \(\gamma/\lambda\) with \(\gamma\propto\lambda\propto e^{(2-\frac{d}{2})l}\). By analogy with the physics of surface growth processes [30], we will define the system to be _rough_ whenever \(\lambda\to\infty\) and _smooth_ whenever \(\lambda\to 0\). Building on our analysis of (12), we observe that the existence of the smooth phase is only possible when \(d>2\) and depends on the initial ratio \(\gamma_{0}/\lambda_{0}\).
We show in Fig. 3**b** the simplified RG flows (12) in 1d (left) and 3d (right) with the value of the microscopic cutoff \(\Lambda\) fixed to \(\pi/1.226595\) [31]. We see that in the 1d case all the lines eventually lead to a divergent \(\lambda^{\mathrm{II}}\), while in the 3d case we have a critical line separating the smooth/delocalized phase, where \(\lambda^{\mathrm{II}}\to 0\), from the rough/localized phase, where \(\lambda^{\mathrm{II}}\to\infty\).
To obtain the full phase diagram beyond (12), we numerically integrate the complete flow equations (11). In Fig. 3**a**, we explore the 3-dimensional parameter space \(\{\gamma_{R},\lambda^{\mathrm{I}}_{R},\lambda^{\mathrm{II}}\}_{l=0}\) while keeping the two remaining initial conditions fixed, _i.e._ \(\lambda^{\mathrm{I}}_{I}(0)=\gamma_{I}(0)=0\). A given point in this space belongs to the localized phase if \(\lambda^{\mathrm{II}}\to\infty\) when starting from this point and, conversely, to the delocalized phase if \(\lambda^{\mathrm{II}}\to 0\) (indicated by blue voxels in Fig. 3**a**). In the microscopic theory (2), there is initially one parameter besides the diffusion coefficient and we have \(\gamma_{R}=\lambda^{\mathrm{I}}_{R}=\lambda^{\mathrm{II}}=\gamma\). Therefore, the microscopic theory (2) starts on the initial line \(\gamma_{R}(0)=\lambda^{\mathrm{I}}_{R}(0)=\lambda^{\mathrm{II}}(0)\) (red dashed line) in the parameter space of Fig. 3**a**. The critical value \(\gamma_{c}\) of the microscopic parameter corresponds to the point where the line \(\gamma_{R}(0)=\lambda^{\mathrm{I}}_{R}(0)=\lambda^{\mathrm{II}}(0)\) crosses from one domain to the other - see Fig. 3**a**. Note that we characterize the transition with \(\lambda^{\mathrm{II}}\) because, starting from the same initial conditions, we necessarily have \(|\lambda^{\mathrm{I}}|\leq\lambda^{\mathrm{II}}\) and thus \(\lambda^{\mathrm{II}}\) always diverges first.
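The phase classification described above can be reproduced by a direct forward-Euler integration of (11), stopping the flow once \(\lambda^{\mathrm{II}}\) has either blown up or decayed away. The following sketch is ours: the divergence/decay thresholds, the step size in \(l\) and the default cutoff are illustrative choices.

```python
import numpy as np
from math import gamma as gamma_fn, pi

def rg_phase(gamma0, lamI0, lamII0, d=3, D=1.0, Lam=pi / 1.226595,
             dl=1e-3, l_max=30.0, big=1e6, small=1e-12):
    """Integrate the one-loop flow (11) and classify the resulting phase."""
    Kd = Lam ** d / (gamma_fn(d / 2) * 2 ** (d - 1) * pi ** (d / 2))
    g, lI, lII, l = complex(gamma0), complex(lamI0), float(lamII0), 0.0
    while l < l_max:
        s = np.sign(g.real)
        dg = 2 * g - s * Kd * (2 * lI + lII)
        dlI = (2 - d) * lI + s * Kd * lI ** 2 / (g + 2j * D * Lam ** 2)
        dlII = (2 - d) * lII + s * Kd * lII ** 2 / g.real
        g, lI, lII, l = g + dg * dl, lI + dlI * dl, lII + dlII * dl, l + dl
        if lII > big:
            return "localized"      # lambda^II flows to infinity (rough phase)
        if abs(lII) < small:
            return "delocalized"    # lambda^II flows to zero (smooth phase)
    return "undecided"
```

Scanning the diagonal initial condition \(\gamma_{R}(0)=\lambda^{\mathrm{I}}_{R}(0)=\lambda^{\mathrm{II}}(0)\) with such a routine locates the critical value \(\gamma_{c}\) where the returned label changes.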
_Numerical results_ In this section, we provide numerical simulations of the complete microscopic equations (2) to confirm the previous discussion. In order to characterize the different phases, we introduce the height \(h_{j}\) as
\[h_{j}:=\frac{1}{\sqrt{\gamma}}\log\left(|\psi_{j}|^{2}\right). \tag{13}\]
Figure 2: One-loop contributions to the RG of the mass and interactions. Dashed lines designate the auxiliary field \(\varphi^{a}\) and full lines the field \(\varphi\).

Drawing on the analogy with the classical case [20], we expect that the width \(w\) will follow a Family-Vicsek [30; 32] type scaling according to
\[w:=\sqrt{\frac{1}{N}\sum_{j}(h_{j}-\langle h\rangle_{\mathrm{s}})^{2}}\propto N^{ \alpha}f\left(\frac{t}{N^{\alpha/\beta}}\right), \tag{14}\]
where the bracket denotes the spatial average \(\langle h\rangle_{\mathrm{s}}=\frac{1}{N}\sum_{j=1}^{N}h_{j}\) and \(f(x)\propto x^{\beta}\) for \(x\ll 1\) while \(f(x)\propto 1\) for \(x\gg 1\). The universal exponents \(\alpha\) and \(\beta\) characterize the dynamical phases and are respectively called the _roughness_ and _growth_ exponents.
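In practice, the height field, the width \(w\) and the roughness exponent \(\alpha\) can be extracted as in the short sketch below (our own; the saturation regime is assumed to have been reached before fitting).

```python
import numpy as np

def surface_width(psi, gamma):
    """Width w of the height field h_j = log(|psi_j|^2)/sqrt(gamma), Eqs. (13)-(14)."""
    h = np.log(np.abs(psi) ** 2) / np.sqrt(gamma)
    return np.sqrt(np.mean((h - h.mean()) ** 2))

def roughness_exponent(sizes, saturated_widths):
    """Estimate alpha from the scaling w_sat ~ N^alpha via a log-log linear fit."""
    slope, _ = np.polyfit(np.log(sizes), np.log(saturated_widths), 1)
    return slope
```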
We show in Fig. 4 the dependence of \(\alpha\) on \(\gamma\) in \(d=1\) and \(d=3\) for different system sizes. In 1d, we see that all the curves collapse to the value \(\alpha\approx 1\), indicating a rough/localized phase, while in 3d we see a clear crossing of the curves at a finite value of \(\gamma\), indicating a phase transition from a smooth/delocalized phase with \(\alpha\approx 0\) to a rough/localized phase with \(\alpha\approx 0.32\).
Finally, we note that one simple yet non-trivial prediction from (11) is that the flow equations are invariant under the rescaling of \(D,\lambda^{\mathrm{I,II}}\) and \(\gamma\) by the same multiplicative factor. Thus, plotting the dependence of \(\gamma_{c}\) with respect to \(D\) should simply give a straight line with slope \(1\). We show in Fig. 3**c** the values of \(\gamma_{c}\) computed from the microscopic dynamics (2) (red dots) compared to the straight line (blue) with slope \(1\) passing through \(\{\gamma_{c}=1.14\), \(\tau=1\}\). A linear regression of the numerical data gives a slope of \(\approx 1.19\) with correlation coefficient \(\approx 0.999\), slightly above the expected value. This discrepancy could be due to higher-order terms that we neglected in the perturbative RG.
_Conclusion_ We have provided numerical and analytical arguments showing the existence of a MiPT for a single free particle undergoing continuous measurement for \(d=3\) and its absence for \(d=1\). Our work is one of the first to demonstrate the critical role played by dimensionality in the existence, or lack thereof, of such a transition in a quantum setting. Compared to previous studies in the literature, it is remarkable that many-body effects play no role in the emergence of this transition.
Our studies raise a number of interesting questions. We first note that our characterization of the transition differs from more conventional approaches in the literature: we do not compute the temporal or spatial scaling of EE. It would be valuable in subsequent studies to look at the many-body case where the EE could be defined and characterized in order to connect better with the previously known phenomenology.
One exciting possibility is that our transition is already visible at the level of quantities linear in the density matrix, for instance transport-related quantities. In 1d, it is known that measurements induce a crossover from ballistic to diffusive transport [33; 34; 35; 36; 37] in free fermionic systems. If we associate the ballistic behavior to the delocalized
Figure 4: Critical exponent \(\alpha\) as a function of the measurement strength \(\gamma\) for \(d=1\) (left) and \(d=3\) (right), obtained in numerical simulations of (2). The different curves correspond to different system sizes. In 3d, we see the existence of a delocalized and a localized phase, while there is only a localized phase for \(d=1\). **Parameters**: \(\tau=1\), \(dt=0.01\).
Figure 3: **a** Phase diagram for \(d=3\). Each voxel corresponds to a set of initial conditions \(\{\gamma_{R},\lambda_{R}^{1},\lambda^{\mathrm{II}}\}_{l=0}\) with \(\lambda_{1}^{1}(0)=\gamma_{I}(0)=0\). The blue voxels indicate the smooth/delocalized phase and all the other points are in the rough/localized phase. The red voxel line is the diagonal \(\gamma_{R}(0)=\lambda_{R}^{1}(0)=\lambda^{\mathrm{II}}(0)\). The intersection between the red line and the blue domain indicates the value \(\gamma_{c}\) where the phase transition occurs. **b** Plots of different trajectories in the \(\{\gamma,\lambda^{\mathrm{II}}\}\) plane for the simpler case (12) in \(d=1\) and in \(d=3\). We see that for \(d=1\) all the trajectories eventually bend upwards, indicating the existence of a single phase. In contrast, in \(d=3\) there exists a critical line separating a phase where \(\lambda^{\mathrm{II}}\to\infty\) from a phase where \(\lambda^{\mathrm{II}}\to 0\). **c** Dependence of \(\gamma_{c}\) on \(\tau\). The renormalization flow equations (11) predict a simple straight line with slope \(1\), which is shown in blue (passing through the point \(\{\gamma_{c}=1.14\), \(\tau=1\}\)). The red dots correspond to values of \(\gamma_{c}\) extracted from numerical simulations of (2). They can be fitted by a slope of \(\approx 1.19\) with correlation coefficient \(\approx 0.999\).
If we associate the ballistic behavior to the delocalized phase and the diffusive behavior to the localized one, a tempting conjecture is that this crossover in 1d becomes a genuine phase transition in higher spatial dimensions. Having this other route for characterizing MiPT would be particularly interesting in the context of experiments, since the measurement of EE is computationally very heavy, often requiring tomography of the full quantum trajectories and/or costly post-selection procedures [38, 39].
On the theoretical side, we note that a recent interesting body of literature has proposed non-linear sigma models as good effective descriptions of free fermionic or spin chains under measurements [17, 18, 19]. It would be interesting to understand if these effective field theories are compatible with our formalism in the single-body limit.
Finally, we note that (4) is interesting in itself, as it seems to be a complex version of the stochastic heat equation. Performing a Cole-Hopf [40, 41] transform on (4), i.e., introducing \(h:=\frac{1}{\sqrt{\lambda}}\log\varphi\), we get, up to a constant shift in time,
\[dh=i\frac{D}{\sqrt{\lambda}}\left(\nabla^{2}h+(\nabla h)^{2}\right)+d\eta, \tag{15}\]
which can be thought of as a complex version of the celebrated KPZ equation [25]. To the best of our knowledge, the mathematical properties of such an equation have not been explored before. As our study shows qualitative and quantitative behavior different from the real case, one may expect that the complex KPZ equation entails its own rich phenomenology.
**Acknowledgments** Both authors thank Xhek Turkeshi and Gabriel Artur Weiderpass for illuminating discussions. Part of this project was developed at "_Les Gustins"_ summer school.
_Note added.--_During the completion of this manuscript, the existence of a MiPT in free fermions in 2d undergoing random projective measurements was put forward in [42] using field theoretical methods relying on a mapping to a non-linear sigma model.
|
2309.11252 | The Scenario Refiner: Grounding subjects in images at the morphological
level | Derivationally related words, such as "runner" and "running", exhibit
semantic differences which also elicit different visual scenarios. In this
paper, we ask whether Vision and Language (V\&L) models capture such
distinctions at the morphological level, using a new methodology and dataset.
We compare the results from V\&L models to human judgements and find that
models' predictions differ from those of human participants, in particular
displaying a grammatical bias. We further investigate whether the human-model
misalignment is related to model architecture. Our methodology, developed on
one specific morphological contrast, can be further extended for testing models
on capturing other nuanced language features. | Claudia Tagliaferri, Sofia Axioti, Albert Gatt, Denis Paperno | 2023-09-20T12:23:06Z | http://arxiv.org/abs/2309.11252v1 | # The Scenario Refiner:
###### Abstract
Derivationally related words, such as "runner" and "running", exhibit semantic differences which also elicit different visual scenarios. In this paper, we ask whether Vision and Language (V&L) models capture such distinctions at the morphological level, using a new methodology and dataset. We compare the results from V&L models to human judgements and find that models' predictions differ from those of human participants, in particular displaying a grammatical bias. We further investigate whether the human-model misalignment is related to model architecture. Our methodology, developed on one specific morphological contrast, can be further extended for testing models on capturing other nuanced language features.
## 1 Introduction
Vision and language (V&L) models are trained to ground linguistic descriptions in visual data. These models differ in pre-training and architecture. In particular, there are differences in the cross-modal information exchange between the textual and visual streams of the models [1, 13], even though sometimes, as shown for V&L models based on the BERT architecture [1], architectural differences have little impact on downstream performance for many benchmarks [1].
Pre-trained V&L models achieve high performance on diverse benchmarks, such as question answering, image retrieval and word masking [13]. However, they have limitations in tasks requiring _fine-grained_ understanding [1], including the ability to reason compositionally in visually grounded settings [14], distinguish spatial relationships and quantities [15, 16], and identify dependencies between verbs and arguments [1]. Most of these fine-grained linguistic phenomena are at the interface between syntax and semantics.
Far less attention has been paid to grounding fine-grained linguistic features at the morphological level. We aim to address this gap by investigating multimodal alignment at the morphological level. We focus on derived nouns with the agentive suffix _-er_ (e.g. _baker_) and the corresponding verbal form (_baking_). Such derivationally related pairs involve both category-level and semantic contrasts, with corresponding differences in the typical visual scenarios they evoke. For instance, human judges would accept the description _x is baking_ for a variety of visual scenes depicting a person (hereafter referred to as 'the subject') performing a particular action. Only a subset of such images would, however, also be judged as corresponding to _x is a baker_, since the agentive noun introduces additional expectations, for example about the way the subject is dressed or the physical environment they are in. By analysing the same stem (e.g. _bake_) in different parts of speech, we explore the ability of V&L models to capture the subtle differences in meaning and visual representation. To do this, we rely on a zero-shot setting in which we test the probability with which pretrained V&L models match an image to a corresponding text containing an agentive noun or a verb, comparing this to human judgments about the same image-text pairs.
Our contributions are: (i) a methodology for testing V&L models on morphological contrasts; (ii) a dataset of images that highlights the contrast between verbs and derived nouns, annotated with human judgements; (iii) an analysis of the V&L models' predictions on the contrast between derivationally related verbs and nouns, in comparison to human judgements.
## 2 Related work
### Models
Various V&L model architectures have been proposed, differing, among other things, in the way visual vs. textual features are processed. One important distinction, common among models based on the BERT architecture, is between single- and dual-stream models. The former concatenate inputs in the two modalities and process them through a common transformer stack; the latter first process each modality through its own transformer stack, before performing cross-modal attention at a later stage (Bugliarello et al., 2021). Another influential architecture is the dual encoder (Radford et al., 2021), which is trained to project visual and textual embeddings into a common multimodal space. Among their pretraining objectives, BERT-based V&L models typically include _image-text matching_, whereby the model returns a probability that an image corresponds with a caption. Thus, such models can be tested zero-shot on image-text pairs. For dual encoders, similar insights can be obtained by comparing the distance in multimodal space between a text and an image embedding.
We aim to understand the impact of these architectures on the morphological contrast between word categories and whether the classification depends on specific visual information. Three models with different architectures and pre-training phases are tested: CLIP (Radford et al., 2021), ViLT (Kim et al., 2021), and LXMERT (Tan and Bansal, 2019).
**CLIP** employs a _dual encoder_ architecture and projects image and text embeddings into a common space, such that corresponding image-text pairs are closer than non-corresponding ones. CLIP is pre-trained using cross-modal contrastive learning on internet-sourced image-text pairs, resulting in strong multimodal representations (Radford et al., 2021). Two different visual backbones are used for the image encoder: ResNet50 (He et al., 2016), which uses attention pooling; and the Vision Transformer (Dosovitskiy et al., 2020), which is modified by the addition of an additional layer normalisation to the combined patch and position embedding. The text encoder is a Transformer which operates on a lower-cased byte pair encoding (BPE) representation of the text. CLIP computes the cosine similarity between an image and a text.
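As a minimal sketch of how such a zero-shot comparison can be run for the noun- and verb-based captions of a single image (assuming the Hugging Face `transformers` CLIP interface; the checkpoint name and image file are illustrative):

```python
from PIL import Image
import torch
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

image = Image.open("skier.jpg")  # hypothetical stimulus image
captions = ["The man is a skier", "The man is skiing"]

inputs = processor(text=captions, images=image, return_tensors="pt", padding=True)
with torch.no_grad():
    outputs = model(**inputs)

# logits_per_image holds the image-text cosine similarities scaled by CLIP's temperature
scores = outputs.logits_per_image.squeeze(0)
preferred = captions[scores.argmax().item()]
print(dict(zip(captions, scores.tolist())), "->", preferred)
```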
**LXMERT** follows a _dual-stream_ approach, utilising three encoders: an object relationship encoder which acts upon the output of a Faster R-CNN visual backbone (Ren et al., 2015), a language encoder, and a cross-modality transformer stack which applies attention across the two modalities. The pre-training involves five tasks, including masked cross-modality language modelling and image question answering, enabling the model to establish intra-modality and cross-modality relationships (Tan and Bansal, 2019). LXMERT is also pretrained with an image-text alignment head, which computes the probability that a text and an image correspond.
**ViLT** (Kim et al., 2021) is the simplest V&L architecture used in this study. It is a single-stream model in which a single transformer stack processes the concatenation of visual and textual features. In contrast to other models, no pre-trained visual backbone is used; rather, the model works directly on pixel-level inputs, in the spirit of Dosovitskiy et al. (2020). It has been shown that the usage of word masking and image augmentations improves its performance (Kim et al., 2021). In ViLT, the embedding layers of raw pixels and text tokens are shallow and computationally light. This architecture thereby concentrates most of the computation on modelling modality interactions. Like LXMERT, ViLT is also pre-trained with an image-text alignment head, in addition to the multimodal masked modelling objective.
### Related studies
Our work is related to studies focusing on the _typicality_ of the word-image relationship and the interplay with category labels for images depicting people. For example, people can be described using generic expressions referring to gender or more specific expressions highlighting individual properties or aspects. Visual properties that align with our conceptual knowledge of the noun may lead us to prefer agentive expressions over generic nouns such as "man" or "woman" (Corbetta, 2021). Gualdoni et al. (2022, 2022) proposed ManyNames, a small dataset that explores the factors that affect naming variation for visual objects, for instance, the different conceptualisations of the same object (e.g., "woman" vs. "tennis player") or the disambiguation of the nature of the object (e.g., "horse" vs. "pony"). Understanding the effects of context and naming preferences is crucial for V&L models to gain comprehensive understanding of visual scenes. The _typicality of the context_ determines the occurrence of specific names based on the global scene
where the subject is situated.
The current study explores the impact of typicality of the context at the morphological level. Derivational relations, relating two words or whole paradigms of words (Bonami and Strandova, 2019), involve contrasts at different levels, including form, syntax - where the words are related but belong to different word categories - and semantics, where the meaning of one member contrasts with the meanings of the other members. For instance, _runner_ and _run_ belong to the same paradigm, but the suffix _-er_ changes the word category and alters the referential meaning of the verb. For example, "the man is a runner" evokes a fit person who frequently trains, while "the man is running" could equally well portray a man casually running to catch a train. Thus, derived noun subjects should embody characteristics of the verb and/or common knowledge. Therefore, syntactic and relational knowledge has to be integrated with semantic knowledge, common imagery and visual information, as has been argued from the language acquisition perspective (Tyler and Nagy, 1989).
## 3 Methodology
### Dataset
We create the Scenario Refin_er_ dataset highlighting the cognitive and semantic differences between the verb and its derived noun by contrasting one image with two annotations. The dataset is based on 18 word pairs, each consisting of a verb in the _-ing_ form and a derived agentive (_-er_) noun. The pairs are summarised in Table 1. The lexical pairs are classified into four conceptual domains: the professional domain (like _baker_ or _teacher_), the sports domain (like _runner_ or _skier_), the artistic domain (like _dancer_ or _painter_), and general (_lover_ or _smoker_).
Six images were selected for each of the 18 word pairs. These were manually selected from various sources: Visual Genome Krishna et al. (2017), Wikipedia Commons, MSCOCO Lin et al. (2014) and Geograph ([https://www.geograph.org.uk/](https://www.geograph.org.uk/)).
For the 18 word pairs, we want to compare images which correspond to the stereotypical representation of the agent role described by the derived noun, versus the more general scenario described by the verb. In order to depict the subject denoted by a derived noun, the images need to include additional information compared to the verb, for example, specific objects like tools or outfits for _painter_ or _surfer_; or a specific environment like a stage for _dancer_ or _singer_. The verbs correspond to a more general scenario, which creates a linguistic and visual contrast with the scenario evoked by the derived noun. This allows us to examine the contrast in parts of speech and their typicality within the defined global scene Gualdoni et al. (2022).
For each word pair, 6 images were selected. Each image is accompanied by two captions, as shown in Figure 1. Each caption received a judgement on a Likert scale.
### Data collection
We implemented a survey on Qualtrics and distributed it on Prolific. The survey included 162 images, consisting of 54 fillers and (18 \(\times\) 6 =) 108 target images representing the 18 selected lexical pairs.
\begin{table}
\begin{tabular}{|c c||c c|} \hline
**Noun** & **Verb** & **Noun** & **Verb** \\ \hline supporter & supporting & lover & loving \\ baker & baking & surfer & surfing \\ runner & running & swimmer & swimming \\ hunter & hunting & driver & driving \\ painter & painting & skier & skiing \\ walker & walking & dancer & dancing \\ singer & singing & gamer & gaming \\ teacher & teaching & reader & reading \\ cleaner & cleaning & smoker & smoking \\ \hline \end{tabular}
\end{table}
Table 1: Noun-verb pairs in the Scenario Refin_er_ dataset
Figure 1: Sample of stimuli for _supporter-supporting_ and _driver-driving_
Our survey also included fillers of several types. In one type, images were accompanied by a verb-based description and a derived noun in _-er_, enhanced by an adjective based on the mood or facial expression of the depicted subjects. For instance, a smiling subject wearing appropriate outfit on a ski slope was paired with the captions "The man is a _happy skier_" and "The man _is skiing_". This type of filler aimed to investigate if participants would alter their evaluation when the mental representation of the derived noun is reinforced by additional linguistic information. Another type of filler contrasted the verb and its derived adjective in _-ive_, offering insights into the classification of other members in the morphological paradigm. For example, four men intensely engaged in a video game were paired with the sentences "The men are _competitive_" and "The men _are competing_". A third type of filler contrasted verbs to bare adjectives, descriptive or emotional, to determine participants' preference between verbal and adjectival descriptions. For instance, a couple swimming happily in a lake was matched with "The man and woman are happy" and "The man and woman are swimming"; an image of a man speaking in a classroom was paired with "The man is upright" and "The man is teaching". The fourth type of filler included images with true and false descriptions of the visual content, used to maintain participants' attention and allowing to control the quality of their responses.
For each image, participants were asked to what extent both captions describe the visual scenario, using a seven-point Likert scale ranging from _totally disagree_ to _totally agree_. By asking to evaluate both captions for each picture, it is possible to extract a reliable measure of contrast between the derived noun and the verb.
To avoid careless evaluations and to minimise participant dropout due to the length of the survey, the target images were divided equally between two surveys (each with a total of 81 images, of which 54 were target images and 27 fillers).
Twenty native British English speakers completed the online questionnaire and were randomly assigned to one of the two surveys. Thus, each image was evaluated by 10 participants for both captions. For the instructions, see Appendix A.
## 4 Results
Our analysis proceeds in two stages. We first consider the _category preference_: for an image with two captions (one with a derived noun and one with a verb), we ask whether human judges (resp. V&L models) exhibit a preference for the noun or the verb with respect to a given image. We then compute correlations between the preferences exhibited by human judges and by models for the two categories.
### The word category preference
To analyse which of the two captions is preferred for each image by human judges, we compare the average ratings of the annotations. For V&L models, we consider the difference in probability estimated by a model's image-text matching head (in the case of ViLT and LXMERT) for the caption containing the noun or verb, or the difference in cosine distance between image and caption embeddings (in the case of CLIP). Note that we include results for three versions of CLIP, with different visual backbones. We use a Fisher test to determine whether there is a significant difference in category preference between human judges and V&L models.
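A minimal sketch of the preference count and the significance test (the per-image score arrays are toy values used only to illustrate the layout; `scipy` provides Fisher's exact test):

```python
import numpy as np
from scipy.stats import fisher_exact

def count_preferences(noun_scores, verb_scores):
    """Count, over images, how often the noun vs. the verb caption gets the higher score."""
    noun_scores, verb_scores = np.asarray(noun_scores), np.asarray(verb_scores)
    return int((noun_scores > verb_scores).sum()), int((verb_scores > noun_scores).sum())

# toy scores for 6 images of one word pair (higher = better fit to the image)
human_noun, human_verb = [2.1, 3.0, 5.5, 2.8, 3.2, 4.0], [6.2, 6.5, 5.0, 6.0, 5.8, 6.4]
model_noun, model_verb = [0.71, 0.64, 0.80, 0.55, 0.73, 0.69], [0.62, 0.70, 0.66, 0.60, 0.68, 0.71]

human_counts = count_preferences(human_noun, human_verb)   # (1, 5): humans mostly prefer the verb
model_counts = count_preferences(model_noun, model_verb)   # (3, 3): the model is split

# 2x2 contingency table: rows = judge (human / model), columns = (noun, verb) preferred
_, p_value = fisher_exact([list(human_counts), list(model_counts)])
print(human_counts, model_counts, p_value)
```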
Table 2 displays the proportion of times the derived noun or the verb was preferred by humans and by each of the models.
**Human judgments** Overall, human judges exhibit a preference for captions containing the verb, with only a small percentage of preferences for captions containing agent nominals. These types of classifications are distributed across different domains. This could be due to variation in the images in the extent to which they gave clear visual cues as to the role of the person depicted. There were some exceptions to this trend. In the sports domain, these included images of a skier wearing skiing gear with a cape, and a couple of surfers in surfing attire with surfboards. In the profession domain, they included two images depicting individuals engaged in driving and one image of teachers with pupils posing for a class photo. Four agent nominals belonged to the artistic and general domains, such as images of women dancing on a stage, two subjects getting cigarettes, and a woman in a bookshop. On the other hand, the difference in preference for some noun-verb pairs was lower than for others (with differences in the 0-0.5 range). An example is shown in Figure 2, where participants interpreted both
captions as appropriate. Interestingly, the versions of CLIP and LXMERT seem to agree with the human ratings in this example, showing low contrast between the verb and the noun, with LXMERT assigning higher probability to verb caption for (c) and CLIP estimating lower distance between image and verb caption for (a). On the other hand, ViLT assigned a higher probability to the verbal caption for all the images in Figure 2.
**V&L models** Unlike participants, V&L models exhibit a **tendency to prefer deverbal nouns to verbs**. The exceptions are CLIP with the ViT-B/32 backbone, and ViLT, both of which have a slightly higher preference for captions with verbs. The performance of CLIP seems to depend on the visual backbone. Of the three versions, ViT-L/14 displays the greatest similarity to human judgments. We observed a tendency for ViT-B/32 to prefer captions with derived nouns where there are clear visual cues suggesting a role or activity, such as the microphone and the stage in Figure 3. In contrast, while CLIP-RN50 prefers the noun caption in Figure 3(a), it shows the opposite trend, in favour of the verb-based caption, in (b), perhaps because the stage is less clearly visible.
The difference between the judgements of humans vs. V&L models is statistically significant (Fisher's exact test, p \(<\) 0.001 for all contrasts between models and human judgments).
### Correlations between judgements
We also estimate the correlation between human and automatic judgements as a more fine-grained measure than binary preference. Overall, the correlation between the human and the automatic judgements varies depending on architecture and on the conceptual domain.
We assess correlations between three kinds of values: the (human- or model-produced) scores for a) noun and b) verb-based captions, as well as c) the difference between the noun and verb scores. We refer to the latter as the _morphological contrast_.
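A minimal sketch of these correlations (the dictionary layout and the toy per-image scores are illustrative assumptions):

```python
import numpy as np
from scipy.stats import pearsonr

def caption_correlations(human, model):
    """Pearson correlations for noun scores, verb scores and the morphological contrast.

    `human` and `model` map "noun" / "verb" to equal-length arrays of per-image scores."""
    h_noun, h_verb = np.asarray(human["noun"]), np.asarray(human["verb"])
    m_noun, m_verb = np.asarray(model["noun"]), np.asarray(model["verb"])
    return {
        "noun": pearsonr(h_noun, m_noun)[0],
        "verb": pearsonr(h_verb, m_verb)[0],
        # morphological contrast: per-image difference between noun and verb scores
        "contrast": pearsonr(h_noun - h_verb, m_noun - m_verb)[0],
    }

# toy example with 5 images
human = {"noun": [2.0, 5.5, 3.1, 6.0, 2.4], "verb": [6.1, 5.0, 6.3, 6.2, 5.9]}
model = {"noun": [0.7, 0.8, 0.6, 0.9, 0.5], "verb": [0.6, 0.7, 0.8, 0.8, 0.7]}
print(caption_correlations(human, model))
```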
**Participant consistency** To assess the consistency of collected human judgements, we split participants randomly into two equal-sized samples and calculate Pearson correlation coefficients between the average scores of the two samples. The resulting correlation coefficients for all conceptual domains are reported in Table 3. Correlation coefficients for noun, verb and contrast are generally consistent, with the exception of the artistic domain, for which correlations between judgments for verb-based captions, and as a consequence, also for the contrast, exhibit more variation.
\begin{table}
\begin{tabular}{l r r} \hline & **Derived noun** & **Verb** \\ \hline Humans & 8.3\% & 91.7\% \\ CLIP ViT-L/14@336px & 51.9\% & 48.1\% \\ CLIP RN50x64 & 52.8\% & 47.2\% \\ CLIP ViT-B/32 & 49.1\% & 50.9\% \\ ViLT & 47.2\% & 52.8\% \\ LXMERT & 51.9\% & 48.1\% \\ \hline \end{tabular}
\end{table}
Table 2: Preference for derived noun vs. verb, in human judgments and V&L model image-text alignment.
Figure 3: Mean (M) human judgments and standard deviations (SD) for an example image set corresponding to _singer-singing_
Figure 2: Mean (M) human judgments and standard deviations (SD) for an example image set corresponding to _lover-loving_.
**Models vs human judgments** Table 4 displays the overall correlations between human judgments and model image-text alignment for verbs, nouns and the morphological contrast. The correlations are moderate-to-weak, suggesting a lack of alignment between human intuitions and V&L models. This is consistent with our earlier observation that models tend to exhibit different preferences for nouns versus verbs, compared to humans. Interestingly, ViLT emerges as the most correlated model with human judgement in the verbal evaluation, but it exhibits the least correlation in the evaluation of the derived noun. Additionally, ViLT displays a moderate positive relationship with the contrast between verb and derived noun, whereas the other models demonstrate weaker positive correlations or very weak negative correlations with this particular contrast.
Table 5 breaks down correlations by conceptual domain. In the professional domain, correlations are generally stronger, especially for ViLT, LXMERT and CLIP ViT-B/32. Overall, it appears that models correlate with human judges in some domains more than others. Nevertheless, correlations are often negative, and these results suggest a qualitative difference between the image-text alignment performed by models, and the types of knowledge and inferences that humans bring to bear to support the grounding of nominal agentive versus verbal forms in visual stimuli.
## 5 Discussion
The findings revealed a discrepancy between models and human judgments. Humans displayed a preference for captions containing verbs, whereas V&L models exhibited a preference for nominal descriptions. Participants prefer the derived noun only for a few instances that had additional characteristics elicited by visual elements, or by the kind of action performed by the human subjects in the images. For instance, they prefer the derived noun for two images showing a person getting or purchasing cigarettes (_smoker-smoking_), meaning that participants interpreted the _intention_ as a characteristic that corresponds to the derived noun. In contrast, the tested models appeared to prioritise more the action itself rather than the individual who performs the action.
However, examining certain lexical pairs, we observed a greater variance in the pattern of interpretation, highlighting the difficulty in defining the human evaluation of the derived noun. For example, in the sport domain, participants rarely seem to base their interpretation on the outfit worn by the subject, with the exception of _skier_, which happened to be paired only with an image of a subject also exhibiting their competition number. As a surprising contrast, two pictures for _runner-running_ that similarly depict subjects with their competition numbers were not evaluated as such by participants. Specifically, one image depicts a man running on a race track, while the other image depicts three men wearing specific outfits running in the countryside. The contrast between the means of the human evaluation is less than or equal to 0.50, indicating a preference for the verbal description.
The models, too, exhibit variety in the subject classification for these images. For example, while CLIP-ViT-L/14@336p, CLIP-ViT-B/32 and ViLT display a similar preference for the nominal form, as humans do, for _skier-skiing_, CLIP-RN50x64 and LXMERT prefer the verb-based caption. Similarly, while participants slightly prefer the verb for the subjects wearing a competition number for _runner-running_, models prefer the nominal description. The three versions of CLIP strongly prefer the derived noun for these subjects, ViLT prefers the verbal description only for the single subject running in a race track and LXMERT prefers the verbal description only for the three subjects running in the countryside. While CLIP exhibited a preference for the derived noun in presence of additional visual elements, ViLT and LXMERT do not seem to base their preference on such a visual cue since they assign a high probability to the verbal description too.
## 6 Conclusion
We studied the morphological difference between derived nouns in _-er_ and verbs for visual grounding, comparing human judgements with pre-trained Vision and Language models. The dataset we presented allows us to assess vision and language models on their understanding of verbs, deverbal agent nouns, and most importantly the contrast between the two. Our results show that while some models, especially ViLT, show strong results for some of the conceptual domains, they do not support the conclusion that models ground the morphological
differences between derived nouns and verbs in a humanlike way.
Highlighting and investigating such a morphological and cognitive difference can refine and improve the alignment of textual and visual input of V&L models. By exploring the visual classification at the morphological level, the aim was to investigate not only the linguistic and morphological influence in the automatic recognition of subjects carrying certain visual information, but also to identify which model architecture better executes the task. In our study, the single-stream ViLT model tends to correlate better with human judgments. Nevertheless, these results are based on a relatively small test set and focus on a restricted set of models, with much scope for further experimentation. In an effort to encourage the community to undertake further investigation of these phenomena, we have shared our code and our dataset.1
Footnote 1: [https://github.com/ClaudiaTagliaferri/Scenario_Refiner.git](https://github.com/ClaudiaTagliaferri/Scenario_Refiner.git)
|
2303.12700 | ReorientDiff: Diffusion Model based Reorientation for Object
Manipulation | The ability to manipulate objects in desired configurations is a
fundamental requirement for robots to complete various practical applications.
While certain goals can be achieved by picking and placing the objects of
interest directly, object reorientation is needed for precise placement in most
of the tasks. In such scenarios, the object must be reoriented and
re-positioned into intermediate poses that facilitate accurate placement at the
target pose. To this end, we propose a reorientation planning method,
ReorientDiff, that utilizes a diffusion model-based approach. The proposed
method employs both visual inputs from the scene, and goal-specific language
prompts to plan intermediate reorientation poses. Specifically, the scene and
language-task information are mapped into a joint scene-task representation
feature space, which is subsequently leveraged to condition the diffusion
model. The diffusion model samples intermediate poses based on the
representation using classifier-free guidance and then uses gradients of
learned feasibility-score models for implicit iterative pose-refinement. The
proposed method is evaluated using a set of YCB-objects and a suction gripper,
demonstrating a success rate of 95.2% in simulation. Overall, our study
presents a promising approach to address the reorientation challenge in
manipulation by learning a conditional distribution, which is an effective way
to move towards more generalizable object manipulation. For more results,
checkout our website: https://utkarshmishra04.github.io/ReorientDiff. | Utkarsh A. Mishra, Yongxin Chen | 2023-02-28T00:08:38Z | http://arxiv.org/abs/2303.12700v2 | # ReorientDiff: Diffusion Model based Reorientation for Object Manipulation
###### Abstract
The ability to manipulate objects in desired configurations is a fundamental requirement for robots to complete various practical applications. While certain goals can be achieved by picking and placing the objects of interest directly, object reorientation is needed for precise placement in most of the tasks. In such scenarios, the object must be reoriented and re-positioned into intermediate poses that facilitate accurate placement at the target pose. To this end, we propose a reorientation planning method, ReorientDiff, that utilizes a diffusion model-based approach. The proposed method employs both visual inputs from the scene and goal-specific language prompts to plan intermediate reorientation poses. Specifically, the scene and language-task information are mapped into a joint scene-task representation feature space, which is subsequently leveraged to condition the diffusion model. The diffusion model samples intermediate poses based on the representation using classifier-free guidance and then uses gradients of learned feasibility-score models for implicit iterative pose-refinement. The proposed method is evaluated using a set of YCB-objects and a suction gripper, demonstrating a success rate of 96.5% in simulation. Overall, our study presents a promising approach to address the reorientation challenge in manipulation by learning a conditional distribution, which is an effective way to move towards more generalizable object manipulation. For more results, check out our website: [https://utkarshmishra04.github.io/ReorientDiff](https://utkarshmishra04.github.io/ReorientDiff).
## I Introduction
Rearranging objects in a desired pose is an important skill necessary for daily activities at home as well as for specific arrangement and packing applications in the industry. Performing such a task requires extracting object information from visual-sensor data and planning a pick-place sequence [1, 2]. While a single-step pick-place sequence is a viable solution, placing the object at a specific position and orientation is not always feasible. Reorientation is a helpful strategy when successfully changing an object's pose allows its placement at the target pose [3]. Reorientation ensures feasible intermediate transition poses in scenarios where there are no common grasps between the current pose and an object's desired placement pose.
In classical approaches, such a problem is usually tackled by using trajectory planners [4] to plan motion from the current pose to the desired pose via diverse candidate intermediate poses. Such an exhaustive search is expensive on time and is limited by choice of the number of intermediate pose options. Recently, Wada _et al._[3] proposed a data-driven sampling-based solution to reorientation using learned models that predict the feasibility score of an intermediate pose. While their method significantly improved the success rate and planning time, the approach relied on the target object's specification and placement pose. Lately, with the advances in language descriptor foundation models like CLIP [5], which projects images and texts to a common feature space, such specifications can be directly correlated between visual information and suitable language commands, thus empowering human-robot interaction. This motivated us to explore grounding the problem statement of reorientation on language and hence embed semantic knowledge of the task with the spatial structure of the scene [6].
In this paper, we propose ReorientDiff, a diffusion model-based reorientation pose generation pipeline for solving the task proposed by [3]: picking an object from a cluttered pile and placing it in the target pose specified through language descriptions. The core idea of our approach is to visualize the feasible intermediate poses as distributions. Such a distribution can be captured by a diffusion model and will be conditioned on the object's current and target pose, or in a more general multi-object scenario, on the pile of pickable objects and the occupancy of the target location. Note that diffusion models have also been successfully used for motion planning [7, 8], grasp planning [9], and object rearrangement [10] applications.
To enable interaction using natural language directly, we use pre-trained CLIP embeddings with an object segmentation model to structure object selection, pose prediction, and target object segmentation networks for the task. Considering the intermediate features as a generic scene and target representation in reduced dimensionality, the diffusion model samples reorientation poses conditioned on such features, which are further implicitly refined by a feasibility-score-based discriminator similar to the models used by [11, 3]. We combine a generic classifier-free conditional sampling [12] with classifier-guided sampling [13] to sample from diffusion models. For the tasks, we consider reorientation of objects in the YCB dataset [14] that are feasible for suction grippers. For each selected object, we choose target locations on multiple shelf levels and four possible target orientations. Our method samples reorientation poses in continuous space without any discretization or candidate pose selection and reaches a cumulative success rate of \(96.5\)% as evaluated on selected individual objects.
## II Related Work
### _Object Manipulation: Pick and Place_
While traditional methods have tried to solve the pick-and-place task using grasp planning [15, 16, 17] with known
object geometries or using pose estimation methods [18, 19, 20], the recent literature has focused more on vision-based object manipulation [1, 21]. Solving single-step pick and place tasks is typically achieved by planning grasp poses using segmentation and depth maps of the object, where it is considered that a picked object can be placed within the region of interest (like in a box) [22, 23]. Recent studies have also shown object rearrangement planning capabilities [6, 24] where a target location is sampled based on some user-specified goal. Then the whole pipeline for generating a collision-free trajectory from the current to the target location is planned. Some works have proposed object rearrangement as a long-horizon problem [2] consisting of multiple sequential pick and place actions to achieve a desired configuration.
### _Language Models for Robotics_
Language models like GPT-2 [25] and GPT-3 [26] have proven to be quite effective in grounding the task's semantics with the scene's spatial features using several foundation models. One such foundation model is CLIP [5] which encodes visual and language information into common representation space and has been helpful in learning policies for generalized pick-place tasks in planar tabletop [6] and 3D [24] manipulation and for control of embodied AI agents [27, 28]. Further, language models have also been used in language-conditioned object rearrangement planning [10, 29] and supplying high-level instructions for long-horizon planning [30].
### _Reorientation and Regrasping_
Reorientation is a vital capability required for solving complex manipulation tasks. Prior research has explored this direction by planning to reorient objects using extrinsic supports [31, 32], which enables them to re-grasp the object in a desired way. While [31] proposed a graph neural network structure for pose sequencing and [32] used an end-to-end point-cloud based model for predicting reorientation poses, [33] proposed a heuristic based method for reorienting rock structures in excavation. Recently, ReorientBot [3] was proposed to solve the reorientation task using learned feasibility prediction models and rejection sampling.
### _Generative Models for Robotics_
Generative models like VAEs have been used for planning grasps [11] using the visible point cloud of objects and for constructing embedding spaces for high-level tasks for various downstream planning. Recently, diffusion models have been used extensively in the literature for trajectory planning from imitation data [7, 8] and for generating target poses for language-conditioned object rearrangement tasks [10]. With language-guided scene and video generation applications, such models have been used for generating task-videos for robot learning [34] and generalizing to unseen scenarios [35].
## III Preliminary: Diffusion Models
Consider samples \(x_{0}\) from an unknown data distribution \(q(x_{0})\); diffusion models [36] learn to estimate the distribution by a parameterized model \(p_{\theta}(x_{0})\) using the given samples. The procedure is completed in two steps: the forward and the reverse diffusion processes. The former continuously injects Gaussian noise in \(x_{0}\) to create a Markov chain with latents \(x_{1:K}\) following transitions:
\[q(x_{1:K}|x_{0})=\prod_{k=1}^{K}q(x_{k}|x_{k-1}), \tag{1}\]
where \(q(x_{k}|x_{k-1})=\mathcal{N}(x_{k};\sqrt{1-\beta_{k}}x_{k-1},\beta_{k}\mathbf{I})\) is the per-step noise injection following variance schedule \(\beta_{1},\dots,\beta_{K}\). This leads to the distribution \(q(x_{k}|x_{0})=\mathcal{N}(x_{k};\sqrt{\bar{\alpha}_{k}}x_{0},\ (1-\bar{\alpha}_{k})\ \mathbf{I})\) following notations introduced in [37] as \(\alpha_{k}=1-\beta_{k}\) and \(\bar{\alpha}_{k}=\prod_{i=1}^{k}\alpha_{i}\). Note that \(\bar{\alpha}_{K}\approx 0\) and thus \(x_{K}\sim\mathcal{N}(0,\mathbf{I})\). The reverse diffusion learns to denoise the data starting from
Fig. 1: **Reorientation for precise target placement** The above figure represents the phenomenon of reorientation in which an object from a cluttered pile has to be placed precisely on a shelf (target position shown). As the object cannot be directly placed at the target location, our proposed method, ReorientDiff, samples a reorientation pose using a learned conditional distribution by a diffusion model. Such a proposed reorientation pose acts as a transition for facilitating successful placement. We also consider and take advantage of the object dynamics, as introduced by Wada _et al._[3], by which we ensure that un-grasping an object in an unstable pose will eventually allow the object to settle at some favourable pose.
\(x_{K}\) and following \(p_{\theta}(x_{k-1}|x_{k})\ =\ \mathcal{N}(x_{k-1};\mu_{\theta}(x_{k},k),\beta_{k}\mathbf{I})\) where
\[\mu_{\theta}(x_{k},k)=\frac{1}{\sqrt{\alpha_{k}}}\Big{(}x_{k}-\frac{\beta_{k}}{ \sqrt{1-\tilde{\alpha}_{k}}}\epsilon_{\theta}(x_{k},k)\Big{)}. \tag{2}\]
The parameterized model \(\epsilon_{\theta}(x_{k},k)\) is called the score-function, and it is trained to predict the perturbations and the noising schedule by the score-matching objective [38]
\[\arg\min_{\theta}\mathbb{E}_{x_{0}\sim q,\epsilon\sim\mathcal{N}(0,\mathbf{I })}\Big{[}\|\epsilon-\epsilon_{\theta}(\sqrt{\tilde{\alpha}_{k}}x_{0}+\sqrt{1 -\tilde{\alpha}_{k}}\epsilon)\|^{2}\Big{]} \tag{3}\]
In particular, such a score function represents the gradients of the learned probability distribution as
\[\nabla_{x_{k}}\log p_{\theta}(x_{k})=-\frac{1}{\sqrt{1-\tilde{\alpha}_{k}}} \epsilon_{\theta}(x_{k},k). \tag{4}\]
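As an illustration, a minimal PyTorch sketch of the training step implied by (3); the noise-prediction network `eps_model(x_k, k)`, the batch layout and the \(\beta\) schedule are illustrative assumptions:

```python
import torch

def diffusion_loss(eps_model, x0, alpha_bar):
    """One score-matching step: sample k and eps, noise x0 forward, regress the noise."""
    B = x0.shape[0]
    K = alpha_bar.shape[0]
    k = torch.randint(0, K, (B,), device=x0.device)                  # random diffusion step per sample
    a = alpha_bar.to(x0.device)[k].view(B, *([1] * (x0.dim() - 1)))  # broadcast \bar{alpha}_k
    eps = torch.randn_like(x0)                                       # target noise
    x_k = torch.sqrt(a) * x0 + torch.sqrt(1.0 - a) * eps             # forward diffusion q(x_k | x_0)
    return ((eps - eps_model(x_k, k)) ** 2).mean()                   # Eq. (3)

# usage (illustrative): a linear beta schedule and its cumulative product
betas = torch.linspace(1e-4, 2e-2, 256)
alpha_bar = torch.cumprod(1.0 - betas, dim=0)
# loss = diffusion_loss(eps_model, batch_of_samples, alpha_bar); loss.backward()
```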
## IV Reorientation
Following the previous environment setup by Wada _et al._[3], we construct the reorientation scenario as a task of i) selecting an object of interest from a pile of cluttered objects, ii) calculating feasible grasp poses for picking, iii) calculating grasp poses for placement with prior knowledge of the mesh of the selected object and iv) finding suitable reorientation poses using our proposed pipeline based on diffusion models. This section describes the pipeline for creating a generic scene and task embedding space, followed by generating grasp poses and training the feasibility score models.
### _Constructing Generic Scene-Task Representations_
We define a scene as the location and occupancy of the place from where a target object should be picked and a task as the language prompt containing the descriptions for selecting the target object and deciding placement poses. A top-down RGB-D camera provides an image \(\mathcal{I}\in\mathbb{R}^{H\times W\times 3}\) and a heightmap \(\mathcal{H}\in\mathbb{R}^{H\times W\times 1}\) as the description of the pile. Motivated from previous work [6, 24] on learning semantic and spatial embeddings, we use pre-trained CLIP foundation model for obtaining semantic embeddings from the image \(\mathcal{I}\) and language \(\mathcal{L}\), and combine them with spatial embeddings for target object segmentation to get a joint embedding \(\Phi\) as generic scene-task representation as shown in Fig. 2. The embedding is further used to predict the target object as a one-hot vector of all the objects of interest and the final placement pose.
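A minimal sketch of how such a joint embedding can be assembled, assuming an OpenAI-style CLIP model exposing `encode_image`/`encode_text`; the fusion strategy, feature sizes and output heads are illustrative assumptions, the CLIP encoders are frozen as in Fig. 2, and the segmentation decoder with skip connections is omitted for brevity:

```python
import torch
import torch.nn as nn
import torchvision

class SceneTaskEncoder(nn.Module):
    """Fuse frozen CLIP semantics with a trainable spatial RGB-D encoder into Phi."""

    def __init__(self, clip_model, num_objects, spatial_dim=512):
        super().__init__()
        self.clip = clip_model.eval()                            # frozen semantic encoders
        for p in self.clip.parameters():
            p.requires_grad_(False)
        spatial = torchvision.models.resnet50()
        spatial.conv1 = nn.Conv2d(4, 64, 7, 2, 3, bias=False)    # RGB-D input (4 channels)
        spatial.fc = nn.Linear(spatial.fc.in_features, spatial_dim)
        self.spatial = spatial
        fused_dim = spatial_dim + 2 * self.clip.visual.output_dim
        self.object_head = nn.Linear(fused_dim, num_objects)     # which object to pick
        self.pose_head = nn.Linear(fused_dim + num_objects, 7)   # placement pose (xyz + quaternion)

    def forward(self, rgb, rgbd, tokens):
        with torch.no_grad():
            sem_img = self.clip.encode_image(rgb).float()        # semantic image embedding
            sem_txt = self.clip.encode_text(tokens).float()      # semantic language embedding
        phi = torch.cat([self.spatial(rgbd), sem_img, sem_txt], dim=-1)
        obj_logits = self.object_head(phi)
        pose = self.pose_head(torch.cat([phi, obj_logits.softmax(-1)], dim=-1))
        return phi, obj_logits, pose
```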
### _Sampling Grasp Poses_
We generate grasp poses by following the classical approach of converting the heightmap into a point cloud representation and eventually to a point-normal representation. The predicted target object segmentation of the scene is then used to obtain the surface normals of the target object. After performing an edge masking using the Laplacian of the surface normals, the remaining point-normals on the surface are feasible grasp poses. While we sample grasp poses \(\eta_{1}\) for picking the object from the pile in the aforementioned manner, we assume that we have the mesh of the selected object for sampling grasp poses \(\eta_{2}\) for placing the object at the predicted pose.
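A minimal numpy sketch of this grasp-sampling procedure (the grid spacing, edge threshold and camera convention are illustrative assumptions; `seg_mask` is the boolean target-object segmentation):

```python
import numpy as np

def sample_grasp_poses(heightmap, seg_mask, pixel_size=0.002, edge_thresh=0.5):
    """Return candidate (point, normal) grasps on the segmented object surface."""
    H, W = heightmap.shape
    xs, ys = np.meshgrid(np.arange(W) * pixel_size, np.arange(H) * pixel_size)
    # surface normals from the heightmap gradient: n ~ (-dz/dx, -dz/dy, 1), normalized
    dzdy, dzdx = np.gradient(heightmap, pixel_size)
    normals = np.stack([-dzdx, -dzdy, np.ones_like(heightmap)], axis=-1)
    normals /= np.linalg.norm(normals, axis=-1, keepdims=True)
    # Laplacian-like edge measure on the normals; mask out high-curvature (edge) regions
    edges = sum(np.abs(np.gradient(np.gradient(normals[..., c], axis=a), axis=a))
                for c in range(3) for a in (0, 1))
    valid = seg_mask & (edges < edge_thresh)
    points = np.stack([xs, ys, heightmap], axis=-1)
    return points[valid], normals[valid]
```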
### _Training Feasibility Score Models_
Following prior works [11, 3, 29], a feasibility prediction model is important for early-evaluation and rejection of unfavourable samples. Such a feasibility model predicts the probability of success of a given grasp pose in successfully grasping an object in some candidate pose for a specified scene representation. The phenomenon of grasp success evaluation in dynamic reorientation pose, as addressed by [3], is of particular interest for our setup. Modelling dynamics for every object is indeed non-trivial and adds to the complexity; hence the feasibility model implicitly takes care of the dynamics of the object after deactivating the grasp. For checking feasibility or the probability of success (\(y\)) of
Fig. 2: **Joint Embedding Construction** We use a pre-trained CLIP-ResNet50 and BPE-based Tokenizer with CLIP language model for obtaining a semantic embedding of the tabletop RGB image and instruction prompt, respectively. While keeping CLIP layers frozen, we train another ResNet50 encoder for spatial RGB-D observations and combine them with the semantic embeddings to obtain joint embeddings for visual-language inputs. We train these latent representations with respect to the object of interest, placement pose, and object location (segmentation) predictions. It is worth noting that the predicted object information is also used for predicting the placement pose and the target object segmentation. Further, the addition of skip-connection also ensures that the segmentation map construction is accurate while filling up the embedding vector with only the necessary information. The proposed pipeline shown above creates a latent space that is consistent with the three aspects of interest by minimizing information loss.
sampled grasps for candidate reorientation poses \(\mathbf{q}\), we train two models:
* For predicting success of reorientation from the current pose in a pile to a candidate pose given pick grasp poses (\(\eta_{1}\)) and scene representation, denoted as \(\mathcal{M}_{1}(y|\eta_{1},\mathbf{q},\Phi)\)
* For predicting success of post-grasp deactivation pose from the candidate pose and placement grasp poses (\(\eta_{2}\)), denoted as \(\mathcal{M}_{2}(y|\eta_{2},\mathbf{q},\Phi)\)
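A minimal sketch of one such feasibility-score model, applicable to either \(\mathcal{M}_{1}\) or \(\mathcal{M}_{2}\) (the flattened pose/grasp parameterization and the layer sizes are illustrative assumptions):

```python
import torch
import torch.nn as nn

class FeasibilityModel(nn.Module):
    """M_i(y | eta, q, Phi): probability that a grasp-pose pair leads to success."""

    def __init__(self, phi_dim=512, grasp_dim=7, pose_dim=7, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(phi_dim + grasp_dim + pose_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, eta, q, phi):
        return torch.sigmoid(self.net(torch.cat([eta, q, phi], dim=-1))).squeeze(-1)

# trained with binary cross-entropy on recorded success labels, e.g.
# loss = nn.functional.binary_cross_entropy(model(eta, q, phi), success_labels)
```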
## V ReorientDiff: Diffusion for Reorientation
We aim to generate intermediate reorientation poses for the target object, which enables successive placement at the desired pose and is reachable from the current pose. We introduce a diffusion model based approach to sample most probable successful reorientation poses (\(\mathbf{q}\)) conditioned on the scene representation priors (\(\Phi\)), denoted as \(p(\mathbf{q}|\Phi)\), which already contains the spatial and semantic information about the scene and the task. The denoising process can be further flexibly conditioned by sampling from modified distributions of the form
\[p_{h}(\mathbf{q})\propto p(\mathbf{q}|\Phi)h(\mathbf{q},\Phi), \tag{5}\]
where \(h(\mathbf{q},\Phi)\) can represent several grasp success probability heuristics. By separating the grasp success from reorientation candidate sampling, the diffusion model trained for reorientation poses can be reused for varied selection of picking (\(\eta_{1}\)) and placement grasp poses (\(\eta_{2}\)).
### _Classifier-free Conditional Pose Generation_
Following the distribution defined in (5), we use classifier-free guidance [12] to sample high-likelihood reorientation poses for a particular scene-task representation. We train a score-network [38], \(\epsilon_{\theta}(\mathbf{q}_{k},\Phi)\propto\nabla_{\mathbf{q}_{k}}\log p( \mathbf{q}_{k}|\Phi)\), to denoise from \(\mathbf{q}_{K}\sim\mathcal{N}(\mathbf{0},\mathbf{I})\) to possible reorientation poses \(\mathbf{q}_{0}\) from a \(K\)-step reverse diffusion denoising process. For each step, we calculate \(\tilde{\epsilon}_{k}\) as
\[\tilde{\epsilon}_{k}=\epsilon_{\theta}(\mathbf{q}_{k},\Phi)+w_{c}\Big{(} \epsilon_{\theta}(\mathbf{q}_{k},\Phi)-\epsilon_{\theta}(\mathbf{q}_{k},\phi) \Big{)} \tag{6}\]
The scalar \(w_{c}\) implicitly guides the reverse-diffusion towards poses that best satisfy the scene-task representations. Further, we calculate the successive samples for the next \((k-1)^{th}\) step using the DDIM [37] sampling strategy and \(\tilde{\epsilon}_{k}\) as follows:
\[\tilde{\mathbf{q}}_{k-1}\longleftarrow\sqrt{\bar{\alpha}_{k-1}}\Big{(}\frac{\mathbf{q}_{k}-\sqrt{1-\bar{\alpha}_{k}}\ \tilde{\epsilon}_{k}}{\sqrt{\bar{\alpha}_{k}}}\Big{)}+\sqrt{1-\bar{\alpha}_{k-1}}\ \tilde{\epsilon}_{k} \tag{7}\]
where \(\bar{\alpha}_{k}\) is as described in Section III.
### _Feasibility Guided Pose Refinement_
We use the two feasibility-score prediction models (\(\mathcal{M}_{1}\) and \(\mathcal{M}_{2}\)), which are pre-trained to predict grasp feasibility for (picking grasp, reorientation pose) pairs and (placement grasp, reorientation pose) pairs, respectively. In such a case, the scores can be converted into probability distributions for each heuristic, defined, for each \(i=1,2\), as
\[h_{i}\equiv\ p(y=1|\eta_{i},\mathbf{q},\Phi)|_{\mathcal{M}_{i}}=\text{exp} \Big{(}-(1-\mathcal{M}_{i}(y|\eta_{i},\mathbf{q},\Phi))^{2}\Big{)}\]
Following classifier-based guidance [13] formulation for the heuristics, the reverse diffusion can be formulated as:
\[p_{h}(\mathbf{q}_{k}|\mathbf{q}_{k+1},y,\Phi)\propto p(y|\eta_{1},\hat{\mathbf{q}}_{0}^{k},\Phi)|_{\mathcal{M}_{1}}\ p(y|\eta_{2},\hat{\mathbf{q}}_{0}^{k},\Phi)|_{\mathcal{M}_{2}} \tag{8}\]
where, \(\hat{\mathbf{q}}_{0}^{k}\) is the sample proposed at diffusion step \(k\) and defined as:
\[\hat{\mathbf{q}}_{0}^{k}=\frac{\mathbf{q}_{k}-\sqrt{1-\bar{\alpha}_{k}}\ \tilde{\epsilon}_{k}}{\sqrt{\bar{\alpha}_{k}}} \tag{9}\]
Considering Taylor first order approximations for heuristics and standard reverse process Gaussian \((\mu_{\theta}(\mathbf{q}_{k},k,\Phi),\beta_{k}\mathbf{I})\) as described in Section III, we get the new mean (\(\mu_{\theta,h}(\mathbf{q}_{k},k,\Phi)\)) for the distribution \(p_{h}(\mathbf{q}_{k}|\mathbf{q}_{k+1},y,\Phi)\) in (8) as:
\[\mu_{\theta,h}(\mathbf{q}_{k},k,\Phi)\] \[=\mu_{\theta}(\mathbf{q}_{k},k,\Phi)+\beta_{k}\sum_{i=1}^{2}w_{i} \nabla_{\mathbf{q}_{k}}\log p(y|\eta_{i},\mathbf{q}_{k},\Phi)|_{\mathcal{M}_{i}}\] \[=\mu_{\theta}(\mathbf{q}_{k},k,\Phi)-\beta_{k}\sum_{i=1}^{2}w_{i} \nabla_{\mathbf{q}_{k}}\Big{[}1-\mathcal{M}_{i}(y|\eta_{i},\hat{\mathbf{q}}_ {0}^{k},\Phi)\Big{]}^{2}.\]
In view of (2), we then obtain the modified score
\[\epsilon_{k}\longleftarrow\tilde{\epsilon}_{k}-\sqrt{1-\bar{\alpha}_{k}}\ g_{k}\]
Fig. 3: **Forward and Reverse Diffusion Process** The above figure shows the forward diffusion and the reverse denoising and sampling process of ReorientDiff. As described in Section V, following classifier-free guidance will result in high-likelihood samples with high-variance in terms of success feasibility of the samples. Using the feasibility score gradients, we realize an implicit iterative pose refinement, as marked by the blue box in the figure. This significantly decrease variance and ensure high success feasibility of the samples.
where \(g_{k}=-\beta_{k}\sum_{i=1}^{2}w_{i}\nabla_{\mathbf{q}_{k}}\Big{[}1-\mathcal{M}_{ i}(y|\eta_{i},\hat{\mathbf{q}}_{0}^{k},\Phi)\Big{]}^{2}\). We notice that injecting noise to \(g_{k}\), as in stochastic DDIM, can slightly improve the performance. We calculate the final \(\mathbf{q}_{k-1}\) using the refined \(\epsilon_{k}\) in (7). A visual clarification of the forward and reverse diffusion is shown in Fig. 3.
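A minimal sketch of one guided reverse step combining (6), (7) and the feasibility gradients; PyTorch autograd is used for \(\nabla_{\mathbf{q}_{k}}\). The score-network interface `eps_model(q, k, cond)` (with `None` for the unconditioned branch), the guidance weights and the schedules are illustrative assumptions, with `score_models` the trained \((\mathcal{M}_{1},\mathcal{M}_{2})\) and `grasps` the pair \((\eta_{1},\eta_{2})\):

```python
import torch

def guided_ddim_step(eps_model, score_models, grasps, q_k, phi, k, alpha_bar, betas,
                     w_c=2.0, w_feas=(1.0, 1.0)):
    """One reverse DDIM step with classifier-free and feasibility-score guidance."""
    a_k = alpha_bar[k]
    a_prev = alpha_bar[k - 1] if k > 0 else torch.tensor(1.0)
    q_k = q_k.detach().requires_grad_(True)

    # Eq. (6): classifier-free guidance between conditioned and unconditioned scores
    eps_cond, eps_uncond = eps_model(q_k, k, phi), eps_model(q_k, k, None)
    eps_tilde = eps_cond + w_c * (eps_cond - eps_uncond)

    # Eq. (9): proposed clean sample, then feasibility penalty and its gradient g_k
    q0_hat = (q_k - torch.sqrt(1 - a_k) * eps_tilde) / torch.sqrt(a_k)
    penalty = sum(w * (1 - m(eta, q0_hat, phi)).pow(2).sum()
                  for w, m, eta in zip(w_feas, score_models, grasps))
    (grad,) = torch.autograd.grad(penalty, q_k)
    g_k = -betas[k] * grad

    # modified score, then the DDIM update of Eq. (7)
    eps_k = eps_tilde - torch.sqrt(1 - a_k) * g_k
    q0 = (q_k - torch.sqrt(1 - a_k) * eps_k) / torch.sqrt(a_k)
    return (torch.sqrt(a_prev) * q0 + torch.sqrt(1 - a_prev) * eps_k).detach()
```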
## VI Results: Simulation
Based on the environment setup as discussed in Section IV, we create datasets, train diffusion and feasibility score models and evaluate them in simulation for proper placement conditions.
### _Dataset Generation and Training_
We use PyBullet [39] and an OMPL [40] based motion planner to solve for a collision-free path between the current pose and a candidate reorientation pose and from the reorientation pose to the ground-truth placement pose for a diverse set of YCB objects and target locations. We sampled approximately \(40000\) candidate poses following Wada _et al._ [3]. The goal properties were converted into modular language instructions, and the success of pick and place for both steps was recorded. The scene and task properties were used to construct the joint visual-language embedding space, which was further used to train the feasibility score models using binary success labels. Eventually, we train a conditional diffusion model using only the successful reorientation poses. Such a diffusion model is reusable for a diverse set of grasp poses based on the feasibility score models.
### _Performance Evaluation: Scene-Task Representation_
To evaluate the quality of the scene-task embedding network, we analyze the accuracy of the object selection and placement pose prediction along with the error in the predicted segmentation. We show a visual analysis in Fig. 4 where the output segmentation and the predicted placement pose in the shelf are shown for three scenes and tasks. For accurate shelf-level estimation, we round each object's predicted height to the nearest shelf-level height, and a similar post-processing is conducted for the object orientation. To add complexity, although we consider only four orientations: front, back, left and right, we discretize the possible orientations into \(8\) possible options and round the predicted orientation to the nearest option.
Numerically, the object selection network was \(100\%\) accurate, and the number of pixels wrongly classified was about \(1\%\) of the complete image on average over \(100\) random samples. The average error in predicting the height of the target placement after post-processing is around \(8\) mm, and the mean error in the yaw angle of the predicted pose is \(0.3\) rad.
### _Performance Evaluation: Diffusion with Guidance_
The trained classifier-free conditional diffusion model and the score feasibility models are used to perform the reverse diffusion using the classifier-free guidance with and without feasibility score guidance. Experiments comparing the performance of both the methods are shown in Fig. 5 for a set of YCB Objects [14] and different scene-task scenarios where only \(50\) candidate poses are sampled and top \(10\) high-likelihood poses are selected. The comparison shows that while the classifier-free guidance is good enough to sample high-likelihood reorientation poses, the primary purpose of the feasibility score gradients is to reduce the variance in the pose generation and ensure high success probability. A numerical analysis of the overall success is shown and compared with the rejection sampling based baseline [3] in Table I.
The reorientation success percentage holds different relevance as compared to the baseline. The baseline does two step reverse rejection sampling where reorientation search is conducted over candidates which are feasible for placement, so there might be a scenario where there is no solution possible. For the case of ReorientDiff, the reorientation success measures the capability of the diffusion model to generalize to poses which ensure reorientation and scope for future placement. Higher reorientation success and lower placement success is an indication that the model is short-sighted and is giving importance to a single step success metric. From Table I, we ensure high reorientability success along with better placement success, even without any candidate pose discretization. The overall success is based
\begin{table}
\begin{tabular}{c|c|c|c} \hline \multirow{2}{*}{**Method**} & **Success (\%)** & **Success (\%)** & **Success (\%)** \\ & **Reorient** & **Place** & **Overall** \\ \hline ReorientBot & 97.9 & 95.1 & 93.2 \\ ReorientDiff & 97.4 & 86.3 & 85.8 \\ (w/o Guide) & **98.9** & **96.5** & **94.8** \\ \hline \end{tabular}
\end{table} TABLE I: Success evaluation of the proposed method compared to the rejection-sampling-based baseline. The ReorientDiff algorithm was tested on more than 300 different scene-task settings with an equal distribution of the selected objects and all the orientations. A task is considered a success if it is completed at least once in 4 random seeds.
Fig. 4: **Visual Analysis of Scene-Task Network Performance** The scene-task network maps the visual (row 2) image of the pile (row 1) and language (bottom row) inputs to a feature space which is used to predict the placement location (row 4) and target object segmentation (row 3).
on the accurate placement of the object from the reoriented pose, and it represents the successful completion of a task. The metric is measured by calculating the difference between the desired pose and the pose after final placement.
### _Performance Evaluation: K-Step Reverse Diffusion_
Sampling from a trained diffusion model is flexible and can be performed with different levels of discretization between \(x_{K}\sim\mathcal{N}(0,\mathbf{I})\) and meaningful reorientation poses. We repeat the complete analysis for multiple values of the number of reverse denoising steps \(K\), as shown in Table II. Reducing the sampling resolution degrades overall performance, but even with far fewer denoising steps ReorientDiff reaches a decent performance.
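A common way to sample with fewer steps than the model was trained on is to run the reverse process over a strided subsequence of the training timesteps, as in the sketch below; the `denoise_step` call is a hypothetical stand-in for the model's reverse update.

```python
import numpy as np

def strided_timesteps(k_train=256, k_sample=50):
    """Decreasing subsequence of denoising steps for faster sampling."""
    ts = np.linspace(0, k_train - 1, k_sample).round().astype(int)
    return ts[::-1]  # e.g. [255, 250, ..., 5, 0]

# x = torch.randn(batch, pose_dim)          # x_K ~ N(0, I)
# for t in strided_timesteps(256, 50):
#     x = denoise_step(x, t, cond_emb)      # hypothetical reverse-diffusion update
```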
Following the performance analysis, we measured the time taken to plan a successful reorientation pose from a given scene and the corresponding task information. The recorded timings for all of our ablations as well as the baseline are provided in Table III.
Our findings show that ReorientDiff is computationally heavy due to the gradient computations in the reverse denoising steps. Without guidance from the feasibility-score models, classifier-free guidance requires a similar time as the baseline, ReorientBot. As we decrease the discretization resolution, the planning time decreases significantly with some trade-off in performance, as shown in Table II. We believe that with higher-order solvers, such as the one proposed in [41], a level of performance similar to ReorientDiff (with \(K=256\)) could be achieved at the computational cost of \(K=50\); such an analysis is, however, out of the scope of the proposed methodology. Hence, from our visual and empirical analyses, ReorientDiff demonstrates that formulating reorientation as learning a conditional distribution is an effective way to move towards more generalizable object manipulation.
## VII Conclusion
Diffusion models are powerful generative models capable of modeling (conditional) distributions. The proposed method, ReorientDiff, exploits the capabilities of such models to predict reorientation poses conditioned on a compact scene-task representation embedding containing information about the target object and its placement location. Further, the samples are refined using learned feasibility-score models to reduce uncertainty and ensure the success of the planned intermediate poses. Considering as few as \(10\) reorientation poses, we achieved an overall success rate of \(96.5\)% across a variety of objects. Incorporating more efficient sampling schemes and improving generalization to unseen objects and placement goals are potential directions for future work.
\begin{table}
\begin{tabular}{c|c|c|c} \hline \hline
**ReorientDiff K** & \begin{tabular}{c} **Success (\%)** \\ **Reorient** \\ \end{tabular} & \begin{tabular}{c} **Success (\%)** \\ **Place** \\ \end{tabular} &
\begin{tabular}{c} **Success (\%)** \\ **Overall** \\ \end{tabular} \\ \hline \(K=256\) & **98.9** & **96.5** & **94.8** \\ \(K=100\) & 99.3 & 92.4 & 91.5 \\ \(K=50\) & 97.5 & 91.1 & 88.6 \\ \hline \hline \end{tabular}
\end{table} TABLE II: Success evaluation with different levels of discretization while sampling using ReorientDiff.
Fig. 5: **Reverse Diffusion for Reorientation Pose Generation** The reverse sampling process at four steps (\(k=100,64,32,0\), with \(K=256\)) is shown for four different scene-task scenarios comprising the Cracker Box, Mustard Bottle and Sugar Box in different target orientations. The scenes are shown on the left side of every sub-figure and consist of the pile with the target object and the predicted placement location on the shelf. The language prompt defining each of the tasks is given below each sub-figure. It consists of either an absolute (the object’s name) or a relative (heaviest/lightest) reference to the object and details about the target placement.
\begin{table}
\begin{tabular}{c|c} \hline \hline
**Method** &
\begin{tabular}{c} **Planning** \\ **Time (sec)** \\ \end{tabular} \\ \hline ReorientBot & 2.5 \\ ReorientDiff (w/o Guide) & 2.7 \\
**ReorientDiff @ \(K=256\)** & **5.3** \\ ReorientDiff @ \(K=100\) & 2.5 \\ ReorientDiff @ \(K=50\) & 1.5 \\ \hline \hline \end{tabular}
\end{table} TABLE III: Computational analysis of the planning time for finding a suitable reorientation pose for the proposed method, ReorientDiff, along with the baseline and all conducted ablations. |
2309.07430 | Adapted Large Language Models Can Outperform Medical Experts in Clinical
Text Summarization | Analyzing vast textual data and summarizing key information from electronic
health records imposes a substantial burden on how clinicians allocate their
time. Although large language models (LLMs) have shown promise in natural
language processing (NLP), their effectiveness on a diverse range of clinical
summarization tasks remains unproven. In this study, we apply adaptation
methods to eight LLMs, spanning four distinct clinical summarization tasks:
radiology reports, patient questions, progress notes, and doctor-patient
dialogue. Quantitative assessments with syntactic, semantic, and conceptual NLP
metrics reveal trade-offs between models and adaptation methods. A clinical
reader study with ten physicians evaluates summary completeness, correctness,
and conciseness; in a majority of cases, summaries from our best adapted LLMs
are either equivalent (45%) or superior (36%) compared to summaries from
medical experts. The ensuing safety analysis highlights challenges faced by
both LLMs and medical experts, as we connect errors to potential medical harm
and categorize types of fabricated information. Our research provides evidence
of LLMs outperforming medical experts in clinical text summarization across
multiple tasks. This suggests that integrating LLMs into clinical workflows
could alleviate documentation burden, allowing clinicians to focus more on
patient care. | Dave Van Veen, Cara Van Uden, Louis Blankemeier, Jean-Benoit Delbrouck, Asad Aali, Christian Bluethgen, Anuj Pareek, Malgorzata Polacin, Eduardo Pontes Reis, Anna Seehofnerova, Nidhi Rohatgi, Poonam Hosamani, William Collins, Neera Ahuja, Curtis P. Langlotz, Jason Hom, Sergios Gatidis, John Pauly, Akshay S. Chaudhari | 2023-09-14T05:15:01Z | http://arxiv.org/abs/2309.07430v5 | # Clinical Text Summarization: Adapting Large Language Models Can Outperform Human Experts
###### Abstract
Sifting through vast textual data and summarizing key information from electronic health records (EHR) imposes a substantial burden on how clinicians allocate their time. Although large language models (LLMs) have shown immense promise in natural language processing (NLP) tasks, their efficacy on a diverse range of clinical summarization tasks has not yet been rigorously demonstrated. In this work, we apply domain adaptation methods to eight LLMs, spanning six datasets and four distinct clinical summarization tasks: radiology reports, patient questions, progress notes, and doctor-patient dialogue. Our thorough quantitative assessment reveals trade-offs between models and adaptation methods in addition to instances where recent advances in LLMs may not improve results. Further, in a clinical reader study with ten physicians, we show that summaries from our best-adapted LLMs are preferable to human summaries in terms of completeness and correctness. Our ensuing qualitative analysis highlights challenges faced by both LLMs and human experts. Lastly, we correlate traditional quantitative NLP metrics with reader study scores to enhance our understanding of how these metrics align with physician preferences. Our research marks the first evidence of LLMs outperforming human experts in clinical text summarization across multiple tasks. This implies that integrating LLMs into clinical workflows could alleviate documentation burden, empowering clinicians to focus more on personalized patient care and the inherently human aspects of medicine.
## 1 Introduction
Documentation plays an indispensable role in the practice of healthcare. Currently, clinicians spend significant time summarizing vast amounts of textual information--whether it be compiling diagnostic reports, writing progress notes, or synthesizing a patient's treatment history across different specialists [4, 27, 33]. Even for experienced physicians with a high level of expertise, this intricate task naturally introduces the possibility for errors, which can be detrimental in a field where precision is paramount [7, 31, 87].
The widespread adoption of electronic health records (EHR) has expanded clinical documentation workload, directly contributing to increasing stress and clinician burnout [26, 32, 63]. Recent data indicates that physicians can expend two hours on documentation for each hour of patient interaction [69]. Meanwhile, documentation responsibilities for nurses consume up to 60% of their time and account for significant work stress [10, 25, 41]. These tasks divert attention from direct patient care, leading to worse outcomes for patients as well as disillusionment and decreased job satisfaction for clinicians [4, 64, 66, 77].
In recent years, large language models (LLMs) have gained remarkable traction, leading to widespread adoption of models such as ChatGPT [8], which excel at information retrieval, nuanced understanding, and text generation [9, 93]. While excellent LLM benchmarks for general NLP tasks exist [46, 94], they do not evaluate performance on relevant clinical tasks. Addressing this limitation presents a tremendous opportunity to accelerate the process of clinical text summarization, hence alleviating documentation burden and improving patient care.
Crucially, machine-generated summaries must be non-inferior to those of seasoned clinicians--especially when used to support sensitive clinical decision-making. Recent work in clinical natural language processing (NLP) has demonstrated potential on medical text [75; 86], adapting to the medical domain by either training a new model [68; 79], fine-tuning an existing model [76; 81], or supplying task-specific examples in the model prompt [53; 81]. However, adapting LLMs to summarize a diverse set of clinical tasks has not been thoroughly explored, nor has non-inferiority to humans been achieved on this task. With the overarching objective of bringing LLMs closer to clinical readiness, we aim to bridge the gap between theoretical promise and practical utility. This culminates in the following contributions:
* We implement adaptation methods across eight open-source and proprietary LLMs for four distinct summarization tasks comprising six datasets. To our knowledge, the subsequent evaluation via NLP metrics is the most comprehensive assessment of contemporary LLMs for clinical text summarization.
* Our exploration illustrates the stark benefit of model adaptation over zero-shot prompting and delves into a myriad of trade-offs concerning different models and adaptation methods, revealing scenarios where advancements in model size, novelty, or domain specificity do not translate to superior performance.
* Through a rigorous clinical reader study with ten physicians, we demonstrate that LLM summaries can surpass human summaries in terms of completeness, correctness and conciseness. This novel finding affirms the non-inferiority of machine-generated summaries in a clinical context.
* We qualitatively analyze summaries to pinpoint challenges faced by both models and humans. Such insights can guide future enhancements of LLMs and their integration into clinical workflows.
* We identify which NLP metrics most correlate with physician preferences on key attributes such as completeness, correctness, and conciseness.
Our results demonstrate that LLMs often outperform human experts for clinical text summarization across the diverse range of documents we evaluate. This implies that LLMs could be leveraged to reduce documentation load and thus support clinicians--not supplant them. Once a summary is provided, clinicians are essential to make treatment recommendations and final decisions. Ultimately, such new tools may improve the clinical workflow [3], resulting in decreased clinician strain and improved patient care. Accelerating tedious tasks will enable healthcare providers to dedicate more time to the essential human facets of medicine, such as fostering patient relationships, understanding their specific goals, and offering personalized advice.
## 2 Related Work
LLMs have demonstrated astounding performance, propelled by both the transformer architecture [82] and increasing scales of data and compute, resulting in widespread adoption of models such as ChatGPT [8]. Although several of the more expansive models, such as GPT-4 [58] and PaLM [15], remain proprietary
Figure 1: Overview. First we quantitatively evaluate each valid combination (\(\times\)) of LLM and adaptation method across four distinct summarization tasks comprising six datasets. We then conduct a clinical reader study in which ten physicians compare summaries of the best model/method against those of a human expert.
and provide access via "black-box" interfaces, there has been a pronounced shift towards open-sourced alternatives such as Llama-2 [78]. These open-source models grant researchers direct access to model weights for customization.
Popular transformer models such as BERT [23] and GPT-2 [61] established the paradigm of self-supervised pretraining on large amounts of general data and then adapting to a particular domain or task by tuning on specific data. One approach is customizing model weights via instruction tuning, a process where language models are trained to generate human-aligned responses given specific instructions [84]. Examples of clinical instruction-tuned models include Med-PALM [68] for medical question-answering or Radiology-GPT [49] and Radiology-Llama2 [49] for radiology tasks. Still other works have proposed generalist models to perform many tasks across the medical domain [54, 79]. To enable domain adaptation with limited computational resources, prefix tuning [45] and low-rank adaptation (LoRA) [35] have emerged as effective methods that require tuning less than 1% of total parameters over a small training set. LoRA has been shown to work well for medical question-answering [76] and summarizing radiology reports [81]. Another adaptation method, requiring no parameter tuning, is in-context learning: supplying the LLM with task-specific examples in the prompt [43]. Because in-context learning does not alter model weights, it can be performed with black-box model access using only a few training examples [43].
Recent work has adapted LLMs for various medical tasks, demonstrating great potential for medical language understanding and generation [48, 75, 79, 86, 91]. Specifically, a broad spectrum of methodologies has been applied to clinical text for specific summarization tasks. One such task is the summarization of radiology reports [13], which aims to consolidate detailed findings from radiological studies into significant observations and conclusions drawn by the radiologist [40]. LLMs have shown promise on this task [81] and other tasks such as summarizing daily progress notes into a concise "problem list" of medical diagnoses [29] or summarization of clinical narratives [2]. Lastly, there has been significant work on summarizing extended conversations between a doctor and patient into patient visit summaries [1, 53, 89].
While the aforementioned contributions incorporate methods to adapt language models, they often include only a small subset of potential approaches and models, and/or they predominantly rely on evaluation via standard NLP metrics. Given the critical nature of medical tasks, demonstrating clinical readiness requires including human experts in the evaluation process. To address this, there have been recent releases of expert evaluations for instruction following [27] and radiology report generation [90]. Other work employs human experts to evaluate synthesized Cochrane review abstracts, demonstrating that NLP metrics are not sufficient to measure summary quality [72]. With this in mind, we extend our comprehensive evaluation of methods and LLMs beyond NLP metrics to incorporate a rigorous clinical reader study across multiple summarization tasks. Our results are the first which demonstrate that adapted LLM summaries are comparable to---and often surpass---those created by human experts.
## 3 Approach
This section describes each LLM, adaptation method, and summarization task as depicted in Figure 1.
### Large language models
We investigate a diverse collection of transformer-based LLMs for clinical summarization tasks. This includes two broad approaches to language generation: sequence-to-sequence (seq2seq) models and autoregressive models. Seq2seq models use an encoder-decoder architecture to map the input text to a generated output, often requiring paired datasets for training. These models have shown strong performance in machine translation [12] and summarization [67]. In contrast, autoregressive models use a decoder-only architecture. They generate tokens sequentially, with each new token conditioned on previous tokens, thus efficiently capturing context and long-range dependencies. Autoregressive models are typically trained with unpaired data, and they are particularly useful for NLP tasks such as text generation, question-answering, and dialogue interactions [8, 14].
We include prominent seq2seq models due to their strong summarization performance [67] and autoregressive models due to their state-of-the-art performance for many general NLP tasks [94]. As shown in Table 1, our
choice of models varies widely with respect to number of parameters (2.7B to 175B) and context length (512 to 32K), i.e. the maximum number of input tokens a model can process. We organize our models into three categories:
**Open-source seq2seq models**. The original T5 "text-to-text transfer transformer" model [62] demonstrated excellent performance in transfer learning using the seq2seq architecture. A derivative model, FLAN-T5 [16, 50], improved performance via instruction prompt tuning. This T5 model family has proven effective for various clinical NLP tasks [44, 81]. Recently, the FLAN-UL2 model [17, 74] was introduced, featuring increased context length (four-fold that of FLAN-T5) and a modified pre-training procedure called unified language learning (UL2).
**Open-source autoregressive models**. The Llama family of LLMs [78] has enabled the proliferation of open-source instruction-tuned models that deliver comparable performance to GPT-3 [8] on many benchmarks despite their smaller sizes. Descendants of this original model have taken additional fine-tuning approaches, such as fine-tuning via instruction following (Alpaca [73]), medical Q&A data (Med-Alpaca [34]), user-shared conversations (Vicuna [14]), and reinforcement learning from human feedback (Llama-2 [78]). Llama-2 allows for two-fold longer context lengths (4,096) relative to our other open-source autoregressive models.
**Proprietary autoregressive models**. We include GPT-3.5 [57] and GPT-4 [58], the latter of which is widely regarded as state-of-the-art on general NLP tasks [94]. Both models offer significantly higher context length than open-source models. Additionally, the proprietary nature of these models raises an interesting point for healthcare, where data and model governance is important--especially if summarization tools are cleared for clinical use by the FDA.
### Adaptation methods
We consider two proven techniques to adapt pre-trained, general purpose LLMs to our domain-specific clinical summarization tasks:
**In-context learning (ICL)**. ICL is a lightweight adaptation method that requires no altering of model weights; instead, one includes a handful of in-context examples within the model prompt [43]. This simple approach provides the model with context, enhancing LLM performance for a particular task or domain [53, 81]. We implement this by choosing, for each sample in our test set, the \(m\) nearest-neighbor training samples in the embedding space of the PubMedBERT model [18]. Note that choosing "relevant" in-context examples typically outperforms choosing examples at random [55]. For a given model and dataset, we use \(m=2^{x}\) examples, where \(x\in\{0,1,2,3,...,M\}\) for \(M\) such that no more than 1% of the \(s=250\) samples are excluded due to prompts exceeding the model's context length. Hence each model's context length limits the allowable number of in-context examples.
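A minimal sketch of this retrieval step is shown below; the PubMedBERT checkpoint name and the mean-pooling choice are illustrative assumptions rather than the exact pipeline used here.

```python
import torch
from transformers import AutoTokenizer, AutoModel

CKPT = "microsoft/BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext"  # indicative name
tok = AutoTokenizer.from_pretrained(CKPT)
enc = AutoModel.from_pretrained(CKPT)

@torch.no_grad()
def embed(texts):
    batch = tok(texts, padding=True, truncation=True, return_tensors="pt")
    hidden = enc(**batch).last_hidden_state            # (B, T, H)
    mask = batch["attention_mask"].unsqueeze(-1)
    pooled = (hidden * mask).sum(1) / mask.sum(1)      # mean pooling over tokens
    return torch.nn.functional.normalize(pooled, dim=-1)

def nearest_examples(test_input, train_inputs, m):
    """Indices of the m training samples closest to the test input."""
    sims = embed(train_inputs) @ embed([test_input]).T  # cosine similarity
    return sims.squeeze(-1).topk(m).indices.tolist()
```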
**Quantized low-rank adaptation (QLoRA)**. Low-rank adaptation (LoRA) [35] has emerged as an effective, lightweight approach for fine-tuning LLMs by altering a small subset of model weights--often \(<0.1\%\)[81]. LoRA inserts trainable rank decomposition matrices into the attention layers; then, using a training set of
\begin{table}
\begin{tabular}{l|c c|c|c c}
**Model** & **Context** & **Parameters** & **Proprietary?** & **Seq2seq** & **Autoreg.** \\ \hline FLAN-T5 & 512 & 2.7B & - & & \\ FLAN-UL2 & 2,048 & 20B & - & & \\ Alpaca & 2,048 & 7B & - & - & \\ Med-Alpaca & 2,048 & 7B & - & - & \\ Vicuna & 2,048 & 7B & - & - & \\ Llama-2 & 4,096 & 7B, 13B & - & - & \\ GPT-3.5 & 16,384 & 175B & & - & \\ GPT-4 & 32,768 & unknown & & - & \\ \end{tabular}
\end{table}
Table 1: We quantitatively evaluate eight models, including state-of-the-art sequence-to-sequence and autoregressive models. Unless specified, models are open-source (vs. proprietary).
samples, this method performs gradient descent on the inserted matrices while keeping the original model weights frozen. Compared to training model weights from scratch, this method is much more efficient with respect to both computational requirements and the volume of training data required. Recently, QLoRA [22] has been introduced as a more memory-efficient variant of LoRA, employing 4-bit quantization to enable the fine-tuning of larger LLMs given the same hardware constraints. This quantization does not impact performance [22]; as such, we use QLoRA for all model training. Note that QLoRA cannot be used to fine-tune proprietary models on our consumer hardware, as we do not have access to model weights.
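In practice this setup can be expressed with the HuggingFace peft and bitsandbytes libraries; the sketch below uses Llama-2 (7B) and illustrative LoRA hyperparameters (rank, alpha, target modules), which are assumptions rather than the exact values used in this study.

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

bnb = BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_quant_type="nf4",
                         bnb_4bit_compute_dtype=torch.bfloat16)
model = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-hf",
                                             quantization_config=bnb, device_map="auto")
model = prepare_model_for_kbit_training(model)

lora = LoraConfig(r=16, lora_alpha=32, lora_dropout=0.05, task_type="CAUSAL_LM",
                  target_modules=["q_proj", "v_proj"])   # attention projections only
model = get_peft_model(model, lora)
model.print_trainable_parameters()   # typically well under 1% of all weights
```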
To demonstrate the benefit of adaptation methods, we further include the baseline zero-shot prompting, i.e. \(m=0\) in-context examples.
### Data
To robustly evaluate adapted LLM performance on clinical text summarization, we choose four distinct summarization tasks, comprising six open-source datasets. As depicted in Table 2, each dataset has a varying number of samples, token lengths, and lexical variance. Here we calculate lexical variance as the ratio of unique words to total words across the entire dataset; hence a higher ratio indicates less repetition and more lexical diversity. We describe each task and dataset below. For examples of each task, please see Figures 9, A5, A6, A7, and A8.
#### 3.3.1 Radiology reports
Radiology report summarization takes as input the findings section of a radiology study containing detailed exam analysis and results. The goal is to summarize these findings into an impression section, which concisely captures the most salient, actionable information from the study. We consider three datasets for this task, where both reports and findings are created by attending physicians as part of routine clinical care.
Open-i[21] contains de-identified narrative chest x-ray reports from the Indiana Network for Patient Care 10 database. From the initial set of 4K studies, Demner-Fushman _et al.[21]_ selected a final set of 3.4K reports based on the quality of imaging views and diagnostic content.
MIMIC-CXR[36] contains chest x-ray studies accompanied by free-text radiology reports acquired at the Beth Israel Deaconess Medical Center between 2011 and 2016. We utilize a dataset of 128K reports [13] as preprocessed by the RadSum23 shared task at BioNLP 2023 [19, 20].
MIMIC-III[37] contains 67K radiology reports spanning seven anatomies (head, abdomen, chest, spine, neck, sinus, and pelvis) and two modalities: magnetic resonance imaging (MRI) and computed tomography (CT). This dataset originated from patient stays in critical care units of the Beth Israel Deaconess Medical Center between 2001 and 2012. We utilize a preprocessed version via RadSum23 [19, 20]. Compared to x-rays, MRIs and CT scans portray more information at a higher resolution. This leads to longer reports (Table 2), rendering MIMIC-III a more challenging summarization dataset than Open-i or MIMIC-CXR.
\begin{table}
\begin{tabular}{l l|c c c c} & & \multicolumn{2}{c}{**Number**} & \multicolumn{2}{c}{**Avg. number of tokens**} & \multicolumn{1}{c}{**Lexical**} \\
**Task (Dataset)** & **Task description** & **of samples** & Input & Target & **variance** \\ \hline \hline Radiol. reports (Open-i) & findings \(\rightarrow\) impression & 3.4K & 52 \(\pm\) 22 & 14 \(\pm\) 12 & 0.11 \\ Radiol. reports (MIMIC-CXR) & findings \(\rightarrow\) impression & 128K & 75 \(\pm\) 31 & 22 \(\pm\) 17 & 0.08 \\ Radiol. reports (MIMIC-III) & findings \(\rightarrow\) impression & 67K & 160 \(\pm\) 83 & 61 \(\pm\) 45 & 0.09 \\ Patient questions (MeQSum) & verbose \(\rightarrow\) short question & 1.2K & 83 \(\pm\) 67 & 14 \(\pm\) 6 & 0.21 \\ Progress notes (ProbSum) & notes \(\rightarrow\) problem list & 755 & 1,013 \(\pm\) 299 & 23 \(\pm\) 16 & 0.15 \\ Dialogue (ACI-Bench) & dialogue \(\rightarrow\) assessment & 126 & 1,512 \(\pm\) 467 & 211 \(\pm\) 98 & 0.04 \\ \end{tabular}
\end{table}
Table 2: Description of four distinct summarization tasks comprising six open-source datasets with a wide range of token length and lexical variance, i.e. \(\frac{\text{number of unique words}}{\text{number of total words}}\), where a higher ratio indicates higher lexical diversity.
#### 3.3.2 Patient questions
Question summarization consists of generating a condensed question expressing the minimum information required to find correct answers to the original question [5]. For this task we employ the MeQSum dataset [5], which contains (1) original patient health questions of varying verbosity and coherence selected from the U.S. National Library of Medicine (2) corresponding condensed questions created by three medical experts such that the summary allows retrieving complete, correct answers to the original question without the potential for further condensation. These condensed questions were then validated by two physicians and verified to have high inter-annotator agreement. Due to the wide variety of these questions, MeQSum exhibits the highest lexical variance of our datasets (Table 2).
#### 3.3.3 Progress notes
The goal of this task is to generate a "problem list," or condensed list of diagnoses and medical problems using the provider's progress notes during hospitalization. For this we employ the ProbSum dataset [29], extracted from the MIMIC-III database of de-identified hospital intensive care unit (ICU) admissions. ProbSum contains (1) progress notes averaging \(>1,000\) tokens and substantial presence of unlabeled numerical data, e.g. dates and test results (2) corresponding problem lists created by attending medical experts in the ICU. We access this data via the BioNLP Problem List Summarization shared task [20, 29, 30] and Physionet [38].
#### 3.3.4 Dialogue
The goal of this task is to summarize a doctor-patient conversation into an "assessment and plan" paragraph. We employ the ACI-Bench dataset [1, 88, 89], which contains (1) 207 doctor-patient conversations (2) corresponding patient visit notes, which were first generated by a seq2seq model and subsequently corrected and validated by expert medical scribes and physicians. Since ACI-Bench's visit notes include a heterogeneous collection of section headers, for our analysis we select 126 samples containing an "assessment and plan" section. Per Table 2, this task entails the largest token counts across datasets for both the input (dialogue) and target (assessment).
## 4 Experiments
We now provide an overview of the two-step evaluation process, which is depicted in Figure 1. The first step (Section 4.1) includes a quantitative evaluation across each valid combination of model, adaptation method, and summarization task using a suite of natural language processing (NLP) metrics. The second step (Section 4.2) entails an extensive clinical reader study and qualitative evaluation by physicians.
### Quantitative Evaluation
Building upon the descriptions of models, methods, and tasks in Section 3, we now specify experimental details such as model prompts, data preparation, and software implementation. We also describe the NLP metrics used for quantitative evaluation.
#### 4.1.1 Model prompts and temperature
Prompt anatomy is displayed in Figure 2. All prompts consist of a prefix, nudging the model to exhibit medical expertise and a task-specific instruction (Table A1), directing the model to complete a particular task. For ICL, the prompt further includes \(m\) in-context examples, where # symbols serve as delimiters between examples. Recall \(m\) is defined in Section 3.2, and \(m=0\) for QLoRA. After this preamble, the final prompt component for all methods is the actual sample we wish to summarize, concluding with a colon to reinforce the model's designated task.
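Concretely, the prompt can be assembled as in the following sketch; the component wording mirrors Figure 2, while the helper function and the example strings are purely illustrative.

```python
def build_prompt(expertise, instruction, examples, sample):
    """Prefix + task instruction + m in-context examples ('#' delimiters) + input."""
    parts = [expertise, instruction]
    for i, (ex_in, ex_out) in enumerate(examples, start=1):
        parts += ["#", f"input {i}: {ex_in}", f"summary {i}: {ex_out}"]
    n = len(examples) + 1
    parts += ["#", f"input {n}: {sample}", f"summary {n}:"]
    return "\n".join(parts)

prompt = build_prompt(
    "You are an expert medical professional.",
    "Summarize the radiology report findings into an impression with minimal text.",
    examples=[("Heart size normal. Lungs clear.", "No acute cardiopulmonary process.")],
    sample="Stable cardiomegaly. No focal consolidation or effusion.",
)
```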
We choose the prompt components (expertise, instruction) by following best practices [6, 65] and qualitatively evaluating a handful of variants for each component. We note the importance of specifying desired length
in the instruction, e.g. "one question of 15 words or less" for summarizing patient questions. Without this specification, the model might generate lengthy outputs--occasionally even longer than the input text. While in some instances this level of detail may be preferred, we steer the model toward conciseness given our task of summarization.
We further note the effect of the model prompt and parameters in Table 3. For example, we achieve better performance by nudging the model to have expertise in medicine rather than in wizardry. The model parameter temperature is also important. Temperature scales the conditional probability distribution during sampling, hence impacting how often the model will output less likely tokens. Higher temperatures lead to more randomness and "creativity," while lower temperatures produce more deterministic outputs. After searching over temperature values \(\{0.1,0.5,0.9\}\) using GPT-3.5, we find the lowest value 0.1 performs best and thus set temperature to this value for all models. Intuitively, a lower value seems appropriate given our goal of factually summarizing text with a high aversion to hallucinations, or instances where the model generates factually incorrect text.
#### 4.1.2 Experimental Setup
Next, we describe the process for splitting datasets into training, validation, and test sets. For each dataset, we construct each test set by randomly drawing \(s\) samples, where \(s=250\) for all datasets except dialogue (\(s=100\)), which includes only 126 samples in total. After selecting these \(s\) samples, we choose another \(s\) as a
\begin{table}
\begin{tabular}{l l}
**Expertise** & You are an expert medical professional. \\ \hline \hline
**Instruction** & Summarize the [radiology report findings] \\ (task-specific) & into an [impression with minimal text]. \\ \hline \hline
**Examples** & Use the examples to guide word choice. \\ \(i=1,...,m\) & : \\
**\#**: delimiters & input \(i\): \{example input\} \\
**\#** & **summary \(i\): \{example summary\} \\
**\#** \\ & : \\ \hline \hline
**Input** & input \(m+1\): \{input text\} \\ & summary \(m+1\): \\ \hline \hline \end{tabular}
\end{table}
Table 3: Results can vary significantly across different model prompt and parameters. We generally find better performance when (1) using lower temperature, i.e. generating less random output, as summarization tasks benefit more from truthfulness than creativity (2) assigning the model clinical expertise in the prompt. Output generated via GPT-3.5 on the Open-i radiology report dataset.
Figure 2: Prompt anatomy. Each summarization task uses a slightly different instruction, as depicted in Table A1.
validation set for datasets which incorporate fine-tuning. We then use the remaining samples as a training set for ICL examples or QLoRA fine-tuning.
We leverage PyTorch, using the parameter-efficient fine-tuning [52] and the generative pre-trained transformers quantization [28] libraries to implement QLoRA. We fine-tune models with QLoRA for five epochs using the Adam optimizer with weight decay fix [51]. An initial learning rate of \(1e^{-3}\) was decayed linearly to \(1e^{-4}\) after a 100-step warm-up; we determined this configuration after experimenting with different learning rates and schedulers. To achieve an effective batch size of 24 on each experiment, we adjust (1) individual batch size and (2) number of gradient accumulation steps to fit on a single consumer GPU, a NVIDIA Quadro RTX 8000. Deviating from the recommended QLoRA [22] parameters rendered no improvement in performance. All open-source models are available on HuggingFace [85].
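With the HuggingFace Trainer, the configuration described above roughly corresponds to the arguments below; the per-device batch size and accumulation steps are one illustrative split reaching the effective batch size of 24, and the exact linear decay to \(1e^{-4}\) would require a custom scheduler (both assumptions).

```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="qlora-clinical-summ",
    num_train_epochs=5,
    learning_rate=1e-3,
    lr_scheduler_type="linear",          # decays after the warm-up
    warmup_steps=100,
    per_device_train_batch_size=6,
    gradient_accumulation_steps=4,       # 6 x 4 = effective batch size of 24
    optim="adamw_torch",                 # Adam with decoupled weight decay
)
```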
#### 4.1.3 Metrics
We use well-known summarization metrics to assess the quality of generated summaries. BLEU [59], the simplest metric, calculates the degree of overlap between the reference and generated texts by considering 1- to 4-gram sequences. ROUGE-L [47] evaluates similarity based on the longest common subsequence; it considers both precision and recall, hence being more comprehensive than BLEU. In addition to these syntactic metrics, we employ BERTScore, which leverages contextual BERT embeddings to evaluate the semantic similarity of the generated and reference texts [92]. Lastly, we include MEDCON [89] to gauge the consistency of medical concepts. This employs QuickUMLS [70], a tool that extracts biomedical concepts via string matching algorithms [56]. We restrict MEDCON to specific UMLS semantic groups (Anatomy, Chemicals & Drugs, Device, Disorders, Genes & Molecular Sequences, Phenomena and Physiology) relevant for our work. All four metrics range from \([0,100]\) with higher scores indicating higher similarity between the generated and reference summaries.
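The first three metrics are available off the shelf; a sketch using the HuggingFace `evaluate` package is shown below (MEDCON is omitted because it depends on QuickUMLS and UMLS resources).

```python
import numpy as np
import evaluate

bleu = evaluate.load("bleu")
rouge = evaluate.load("rouge")
bertscore = evaluate.load("bertscore")

def score_summaries(predictions, references):
    """BLEU, ROUGE-L and BERTScore for lists of generated/reference summaries,
    scaled to [0, 100] as reported in this work."""
    scores = {
        "bleu": bleu.compute(predictions=predictions, references=references)["bleu"],
        "rougeL": rouge.compute(predictions=predictions, references=references)["rougeL"],
        "bertscore_f1": float(np.mean(bertscore.compute(
            predictions=predictions, references=references, lang="en")["f1"])),
    }
    return {k: 100.0 * v for k, v in scores.items()}
```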
### Clinical reader study
After identifying the best model and method via NLP quantitative metrics, we perform a clinical reader study to compare reference human expert summaries against those generated by the best model/method. This analysis spans three summarization tasks: radiology reports, patient questions, and progress notes. The dialogue task is excluded due to the unwieldiness of a human reader parsing many lengthy transcribed conversations and paragraphs; see Figure A8 for an example and Table 2 for the token count.
Our readers include two sets of physicians: (1) five board-certified radiologists to evaluate summaries of radiology reports (2) five board-certified hospitalists (internal medicine physicians) to evaluate summaries of patient questions and progress notes. For each task, each physician views the same 100 randomly selected inputs and their A/B comparisons (human vs. model summaries), which are presented in a blinded and randomized order. An ideal summary would contain all clinically significant information (_completeness_) without any errors (_correctness_) or superfluous information (_conciseness_). Hence we pose the following three questions for readers to evaluate using a five-point Likert scale.
* **Completeness**: "Which summary more completely captures important information?" This compares the summaries' recall, i.e. the amount of clinically significant detail retained from the input text.
* **Correctness**: "Which summary includes less false information?" This compares the summaries' precision, i.e. instances of false information due to hallucination by the model or an error by the human expert.
* **Conciseness**: "Which summary contains less non-important information?" This compares which summary is more condensed, as the value of a summary decreases with superfluous information.
Given this non-parametric, categorical data, we assess the statistical significance of responses using a Wilcoxon signed-rank test with Type 1 error rate = 0.05, adjusted for multiple comparisons using the Bonferroni correction. We estimate intra-reader correlation based on a mean-rating, fixed agreement, two-way mixed effects model [42] using the Pingouin package [80]. Additionally, readers provide comments on notable samples to identify interesting observations for qualitative analysis.
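A sketch of this analysis with SciPy and Pingouin is given below; the number of comparisons used for the Bonferroni correction and the data layout are illustrative assumptions.

```python
import pandas as pd
from scipy.stats import wilcoxon
import pingouin as pg

def preference_test(scores, n_comparisons=9):
    """Wilcoxon signed-rank test on Likert scores re-centred at 0
    (positive = GPT-4 preferred), with a Bonferroni-corrected p-value."""
    stat, p = wilcoxon(scores)
    return stat, min(1.0, p * n_comparisons)

def reader_agreement(df: pd.DataFrame):
    """Intraclass correlation from a two-way mixed-effects model;
    df needs columns sample_id, reader_id, score."""
    return pg.intraclass_corr(data=df, targets="sample_id",
                              raters="reader_id", ratings="score")
```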
We create and deploy the reader study via Qualtrics. Figure A4 demonstrates the user interface. To obfuscate
any formatting differences between the human and model summaries, we apply simple post-processing to standardize capitalization, punctuation, newline characters, etc.
## 5 Results and Discussion
### Quantitative evaluation
While considering which open-source models to evaluate, we first assess the benefit of fine-tuning open-source models on medical text. For example, Han _et al._[34] released Med-Alpaca, a version of Alpaca [73] which was further instruction-tuned with medical Q&A text, consequently improving performance for the task of medical question-answering. Despite Med-Alpaca's adaptation for the medical _domain_, Figure 3 shows that it actually performs worse than Alpaca for our _tasks_ of clinical text summarization. This suggests that--in addition to domain adaptation--task adaptation is also important. With this in mind, and considering that Alpaca is commonly known to perform worse than our other open-source autoregressive models Vicuna and Llama-2 [14, 94], for simplicity we exclude Alpaca and Med-Alpaca from further analysis.
Next, we compare ICL vs. QLoRA across the remaining open-source models with the Open-i radiology report dataset in Figure 4 and with patient health questions in Figure A1. We choose these datasets because their shorter context lengths allow for training with lower computational cost. FLAN-T5 generally performs best
Figure 4: Summarization performance comparing one in-context example (ICL) vs. QLoRA methods across all open-source models on the Open-i radiology report dataset. FLAN-T5 achieves best performance on both methods for this dataset. QLoRA typically outperforms ICL with the better models (FLAN-T5, Llama-2), although this often shifts with more in-context examples (see Figure A3). Figure A1 contains similar results with patient health questions.
Figure 3: Comparing Alpaca vs. Med-Alpaca. As most data points are below the dashed lines denoting equivalence, we conclude that Med-Alpaca’s fine-tuning with medical Q&A data results in worse performance for our clinical summarization tasks. See Section 5.1 for further discussion. Note that each data point corresponds to the average score of \(s=250\) samples for a given experimental configuration, i.e. {dataset \(\times\)\(m\) in-context examples}.
with QLoRA, although Llama-2 is often comparable. QLoRA typically outperforms ICL (one example) with the better models FLAN-T5 and Llama-2. As the number of in-context examples increases, however, ICL often surpasses QLoRA on most metrics for Open-i and MIMIC-CXR (Figure A3). Surprisingly, FLAN-T5 (2.7B) outperforms its fellow seq2seq model FLAN-UL2 (20B), despite being an older model with significantly fewer parameters.
Figure 5 displays MEDCON scores for all models against number of in-context examples up to the maximum context length permitted by each model and dataset. This graph also includes the best performing model (FLAN-T5) with QLoRA as a reference, depicted by a horizontal dashed line. Compared to zero-shot prompting (\(m=0\) examples), adapting with even \(m=1\) example delivered significantly improved performance in almost all cases, underscoring the importance of adaptation methods. While ICL and QLoRA are competitive for open-source models, proprietary models GPT-3.5 and GPT-4 far outperform other models and methods given sufficient in-context examples. Among open-source models, seq2seq (FLAN-T5, FLAN-UL2) performs better than autoregressive (Llama-2, Vicuna) models on radiology reports but worse on patient questions and progress notes. Given that these latter datasets have higher lexical variance (Table 2) and more heterogeneous formatting compared to radiology reports, we posit that autoregressive models may perform better with increasing data heterogeneity and complexity. For a similar graph across all metrics, see Figure A3.
Figure 5: MEDCON scores vs. number of in-context examples across models and datasets. We also include the best model fine-tuned with QLoRA (FLAN-T5) as a horizontal dashed line for valid datasets. Zero-shot prompting (\(m=0\) examples) often yields considerably inferior results, underscoring the need for adaptation methods. Note the allowable number of in-context examples varies significantly by model and dataset. See Figure A3 for results across all four metrics (BLEU, ROUGE-L, BERTScore, MEDCON).
Figure 6: Model win rate: a head-to-head winning percentage of each model combination, where red/blue intensities highlight the degree to which models on the vertical axis outperform models on the horizontal axis. GPT-4 generally achieves the best performance. While FLAN-T5 is more competitive for syntactic metrics such as BLEU, we note this model is constrained to shorter context lengths (see Table 1). When aggregated across datasets, seq2seq models (FLAN-T5, FLAN-UL2) outperform open-source autoregressive models (Llama-2, Vicuna) on all metrics.
Figure 6 compares models using win rates, i.e. a head-to-head winning percentage of each model combination across the same set of samples. In other words, for what percentage of samples do model A's summaries have a higher score than model B's summaries? GPT-4 further solidifies its position as the best overall. FLAN-T5 performs better on the syntactical BLEU metric but worse on others, suggesting FLAN-T5 excels more at matching word choice than matching semantic or conceptual meaning. Note that FLAN-T5 is constrained to much shorter context length (512) than GPT-4 (32K).
We conclude the best model and method is GPT-4 with a maximum allowable number of in-context examples. Next we evaluate this configuration in a clinical reader study.
### Clinical reader study
We conduct a clinical reader study across three distinct summarization tasks to compare summaries generated by GPT-4 against the reference summaries created by human experts. Pooled results across our physicians in Figure 7 demonstrate that GPT-4 summaries are more complete and contain fewer errors compared to human summaries. The distributions of reader responses in Figure 8 show that human summaries were preferred in only a minority of cases (19%), while in a majority GPT-4 was either non-inferior (45%) or preferred (36%). Table A2 contains scores separated by individual readers, while Table A3 affirms the reliability of scores across readers by displaying positive intra-reader correlation values. Based on physician feedback, we provide a rigorous qualitative analysis to illustrate strengths and weaknesses of summaries by GPT-4 and humans; see Figures 9, A5, A6, and A7.
#### 5.2.1 Completeness
Completeness is measured by the reader's response to the following question: "Which summary more completely captures important information?" We observe that GPT-4 summaries are more complete on average than human summaries, achieving statistical significance across all three summarization tasks with p-values \(<0.001\) (Figure 7). This suggests that GPT-4 excels at identifying and understanding the most relevant information from the source text.
We provide intuition for completeness by investigating a specific example in progress notes summarization. In Figure A6, GPT-4 correctly identified conditions that were missed by the human expert, such as "hypotension", "anemia", and "COPD". GPT-4 was more complete in generating its progress note summary but also missed
Figure 7: \(|\) Clinical reader study. Top: Study design comparing the summarization of GPT-4 vs. that of human experts on three attributes: completeness, correctness, and conciseness. Bottom: Results. GPT-4 summaries are rated higher than human summaries on all attributes. The most pronounced difference occurs in completeness. Meanwhile for correctness, the radiology reports task most benefits from GPT-4. Highlight colors correspond to a value’s location on the color spectrum. Asterisks denote statistical significance by Wilcoxon signed-rank test, *p-value \(<0.001\).
historical context (a history of "hypertension", or "HTN"). While the potential to miss context can be a limitation that affects completeness, it may be preferable if the goal is to focus on the current state of a patient's health. Alternatively, incorporating prior notes into the model prompt could help capture longitudinal changes.
#### 5.2.2 Correctness
Correctness is measured by the reader's response to the following question: "Which summary includes less false information?" GPT-4 generated fewer errors compared to human summaries (Figure 7). This margin was statistically significant (p-value \(<0.001\)) overall and on two of three summarization tasks. For radiology reports, GPT-4 always matched or outperformed the human expert; five readers identified zero instances (out of 100) in which the human outperformed GPT-4 (Figure 8).
As an example of GPT-4's superior correctness performance on the radiology report summarization task, GPT-4 avoided common human errors related to lateral distinctions (right vs. left, Figure 9). For the problem list summarization task, Figure A6 demonstrates that GPT-4 avoided a mistake (including "UTF") that was incorrectly documented by the human--for this example, the physician reader commented that "[the human] is hallucinating," a phrase often used to describe mistakes made by LLMs. Despite this promising performance, GPT-4 was not perfect across all tasks. We see a clear example in Figure A7 where GPT-4 mistakenly generated ("hallucinated") several conditions in the problem list that were false, such as "eosinophilia".
Hallucinations present a notable barrier to the clinical integration of LLMs, especially considering the high degree of accuracy required for medical applications. Our reader study results for correctness illustrate that hallucinations are made less frequently by GPT-4 than by humans. This implies that incorporating LLMs could actually reduce summarization errors in clinical practice. Beyond the scope of our work, there's further potential to reduce hallucinations through incorporating checks by a human, checks by another LLM, or using a model ensemble to create a "committee of experts" [11, 39].
Finally, both GPT-4 and human experts faced challenges correctly interpreting ambiguous user queries in patient health questions. Notably, GPT-4's responses exhibited a distinctive trait that often affected their correctness and specificity. In Example 1 of Figure A5, when the input question mentioned "diabetes and neuropathy," GPT-4 mirrored this phrasing literally. In contrast, the human expert interpreted it as "diabetic neuropathy." This highlights GPT-4's tendency toward a literal approach without interpretation, which can be either advantageous or limiting. In Example 2 of Figure A5, GPT-4 simply reformulated the input question about tests and their locations, while the human inferred a broader query about tests and treatments. In both cases, GPT-4's summaries leaned toward literalness, a trait that readers sometimes favored and sometimes did not. In future work, a systematic exploration of model temperature could further illuminate this trade-off.
Figure 8: | Distribution of reader scores for each summarization task across evaluated attributes (completeness, correctness, conciseness). Horizontal axes denote reader preference between GPT-4 and human summaries as measured by a five-point Likert scale. Vertical axes denote frequency count, with 1,500 total reports for each plot. GPT-4 summaries are more often preferred across all attributes. The largest gain in correctness occurs on radiology reports, as no false information was found in GPT-4 summaries for this task. See Figure 7 for overall scores.
#### 5.2.3 Conciseness
metric scores and the magnitude of the reader scores. Since these features are inversely correlated, for clarity we display the negative correlation coefficient values.
Compared to other metrics, BLEU correlates most with completeness and least with conciseness. Given that BLEU measures sequence overlap, this result seems reasonable, as more text provides more "surface area" for overlap; more text also reduces the brevity penalty that BLEU applies on generated sequences which are shorter than the reference [59]. For correctness, the semantic metric (BERTScore) and conceptual metric (MEDCON) correlate most strongly with reader preference. While Figure 10 provides a relative ranking of metrics, the low magnitude of correlation values discourages any attempts of using metrics as reliable proxies for reader preference.
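Each coefficient in Figure 10 is a Spearman rank correlation between per-sample metric values and reader scores; a minimal sketch with illustrative data is:

```python
import numpy as np
from scipy.stats import spearmanr

# Illustrative per-sample values: an NLP metric for the model summary and the
# corresponding reader score for one attribute (e.g. correctness).
metric_scores = np.array([31.2, 45.0, 28.7, 52.3, 40.1])
reader_scores = np.array([2, 4, 1, 5, 3])

rho, p = spearmanr(metric_scores, reader_scores)
print(f"Spearman rho = {rho:.2f} (p = {p:.3f})")
```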
We advocate that human evaluation is essential when assessing the clinical feasibility of new methods, especially as model-generated summaries become increasingly viable. These NLP metrics rely on a reference; our reader study demonstrates the reference can be fallible, even when it is created by medical experts. Further, metrics only account for a partial amount of variance in the evaluation outcomes. When human evaluation is not feasible, Figure 10 suggests that syntactic metrics are better at measuring completeness, while semantic and conceptual metrics are better at measuring correctness.
### Pitfalls
We now discuss weaknesses in our work which motivate further study.
In our quantitative analysis, we select state-of-the-art and highly regarded LLMs with a diverse range of attributes. This includes the 7B-parameter tier of open-source autoregressive models, despite some models such as Llama-2 having larger versions. We consider the benefit of larger models in Figure A2, finding this improvement marginal for Llama-2 (13B) compared to Llama-2 (7B). While there may exist open-source models which perform slightly better than our selections, we do not believe this would meaningfully alter our analysis--especially since the clinical reader study employs GPT-4, which is state-of-the-art [94].
Encouragingly, we achieve strong results by performing a basic search across 1-3 options for each task instruction (Table A1) and model temperature value. Prompt phrasing and model hyperparameters can be very important for an LLM, as demonstrated in the literature [71, 83] and in Table 3. This suggests better results could be achieved via further study of prompt engineering and model hyperparameters, which we leave for future work.
The radiology report human summaries occasionally recommend further studies or refer to prior studies, e.g.
Figure 10: Spearman correlation coefficients between NLP metrics and reader preference assessing completeness, correctness, and conciseness. The semantic metric (BERTScore) and conceptual metric (MEDCON) correlate most highly with correctness. Meanwhile, syntactic metrics BLEU and ROUGE-L correlate most with completeness. Section 5.3 contains further description and discussion.
"... not significantly changed from prior" in Figure 9. These instances are out of scope for the LLM, as it does not have access to prior studies nor the purview to make recommendations. Hence for our clinical reader study, physicians were told to disregard these phrases. However in future work, it would be interesting to provide more context via prior studies and allow the LLM to make a treatment suggestion.
One limitation is that we do not consider the inherently context-specific nature of summarization. For example, a gastroenterologist, radiologist, and oncologist may have different preferences for summaries of a cancer patient with liver metastasis. Or perhaps an abdominal radiologist will want a different summary than a neuroradiologist. Further, individual clinicians may prefer different styles or amounts of information. While we do not explore such a granular level of adaptation, this may not require much further development: since our best results were obtained via ICL with only a few dozen examples, one could plausibly adapt using examples curated for a particular specialty or clinician.
We emphasize that our study does not encompass all clinical document types, and extrapolating our results is tentative. For instance, our progress notes task employs ICU notes from a single medical center. These notes may be structured differently from non-ICU notes or from ICU notes of a different center. Additionally, more challenging tasks may require summarizing longer documents or multiple documents of different types. Addressing these cases demands two key advancements: (1) transcending GPT-4's current context length of 32,768 tokens, potentially through multi-query aggregation or methods which increase context length [24, 60], and (2) introducing open-source datasets that encompass broader tasks and lengthier documents.
## 6 Conclusion
In this research, we adapt LLMs and evaluate their outputs for clinical text summarization, thoroughly analyzing eight models across a diverse set of summarization tasks. Our quantitative results underscore the advantages of adapting models to specific domains and tasks. The ensuing clinical reader study demonstrates that summaries generated by LLMs are often favored over those created by human experts due to higher scores for completeness, correctness, and conciseness. The subsequent qualitative exploration provides deeper insights into the limitations of both LLMs and human experts. Novel evidence from our research suggests a promising avenue for LLMs as tools to reduce documentation burden and empower clinicians to focus more directly on patient care.
## 7 Reproducibility
In an effort to disseminate these methods for further validation and clinical impact, we will make our code publicly available at github.com/StanfordMIMI/clin-summ. While all datasets are already in the public domain, we will share our preprocessed versions for those which do not require Physionet [38] access: Open-i [21] (radiology reports), MeQSum [5] (patient questions), and ACI-Bench [89] (dialogue).
## 8 Acknowledgements
Microsoft provided Azure OpenAI credits for this project via both the Accelerate Foundation Models Academic Research (AFMAR) program and also a cloud services grant to Stanford Data Science. Further compute support was provided by One Medical, which Asad Aali used as part of his summer internship. Curtis Langlotz is supported by NIH grants R01 HL155410, R01 HL157235, by AHRQ grant R18HS026886, by the Gordon and Betty Moore Foundation, and by the National Institute of Biomedical Imaging and Bioengineering (NIBIB) under contract 75N92020C00021. Akshay Chaudhari receives support from NIH grants R01 HL167974, R01 AR077604, R01 EB002524, R01 AR079431, and P41 EB027060; from NIH contracts 75N92020C0008 and 75N92020C00021; and from GE Healthcare, Philips, and Amazon. |
2303.17888 | Analyzing travel time reliability of a bus route in a limited data set
scenario: A case study | In this information era commuters prefer to know a reliable travel time to
plan ahead of their journey using both public and private modes. In this
direction reliability analysis using the location data of the buses is
conducted in two folds in the current work; (i) Reliability analysis of a
public transit service at route level, and (ii) Travel time reliability
analysis of a route utilizing the location data of the buses. The reliability
parameters assessed for public transit service are headway, passenger waiting
time, travel speed, and travel time as per the Service Level Benchmarks for
Urban Transport by the National Urban Transport Policy, Government of India.
And travel time reliability parameters such as Buffer Time Index, Travel Time
Index, and Planning Time Index are assessed as per Federal Highway
Administration, Department of Transportation, U.S. The study is conducted in
Tumakuru city, India for a significant bus route in a limited data sources
scenario. The results suggest that (i) the Level of Service of the public
transit service needs improvement. (ii) around 30% excess of average travel time
is needed as buffer time. (iii) more than double the amount of free flow travel
time must be planned during peak hours and in the worst case. In the future,
the analysis conducted for the route can be extended for citywide performance
analysis in both folds. Also, the same method can be applied to cities with
similar demographics and traffic-related infrastructure. | Ashwini B P, R Sumathi, Sudhira H S | 2023-03-31T08:45:09Z | http://arxiv.org/abs/2303.17888v1 | # Analyzing Travel Time Reliability of a Bus Route in a Limited Data Set Scenario: A Case Study
###### Abstract
In this information era, commuters prefer to know a reliable travel time to plan ahead of their journey using both public and private modes. In this direction, reliability analysis using the location data of the buses is conducted in two folds in the current work; (i) reliability analysis of a public transit service at the route level, and (ii) travel time reliability analysis of a route utilizing the location data of the buses. The reliability parameters assessed for the public transit service are headway, passenger waiting time, travel speed, and travel time, as per the Service Level Benchmarks for Urban Transport by the National Urban Transport Policy, Government of India. Travel time reliability parameters such as the Buffer Time Index, Travel Time Index, and Planning Time Index are assessed as per the Federal Highway Administration, Department of Transportation, U.S. The study is conducted in Tumakuru city, India, for a significant bus route in a limited data sources scenario. The results suggest that (i) the Level of Service of the public transit service needs improvement, (ii) around 30% excess of the average travel time is needed as buffer time, and (iii) more than double the free-flow travel time must be planned during peak hours and in the worst case. In the future, the analysis conducted for the route can be extended to citywide performance analysis in both folds. Also, the same method can be applied to cities with similar demographics and traffic-related infrastructure.
Automatic Vehicle Location, Intelligent Transportation, Travel Time Variability, Travel Time Reliability
## 1 Introduction
Congestion on roads is a major challenge [1] for mobility [2] in urban areas across the globe. The extensive use of private vehicles [3] is one of the major causes of congestion that hampers the overall mobility of a city. Excessive use of private vehicles also produces more greenhouse gases and contributes to global warming [4]. To avoid this, alternative modes of transport such as public transit [5] have to be promoted. The most common public transit service available in urban areas and cities of countries like India is the bus. Bringing about a modal shift of commuters from the private mode to the public transit mode is a colossal task, and the role of transit operations planners [6] is significant in this direction. Transit operations planners are the major stakeholders responsible for the optimization of transit operations such as scheduling routes, monitoring service, and evaluation. Optimizing transit operations and improving the Level of Service (LoS) [7] will attract more passengers to public transit.
With the implementation of Intelligent Transportation Systems (ITS), attempts are being made worldwide to provide a better LoS to commuters. ITS [8] intends to provide innovative services such as information systems [9], navigation systems, adaptive traffic management, incident management, and integrated transport management. The growth of Information and Communication Technology (ICT) [10][11] has opened up a variety of transportation-related data sources [12], such as the Global Positioning System (GPS), smart cards, automatic fare collection, mobile phone footprints, floating car data, and crowdsourced data.
Researchers across the globe have studied various aspects of travel time [13][14] in cities. Some of the most researched issues concern reliability [15] in different traffic scenarios [16][17], which is driven by variability [18][19] at various spatial-temporal scales. Reliability of travel time is a major issue and a driving force for the modal shift of commuters. Reliability of a public transit service is characterized by providing [20] updated information with a consistent travel time [21], minimum waiting time [22] at the bus stop, optimal dwell time [23][24], adherence to the schedule, optimized operations [25], headways, and regularity [26] between successive services.
Several works assess reliability based on the standard defined by the Federal Highway Administration (FHWA), United States [27]. Parameters such as buffer time, Buffer Time Index (BTI), Planning Time Index (PTI), and Travel Time Index (TTI), as defined by FHWA, are assessed for a selected route in Mysore and other cities of India [28][29]. Several works have also made effective use of GPS-based data along with other supplementary data sets to assess the reliability of routes with heterogeneous traffic composition [15][30][31][32]. Authors in [25] propose a method for bus route reliability assessment based on the copula function in a case study conducted in China. In [33], the authors used location data of a bus route in Melbourne, Australia, together with flow data from loop detectors, in a framework developed to predict the variance and mean of travel time.
Authors in [34] assess the impact of external factors on the travel time of public transit buses on the roads of the Tri-City Agglomeration in northern Poland and conclude that, for developing models to estimate the travel time of public transit vehicles, the sections of the network have to be analyzed for traffic behavior and available infrastructure, taking into account the dwell time of vehicles. Authors in [35] conducted an innovative study to assess the quality-of-service parameters of taxi and ride-hailing services using survey data and concluded that ride-hailing is better than taxis in waiting time, cost, and travel time in the Greater Jakarta area, Indonesia. Overall, it is inferred from the existing literature that several research works have been conducted using statistical, analytical, and global standard methods, using GPS, smart cards, passenger counters, Lidar and Radar measurements [36], and weather data [37].
_Motivation and objectives:_ Most of the existing works are conducted in metropolitan, tier-1, and tier-2 cities with mature traffic-related infrastructure and multiple data sources. In the current scenario, most of the population resides in small cities and urban areas; effective management of these areas is vital, but the lack of infrastructure is a challenge. Hence, with the available location data, a few reliability parameters are assessed for Tumakuru city in two folds, as follows:
1. Reliability analysis of a public transit service to assess headway, passenger waiting time, and travel speed and time for the study route
2. Travel time reliability analysis of a route considering the buses as probe vehicles to assess BTI, PTI, and TTI.
The findings of this research emphasize the application of location data of transit buses in assessing reliability, which would otherwise require a tedious field study. Institutions can employ similar methods to assess reliability periodically in a cost-effective manner. The study area, methods followed, and results are discussed in the following sections.
## 2 Study Area and Data
The case study is conducted in Tumakuru city, a small city with a population of about 370 thousand. Buses are the only mode of local mass transit in the city. The Tumakuru city service is operated by the Karnataka Road Transport Corporation (KRTC). Currently, fifteen routes are operating, with route lengths varying between 5 and 15 km. The bus network is connected by segments of national and state highways and arterial and sub-arterial roads. All city service buses are equipped with a GPS-enabled bus tracking system. The location data of the buses from March 2021 have been used for this analysis. Sample logs are given in Fig. 1.
Route number 201: Tumakuru Bus Stand (TBS) - Kyathasandra (KYA) is selected for the study. The route is further divided into four segments based on the land use pattern. Segment one is the Central Business District (CBD), the second segment is the Inner City (IC), segment three is the Inner Suburban (ISU) area, and the fourth segment is the Outer Suburban (OSU) area, which uses the national highway. The information on route 201 is given in Table 1, and the route map is shown in Fig. 2. The plots of the travel time and the travel speed at the route level are presented in Fig. 3.
Figure 1: Sample logs of route Tumakuru Bus Stand to Kyathasandra
Figure 2: Route map of route Tumakuru Bus Stand to Kyathasandra
## 3 Methods
### Public Transit Service Reliability Analysis
Public transit service performance is evaluated based on four components: convenience, comfort, reliability, and security [38]. Reliability is one of the important components and is in turn based on the variability of travel time, passenger waiting time, headway, and punctuality [39]. In this context, with the available location data and the schedule, a few public transit reliability attributes, namely headway, passenger waiting time, and travel speed and time, are assessed for the route under study. The analysis is conducted based on the Service Level Benchmarks (SLBs) [40] for Urban Transport defined by the National Urban Transport Policy (NUTP) - Ministry of Urban Development (MoUD), Government of India.
#### 3.1.1 Headway
It is the time elapsed between two consecutive buses towards a common destination at a particular bus stop [41]. The headways at the origin of the route on 10 weekdays are extracted from the location data. Aggregates of the number of buses departing in each Departure Time Window (DTW) between 5:00 and 21:00 are compared with the scheduled headway. According to the schedule, there are 36 trips on the TBS-KYA route. The comparison is given in Fig. 4; it is observed that there are a few DTWs during which the headways differ from the schedule, illustrating dynamic adjustments in the timetable. The overall headway of the route is estimated using (1).
\[\text{Headway}=\frac{1}{n}\sum_{i=1}^{n-1}\left(ST_{i+1}-ST_{i}\right) \tag{1}\]
where \(n\) is the number of trips along the route and \(ST_{i}\) is the start time of trip \(i\). The average headway of the route is 25 minutes. To estimate the headway during peak hours, the duration of the peak period in minutes is divided by the number of buses scheduled in that period. For the current study, 10:00 AM - 11:00 AM is considered the peak hour as per [42][43]; three buses are scheduled during this hour, so the headway during peak hours is 20 minutes.
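As a concrete illustration of this estimation, the sketch below (a minimal Python example with hypothetical departure times, not the study data) averages the gaps between consecutive trip departures and derives the peak-hour headway from the number of buses scheduled in the peak window.

```python
from datetime import datetime

def average_headway(start_times):
    """Average gap (minutes) between consecutive trip departures at the route origin."""
    times = sorted(datetime.strptime(t, "%H:%M") for t in start_times)
    gaps = [(b - a).total_seconds() / 60.0 for a, b in zip(times, times[1:])]
    return sum(gaps) / len(gaps)

def peak_hour_headway(buses_in_peak_hour, peak_minutes=60):
    """Peak-hour headway: duration of the peak window divided by buses scheduled in it."""
    return peak_minutes / buses_in_peak_hour

# Hypothetical departures at the route origin (TBS)
departures = ["05:00", "05:30", "05:55", "06:20", "06:40"]
print(average_headway(departures))   # 25.0 minutes on this toy schedule
print(peak_hour_headway(3))          # 3 buses in the 10:00-11:00 peak -> 20.0 minutes
```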
#### 3.1.2 Passenger waiting time
It is the total time public transit users spend at the bus stop waiting for the arrival of a bus on the desired route. Passenger waiting time is estimated from the headway, with the assumption that the passenger arrival rate is uniform at the bus stops. As per the SLBs [40] by NUTP - MoUD, Govt. of India, the
| Route parameters | Route overall | Segment 1 | Segment 2 | Segment 3 | Segment 4 |
| --- | --- | --- | --- | --- | --- |
| Origin-Destination | TBS-KYA | TBS-Bhadramma Choultry | Bhadramma Choultry-SS Circle | SS Circle-Batawadi | Batawadi-Kyathasandra |
| Length | 6.9 kilometers | 1.76 kilometers | 1 kilometer | 2.09 kilometers | 2.05 kilometers |
| Number of bus stops | 9 | 3 | 2 | 3 | 1 |
| Number of signalized intersections | 6 | 3 | 1 | 1 | 1 |
| Number of lanes | 2 | 2 | 2 | 2 | 3 |
| Land use pattern | - | CBD | IC | ISU | OSU |

Table 1: Route information
Figure 3: Box plots (a) travel time (b) travel speed of the TBS-KYA route during the study period
Average Passenger Waiting Time (APWT) is estimated using (2).
\[\text{APWT}(\text{route})=\frac{1}{2}\left(\text{Headway of route}\right) \tag{2}\]
The APWT of the route TBS-KYA is 12.5 minutes overall, and 10 minutes during peak hours. The APWT of each DTW is shown in **Fig. 5**. According to SLBs, the LoS is at level 3 out of 4 levels. The reference LoS for passenger waiting time according to SLBs of NUTP is given in Table 2.
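A minimal sketch of Eq. (2) together with the Table 2 banding, assuming the SLB thresholds quoted in that table; the headway value used here is the peak-hour figure estimated above.

```python
def average_passenger_waiting_time(headway_minutes):
    """APWT = half the headway (Eq. 2), assuming uniform passenger arrivals."""
    return headway_minutes / 2.0

def waiting_time_los(apwt_minutes):
    """Map an APWT value onto the Level of Service bands of Table 2."""
    if apwt_minutes <= 4:
        return 1
    if apwt_minutes <= 6:
        return 2
    if apwt_minutes <= 10:
        return 3
    return 4

apwt = average_passenger_waiting_time(20)   # 20-minute peak-hour headway -> 10.0 minutes
print(apwt, waiting_time_los(apwt))         # 10.0 minutes corresponds to LoS 3
```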
#### 3.1.3 Travel time
From a passenger's point of view, the total travel time is the time from when the passenger starts towards the bus stop to the time of reaching the final destination. From the service provider's point of view, the travel time is the time taken by the bus from the origin to the destination. The analysis of travel time in the current study is from the service provider's point of view.
_Free-flow travel time vs. Running time:_ A free-flow scenario is one with zero delays, i.e., no congestion, waiting times, or disturbances to travel. The trips during 5:00 - 6:00 AM are used for analyzing the free-flow travel speed and time of the public transit buses. Running time is the travel time of the bus without delays such as dwell time and intersection delay but in the presence of normal traffic conditions along the route. Trips on weekdays during peak hours, excluding the time for acceleration, deceleration, dwell time, and intersection delay [44], are used for this analysis. A total of 40 trips, 20 during 5:00 - 6:00 AM and 20 during peak hours on weekdays, are extracted from the location data. The estimates of free-flow travel time and speed, and running time and speed, are summarised in Table 3. An excess of around 300 seconds is observed in the running time, which amounts to a 52.5% increase in travel time compared to the free-flow travel time.
| Level of Service | Average waiting time for public transit users |
| --- | --- |
| 1 | <= 4 mins |
| 2 | 5 - 6 mins |
| 3 | 7 - 10 mins |
| 4 | >= 11 mins |

Table 2: Reference table for passenger waiting time [40]
Figure 4: Headway of the buses
Figure 5: Average waiting time at each DTW
_Private mode vs. Public Transit mode:_ The most common private vehicle used in Tumakuru city is the two-wheeler. According to [43], the mode share of two-wheelers was 60% in 2012. In this context, the travel time of two-wheelers is compared against the bus travel time along the route. The comparison of the private and public modes can be conducted along many dimensions [45], such as security, cost, speed, travel time, freedom, and comfort. In this study, the comparison is limited to the speed of the vehicle and the travel time along the bus route. Ten two-wheeler trips during peak hours on weekdays were conducted during the study period (March 2021) to analyze the two-wheeler travel time in Tumakuru city. The observations are tabulated in Table 4. An excess travel time of around 370 seconds is observed when public transit buses are used for travel.
The SLBs by NUTP, MoUD, India define the LoS for transit speed along major corridors using motorized personal vehicles and public transit service; the reference is given in Table 5. Based on that, the LoS for personal vehicle speed considering two-wheelers (LoS1) is at level 2, and the LoS for public transit buses (LoS2) is at level 3 (using a length-weighted average). According to column 4 of reference Table 5, LoS1+LoS2 is 5; therefore, the overall LoS is at level 3.
### Travel Time Reliability Analysis of a Route
The FHWA is a part of the U.S. Department of Transportation and specializes in transportation. The FHWA has defined reliability parameters for a road network. Considering the buses as probe vehicles, a few reliability parameters are assessed using only location data in the study area. The assessment of these parameters can be handy for other study areas that lack additional data sources for reliability analysis. In this work, the parameters assessed are the Buffer Time Index, Travel Time Index, and Planning Time Index. The 95th Percentile Travel Time (95th PTT) needed for calculating the indices is extracted from the location data of
| Segments | Segment length (km) | Two-wheeler mean speed (km/hour) | Two-wheeler mean travel time (seconds) | Bus mean speed (km/hour) | Bus mean travel time (seconds) | Excess travel time by bus (seconds) |
| --- | --- | --- | --- | --- | --- | --- |
| S1 | 1.76 | 17 | 375 | 12.5 | 500 | 125 |
| S2 | 1 | 25 | 146 | 13.5 | 280 | 134 |
| S3 | 2.09 | 30 | 250 | 17.5 | 300 | 50 |
| S4 | 2.05 | 40 | 185 | 30 | 250 | 65 |

Table 4: Comparison of two-wheeler and bus travel along the route
| Segment | Length (km) | Free-flow bus speed (km/hour) | Free-flow travel time (seconds) | Running speed (km/hour) | Running travel time (seconds) | Excess travel time (seconds) | Percentage increase in travel time |
| --- | --- | --- | --- | --- | --- | --- | --- |
| S1 | 1.76 | 38 | 181 | 19 | 333 | 152 | |
| S2 | 1 | 40 | 93 | 28 | 128 | 35 | |
| S3 | 2.09 | 41 | 188 | 27 | 278 | 90 | |
| S4 | 2.05 | 49 | 157 | 36 | 205 | 48 | |
| Route | 6.9 | 42 | 619 | 28 | 944 | 325 | 52.50% |

Table 3: Comparison of free flow and running speeds and travel time
| Level of Service | Average speed of private vehicles (km/hour): LoS1 | Average speed of buses (km/hour): LoS2 | Overall: LoS1 + LoS2 |
| --- | --- | --- | --- |
| 1 | >= 30 | >= 20 | 2 |
| 2 | 25 - 29 | 15 - 19 | 3 - 4 |
| 3 | 15 - 24 | 10 - 14 | 5 - 6 |
| 4 | <= 14 | <= 9 | 7 - 8 |

Table 5: Reference table for motorized transit speed [40]
trips used in the previous section. The 95th PTT indicates the worst travel time on a heavy travel day. The Free Flow Travel Time (FFTT) is estimated using the speed limit and distance of each section. The average running time of the buses estimated in the previous section is considered the Average Travel Time (ATT) in the current section.
Buffer Time Index: The extra time that might be added to a trip is called the buffer time; it accounts for expected delays [46]. The BTI is expressed as a percentage, indicating the excess buffer time, relative to the average travel time, that commuters need to plan in order to arrive on time at least 95% of the time under normal delays. The formula to compute the BTI is given in (3).
\[BTI=\frac{95th\ PTT-ATT}{ATT}\times 100 \tag{3}\]
Planning Time Index: The PTI gives the travel time that must be planned, accounting for both expected and unexpected delays [47], to arrive at the destination on time. It is the ratio of the 95th PTT to the FFTT, as given in (4). It is an important parameter for planning trips such that commuters arrive as planned, even in the worst case, in 95% of cases.
\[PTI=\frac{95th\ PTT}{FFTT} \tag{4}\]
Travel Time Index: It is the ratio of the ATT during peak hours to the FFTT, i.e., the TTI indicates the average excess travel time of peak-hour trips relative to the FFTT. The formula to estimate the TTI is given in (5).
\[TTI=\frac{ATT}{FFTT} \tag{5}\]
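The three indices can be computed directly from a set of observed travel times; the sketch below applies Eqs. (3)-(5) to an illustrative array of peak-hour route travel times, not the study measurements.

```python
import numpy as np

def reliability_indices(travel_times_s, free_flow_s):
    """Return (BTI in %, PTI, TTI) as defined in Eqs. (3)-(5)."""
    tt = np.asarray(travel_times_s, dtype=float)
    att = tt.mean()                      # Average Travel Time
    ptt95 = np.percentile(tt, 95)        # 95th Percentile Travel Time
    bti = (ptt95 - att) / att * 100.0    # Buffer Time Index, Eq. (3)
    pti = ptt95 / free_flow_s            # Planning Time Index, Eq. (4)
    tti = att / free_flow_s              # Travel Time Index, Eq. (5)
    return bti, pti, tti

# Illustrative peak-hour travel times (seconds) for the full route; FFTT = 619 s
observed = [900, 930, 960, 1000, 980, 1100, 940, 1200, 970, 1010]
print(reliability_indices(observed, free_flow_s=619))
```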
The reliability measures are estimated based on equations (3)-(5), and the estimates are summarized in **Table 6**. It is observed from the results in **Table 6** that segment 2 has the highest BTI (40.63%), while segment 1 has the highest PTI (3.22) and TTI (2.52), indicating low reliability compared to the other sections. Overall, it is concluded that the travel time of passengers on the study route is high and the reliability measures are low.
The reliability parameters are also computed for the Departure Time Windows (DTWs) from 7:00-8:00 to 19:00-20:00 at the route level. The plot of ATT, 95th PTT, and FFTT is presented in Fig. 6, and the PTI and TTI are presented in Fig. 7.
## 4 Discussion
Travel time reliability [48] is an important feature for commuters and recognizing the variability in travel time and speed can serve to improve it. In heterogeneous traffic conditions with common lanes for general traffic and buses, gaining insights into the travel time behavior of public transit buses is a challenge.
The variability in headway is analyzed; it is 25 minutes on average throughout the day and 20 minutes during peak hours. Headway is a major factor that influences reliability, and improving the headway can influence the modal shift of commuters. The average waiting time of passengers is 12.5 minutes overall and 10 minutes during peak hours. According to the Service Level Benchmarks [40] defined by the National Urban Transport Policy [49], Ministry of Urban Development, Government of India, cities with million-plus populations are recommended to maintain a maximum waiting time of 12 minutes. Measures to improve the headway and passenger waiting time are to be considered with priority to sustain and further improve the current scenario as per the NUTP.
A comparison of the travel time and speed of public transit buses with those of two-wheelers is conducted, and the overall LoS as per the SLBs by NUTP is at level 3. According to NUTP, level 3 indicates slowness, mainly because of high signal density [50], congestion at critical intersections, and inappropriate signal timing [51]. An average excess travel time of 374 seconds during peak hours is observed compared to two-wheelers, which accounts for around 40% excess travel time. This is a major reason for commuters to use the private mode. To overcome this problem, a reliable information system along with deep coverage of services in the network is recommended. Planning signal priority for public transit buses at intersections is also suggested.
The free-flow bus speeds and travel times are compared with the running speeds and travel times. The results emphasize that there is a 52.5% increase in travel time in total. The bus route operates in mixed traffic, and the presence of other vehicles on the road, the stochastic behavior of traffic, delays at the intersections, and dwell time reduce the bus speed and increase the travel time. A dedicated lane [20] for buses and prioritizing buses at the intersections [52][53] can resolve most of the mentioned problems.
Based on the definitions provided by the FHWA, a few reliability parameters, namely BTI, TTI, and PTI, are assessed. The results show an average BTI of 30%, a PTI of 2.78, and a TTI of 2.14, which are high compared to the results presented in existing works [15][28] for other cities of India. The range of the reliability scores demonstrates unique travel time behavior in each segment, and to improve the reliability of the route, treatments have to be provided at the segment level. The route segments belong to the National Highway, State Highway, Municipality, and Urban Development Authority. This needs integrated planning and common guidelines. The MoUD, India, has suggested the 'Urban Roads Code' [54] for this purpose, and these institutions have to collaborate in this direction to handle the situation.
The analysis of reliability parameters conducted in the study is limited to a major route of the city. It can be extended to assess all other routes and thereby the service at the city level. Unlike big cities with multi-modal public transit services such as metro trains, local trains, and buses, Tumakuru has only buses as its public transit mode. There are more than 100 cities like Tumakuru in India and in other Asian countries, and the issues, solutions, and recommendations presented for this city can be extended to them.
## 5 Conclusion
The reliability of travel is vital for commuters to plan trips. As public transit services are the most used mode of travel in most small cities and urban areas, the reliability of transit services is of equal importance. In this regard, reliability analysis is conducted in two folds: (i) reliability analysis of the public transit service on a study route, and (ii) travel time reliability analysis of the route considering the bus as a probe vehicle. The study is carried out in Tumakuru city, India. Analysis of parameters such as headway, passenger
Figure 7: The PTI and TTI at each DTW
waiting time, and speed and travel time in the free-flow, running, and peak-hour scenarios is conducted and assessed based on the Service Level Benchmarks suggested by the National Urban Transport Policy, MoUD, Govt. of India. The headway is 25 minutes overall, and the average waiting time during peak hours is 10 minutes. An excess travel time of 40% is observed compared to a two-wheeler, and an excess travel time of 52.5% is observed compared with the free-flow travel time. According to the benchmarks, the LoS of all the parameters needs improvement.
A few reliability parameters defined by the FHWA, namely the Buffer Time Index (BTI), Travel Time Index (TTI), and Planning Time Index (PTI), are estimated for the bus route. The scores of the reliability parameters suggest that around 30% excess time over the average travel time is needed as buffer time. The PTI is 2.78, indicating that travelers need to plan for 2.78 times the Free Flow Travel Time (FFTT) in the worst case, and the TTI is 2.14, indicating that commuters should plan for 2.14 times the FFTT during peak hours. The values indicate that the reliability is low compared to other cities [15][28]. The analysis was conducted using limited data, namely location and schedule data, highlighting the application of location data in a cost-effective way for locations that lack additional data sources and traffic-related infrastructure. There are several cities similar to Tumakuru in India and other countries, and the analysis and recommendations presented can be extended to them in the future.
## 6 Acknowledgments
The authors are grateful to the authorities of Tumakuru Smart City Limited for providing the essential data (automatic vehicle location logs) of the Tumakuru city transit service buses.
**Author contributions**
**Ashwini B P:** Conceptualization, Methodology, Data collection and pre-processing, Writing - Original draft. **R Sumathi:** Methodology, Data collection, Writing - Reviewing and Editing. **Sudhira H S:** Conceptualization, Visualization, Investigation, Writing - Reviewing and Editing.
**Conflicts of interest**
The authors declare no conflicts of interest.
|
2309.06706 | Simultaneous Machine Translation with Large Language Models | Real-world simultaneous machine translation (SimulMT) systems face more
challenges than just the quality-latency trade-off. They also need to address
issues related to robustness with noisy input, processing long contexts, and
flexibility for knowledge injection. These challenges demand models with strong
language understanding and generation capabilities which may not often equipped
by dedicated MT models. In this paper, we investigate the possibility of
applying Large Language Models (LLM) to SimulMT tasks by using existing
incremental-decoding methods with a newly proposed RALCP algorithm for latency
reduction. We conducted experiments using the \texttt{Llama2-7b-chat} model on
nine different languages from the MUST-C dataset. The results show that LLM
outperforms dedicated MT models in terms of BLEU and LAAL metrics. Further
analysis indicates that LLM has advantages in terms of tuning efficiency and
robustness. However, it is important to note that the computational cost of LLM
remains a significant obstacle to its application in SimulMT.\footnote{We will
release our code, weights, and data with publication.} | Minghan Wang, Jinming Zhao, Thuy-Trang Vu, Fatemeh Shiri, Ehsan Shareghi, Gholamreza Haffari | 2023-09-13T04:06:47Z | http://arxiv.org/abs/2309.06706v2 | # Simultaneous Machine Translation with Large Language Models
###### Abstract
Large language models (LLM) have demonstrated their abilities to solve various natural language processing tasks through dialogue-based interactions. For instance, research indicates that LLMs can achieve competitive performance in offline machine translation tasks for high-resource languages. However, applying LLMs to simultaneous machine translation (SimulMT) poses many challenges, including issues related to the training-inference mismatch arising from different decoding patterns. In this paper, we explore the feasibility of utilizing LLMs for SimulMT. Building upon conventional approaches, we introduce a simple yet effective mixture policy that enables LLMs to engage in SimulMT without requiring additional training. Furthermore, after Supervised Fine-Tuning (SFT) on a mixture of full and prefix sentences, the model exhibits significant performance improvements. Our experiments, conducted with Llama2-7B-chat on nine language pairs from the MUST-C dataset, demonstrate that LLM can achieve translation quality and latency comparable to dedicated SimulMT models.1
Footnote 1: We will release our code, weights, and data with publication.
Minghan Wang, Jinming Zhao, Thuy-Trang Vu, Fatemeh Shiri, Ehsan Shareghi, Gholamreza Haffari
Department of Data Science & AI, Monash University
Simultaneous Machine Translation, Large Language Model, Incremental Decoding
## 1 Introduction
With the advent of ChatGPT, Large Language Models (LLMs) have emerged as a focal point of research within the broader NLP academic community. Their formidable ability to adhere to instructions enables them to address various conventional NLP problems through conversational interactions. This trend, in turn, motivates researchers to adapt a wider array of traditional NLP tasks to LLMs, with the expectation that LLMs can achieve performance on par with, or even surpass, dedicated specialized models.
Machine translation (MT), as a crucial generative NLP task, typically demands models with robust multilingual capabilities. Moreover, achieving high-quality translations often requires models to have a substantial amount of commonsense knowledge. Numerous studies [1, 2] have already demonstrated that LLMs perform comparably to dedicated machine translation models in high-resource languages. However, there have been no successful instances of LLMs employed in the branch of machine translation known as Simultaneous Machine Translation (SimulMT) [3]. Unlike offline translation, in SimulMT the source text accumulates incrementally over time, and the translation model needs to produce translations incrementally and synchronously. During this process, the model requires a policy for deciding between READ and WRITE actions [3]. Existing approaches, including fixed policies like "wait-k" [4, 5], adaptive policies such as monotonic attention and imitation learning [6, 7], and incremental decoding with offline models [8, 9, 10], have already been successfully applied to sequence-to-sequence models like the Transformer [11]. This leads us to
Figure 1: The illustration of the pipeline of our framework, where the incremental source text is colored blue and the incremental target text is colored pink. RALCP denotes the Relaxed Agreement Longest Common Prefix algorithm proposed by us (§2.2.2).
pose the research question: **How can we transform an LLM into a simultaneous translation model?** To adapt LLMs to this task, we need to address the following key challenges:
* To effectively handle accumulating source context and ensure that the target is generated incrementally under the decoder-only architecture of LLM.
* To design a reading and writing policy for the LLM that achieves a good balance between performance and latency.
* To bridge the discrepancy between an LLM's standard pre-training data and the SimulMT's incremental nature (i.e., the training data of LLMs assumes that user instructions and context are complete, but during the inference process of SimulMT, the source context is partial and incremental).
In this paper, we have leveraged the insights from conventional simultaneous translation models and combined the "wait-k" [4, 5] policy with the incremental decoding approach [8, 9, 10] to design a mixture policy that aligns with LLMs (see Figure 1). This policy enables the adaptation of LLMs to SimulMT tasks without any specialized training for learning such a policy. After subjecting the model to a single epoch of SFT using limited multilingual data, it achieves performance on par with dedicated simultaneous translation models. To address the challenge of suboptimal translation quality when the model encounters partial source inputs as context, we further incorporated a small amount of prefix data generated by ChatGPT (1000 examples per language). These prefix data were combined with full-sentence pairs in SFT. Experimental results demonstrate that the inclusion of prefix data can lead to improved performance for certain language pairs.
## 2 Method
### Prompt Design of Incremental States
While there are significant differences in the decoding process between SimulMT models and offline MT models, the fundamental approach to guiding LLMs in translation remains consistent. This approach continues to rely on constructing prompts composed of instructions + context as input, prompting LLMs to perform text completion. To elaborate further, in offline translation, we usually construct a prompt as follows: "**[INST] Translate the following sentence from English to German:**\(S\)**[/INST]**", where \(S\) is the source sentence. LLM then provides the translation in the content completed after "[/INST]". The completed translation can be denoted as \(T\).
In simultaneous translation, we keep the content of the instruction unchanged and consider the source text as a time-dependent variable-length sequence \(S_{t}\). Additionally, we treat the accumulated translation content as another variable-length sequence \(T_{t}\). At this point, the model's input is time-dependent, and we define \(X_{t}\) as the input to the model at time step \(t\). \(X_{t}\) can be obtained through the prompting function \(X_{t}=\phi(S_{t},T_{t})\), which puts \(S_{t}\) and \(T_{t}\) in the same sequence starting with the instruction: "**[INST] Translate the following sentence from English to German:**\(S_{t}\)**[/INST] \(T_{t}\)". By employing this approach, we can effectively manage the ongoing source and target content separately and structure them into standardized prompts (line 6 in Algo 1).
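A minimal sketch of the prompting function \(\phi(S_{t},T_{t})\), assuming whitespace-tokenized text and the [INST] ... [/INST] markers shown above; the exact chat template of the released model may differ.

```python
def build_prompt(source_tokens, target_tokens, src="English", tgt="German"):
    """phi(S_t, T_t): pack the partial source and the committed target prefix into one prompt."""
    s_t = " ".join(source_tokens)   # source tokens read so far
    t_t = " ".join(target_tokens)   # target tokens already committed
    return f"[INST] Translate the following sentence from {src} to {tgt}: {s_t} [/INST] {t_t}"

# After reading four source tokens and committing two target tokens:
print(build_prompt(["I", "like", "machine", "translation"], ["Ich", "mag"]))
```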
### Mixture Policy
Regarding the policy for reading and writing, we introduce a hybrid policy based on "wait-k" [4] and incremental decoding [8, 9, 10]. Formally, we define the policy function and action as \(a_{t}=\pi(S_{t},T_{t},t),\) where \(a_{t}\in\{\mathbb{R},\mathbb{W}\}\). When \(a_{t}=\mathbb{R}\), the system reads the latest source token and appends it to \(S_{t}\). When \(a_{t}=\mathbb{W}\), the system returns the latest \(l\) target tokens and appends them to \(T_{t}\). A detailed illustration is shown in Algo 1 and Figure 1.
#### 2.2.1 Reading Policy
The decision-making process for the read action primarily depends on two hyperparameters, \(k\) and \(n\). \(k\) represents the
initial waiting steps, indicating that the system needs to read at least \(k\) source tokens before commencing translation [4]. \(n\) represents the number of tokens the model has to read at once before it is allowed to take write action (line 2 in Algo 1). Given that LLMs typically require more computational resources for inference compared to traditional simultaneous translation models, we opt to reduce computational latency by reading \(n\) tokens consecutively at once, thus minimizing the number of model invocations.
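The read side of the mixture policy can be summarized by the skeleton below; `simul_translate` and `translate_step` are illustrative names, the latter standing in for the beam-search/RALCP write step of the next subsection, and the echo stub exists only to make the sketch executable.

```python
def simul_translate(source_tokens, translate_step, k=3, n=3):
    """Mixture-policy skeleton: wait for k source tokens, then alternate read-n and write."""
    s_t, t_t = [], []
    for token in source_tokens:                     # READ: consume the next source token
        s_t.append(token)
        if len(s_t) < k or len(s_t) % n != 0:       # keep reading until k reached and n batched
            continue
        t_t += translate_step(s_t, t_t)             # WRITE: commit an agreed target prefix
    t_t += translate_step(s_t, t_t, final=True)     # flush once the source is complete
    return t_t

# Echo stub standing in for the LLM call: copies the not-yet-committed source tokens.
echo = lambda s, t, final=False: s[len(t):] if final else s[len(t):len(t) + 1]
print(simul_translate("I like machine translation very much".split(), echo))
```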
#### 2.2.2 Writing policy
Regarding the decision-making process for the write action, we employ the incremental decoding approach and utilize the Longest Common Prefix (LCP) algorithm to identify a sufficiently confident translation prefix. This approach has been used in [8, 9, 10] and has evolved into multiple variants, which have shown promising performance. Our approach is mainly based on [10]. Specifically, we employ beam search to let the model generate \(B\) candidate translations and subsequently use the LCP algorithm to find a common prefix with local agreement (LA) as the translation output for this write action.

However, candidates generated by LLMs during beam search decoding may still exhibit diversity, making it challenging for the LCP algorithm to identify a prefix with LA and resulting in significant latency. To address this problem, we optimize the LCP algorithm and introduce the Relaxed Agreement Longest Common Prefix (RALCP) algorithm. RALCP employs a voting mechanism to relax the constraint for identifying the common prefix. For example, if 80% of the candidates propose the same token, that token is accepted as part of the prefix. We denote by \(\gamma\) the agreement threshold, i.e., the threshold for accepting the most frequent token at a given position. Specifically, in conventional LCP, the prefix with local agreement is located by matching the token at the same position \(i\) across all candidate sequences; if they all hold the same token, the token is added to the prefix. In RALCP, we relax this criterion by employing a voting mechanism, i.e., if the token at position \(i\) has a normalized vote (frequency) larger than \(\gamma\), it is accepted into the prefix. In our experiments, we explored \(\gamma\) ranging from 0.1 to 1.0 and found that 0.6 is an empirically balanced value.
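A minimal sketch of the RALCP voting rule on tokenized beam candidates: at each position, the most frequent token is accepted into the prefix if its normalized vote reaches \(\gamma\); the candidate translations below are toy examples.

```python
from collections import Counter

def ralcp(candidates, gamma=0.6):
    """Relaxed Agreement Longest Common Prefix over a list of tokenized beam candidates."""
    prefix = []
    for column in zip(*candidates):                 # stops at the shortest candidate
        token, votes = Counter(column).most_common(1)[0]
        if votes / len(candidates) < gamma:         # agreement below the threshold: stop
            break
        prefix.append(token)
    return prefix

beam = [["Ich", "mag", "maschinelle", "Übersetzung"],
        ["Ich", "mag", "maschinelle", "Systeme"],
        ["Ich", "mag", "Maschinen", "sehr"],
        ["Ich", "liebe", "maschinelle", "Übersetzung"],
        ["Ich", "mag", "maschinelle", "Übersetzungen"]]
print(ralcp(beam))   # -> ['Ich', 'mag', 'maschinelle']
```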
### SFT and Prefix Training
Although the mixture policy already equips the LLM with the ability to perform simultaneous translation, we can explore the use of SFT to enhance the performance further. Thus, we finetune the LLM with LoRA [12] in exactly the same manner as offline translation. Specifically, we put the source and target sentences into the prompt described in §2.1 and compute the loss only on the target text to avoid catastrophic forgetting. We tune for only one epoch on the combined training set of all nine languages.
By analyzing the model's output, we observed that the current policy can, to some extent, mitigate hallucinations caused by incomplete source contexts, but the model may still produce locally agreed yet incorrect translations. This often manifests as the model attempting to complete the target into a full sentence, even if the completed part was not mentioned in the source sentence.
Inspired by [8], we constructed a small amount of prefix-to-prefix data, aiming to mitigate the model's tendency to attempt completion, and employed it in SFT. Specifically, we randomly sampled 1000 source sentences from the training set of each language pair and truncated them to between 20% and 80% of the full length, sampled uniformly, resulting in 9000 source prefixes. We then used ChatGPT to translate these source prefixes, thus obtaining target prefixes. These prefix pairs are mixed into the combined multilingual training set of full sentences.
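A sketch of how such source prefixes could be produced; the sentences are placeholders, and the subsequent translation of each prefix (done with ChatGPT in the paper) is not shown.

```python
import random

def make_source_prefixes(sentences, n_samples=1000, low=0.2, high=0.8, seed=0):
    """Sample sentences and truncate each to a uniform 20%-80% of its token length."""
    rng = random.Random(seed)
    prefixes = []
    for sent in rng.sample(sentences, min(n_samples, len(sentences))):
        tokens = sent.split()
        keep = max(1, int(len(tokens) * rng.uniform(low, high)))
        prefixes.append(" ".join(tokens[:keep]))
    return prefixes

corpus = ["this is a long English sentence about simultaneous machine translation",
          "another training sentence drawn from the speech translation corpus"]
print(make_source_prefixes(corpus, n_samples=2))
```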
## 3 Experiment
### Experimental Setup
We selected nine language pairs from the MUST-C [13] dataset, which has been commonly used for evaluating the performance of speech and text translation systems. These nine language pairs all have English as the source language and consist of TED talk speech utterances. Each language pair contains between 100k and 200k training samples and over 2,000 test samples. During training, the combined training set has a total of 1.9M samples (with an additional 9000 prefix samples for prefix training). We used the tst-COMMON test set for evaluation. For the evaluation metrics, BLEU [14] is used for quality and LAAL [15] for latency. All evaluations are conducted with the SimulEval toolkit [16].
We used Llama2-7B-chat as our LLM [17]. During SFT, the LoRA adapters were configured with \(r=64\) and \(\alpha=16\), resulting in a total of 33M trainable parameters. We set the learning rate to 2e-4 and the batch size to 48, and employed 4-bit quantization. A single A100 GPU is used to perform SFT for all settings, for only one epoch.
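A sketch of this fine-tuning setup using the Hugging Face transformers/peft stack; only r=64, α=16, 4-bit quantization, and the base model come from the paper, while the target modules, dropout, and compute dtype are assumptions.

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model

# 4-bit quantized base model (compute dtype is an assumption, not reported in the paper)
bnb = BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_compute_dtype=torch.bfloat16)
model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-chat-hf", quantization_config=bnb, device_map="auto")

# LoRA adapters with r=64 and alpha=16 as stated; target modules are an assumption
lora = LoraConfig(r=64, lora_alpha=16, lora_dropout=0.05,
                  target_modules=["q_proj", "v_proj"], task_type="CAUSAL_LM")
model = get_peft_model(model, lora)
model.print_trainable_parameters()   # roughly tens of millions of trainable parameters
```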
We established two baseline models: 1) An offline Transformer [11] model trained on the same complete-sentence-pair training set as the LLM, but with source sentences prepended with a language tag, for 300k steps with 16k tokens per batch on 4 A40 GPUs. We followed the implementation of [10] for incremental decoding inference but used the reading policy proposed in this paper for text input. 2) A "wait-k" Transformer [4] with a fixed \(k\) of 5, trained with the same configuration as (1). We followed the implementation of [5] for inference. To ensure a fair comparison, both baseline models contained 48M parameters, matching the learnable parameter size of the LLM's LoRA adapters.
### Experimental Results
Our experiments are divided into three groups: 1) We evaluated the performance of our proposed approach for the LLM under the one-shot setting (we found that the LLM under the zero-shot setting often generates responses in an unexpected format, so we chose the one-shot setting), serving as the baseline performance for this framework. 2) We conducted SFT on the Llama2-7B-chat model using complete sentence pairs to assess the improvement brought by SFT. 3) We performed SFT using a mixture of ChatGPT-generated prefix pairs and complete sentence pairs to examine the effectiveness of prefix training.
From Table 1, the following findings are evident: 1) In the offline scenario, the LLM exhibits a considerable gap compared to specially trained NMT models in the one-shot setting. However, after SFT, this gap is narrowed, and in some languages the LLM even surpasses the NMT models. The introduction of prefix data does not lead to a significant change in performance. 2) In the streaming scenario, the one-shot performance is similar to its offline counterpart. Upon analyzing the content of the model's output, we observed that this may be due to the LLM attempting to generate responses learned from chat tasks, such as "Here is the translation", which can affect the final BLEU score. 3) The LLM after SFT outperforms baseline models using "wait-k" and incremental decoding in most languages, with performance close to offline decoding. 4) The addition of prefix data results in an average 1.3% improvement in BLEU but leads to an average 3.2% increase in latency. 5) When using larger beam sizes, \(n\), and \(k\), the model significantly outperforms the incremental decoding baseline in languages other than en-ro and exhibits lower latency. Regarding the agreement threshold \(\gamma\) in RALCP, we studied its influence on BLEU and LAAL using the en-de validation set, as shown in Figure 2. We found that 0.6 is an empirically balanced choice and thus used it under all settings in our experiments. Clearly, compared with conventional LCP (\(\gamma=1.0\)), RALCP achieves a better balance between performance and latency. Theoretically, the choice of \(\gamma\) should be language-dependent, but due to the limitation of computational resources, we use 0.6 for all language pairs.
## 4 Conclusion
In this paper, we introduced the Mixture policy that enables seamless adaptation of LLMs like Llama2-7B-chat to simultaneous translation tasks. Experimental results demonstrate that this policy allows LLMs to achieve their intrinsic one-shot offline translation performance during simultaneous decoding. After performing SFT, the models can outperform other dedicated simultaneous translation models while exhibiting lower latency. By employing prefix training, the model can achieve slight performance improvements in low-latency scenarios. In future work, we plan to validate this approach across a wider range of LLMs and languages and explore its integration with speech modalities.
| **Model** | **en-cs** | **en-de** | **en-es** | **en-fr** | **en-it** | **en-nl** | **en-pt** | **en-ro** | **en-ru** |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Offline-Transformer (b=5) | 22.29 | 30.65 | 35.08 | 42.91 | 31.46 | 34.91 | 38.05 | 29.58 | 20.09 |
| LLM-One-Shot (b=5) | 10.37 | 21.79 | 27.40 | 31.25 | 19.71 | 23.80 | 23.87 | 15.44 | 13.40 |
| LLM-SFT (b=5) | 20.47 | 30.73 | 36.43 | 42.77 | 32.05 | 34.51 | 37.58 | 27.45 | 20.65 |
| LLM-PFX-SFT (b=5) | 20.73 | 30.93 | 36.47 | 42.89 | 31.91 | 33.87 | 37.66 | 27.15 | 21.02 |
| Wait-5-Transformer (k=5) | 10.77 (7.59) | 15.44 (6.17) | 18.94 (6.10) | 24.45 (6.75) | 16.12 (6.41) | 18.69 (6.35) | 19.87 (6.64) | 15.26 (7.66) | 10.22 (6.96) |
| Inc-Dec-Trans (b=5, k=3, n=3) | 17.33 (4.89) | 26.47 (5.26) | 31.64 (6.22) | 38.85 (5.93) | 28.30 (6.35) | 31.14 (6.11) | 33.80 (6.34) | 26.62 (6.89) | 16.81 (5.54) |
| Inc-Dec-Trans (b=10, k=6, n=6) | 19.82 (7.32) | 28.63 (7.73) | 33.16 (8.48) | 41.53 (8.34) | 29.70 (8.68) | 33.31 (8.54) | 36.17 (9.06) | **28.51 (8.95)** | 18.48 (8.03) |
| LLM-One-Shot (b=5, k=3, n=3) | 10.63 (4.07) | 19.10 (3.81) | 24.48 (3.92) | 28.57 (4.03) | 17.12 (4.03) | 20.89 (3.71) | 21.86 (4.03) | 14.21 (4.08) | 12.63 (4.12) |
| LLM-SFT (b=5, k=3, n=3) | 19.09 (4.02) | 28.31 (4.07) | 33.82 (4.15) | 41.23 (4.19) | 29.46 (4.24) | 30.87 (3.92) | 35.05 (4.38) | 25.67 (4.30) | 18.29 (4.05) |
| LLM-PFX-SFT (b=5, k=3, n=3) | 19.80 (4.21) | 28.80 (4.15) | 33.86 (4.40) | 41.34 (4.29) | 29.07 (4.36) | 31.46 (3.99) | 34.87 (4.41) | 25.89 (4.40) | 19.21 (4.29) |
| LLM-PFX-SFT (b=10, k=6, n=6) | **21.31 (7.38)** | **31.06 (7.31)** | **36.34 (7.72)** | **42.59 (7.61)** | **31.53 (7.72)** | **33.92 (7.08)** | **37.56 (8.03)** | 27.03 (7.91) | **20.66 (7.82)** |
Table 1: This table presents the overall result of our experiments (b=beam size, k=wait-k, n=read-n). The first group is the performance of offline setting for Transformer and LLM (Llama2-7B-chat) with one-shot, with SFT only, and with SFT+Prefix training (PFX). The second group contains baseline simultaneous NMT models including a wait-5 Transformer [4] and the offline Transformer applied with incremental decoding [10]. The third group presents the simultaneous decoding results evaluated with our framework applied with LLM, including the one-shot, SFT only and SFT+prefix training (PFX) setting. The metrics are annotated as **BLEU** for offline results and **BLEU (LAAL)** for streaming results. Best performed (in terms of BLEU) settings are bolded.
Figure 2: The correlation between BLEU and LAAL under different values of \(\gamma\) in RALCP. |
2309.10164 | Asynchronous Perception-Action-Communication with Graph Neural Networks | Collaboration in large robot swarms to achieve a common global objective is a
challenging problem in large environments due to limited sensing and
communication capabilities. The robots must execute a
Perception-Action-Communication (PAC) loop -- they perceive their local
environment, communicate with other robots, and take actions in real time. A
fundamental challenge in decentralized PAC systems is to decide what
information to communicate with the neighboring robots and how to take actions
while utilizing the information shared by the neighbors. Recently, this has
been addressed using Graph Neural Networks (GNNs) for applications such as
flocking and coverage control. Although conceptually, GNN policies are fully
decentralized, the evaluation and deployment of such policies have primarily
remained centralized or restrictively decentralized. Furthermore, existing
frameworks assume sequential execution of perception and action inference,
which is very restrictive in real-world applications. This paper proposes a
framework for asynchronous PAC in robot swarms, where decentralized GNNs are
used to compute navigation actions and generate messages for communication. In
particular, we use aggregated GNNs, which enable the exchange of hidden layer
information between robots for computational efficiency and decentralized
inference of actions. Furthermore, the modules in the framework are
asynchronous, allowing robots to perform sensing, extracting information,
communication, action inference, and control execution at different
frequencies. We demonstrate the effectiveness of GNNs executed in the proposed
framework in navigating large robot swarms for collaborative coverage of large
environments. | Saurav Agarwal, Alejandro Ribeiro, Vijay Kumar | 2023-09-18T21:20:50Z | http://arxiv.org/abs/2309.10164v1 | # Asynchronous Perception-Action-Communication with Graph Neural Networks
###### Abstract
Collaboration in large robot swarms to achieve a common global objective is a challenging problem in large environments due to limited sensing and communication capabilities. The robots must execute a Perception-Action-Communication (PAC) loop--they perceive their local environment, communicate with other robots, and take actions in real time. A fundamental challenge in decentralized PAC systems is to decide _what_ information to communicate with the neighboring robots and _how_ to take actions while utilizing the information shared by the neighbors. Recently, this has been addressed using Graph Neural Networks (GNNs) for applications such as flocking and coverage control. Although conceptually, GNN policies are fully decentralized, the evaluation and deployment of such policies have primarily remained centralized or restrictively decentralized. Furthermore, existing frameworks assume sequential execution of perception and action inference, which is very restrictive in real-world applications. This paper proposes a framework for asynchronous PAC in robot swarms, where decentralized GNNs are used to compute navigation actions and generate messages for communication. In particular, we use aggregated GNNs, which enable the exchange of hidden layer information between robots for computational efficiency and decentralized inference of actions. Furthermore, the modules in the framework are asynchronous, allowing robots to perform sensing, extracting information, communication, action inference, and control execution at different frequencies. We demonstrate the effectiveness of GNNs executed in the proposed framework in navigating large robot swarms for collaborative coverage of large environments.
Graph Neural Networks, Decentralized Control, Multi-Robot Systems, Robot Swarms
## I Introduction
Decentralized collaboration for navigation of robot swarms through an environment requires high-fidelity algorithms that can efficiently and reliably handle Perception-Action-Communication (PAC) in a feedback loop (Figure 1). The primary challenge in such systems is to decide _what_ a robot should communicate to its neighbors and _how_ to use the communicated information to take appropriate actions. Graph Neural Networks (GNNs) [1] are particularly suitable for this task as they can operate on a communication graph and can learn to aggregate information from neighboring robots to take decisions [2, 3]. They have been shown to be an effective learning-based approach for several multi-robot applications, such as flocking [2, 4], coverage control [5], path planning [6], and target tracking [7]. Furthermore, GNNs exhibit several desirable properties for decentralized systems [8]: (1) _transferability_ to new graph topologies not seen in the training set, (2) _scalability_ to large teams of robots, and (3) _stability_ to graph deformations due to positioning errors.
Neural networks on graphs have been developed in the past decade for a variety of applications such as citation networks [1], classification of protein structures [9], and predicting power outages [10]. A fundamental difference between these applications and the collaboration of robot swarms is that the graph is generally static, whereas the robots are continuously moving, resulting in a dynamic and sparse communication graph. Moreover, control policies are executed on robots in real time with limited computational capabilities, which makes it imperative to provide an efficient framework for decentralized inference.
Some of these challenges have been addressed in recent works on GNNs for multi-robot systems. Tolstaya _et al._[2] proposed an aggregated GNN model that uses aggregated features with hidden outputs of the internal network layers for decentralized GNNs. Gama _et al._[4] proposed a framework for learning GNN models using imitation learning for decentralized controllers. Despite these recent advances and the conceptually decentralized nature of GNNs, evaluation and deployment of GNN policies have largely remained centralized or assume fully connected communication graphs [5, 11]. A primary reason for this is that training of GNNs is usually performed in a centralized setting for efficiency, and there is
Fig. 1: Perception-Action-Communication (PAC) in robots: The perception module utilizes the sensor data to perform tasks such as SLAM, semantic mapping, and object detection. The communication module is responsible for the exchange of information between robots via message buffers, thereby enabling the coordination and collaboration of robots. Limited communication capabilities restrict robots to exchanging messages only with neighboring robots. The planning and control module takes actions from the action module, plans a trajectory, and sends control commands to the actuators. In this paper, we use Graph Neural Networks (GNNs) to compute the messages to be exchanged, the aggregation of received messages, and action inferencing.
a lack of asynchronous PAC frameworks for evaluation and deployment of these policies in decentralized settings.
Recently, Blumenkamp _et al._[12] proposed a framework for running decentralized GNN-based policies on multi-robot systems, along with a taxonomy of network configurations. While our framework focuses more on the architecture design for PAC systems and less on networking protocols, a key difference is that we enable the asynchronous execution of different modules in the framework. In real-world applications with robots running PAC loops, a robot may need to perform perception tasks such as sensing, post-processing of sensor data, constructing semantic maps, and SLAM, which results in a computationally expensive perception module. Most prior work on GNNs for multi-robot systems [2, 4] does not consider the entire PAC loop and focuses on action inference. This results in a sequential computation--first, the system evolves, providing a new state, and then the GNN module is executed to generate the message and action, and the process is repeated. In robotics, it is desirable to perform computations asynchronously, which is generally already done for low-level sensing, communication, and control. This motivates the need for a fully asynchronous PAC framework, where the GNN module can be executed at a higher frequency than the perception module. An asynchronous GNN module enables multiple rounds of GNN message aggregation, thereby diffusing information over a larger portion of the communication graph. Furthermore, perception tasks, execution of the current action, and computation of the next action can all be performed asynchronously and concurrently. This has the potential to significantly improve the performance of the system, especially on multi-core processors.
The primary _contribution_ of this paper is a learnable PAC framework composed of four asynchronous modules: perception, inter-robot communication, decentralized GNN message aggregation and inference, and low-level controller. The two key salient features of the framework are:
**(1) Decentralized GNN:** We leverage the aggregated GNN model [2] for decentralized message aggregation and inferencing. The model uses aggregated features comprising hidden outputs of the internal network layers. As the hidden layer outputs from neighboring robots are encoded in the messages, robots need not recompute graph convolutions performed by other robots, thereby distributing the computation over the robots. **(2) Asynchronous modules:** The framework is designed to execute perception and GNN computations asynchronously. This allows the message aggregation and inferencing to be performed at a much higher frequency than the computationally intensive perception tasks.
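To make the first feature concrete, the sketch below shows a single GNN layer evaluated locally by one robot using only the previous-layer hidden features its neighbors have already broadcast; this is a simplified one-hop illustration, not the exact aggregated-GNN parameterization of [2], and all shapes and weights are toy values.

```python
import numpy as np

def robot_layer_update(own_prev, neighbor_prev, w_self, w_neigh):
    """One GNN layer computed on a single robot.

    own_prev      : this robot's hidden features from the previous layer, shape (d_in,)
    neighbor_prev : previous-layer features received in neighbors' messages (no recomputation)
    w_self/w_neigh: layer parameters, shape (d_in, d_out)
    """
    agg = np.sum(neighbor_prev, axis=0) if neighbor_prev else np.zeros_like(own_prev)
    return np.tanh(own_prev @ w_self + agg @ w_neigh)

rng = np.random.default_rng(0)
h_self = rng.normal(size=4)                              # own hidden features
h_neighbors = [rng.normal(size=4), rng.normal(size=4)]   # received from two neighbors
out = robot_layer_update(h_self, h_neighbors, rng.normal(size=(4, 8)), rng.normal(size=(4, 8)))
print(out.shape)   # (8,) -> included in the next broadcast message and fed to deeper layers
```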
We also provide a ROS2 [13] compatible open-source implementation of the framework written primarily in C++ for PAC in robot swarms using GNNs.
## II Navigation Control Problem
The navigation control problem considered in this paper involves a homogeneous team of \(N\) mobile robots that needs to navigate through a \(d\)-dimensional environment \(\mathcal{W}\subseteq\mathbb{R}^{d}\) to minimize the expected value of an application-defined cost function. We denote the set of robots by \(\mathcal{V}=\{1,\ldots,N\}\), with the position of the \(i\)-th robot denoted by \(\mathbf{p}_{i}(t)\in\mathcal{W}\) at time \(t\). The state of the \(i\)-th robot at time \(t\) is denoted by \(\mathbf{x}_{i}(t)\in\mathbb{R}^{m}\), which in addition to robot positions may comprise velocities and other sensor measurements. The states of the multi-robot system and the control actions are collectively denoted by:
\[\mathbf{X}(t)=\begin{bmatrix}\mathbf{x}_{1}(t)\\ \mathbf{x}_{2}(t)\\ \vdots\\ \mathbf{x}_{N}(t)\end{bmatrix}\in\mathbb{R}^{N\times m},\quad\mathbf{U}(t)= \begin{bmatrix}\mathbf{u}_{1}(t)\\ \mathbf{u}_{2}(t)\\ \vdots\\ \mathbf{u}_{N}(t)\end{bmatrix}\in\mathbb{R}^{N\times d}.\]
We consider the multi-robot system to evolve as per a Markov model \(\mathbb{P}\), i.e., the state of the system at time \(t+\Delta t\) depends only on the state of the system and the control actions at time \(t\), where \(\Delta t\) is the time required for a single step.
\[\mathbf{X}(t+\Delta t)=\mathbb{P}(\mathbf{X}\mid\mathbf{X}(t),\mathbf{U}(t)) \tag{1}\]
The control problem can then be posed as an optimization problem, where the system incurs a cost given by a cost function \(c(\mathbf{X}(t),\mathbf{U}(t))\) for state \(\mathbf{X}(t)\) and control action \(\mathbf{U}(t)\) taken at time \(t\), and the goal is to find a policy \(\Pi_{c}^{*}\) that minimizes the expected cost [4]:
\[\Pi_{c}^{*}=\operatorname*{argmin}_{\Pi_{c}}\mathbb{E}\left[\sum_{t=0}^{ \infty}\gamma^{t}c(\mathbf{X}(t),\mathbf{U}(t))\right].\]
Here, the control actions are drawn from a conditional distribution \(\mathbf{U}(t)=\Pi_{c}(\mathbf{U}\mid\mathbf{X}(t))\). Note that the policy \(\Pi_{c}^{*}\) is centralized as the control actions, in the above formulation, require complete knowledge of the state of the system.
### _Decentralized Navigation Control_
In the decentralized navigation control problem, the robots take control actions based on their local information and the information communicated by their neighbors. Thus, we consider that the robots are equipped with a communication device for exchanging information with other robots that are within a given communication radius \(r_{c}\). The problem can
Fig. 2: Learning Perception-Action-Communication (PAC) loops using an architecture composed of three Neural Networks (NN): The perception NN, usually a Convolution NN, computes features using the sensor data obtained through perception. The core of the architecture is a collaborative NN, a Graph NN in our case, which computes the messages to be exchanged between robots and aggregates received messages. The GNN also computes features for the action NN, often a shallow Multi-Layer Perceptron (MLP), the output of which is sent to a planning and control module.
then be formulated on a _communication graph_\(\mathcal{G}=(\mathcal{V},\mathcal{E})\), where \(\mathcal{V}\) represents the set of robots and \(\mathcal{E}\) represents the communication topology of the robots. A robot \(i\) can communicate with another robot \(j\) if and only if their relative distance is less than a given communication radius \(r_{c}\), i.e., an edge \(e=(i,j)\in\mathcal{E}\) exists if and only if \(\|\mathbf{p}_{i}-\mathbf{p}_{j}\|_{2}\leq r_{c}\). We assume bidirectional communication and, therefore, the communication graph is assumed to be undirected.
A robot \(i\) can communicate with its neighbors \(\mathcal{N}(i)\), defined as \(\mathcal{N}(i)=\{j\in\mathcal{V}\mid(j,i)\in\mathcal{E}\}\). Information can also be propagated through multi-hop communication. The set of \(k\)-hop neighbors, i.e., the robots that robot \(i\) can reach within at most \(k\) communication hops, is defined recursively as:
\[\mathcal{N}_{k}(i)=\mathcal{N}_{k-1}(i)\cup\bigcup_{j\in\mathcal{N}_{k-1}(i)} \mathcal{N}(j),\quad\text{with }\mathcal{N}_{0}(i)=\{i\}.\]
Let \(\frac{1}{\Delta t_{c}}\) be the frequency at which the robots communicate with each other, i.e., \(\Delta t_{c}\) is the time required for a single communication to take place. Then the total information acquired by robot \(i\) is given by:
\[\mathcal{X}_{i}(t)=\bigcup_{k=0}^{\lfloor\frac{t}{\Delta t_{c}}\rfloor}\{\mathbf{x}_{j}(t-k\Delta t_{c})\mid j\in\mathcal{N}_{k}(i)\}.\]
Now, the control actions can be defined in terms of a decentralized control policy \(\pi_{i}\):
\[\mathbf{u}_{i}(t)=\pi_{i}(\mathbf{u}_{i}\mid\mathcal{X}_{i}(t)).\]
Denoting by \(\mathbf{U}(t)=[\mathbf{u}_{1}(t),\dots,\mathbf{u}_{N}(t)]^{\top}\) and \(\mathcal{X}(t)=[\mathcal{X}_{1}(t),\dots,\mathcal{X}_{N}(t)]^{\top}\), the decentralized control problem can be formulated as:
\[\Pi^{*}=\operatorname*{argmin}_{\Pi}\mathbb{E}\left[\sum_{t=0}^{\infty}\gamma ^{t}c(\mathcal{X}(t),\mathbf{U}(t))\right].\]
Computing the optimal decentralized policy is much more challenging than centralized control, as the policy depends on trajectory histories, unlike the Markovian centralized controller [4]. The problem is computationally intractable for complex systems [14], which motivates the use of learning-based approaches. GNNs, in particular, are well-suited for decentralized control of multi-robot systems as they operate on the communication graph topology and can be used to learn a decentralized control policy from training data generated using a centralized controller.
**Remark 1**.: **Asynchronous Communication and Inference:** The formulation for the centralized and decentralized control problems follows the structure given by Gama _et al._[4]. However, similar to several other works on decentralized control, the formulation in [4] assumes that a single step of message aggregation and diffusion is performed at the same time step as the evolution of the system and perception tasks. In contrast, our formulation separates these tasks and executes them asynchronously at a higher frequency by explicitly parametrizing two different time steps. Asynchronous and decentralized execution of modules results in higher fidelity of the overall system.
## III Decentralized Graph Neural Networks
Graph Neural Networks (GNNs) [1] are layered information processing architectures that operate on a graph structure and make inferences by diffusing information through the graph. In the context of multi-robot systems, the graph is imposed by the communication graph, i.e., the graph nodes \(\mathcal{V}\) represent the robots, and the edges represent the communication links \(\mathcal{E}\), as discussed in Section II-A. In this paper, we focus on Graph Convolutional Neural Networks (GCNNs), a layered composition of convolution graph filters with point-wise nonlinearities (\(\sigma\)). The architecture is defined by \(L\) layers, \(K\) hops of message diffusion, and a _graph shift operator_\(\mathbf{S}\in\mathbb{R}^{N\times N},\,N=|\mathcal{V}|\) that is based on the communication graph. The elements \([\mathbf{S}]_{ij}\) can be non-zero only if \((i,j)\in\mathcal{E}\). The input to the GCNN is a collection of features \(\mathbf{X}_{0}\in\mathbb{R}^{N\times d_{0}}\), where each element \(\mathbf{x}_{i}\) is a feature vector for robot \(i\), \(\forall i\in\mathcal{V}\). The weight parameters for learning on GNNs are given by \(\mathbf{H}_{lk}\in\mathbb{R}^{d_{(l-1)}\times d_{l}},\,\forall l\in\{1,\cdots,L \},\,\forall k\in\{1,\cdots,K\}\), where \(d_{l}\) is the dimension of the hidden layer \(l\) with \(d_{0}\) as the dimension of the input feature vectors. The convolution graph filters are polynomials of the graph shift operator \(\mathbf{S}\) with coefficients defined by the input and the weight parameters \(\mathbf{H}_{lk}\). The output \(\mathbf{Z}_{l}\) of the filter is processed by a point-wise non-linearity \(\sigma\) to obtain the output of layer \(l\), i.e., \(\mathbf{X}_{l}=\sigma(\mathbf{Z}_{l})\). The final output of the GCNN is given by \(\mathbf{X}_{L}\) and the entire network is denoted by \(\Phi(\mathbf{X};\mathbf{S},\mathcal{H})\). Figure 3 shows an architecture with two layers.
To see that GNNs are suitable for decentralized robot swarms, consider the computation \(\mathbf{Y}_{kl}=(\mathbf{S})^{k}\mathbf{X}_{(l-1)}\) for some layer \(l\). These computations can be done recursively: \(\mathbf{Y}_{kl}=\mathbf{S}(\mathbf{S})^{(k-1)}\mathbf{X}_{(l-1)}=\mathbf{S} \mathbf{Y}_{(k-1)l}\). For a robot \(i\), vector \((\mathbf{y}_{i})_{kl}=[\mathbf{Y}_{kl}]_{i}\) can be computed as:
\[(\mathbf{y}_{i})_{kl}=\sum_{j\in\mathcal{N}(i)}[\mathbf{S}]_{ij}(\mathbf{y}_{j} )_{(k-1)l} \tag{2}\]
Fig. 3: A Graph Convolution Neural Network (GCNN) with two layers. Each layer is made up of a convolution graph filter followed by a point-wise non-linearity. GNNs are particularly suitable for decentralized robot swarms as the computations respect the locality of the communication graph.
Here, \(\mathcal{N}(i)\) is the set of neighbors of robot \(i\), and \([\mathbf{S}]_{ij}\) is the \((i,j)\)-th element of the graph shift operator \(\mathbf{S}\). Since the value of \([\mathbf{S}]_{ij}\) is non-zero only if \((i,j)\in\mathcal{E}\), the computation of \((\mathbf{y}_{i})_{kl}\) only involves the features of the neighbors of robot \(i\), i.e., the computation respects the locality of the communication graph. Thus, robot \(i\) only needs to receive \((\mathbf{y}_{j})_{(k-1)l}\) from its neighbors to compute \((\mathbf{y}_{i})_{kl}\), which makes the overall system decentralized. The collection of hidden feature vectors \((\mathbf{y}_{i})_{kl}\) forms the _aggregated message_[2]\(\mathbf{Y}_{i}\) for robot \(i\), which is precisely the information the robot \(i\) needs to communicate to its neighboring robots.
**Definition 1**.: _Aggregated Message \(\mathbf{Y}_{i}\):_ The aggregated message for a robot \(i\) is defined as:
\[\mathbf{Y}_{i}=\begin{bmatrix}\left(\mathbf{y}_{i}\right)_{01}=\left(\mathbf{ x}_{i}\right)_{0}&\left(\mathbf{y}_{i}\right)_{11}&\cdots&\left(\mathbf{y}_{i} \right)_{(K-1)1}\\ \vdots&\vdots&\ddots&\vdots\\ \left(\mathbf{y}_{i}\right)_{0l}=\left(\mathbf{x}_{i}\right)_{l-1}&\left( \mathbf{y}_{i}\right)_{1l}&\cdots&\left(\mathbf{y}_{i}\right)_{(K-1)l}\\ \vdots&\vdots&\ddots&\vdots\\ \left(\mathbf{y}_{i}\right)_{0L}=\left(\mathbf{x}_{i}\right)_{L-1}&\left( \mathbf{y}_{i}\right)_{1L}&\cdots&\left(\mathbf{y}_{i}\right)_{(K-1)L}\end{bmatrix}\]
where, \(\left(\mathbf{x}_{i}\right)_{0}\) is the input feature for robot \(i\), \(\left(\mathbf{x}_{i}\right)_{l}\) is the output of layer \(l\) of the GNN, and
\[\begin{split}\left(\mathbf{y}_{i}\right)_{kl}=\sum_{j\in\mathcal{ N}(i)}\left[\mathbf{S}\right]_{ij}(\mathbf{y}_{j})_{(k-1)l},\\ \forall k\in\{1,\cdots,K-1\},\,\forall l\in\{1,\cdots,L\}\end{split} \tag{3}\]
Note that the dimension of each vector in a row is the same, but the dimension across rows may differ, i.e., \(\mathbf{Y}_{i}\) is a collection of matrices and not a proper tensor.
The overall algorithm for message aggregation and inference using a GNN is given in Algorithm 1. The output of the GNN is the output of the last layer \(\mathbf{X}_{L}\). To completely diffuse a signal \(\mathbf{x}_{i}\) across the network, each robot needs to exchange messages and execute Algorithm 1 \(KL\) times. This would require the GNN message aggregation to be executed at a frequency that is \(KL\) times higher than the frequency of perception. Depending on the application and the number of layers of the network, this may not always be feasible. However, due to the stability properties of GNNs, they perform well even when the message aggregation is executed at a lower frequency.
```
Input : Messages \(\{\mathbf{Y}_{j}\mid j\in\mathcal{N}(i)\}\), model parameters \(\mathcal{H}\)
Output : Inference output \((\mathbf{x}_{i})_{L}\), messages \(\mathbf{Y}_{i}\)
\((\mathbf{x}_{i})_{0}\leftarrow\mathbf{x}_{i}\); // Input feature for robot \(i\)
for \(l=1\) to \(L\) do
    \((\mathbf{y}_{i})_{0l}\leftarrow(\mathbf{x}_{i})_{l-1}\);
    \(\mathbf{z}_{l}\leftarrow(\mathbf{y}_{i})_{0l}\,\mathbf{H}_{l0}\);
    for \(k=1\) to \(K\) do
        \((\mathbf{y}_{i})_{kl}\leftarrow\mathbf{0}\);
        for \(j\in\mathcal{N}(i)\) do
            \((\mathbf{y}_{i})_{kl}\leftarrow(\mathbf{y}_{i})_{kl}+[\mathbf{S}]_{ij}\,(\mathbf{y}_{j})_{(k-1)l}\);
        end for
        \(\mathbf{z}_{l}\leftarrow\mathbf{z}_{l}+(\mathbf{y}_{i})_{kl}\,\mathbf{H}_{lk}\);
    end for
    \(\mathbf{z}_{l}\leftarrow\mathbf{z}_{l}+\mathbf{b}_{l}\); // If bias is used
    \((\mathbf{x}_{i})_{l}\leftarrow\sigma(\mathbf{z}_{l})\); // Point-wise non-linearity
end for
\(\mathbf{Y}_{i}\leftarrow\left[\mathbf{x}_{i},(\mathbf{y}_{i})_{kl}\right]\); // Def. 1
```
**Algorithm 1** GNN Aggregation and Inference
**Remark 2**.: The computation of each individual element of \((\mathbf{y}_{i})_{kl}\) is a single matrix multiplication, as the computation of \((\mathbf{y}_{j})_{(k-1)l}\) is already performed by the neighboring robot \(j\). Thus, aggregated GNNs naturally distribute computation across robots. Furthermore, the size of the aggregated message \(\mathbf{Y}_{i}\) is defined by the architecture of the GNN, the dimension of the input feature vector, and the dimension of the output. It is independent of the number of robots in the system, making it scalable to large robot swarms. These properties make the aggregated GNNs suitable for decentralized robot swarms and, therefore, are used in the proposed framework for asynchronous PAC.
## IV Asynchronous PAC Framework
The framework is designed to efficiently perform decentralized and asynchronous Perception-Action-Communication (PAC) loops in robot swarms. It comprises four primary modules for different PAC subtasks that are executed asynchronously: (1) Perception, (2) Inter-Robot communication, (3) GNN message aggregation and inference, and (4) Low-level actuator controller. The asynchronous design of the framework is motivated by two main advantages:
**Concurrent execution:** Asynchronous modules allow unrelated subtasks to be executed concurrently, especially the computationally expensive perception module and the relatively faster GNN module. As a result, the GNN module can perform several steps of message aggregation, thereby diffusing information over a larger number of nodes in the communication graph, while the perception module is still processing the sensor data. Furthermore, concurrent execution significantly reduces the computation time when executed on multi-core processors, thereby enabling better real-time performance.
**Variable frequency:** Asynchronous modules allow different subtasks to be executed at different frequencies. In particular, the GNN module can be executed at a higher frequency than the perception module, which has not been possible in prior work. Similarly, as is often the case, the communication and the low-level actuator controller modules can run at a higher frequency than the GNN message aggregation module. Generally, a GNN policy computes a high-level control action, which is then executed by a low-level controller for several time steps. Even in the case where the perception task is minimal, asynchronous execution allows the GNN module to perform several steps of message aggregation while the low-level controller is executing the previous control action.
In prior work, the perception and GNN computations were considered to be executed synchronously, even if communication and low-level controller are asynchronous, affecting the computation time and the number of message aggregations that can be performed by the GNN module. This is mitigated in
our proposed framework by allowing asynchronous execution. We now describe the different modules of the framework.
### _Perception_
The perception module is responsible for getting the sensor data and processing it to obtain the input features for the GNN. The module may also contain application-specific tasks such as semantic mapping, object detection, and localization, which makes the module computationally expensive. The entire perception module is typically executed at a low frequency in applications that require significant computation. In our specific implementation for the coverage control problem in Section V, we use a CNN to generate a low-dimensional feature vector for the GNN module.
### _Inter-Robot Communication_
Robots _broadcast_ their message \(\mathbf{Y}_{i}\), for robot \(i\), to other robots within their communication range \(r_{c}\), i.e., the neighboring robots \(\mathcal{N}(i)\). They also receive messages from their neighbors: \(\mathbf{Y}_{j},\forall j\in\mathcal{N}(i)\). Generally, communication hardware may allow either receiving a message or transmitting a message at a given time. Thus, the communication module may need to be executed at twice the frequency of the GNN message aggregation module. We use two buffers to maintain messages: a transmitter buffer \(\mathrm{T_{x}}\), which stores the message to be transmitted, and a receiver buffer \(\mathrm{R_{x}}\), which stores the most recent message received from each neighbor. The module is composed of three submodules: a message buffer manager, a transmitter, and a receiver.
_Message Buffer Manager:_ The message buffer manager handles the transmitter \(\mathrm{T_{x}}\) and receiver \(\mathrm{R_{x}}\) buffers. When a new message is generated by the GNN module, the message buffer manager performs five sequential actions: (1) momentarily locks the transmitter and receiver to avoid race conditions in writing and reading the buffers, (2) loads the new message \(\mathbf{Y}_{i}\), received from the GNN module, onto the transmitter buffer, (3) sends the contents of the receiver buffer to the GNN module, (4) clears the receiver buffer, and (5) releases the lock on the buffers. Since having a lock on the communication buffers is not desirable, our implementation makes efficient use of _smart memory pointers_ in C++ to exchange the contents from the GNN module and the buffers, i.e., the actual contents are not loaded to the buffers, but the pointers to the memory locations are exchanged. Clearing the receiver buffer is critical to ensure old messages, which would have been used in the previous GNN message aggregation, are not considered further.
_Transmitter:_ The transmitter submodule broadcasts the message \(\mathbf{Y}_{i}\), using the \(\mathrm{T_{x}}\) message buffer, to neighboring robots \(\mathcal{N}(i)\). Additionally, an identification is attached to the message so that the receiving robot can rewrite old messages in the buffer with the most recent message from the same robot.
_Receiver:_ The receiver submodule receives the messages broadcast by the neighboring robots \(\mathcal{N}(i)\). If a message is received from a robot that already has a message in the receiver buffer, the message is overwritten with the most recent message, i.e., only the most recent message from a neighboring robot is stored in the receiver buffer. The size of the buffer needs to be dynamic as it is dependent on the number of neighbors, which may change over time.
### _GNN Message Aggregation and Inference_
The GNN message aggregation module has two tasks: (1) generate messages to be communicated to neighboring robots in the next time step, and (2) perform inference for control actions. Our framework uses the aggregated GNN model, described in Section III. The system is fully decentralized, i.e., each robot has its own GNN model and the inference is performed locally. An important attribute of the aggregated GNN model is that the size of the messages is dependent on the number of layers in the network, and is independent of the number of robots. Thus, the system is highly scalable to large robot swarms. The module is executed at a higher frequency than the perception module.
### _Low-Level Controller_
The low-level controller interfaces the framework with the robot actuators. It receives the control action at the end of the computation of the GNN module and executes it for a pre-defined time interval. The controller is executed at a very high frequency to ensure that the control actions are reliably executed in real-time while correcting the control commands using a feedback loop. Since the implementation of the framework is designed to work with ROS2, any existing package for low-level control can be used with the framework.
## V Evaluation on the Coverage Control Problem
We evaluate our approach for asynchronous Perception-Action-Communication (PAC) on the coverage control problem [15] for a swarm of 32 robots in simulation. Coverage control requires the collaboration of a robot swarm to provide sensor coverage to monitor a phenomenon or features of interest in an environment. An _importance density function_ (IDF) [5] is used to model the probability distribution of features of interest. The problem is widely studied in robotics and has applications in various domains, including mobile networking [16], surveillance [17], and target tracking [18]. We consider the decentralized setup where the robots have limited sensing and communication capabilities. Furthermore, the environment is not known a priori, and the robots use their sensors to make localized observations of the environment.
Coverage control is posed as an optimization problem using _Voronoi partitions_\(\mathcal{P}_{i}\) to assign each robot a distinct subregion of the environment [19, 20, 5]. The objective of the coverage problem is to minimize the cost to cover the environment, weighted by the IDF \(\Phi(\cdot)\), see [20] for details.
\[\mathcal{J}(\mathbf{p}_{1},\ldots,\mathbf{p}_{|\mathcal{V}|})=\sum_{i=1}^{| \mathcal{V}|}\int_{\mathcal{P}_{i}}\|\mathbf{p}_{i}-\mathbf{q}\|^{2}\Phi( \mathbf{q})\mathbf{d}\mathbf{q} \tag{4}\]
We model a 1024 m\(\times\)1024 m rectangular environment and robots that make 64 m\(\times\)64 m localized sensor observations. Each robot maintains a local map of size 256 m\(\times\)256 m with cumulatively added observations and an obstacle map of the
same size for the boundaries of the environment. Additionally, the relative positions of neighboring robots within a communication radius of 128 m, obtained either through sensing or communication, are mapped onto a two-channel (\(x\) and \(y\)) normalized heatmap. Following our contemporary work on coverage control, these four maps are concatenated and processed by a CNN with three layers of latent size 32 each, which generates a 32-dimensional feature vector for each robot. The sensing and processing of maps by the CNN constitute the perception module of the PAC framework. The output of the CNN is augmented with the normalized position of the robot to form a 34-dimensional feature vector, which is used as the input to the GNN with two layers of latent size 256 and \(K=3\) hops of communication. The output of the GNN is processed by a multi-layer perceptron (MLP), i.e., an action NN, with one hidden layer of latent size 32 to compute the control velocities for the robot. These CNN and GNN architectures are part of an ongoing work on coverage control1.
Footnote 1: Appropriate references will be added in the final publication.
The entire model is trained using imitation learning with the ground truth generated using a centralized clairvoyant Lloyd's algorithm that has complete knowledge of the IDF and the positions of the robots. We compare our approach with a decentralized and a centralized Lloyd's algorithm [19]. The decentralized algorithm is only aware of its maps and the relative positions of neighboring robots, whereas the centralized algorithm has the combined knowledge of all the robots and their global positions. We executed these algorithms in our asynchronous PAC framework for four different operating frequencies (from 1.25 Hz to 5 Hz) of the perception module. Our approach performs significantly better than the decentralized and centralized algorithms, see Figure 4. We also evaluated our approach with noisy position information, see Figure 5. These results show the efficacy of asynchronous PAC systems for decentralized control of robot swarms.
## VI Conclusion
We presented a framework for asynchronous Perception-Action-Communication with Graph Neural Networks (GNNs). Perception comprises application-specific tasks such as sensing and mapping, while the action module is responsible for executing controls on the robot. The GNN bridges these two modules and provides learned messages for collaboration with other agents. The decentralized nature of the GNN enables scaling to large robot swarms. The asynchronous capability
Fig. 4: Evaluation of coverage control algorithms, within our decentralized PAC framework, for four different operating frequencies (1.25 Hz to 5 Hz) of system evolution, which includes all perception tasks. The decentralized GNN, with CNN for perception, is trained using imitation learning with the ground truth generated using a clairvoyant Lloyd’s algorithm. The plots show the coverage cost, averaged over 30 environments, for 600 time steps and are normalized by the cost at the starting configuration of the system. Our approach significantly outperforms decentralized and centralized Lloyd’s algorithms.
Fig. 5: Evaluation of coverage algorithms with a Gaussian noise \(\epsilon\) added to the position of each robot, i.e., the sensed position \(\bar{\mathbf{p}}_{i}=\mathbf{p}_{i}+\epsilon\). The perception module is executed at 2.5 Hz for all four cases. It is interesting that the performance of Lloyd’s algorithm increases slightly with noise, as this leads to discovering more of the IDF. However, the algorithms converge to a stable configuration after a larger number of steps. The decentralized GNN approach outperforms the two Lloyd’s algorithms in all cases, thereby demonstrating the robustness of our approach to noisy information.
of our system allows the execution of GNN-based message aggregation and inferencing at a frequency in between that of perception and action modules. We demonstrated the effectiveness of using learnable PAC policies in a decentralized manner for the coverage control problem. The framework will allow evaluation and deployment of asynchronous PAC systems with GNNs with large robot swarms in real-world applications.
Future work entails validating the system on real robots--an essential yet challenging task due to difficulties operating a large number of robots. We also plan to evaluate our framework on other applications, such as flocking and target tracking, and analyze the compatibility of our framework with other GNN architectures.
|
2309.09353 | Many-body interactions between contracting living cells | The organization of live cells into tissues and their subsequent biological
function involves inter-cell mechanical interactions, which are mediated by
their elastic environment. To model this interaction, we consider cells as
spherical active force dipoles surrounded by an unbounded elastic matrix. Even
though we assume that this elastic medium responds linearly, each cell's
regulation of its mechanical activity leads to nonlinearities in the emergent
interactions between cells. We study the many-body nature of these interactions
by considering several geometries that include three or more cells. We show
that for different regulatory behaviors of the cells' activity, the total
elastic energy stored in the medium differs from the superposition of all
two-body interactions between pairs of cells within the system. Specifically,
we find that the many-body interaction energy between cells that regulate their
position is smaller than the sum of interactions between all pairs of cells in
the system, while for cells that do not regulate their position, the many-body
interaction is larger than the superposition prediction. Thus, such
higher-order interactions should be considered when studying the mechanics of
multiple cells in proximity. | Roman Golkov, Yair Shokef | 2023-09-17T19:11:19Z | http://arxiv.org/abs/2309.09353v1 | # Many-body interactions between contracting living cells
###### Abstract
The organization of live cells into tissues and their subsequent biological function involves inter-cell mechanical interactions, which are mediated by their elastic environment. To model this interaction, we consider cells as spherical active force dipoles surrounded by an unbounded elastic matrix. Even though we assume that this elastic medium responds linearly, each cell's regulation of its mechanical activity leads to nonlinearities in the emergent interactions between cells. We study the many-body nature of these interactions by considering several geometries that include three or more cells. We show that for different regulatory behaviors of the cells' activity, the total elastic energy stored in the medium differs from the superposition of all two-body interactions between pairs of cells within the system. Specifically, we find that the many-body interaction energy between cells that regulate their position is smaller than the sum of interactions between all pairs of cells in the system, while for cells that do not regulate their position, the many-body interaction is larger than the superposition prediction. Thus, such higher-order interactions should be considered when studying the mechanics of multiple cells in proximity.
+
Footnote †: We dedicate this article to Fyl Pincus, who promoted the field of soft matter forward, both by his own scientific achievements, and more importantly by him pushing and encouraging young scientists in the field.
Contributing authors: [email protected]; [email protected];
## 1 Introduction
Live cells exert contractile forces on their environment. The shape, size, and resulting biological function of each cell are determined by the balance of internal and external mechanical forces applied on the cell's surface, such as polymerization or contraction of cytoskeletal networks, changes in internal osmotic pressure, or forces exerted on the cell by its neighbors [1]. Actomyosin networks within living cells generate and transmit these forces to the extracellular matrix (ECM) via focal adhesions [2]. The resulting balance of forces is regulated by the cell and may
change in response to changes in the rigidity of the ECM [3; 4; 5]. It is not fully clear how cells respond to changes in the mechanical environment caused by other cells, external forces, or changes in the rigidity of the medium. In many studies, the working hypothesis has been that cells tend to maintain specific quantities through mechanical homeostasis [6; 7]. For example, by regulating the forces they apply, cells will vary the displacements they generate as their environment changes. Alternatively, cells may change the forces needed to create those displacements by regulating their deformation. Furthermore, cells modulate their shape and spatial contractility patterns in response to environmental changes.
The mechanical activity of cells is often described by force dipoles, namely pairs of equal and opposite active forces that each cell applies on its mechanical environment [8; 9; 10]. There are analogies between such force dipoles and electric dipoles that consist of two equal and opposite electric charges. Similarly, mechanical interactions between cells result from each cell generating a deformation field in the surrounding medium, which resembles the electric field formed around an electric dipole. Distant cells are, in turn, influenced by this field. Thus, matrix-mediated interactions between cells are similar but not identical to interactions between electric dipoles.
A tractable approach to theoretically describe such contractile cells, which will be employed here, is by modeling them as spherical force dipoles [11; 12; 7; 13], i.e., spherical bodies that apply isotropic contractile forces on their surrounding matrix, as depicted in Fig. 1. The mechanical response of the ECM is strongly nonlinear [14; 15; 16], which has many implications for matrix-mediated elastic interactions between cells [17; 18; 19; 20; 21; 11; 22; 13; 23; 14; 15; 16]. Nonetheless, one can study the elastic interaction between spherical cells surrounded by a linearly elastic material [13; 7; 23]. The concepts introduced and the physical mechanisms identified in such studies are also relevant to morphologically complex cells in nonlinear materials. Specifically, despite the linear properties assumed for the ECM, the intra-cellular mechanisms for regulating each cell's mechanical activity give rise to nonlinearities that show up in inter-cellular behavior.
In this paper, we investigate how cellular regulation of mechanical activity breaks the superposition that one could naively expect to find due to the linear elastic response of the surrounding medium. We analyze situations containing multiple contractile cells and show that the total interacting energy in such cases differs from the result obtained by assuming that the interactions are pairwise additive.
## 2 Shape Regulation
We distinguish between two types of spherical force dipoles, based on the presence or absence of regulation of the forces that they apply; _dead_ (but active) force dipoles do not regulate the forces that they apply, and their activity does not depend, for instance, on the distances to their neighbors. In our model, _live_ cells are capable of measuring external forces and deformations on their surface and adjusting the active forces that they apply according to some internal algorithm, for example, to maintain a certain displacement or a certain force on their surface.
The difference between these two types of behavior is evident when a spherical cell generates a radial and isotropic self-displacement field, i.e., the displacements induced by this cell in the absence of neighboring cells. Such a field would cause the cell to only change its volume, without any distortion of its shape. In that case, a pair of such dead active force dipoles preserves their self-displacement fields, leading to vanishing interaction energy [23]. Note that if the self-displacement field generated by each sphere is radially symmetric around the center of that sphere, then the total displacement on the surface of each cell, which is the sum of the self-displacement fields generated by these two contracting spheres, would be anisotropic, as shown in Fig. 2a. This result is general for contracting objects of arbitrary
Figure 1: Each cell is modeled as a spherical force dipole, comprised of radial active forces that are isotropically distributed on the surface of a sphere
shapes that generate self-displacements only in their principal directions [24; 25].
In comparison, two live cells that adjust the self-displacement fields that they generate have non-vanishing interaction energy. We focus on live cells with shape regulation [23], as demonstrated in Fig. 2(b,c), namely spherical cells that adjust the anisotropic azimuthal distribution of the active forces that they apply, such that the total displacement on their surface will be radially symmetric. This total displacement is the sum of the self-displacement that each cell generates on its surface plus the displacement fields on its surface due to the activity of the other cells in the system.
## 3 Many-body Interactions
We consider a series of identical live spherical cells of radius \(R_{0}\), arranged along a straight line, separated by equal distances \(d\) between their centers, and surrounded by a three-dimensional linear elastic material of bulk modulus \(K\) and shear modulus \(G\). In the absence of other cells, each cell contracts isotropically with a displacement \(u_{0}\) on its surface. We assume that each cell senses the displacements created on its surface by all other cells and adjusts its active force to compensate for them and prevent distortions of its spherical shape. For this complex situation of multiple activity-regulating cells, we will calculate the total elastic energy stored in the surrounding medium. We will then obtain the many-body interaction energy by subtracting from the energy of this mutual situation, the sum of the self-energies of the cells, namely the energy stored in the medium assuming that each cell contracts independently in an infinite medium, without any other cells. As we will show below, due to the nonlinearity in the regulation of the active forces that the cells apply, this many-body interaction energy differs from the sum of all two-cell interactions. We develop a formalism for an arbitrary number of cells, and will explicitly solve the geometries including three or four cells, and compare them to the pair-wise additive result obtained from the analysis of geometries of two cells. We will also consider an infinite array of equally spaced cells along a straight line, for which we will calculate the interaction energy per cell.
## 4 Displacements Created by Spherical Cells
We describe the displacements generated by cells as the sum of an isotropic constant displacement \(u_{0}\) and an anisotropic, interaction-dependent displacement \(\Delta u\) that is intended to cancel anisotropic displacements caused by other cells.
Figure 2: Two spherical force dipoles: (a) Dead force dipoles applying an isotropic elastic force, (b-c) Live force dipoles regulating the force they apply to remain spherical even in the presence of the other force dipole. Initial shape (purple), forces applied by each force dipole (red arrows), corresponding self displacements (dashed black), displacements caused by the other force dipole (dashed green), total displacement (blue), the center of the interaction-free displacement (black dot), the center of the displacements with the interactions (blue dot). For illustration purposes, the initial distance between the spheres was set to \(d=3R_{0}\), and the self-displacement to \(u_{0}=0.4R_{0}\), where \(R_{0}\) is the radius of each sphere. The Poisson ratio is \(\nu=0.3\). See text for description of the different regulation scenarios in (b-c)
The symmetry of the arrangement dictates that the displacement fields produced by cells placed at equal distances on either side of the array's center are mirror images. For simplicity, we place the origin of the coordinate system at this center and number the cells according to their distance from it, see Fig. 3. We choose the coordinate systems of the cells based on their index: left-handed for positive indices and right-handed for negative or zero indices, see Figs. 3, 4. This choice of coordinate systems is based on the system's symmetry and will simplify the calculations.
The displacement field \(\vec{u}\) around each cell must satisfy mechanical equilibrium [26]:
\[\frac{1}{1-2\nu}\nabla\nabla\cdot\overrightarrow{u}+\nabla^{2}\overrightarrow{ u}=0. \tag{1}\]
Due to the rotational symmetry around the line passing through the centers of the cells, there is no dependence on the azimuthal angle \(\phi\). Thus, we write Eq. (1) in spherical coordinates as:
\[\frac{1}{1-2\nu}\frac{\partial}{\partial r}\bigg{[}\frac{1}{r^{2 }}\frac{\partial}{\partial r}(r^{2}u_{r})\\ +\frac{1}{r\sin\theta}\frac{\partial}{\partial\theta}(u_{\theta} \sin\theta)\bigg{]}\\ +\nabla^{2}u_{r}-\frac{2}{r^{2}}u_{r}-\frac{2}{r^{2}}\frac{ \partial u_{\theta}}{\partial\theta}-\frac{2u_{\theta}\cot\theta}{r^{2}}=0, \tag{2}\] \[\frac{1}{1-2\nu}\frac{1}{r}\frac{\partial}{\partial\theta}\bigg{[} \frac{1}{r^{2}}\frac{\partial}{\partial r}(r^{2}u_{r})\\ +\frac{1}{r\sin\theta}\frac{\partial}{\partial\theta}(u_{\theta} \sin\theta)\bigg{]}\\ +\nabla^{2}u_{\theta}+\frac{2}{r^{2}}\frac{\partial u_{r}}{ \partial\theta}-\frac{u_{\theta}}{r^{2}\sin^{2}\theta}=0, \tag{3}\]
where the Laplacian in spherical coordinates, excluding terms depending on \(\phi\), is given by:
\[\nabla^{2}=\frac{1}{r^{2}\sin\theta}\bigg{[}\frac{\partial}{ \partial r}\left(r^{2}\sin\theta\frac{\partial}{\partial r}\right)\\ +\frac{\partial}{\partial\theta}\left(\sin\theta\frac{\partial}{ \partial\theta}\right)\bigg{]}. \tag{4}\]
Based on the general solution for the displacement field of a sphere with given cylindrically-symmetric displacements on its surface [26], we write the anisotropic displacements field satisfying Eqs. (2-3) outside the cell (\(r>R_{0}\)) as a multipole expansion in terms of spherical harmonics \(Y_{n}(\theta)=\sqrt{\frac{2n+1}{4\pi}}P_{n}(\cos\theta)\):
\[u_{ri}=\frac{u_{0}R_{0}^{2}}{r_{i}^{2}}+u_{0}\sum_{n=0}^{\infty }\bigg{[}n(n+3-4\nu)\frac{C_{n}^{i}R_{0}^{n}}{r_{i}^{n}}\\ -(n+1)\frac{D_{n}^{i}R_{0}^{n+2}}{r_{i}^{n+2}}\bigg{]}\,Y_{n}( \theta_{i}), \tag{5}\] \[u_{\theta i}=u_{0}\sum_{n=0}^{\infty}\bigg{[}(-n+4-4\nu)\frac{C_ {n}^{i}R_{0}^{n}}{r_{i}^{n}}\\ +\frac{D_{n}^{i}R_{0}^{n+2}}{r_{i}^{n+2}}\bigg{]}\,\frac{dY_{n}( \theta_{i})}{d\theta_{i}}, \tag{6}\]
with
\[P_{n}(x)=2^{n}\cdot\sum_{\ell=0}^{n}x^{\ell}\left(\begin{array}{c}n\\ \ell\end{array}\right)\left(\begin{array}{c}\frac{n+\ell-1}{2}\\ n\end{array}\right) \tag{7}\]
Figure 4: Three spherical cells each with radius \(R_{0}\), all applying a radial isotropic displacement \(u_{0}\) (red arrows) on their surfaces. The coordinate systems of spheres \(0\) and \(-1\) are right-handed (blue and green accordingly) and the coordinate system of sphere \(1\) is left-handed (magenta) and may be written as \(\theta_{1}=\pi-\theta_{1}^{\prime}\) where \(\theta_{1}^{\prime}\) is the commonly-used right-handed azimuthal coordinate for sphere \(1\)
Figure 3: Left-handed coordinate systems chosen for cells with positive index \(n>0\) and right-handed for zero or negative index \(n\leq 0\) for (a) odd number of spheres, (b) even number of spheres
the Legendre polynomial of order \(n\)[27]. Equations (5-6) represent the anisotropic displacements created by each cell in its coordinate system with its origin in its center. Here, \(u_{ri}\) and \(u_{\theta i}\) are the radial and angular components of the displacement field caused by cell \(i\), and the infinite sums represent the anisotropic corrections that each cell produces to cancel the shape distortion caused by its neighbors. We have inserted \(u_{0}\) and \(R_{0}\) to make the coefficients \(C_{n}^{i}\) and \(D_{n}^{i}\) dimensionless.
Using the dimensionless displacements \(\widetilde{u}_{ri}=\frac{u_{ri}}{u_{0}}\), \(\widetilde{u}_{\theta i}=\frac{u_{\theta i}}{u_{0}}\), and the dimensionless position \(\widetilde{r}=\frac{r}{R_{0}}\), we rewrite Eqs. (5) and (6) as follows:
\[\widetilde{u}_{ri}(\tilde{r}_{i},\theta_{i})=\frac{1}{\widetilde{ r}_{i}^{2}}+\sum_{n=0}^{\infty}\left[n(n+3-4\nu)\frac{C_{n}^{i}}{\widetilde{r}_{i}^{n }}\right.\\ \left.-(n+1)\frac{D_{n}^{i}}{\widetilde{r}_{i}^{n+2}}\right]Y_{n} (\theta_{i}), \tag{8}\] \[\widetilde{u}_{\theta i}(\tilde{r}_{i},\theta_{i})=\sum_{n=0}^{ \infty}\left[(-n+4-4\nu)\frac{C_{n}^{i}}{\widetilde{r}_{i}^{n}}\right.\\ \left.+\frac{D_{n}^{i}}{\widetilde{r}_{i}^{n+2}}\right]\frac{dY_ {n}(\theta_{i})}{d\theta}. \tag{9}\]
Note that Eqs. (8-9) solve Eq. (1) only when each cell is surrounded by an infinite, homogeneous linearly-elastic medium, including in the interior of the neighboring cells. Biological cells have a rigidity that differs from the rigidity of the ECM that surrounds them; thus, this assumption seems problematic. We overcome this by realizing that we may first solve the mechanical problem in which the cells are assumed to have the same linear elastic properties as the ECM. The resultant solution includes a certain stress and displacement on the surface of each cell, and the solution outside the cells is independent of how the cell generates this stress on its surface. In particular, the stress that actual cells apply on their surrounding includes passive stress coming from the rigidity of the cell plus active stress coming from the external forces generated by molecular motors inside the cell. In our analysis, we consider only the total stress and the work it performs, which determines the interaction energy, and our results are valid irrespective of the mechanical rigidity of the cells themselves. See also Ref. [13].
## 5 Cancellation Condition
To preserve isotropic displacements on their surface, live cells in our model create correcting displacements that cancel the anisotropic displacements created by their neighbors. Thus, the sum of all anisotropic displacements caused at the surface of a cell by all other cells and all the corrections applied by the discussed cell must vanish. The coefficients \(C_{n}\) and \(D_{n}\) in Eqs. (5-6) are derived in this way so that each cell can retain its spherical shape despite interacting with its neighbors. To apply the cancellation condition and to derive from it the expressions for \(C_{n}\) and \(D_{n}\), we transform the expressions for the displacement fields of each cell \(j\) to the coordinate system of the discussed cell \(i\) by substitution of the expressions for \(r_{j}\) and \(\theta_{j}\) in terms of \(r_{i}\) and \(\theta_{i}\) and then multiplying the displacement vector \(\overrightarrow{u_{j}}=(u_{rj},u_{\theta j})\) by a transformation matrix. The transformation matrix depends on the coordinate systems of the cells \(i\) and \(j\); we use the rotation matrix
\[\mathbf{B}_{s}=\left(\begin{array}{cc}\cos\left(\theta_{i}-\theta_{j}\right)& \sin\left(\theta_{i}-\theta_{j}\right)\\ -\sin\left(\theta_{i}-\theta_{j}\right)&\cos\left(\theta_{i}-\theta_{j}\right) \end{array}\right) \tag{10}\]
for \(i\) and \(j\) with the same signs, and the reflection matrix
\[\mathbf{B}_{o}=\left(\begin{array}{cc}-\cos\left(\theta_{i}+\theta_{j} \right)&\sin\left(\theta_{i}+\theta_{j}\right)\\ \sin\left(\theta_{i}+\theta_{j}\right)&\cos\left(\theta_{i}+\theta_{j}\right) \end{array}\right) \tag{11}\]
for indices with opposite signs.
The central cell \(i=0\) may be treated as having a positive or a negative sign and right or left-handed coordinate system, accordingly. In our analysis, we chose to treat the central cell as having a left-handed coordinate system, and thus, we treat its index as positive (see Fig. 3).
We write the resultant expressions for the radial and angular displacements caused by each cell \(j\) on the surface of cell \(i\) in terms of the spherical harmonics of cell \(i\) by writing the projections:
\[\left(u_{r}\right)_{n}=2\pi\int_{0}^{\pi}u_{r}(\theta)Y_{n}(\theta)\sin\theta\,d\theta, \tag{12}\] \[\left(u_{\theta}\right)_{n}=\frac{2\pi}{n(n+1)}\int_{0}^{\pi}u_{\theta}(\theta)\frac{dY_{n}(\theta)}{d\theta}\sin\theta\,d\theta. \tag{13}\]
As may be seen from Eqs. (12,13), every spherical-harmonic mode of cell \(j\) contributes to all the
modes on the surface of cell \(i\). Finally, we sum the contribution from all cells and find the total displacement in each mode.
The anisotropic displacements caused by all cells \(j\neq i\) must be canceled on the surface of cell \(i\) by the corrections it applies. We write the dimensionless displacement \(\widetilde{u}_{ii}\)_created by cell \(i\) on its surface_ (namely at \(\widetilde{r}_{i}=1\)):
\[\widetilde{u}_{rii}(\theta_{i}) \equiv\widetilde{u}_{ri}(1,\theta_{i})=\sum_{n=0}^{\infty}\Bigl{[} n(n+3-4\nu)C_{n}^{i}\] \[-(n+1)(D_{n}^{i}-\sqrt{4\pi}\delta_{n,0})\Bigr{]}\,Y_{n}(\theta_{ i}), \tag{14}\] \[\widetilde{u}_{\theta ii}(\theta_{i}) \equiv\widetilde{u}_{\theta i}(1,\theta_{i})\] \[=\sum_{n=0}^{\infty}\left[(-n+4-4\nu)C_{n}^{i}+D_{n}^{i}\right] \frac{dY_{n}(\theta_{i})}{d\theta}. \tag{15}\]
The term \(\delta_{n,0}\) in Eq. (14) is a Kronecker delta, which represents the isotropic radial displacement created by cell \(i\) on its surface without the anisotropic cancellation corrections. This constant term does not depend on changes in the cell's environment. The remaining terms are different modes of additional displacement that this cell creates in response to the displacement field induced on its surface by the neighboring cells. The dimensionless displacement \(\widetilde{u}_{ji}\)_created by each cell \(j\) on the surface of cell \(i\)_ is:
\[\widetilde{u}_{rji}(\theta_{i})=\sum_{n=0}^{\infty}\sum_{m=0}^{\infty}\left[f_{nm}^{Cr}(\widetilde{d}_{ji})C_{m}^{j}+f_{nm}^{Dr}(\widetilde{d}_{ji})(D_{m}^{j}-\sqrt{4\pi}\delta_{m,0})\right]Y_{n}(\theta_{i}), \tag{16}\] \[\widetilde{u}_{\theta ji}(\theta_{i})=\sum_{n=0}^{\infty}\sum_{m=0}^{\infty}\left[f_{nm}^{C\theta}(\widetilde{d}_{ji})C_{m}^{j}+f_{nm}^{D\theta}(\widetilde{d}_{ji})D_{m}^{j}\right]\frac{dY_{n}(\theta_{i})}{d\theta_{i}}, \tag{17}\]
where the sum over \(m\) originates from the fact that the displacement \(\widetilde{u}_{j}\) created by cell j is given by a multipole expansion (8-9) with the corrective magnitudes \(C_{m}^{j}\) and \(D_{m}^{j}\). The sum over \(n\) originates from the fact that after the coordinate transformation when these modes are expressed in terms of the spherical harmonics in the coordinate system of cell \(i\), each mode from cell \(j\) contributes to all the modes of cell \(i\). The functions \(f_{nm}^{Cr}(\widetilde{d}_{ji})\), \(f_{nm}^{Dr}(\widetilde{d}_{ji})\), \(f_{nm}^{C\theta}(\widetilde{d}_{ji})\), and \(f_{nm}^{D\theta}(\widetilde{d}_{ji})\) depend only on the dimensionless distance \(\widetilde{d}_{ji}=\frac{d_{ji}}{R_{0}}\) between the cells. However, similarly to the transformation matrices, these functions depend on whether the indices \(i\) and \(j\) have the same or opposite signs. This follows from the choice of the coordinate systems of the cells that were described earlier, see Fig. 3. If the signs are the same, the functions further depend on the sign of the difference \(|i|-|j|\) that indicates the side at which cell \(j\) is located relative to cell \(i\). Thus we make a distinction between \(\left(f_{nm}^{Cr}\right)_{l}\), \(\left(f_{nm}^{Dr}\right)_{l}\), \(\left(f_{nm}^{C\theta}\right)_{l}\), \(\left(f_{nm}^{D\theta}\right)_{l}\) and \(\left(f_{nm}^{Cr}\right)_{r}\), \(\left(f_{nm}^{Dr}\right)_{r}\), \(\left(f_{nm}^{C\theta}\right)_{r}\), \(\left(f_{nm}^{D\theta}\right)_{r}\) for \(|j|>|i|\) and \(|j|<|i|\), accordingly, with \(i\) and \(j\) of the same signs, and \(\left(f_{nm}^{Cr}\right)_{o}\), \(\left(f_{nm}^{Dr}\right)_{o}\), \(\left(f_{nm}^{C\theta}\right)_{o}\), \(\left(f_{nm}^{D\theta}\right)_{o}\)for \(i\) and \(j\) with opposite signs. The expressions for all cases are given in Appendix A.
We now require that for live cells, the total displacement \(\widetilde{u}_{ii}(\theta_{i})+\sum_{j}\widetilde{u}_{ji}(\theta_{i})\) on the surface of cell \(i\) is isotropic. We begin by considering the simplest (but strictest) regulation scenario, for which not only is this total displacement isotropic, but its magnitude remains equal to the displacement \(u_{0}\) in the absence of interactions between the cells. Moreover, we require that the center of symmetry of each cell does not move. This will be denoted _fixed size fixed position_ (FSFP) regulation. We will also consider three additional activity regulation scenarios in which the interaction causes the cells to change their volume and/or to move, yet they remain spherically symmetric. We denote these regulation scenarios as: _variable size fixed position_ (VSFP), _fixed size variable position_ (FSVP), and _variable size variable position_ (VSVP), see Fig. 5 and Fig. 2 above. For VSFP, cells regulate their shape and rigid body motion, but not their size. In this case, we nullify \(D_{0}\), the first term of the regulating series of each cell responsible for cell size regulation. Similarly, for FSVP, in which cells regulate their shape and size but not their position, we nullify the coefficients \(C_{1}\) and \(D_{1}\), and for VSVP, in which cells regulate their shapes but not their size or position, we nullify \(D_{0}\), \(C_{1}\), and \(D_{1}\). This method is discussed in further detail in Ref. [23].
To preserve isotropic displacement on the surface of cell \(i\) we require that:
\[\widetilde{u}_{rii}(\theta_{i})+\sum_{j\neq i}\widetilde{u}_{rji}(\theta_{i}) \equiv 1, \tag{18}\]
\[\widetilde{u}_{\theta ii}(\theta_{i})+\sum_{j\neq i}\widetilde{u}_{\theta ji}( \theta_{i})\equiv 0. \tag{19}\]
Due to the symmetry of the system and our choice of coordinate systems for the cells, the coefficients of pairs of cells with opposite indices are equal, namely \(C_{m}^{j}=C_{m}^{-j}\) and \(D_{m}^{j}=D_{m}^{-j}\). Thus, for a system of \(k\) cells, we need to write the conditions (18,19) for \(k/2\) cells with a nonrepeating index \(j\) if the total number of the cells is even, and for \(k/2+1\) if it is odd.
Substituting (14,15,16,17) in (18,19) yields:
\[\sum_{n=0}^{\infty}\biggl\{\Bigl[n(n+3-4\nu)C_{n}^{i}-(n+1)(D_{n}^{i}-\sqrt{4\pi}\delta_{n,0})\Bigr]+\sum_{j\neq i}\sum_{m=0}^{\infty}\Bigl[f_{nm}^{Cr}(\widetilde{d}_{ji})C_{m}^{j}+f_{nm}^{Dr}(\widetilde{d}_{ji})(D_{m}^{j}-\sqrt{4\pi}\delta_{m,0})\Bigr]\biggr\}Y_{n}(\theta_{i})=1, \tag{20}\]

\[\sum_{n=0}^{\infty}\biggl\{\bigl[(-n+4-4\nu)C_{n}^{i}+D_{n}^{i}\bigr]+\sum_{j\neq i}\sum_{m=0}^{\infty}\Bigl[f_{nm}^{C\theta}(\widetilde{d}_{ji})C_{m}^{j}+f_{nm}^{D\theta}(\widetilde{d}_{ji})D_{m}^{j}\Bigr]\biggr\}\frac{dY_{n}(\theta_{i})}{d\theta_{i}}=0. \tag{21}\]
Due to the orthogonality of the Legendre polynomials, for these infinite sums to satisfy the cancellation conditions, each term in the sums must cancel independently. Thus for all \(n\geq 1\) we require:
\[n(n+3-4\nu)C_{n}^{i}-(n+1)D_{n}^{i}+\sum_{j\neq i}\sum_{m=0}^{\infty}\Bigl[f_{nm}^{Cr}(\widetilde{d}_{ji})C_{m}^{j}+f_{nm}^{Dr}(\widetilde{d}_{ji})(D_{m}^{j}-\sqrt{4\pi}\delta_{m,0})\Bigr]=0, \tag{22}\] \[(-n+4-4\nu)C_{n}^{i}+D_{n}^{i}+\sum_{j\neq i}\sum_{m=0}^{\infty}\Bigl[f_{nm}^{C\theta}(\widetilde{d}_{ji})C_{m}^{j}+f_{nm}^{D\theta}(\widetilde{d}_{ji})D_{m}^{j}\Bigr]=0. \tag{23}\]
Note that for \(n=0\), from Eqs. (8-9) \(C_{0}\) is irrelevant; thus, we set it to zero. Moreover, since \(Y_{0}\) is a constant, \(\frac{dY_{0}(\theta_{i})}{d\theta_{i}}=0\) and Eq. (21) holds trivially; thus, for \(n=0\) we obtain only one equation, from Eq. (20):
\[-D_{0}^{i}+\sum_{j\neq i}\sum_{m=0}^{\infty}\Bigl[f_{0m}^{Cr}(\widetilde{d}_{ji})C_{m}^{j}+f_{0m}^{Dr}(\widetilde{d}_{ji})(D_{m}^{j}-\sqrt{4\pi}\delta_{m,0})\Bigr]=0. \tag{24}\]
We obtain closure of the infinite coupled linear Eqs. (22-24) by assuming that \(C_{n}=0\) and \(D_{n}=0\) for \(n>n_{\rm max}\), with some arbitrary value of \(n_{\rm max}\), which will determine the accuracy of our calculation. This is justified since we will be interested in large separations between the cells, and since the solutions decay as \(1/r^{n}\), at large \(r\), large \(n\) terms become negligible. We previously verified this numerically by increasing \(n_{\rm max}\) until convergence [23]. According to our findings, for two cells, \(n_{\rm max}=1\) for FP regulation and \(n_{\rm max}=2\) for VP regulation scenarios are sufficient to include the leading terms and to obtain good approximations for the interaction energy. Therefore, we use these values of \(n_{\rm max}\) also in the present analysis of interactions between multiple cells. The resultant expressions for the coefficients \(C_{n}\) and \(D_{n}\) for
Figure 5: Schematic drawings showing how the four possible scenarios of shape regulation depend on whether the size (mode \(n=0\)) and the position (mode \(n=1\)) are regulated or not. Initial shape (solid purple), displacement without (dashed black), and with (solid blue) interaction. In the variable size cases, the cell is maintained spherical, yet its radius changes by \(g\). In the variable position cases, the center of the cell (dot) translates by \(h\)
three and four cells along a straight line are given in Appendix B. We evaluate the forces created by the cell using Eq. (C22-C27) in Appendix C for the stress tensor in the elastic environment. Due to force balance, the cell's active force per unit area is exactly minus this elastic stress.
The case of many cells along a straight line can be solved by approximating it by an infinite, one-dimensional array of cells; an infinite number of neighbors surrounds each cell. Consequently, all cells respond similarly to their environment and create identical displacement fields. Therefore, \(C_{n}^{i}=C_{n}^{j}\) and \(D_{n}^{i}=D_{n}^{j}\) for any \(i\) and \(j\). This reduces the number of unknown coefficients from \(n_{\max}\times k\) in a finite array of \(k\) cells to \(n_{\max}\) in an infinite array, enabling us to define cancellation conditions for a single general cell rather than for \(k/2\) cells.
To be finite and solvable, we include in Eqs. (18,19) only terms coming from a limited number \(k_{\max}\) of neighboring cells, despite the assumption that there is an infinite number of cells. Similarly to \(n_{\max}\), this is justified since the displacement fields created by the cells decay as \(1/r^{n}\), so displacements produced by distant cells become negligible. As shown below, we verify this numerically by increasing \(k_{\max}\) until convergence.
## 6 Interaction Energy
We solved the equations for configurations of two, three, four, and an infinite number of cells on a straight line. For each case, we evaluated the extra work performed by each cell \(i\) by terminating the infinite sums at \(n_{\max}=1\) for FP regulation and at \(n_{\max}=2\) for VP regulation. For configurations that involved three or more cells, we compared the interaction energy obtained from the _direct solution_ of the multiple-cell geometry with the pair-wise additive prediction assuming _superposition_ of interactions between all pairs of cells within the system. The direct calculation consists of constructing and solving a set of boundary conditions of the form of Eqs. (20-21). The superposition calculation approximates three-, four-, and many-body interactions by summing all the two-cell interactions in the system.
The total elastic energy stored in the medium surrounding the cells is equal to the work performed by all cells to generate their deformations, starting from their undeformed states. Cells apply active forces only on their surfaces, thus the amount of work performed by each cell at any point on its surface may be computed by multiplying the force that the cell applies at that point by the total displacement there, divided by two. The division by two results from the integration starting from the undeformed state and reaching the deformed state as the stress in the system gradually builds up linearly with the growing displacement in our linearly elastic medium [13]. The self-energy of each cell is the elastic energy it generates when it is surrounded by the infinite ECM and is isolated from other cells. We define the interaction energy as the difference between the elastic energy of the system of cells and the sum of all the cells' self-energies and is thus equal to the extra work performed by the cells due to the presence of other cells around them.
We write the extra work performed by cell \(i\) in a case that includes \(k\) cells as:
\[W_{i}^{k}=E_{0}\widetilde{W}_{i}^{k}, \tag{25}\]
where \(E_{0}=8\pi Gu_{0}^{2}R_{0}\) is the cell's self-energy, or the work done by a single, isolated cell that creates on its surface an isotropic displacement \(u_{0}\)[23]. We find that for all the cases that we considered, the dimensionless extra work may be written as
\[\widetilde{W}_{i}^{k}=\frac{q(1-2\nu)A_{i}^{k}}{B(\nu)\widetilde{d}^{\alpha}}. \tag{26}\]
Here, \(A_{i}^{k}\) is a numerical prefactor that depends on the number \(k\) of cells in the system and on the index \(i\) of the cell within the system, but which does not depend on the medium's Poisson ratio \(\nu\). We find that this dependence may be included in \(B(\nu)\), which is the same for all cells within the system and for any number of cells in the system. Finally, \(\alpha\) is the exponent of the power law decay of the interaction energy with the distance between the cells. We find that the sign of the extra work is \(q=+1\) for FS and \(q=-1\) for VS. These signs are consistent with the theoretical understanding that FS refers to displacement homeostasis, which leads to repulsion between cells and VS to stress homeostasis, which leads to attraction [7; 13; 23]. Table 1 shows the values of \(A_{i}^{k}\), \(\alpha\), and \(B\) for different cells in configurations that include different numbers of
cells on a straight line, and for the different position regulation scenarios. Note that by symmetry, \(A_{j}^{k}=A_{-j}^{k}\).
For an infinite array of cells, all cells are equivalent. This symmetry cancels the displacements produced by the cell's neighbors, thus its position remains fixed even without position regulation, and position-regulating terms vanish in all regulation scenarios. Thus, the distinction between FP and VP becomes irrelevant. Nonetheless, we refer to the results here as VP regulation, since the positions of the cells are not actively regulated by the cells similar to the VP scenarios in the two, three, and four cell configurations.
We summed these additional works per cell to evaluate the direct interaction energy for a configuration of \(k\) cells,
\[E^{k}=\frac{q(1-2\nu)A_{\rm tot}^{k}E_{0}}{B(\nu)\bar{d}^{\alpha}}, \tag{27}\]
where \(A_{\rm tot}^{k}=\sum_{i}A_{i}^{k}\) is also given in Table 1.
We find that the sign of the interaction energy, given by \(q\), as well as the scaling with distance, given by the exponent \(\alpha\), are independent of the number of cells. Moreover, for the three- and four-cell cases, the additional work performed by the central cells is greater than that performed by the side cells, namely \(A_{0}^{3}>A_{1}^{3}\) and \(A_{1}^{4}>A_{2}^{4}\). This is due to the fact that the central cells are closer to the rest of the cells compared to the side cells. Using the direct method, we did not find a closed-form solution for an infinite array of cells on a straight line. Thus, the number of interacting neighbors included in computations, in this case, is limited. Nevertheless, the addition of interacting neighbors does not influence the sign \(q\) of the interaction or the scaling exponent \(\alpha\) with cell-cell distances. The energy remains proportional to \(\bar{d}^{-6}\), and only the coefficient \(A_{i}^{\infty}\) is affected by the number of neighbors included in the calculation. Figure 6 shows how the value of \(A_{i}^{\infty}\) converges as the number of interacting cells grows, and the value given in Table 1 is for the largest number of cells that we considered, \(k_{\rm max}=39\).
If the interaction energy was pair-wise additive, one could treat the interactions of three, four, and many cells as combinations of two-cell interactions between all the cells in each configuration. The total interaction energy would then be equal
\begin{table}
\begin{tabular}{|l|c|c|c|} \hline & & FP & VP \\ \hline & \(\alpha\) & \(4\) & \(6\) \\ & B & \((5-6\nu)\) & \((4-5\nu)\) \\ \hline Two Cells & \(A_{1}^{2}\) & \(1\) & \(5\) \\ & \(A_{\rm tot}^{2}\) & \(2\) & \(10\) \\ \hline Three Cells & \(A_{1}^{3}\) & \(\frac{5}{16}\) & \(\frac{685}{64}\) \\ & \(A_{0}^{3}\) & \(\frac{5}{2}\) & \(\frac{45}{4}\) \\ & \(A_{\rm tot}^{3}\) & \(\frac{25}{8}\) & \(\frac{1045}{32}\) \\ & \(A_{\rm tot,s}^{3}\) & \(\frac{33}{8}\) & \(\frac{645}{32}\) \\ & \(\Delta A_{\rm tot}^{3}\) & \(-1\) & \(\frac{25}{2}\) \\ \hline Four Cells & \(A_{2}^{4}\) & \(\frac{11}{162}\) & \(\frac{8510}{729}\) \\ & \(A_{1}^{4}\) & \(\frac{263}{162}\) & \(\frac{23635}{1458}\) \\ & \(A_{\rm tot}^{4}\) & \(\frac{274}{81}\) & \(\frac{40655}{729}\) \\ & \(A_{\rm tot,s}^{4}\) & \(\frac{2033}{324}\) & \(\frac{10165}{324}\) \\ & \(\Delta A_{\rm tot}^{4}\) & \(-\frac{937}{324}\) & \(\frac{71135}{2916}\) \\ \hline Infinite Array & \(A_{i}^{\infty}\) & & \(28.90\) \\ & \(A_{i,s}^{\infty}\) & & \(\frac{2\pi^{4}}{9}=21.65\) \\ & \(\Delta A_{i}^{\infty}\) & & \(7.35\) \\ \hline \end{tabular}
\end{table}
Table 1: Coefficients for additional work Eq. (26) done by the cells as a result of mechanical interaction with their neighbors, for different scenarios of position regulation.
Figure 6: Convergence of the coefficient \(A_{i}^{\infty}\) with the number of cells \(k_{\rm max}\) included in the calculation for an infinite array of cells. Exact evaluation (red squares) taking into account interactions of \(3\) to \(39\) cells. Approximate fit (dashed line) given by \(A_{i}^{\infty}=84.42x^{4}+28.88x^{3}-99.56x^{2}+0.1618x+28.90\) where \(x=1/k_{max}\)
to the sum of energies of interactions between all pairs of cells and may be evaluated using the two-cell results given in Table 1. For example, we consider the interaction energy between three cells in the FSFP case by decomposing it into two similar interactions between the side cells and the central cell and the interaction between the cells on opposite sides. Since the distance between these cells equals twice the distance \(d\) between the side cell and the central cell, the interaction energy becomes:
\[\widetilde{W}_{\text{tot,s}}^{3}=\left[2\cdot 2\frac{1}{ \widetilde{d}^{4}}+2\frac{1}{\left(2\widetilde{d}\right)^{4}}\right]\frac{(1-2 \nu)}{5-6\nu}\] \[=\frac{33}{8}\frac{(1-2\nu)}{5-6\nu}\frac{1}{\widetilde{d}^{4}} \tag{28}\]
In the same manner, we evaluate the added work performed by a cell in an infinite array of cells as a sum of pair interactions with each cell on both sides, where according to Table 1, in the VP cases, the interaction energy of each pair equals \(\frac{10(1-2\nu)}{5-6\nu}\frac{1}{\widetilde{d}^{4}}\). Thus,
\[\widetilde{W}_{\text{tot,s}}^{\infty}=\left[2\cdot\sum_{k=1}^{ \infty}\frac{1}{\left(k\cdot\widetilde{d}\right)^{4}}\right]\frac{10(1-2\nu) }{5-6\nu}\] \[=\frac{2\pi^{4}}{9}\frac{1-2\nu}{5-6\nu}\frac{1}{\widetilde{d}^{4}}. \tag{29}\]
We denote the results obtained from this superposition calculation by the subscript s, and list these results as well in Table 1. We denote the difference between the direct many-body calculation and the superposition expression as \(\Delta A\equiv A_{\text{tot}}-A_{\text{s}}\).
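The bookkeeping behind the superposition entries of Table 1 can be checked with a few lines of code; the sketch below (not from the paper) sums the two-cell FP result \(A_{\rm tot}^{2}=2\), \(\alpha=4\) over all pairs of an equally spaced chain of \(k\) cells:

```python
# Superposition coefficients for k equally spaced cells from the two-cell result:
# a pair separated by m*d contributes A_pair / m**alpha (here FP: A_pair = 2, alpha = 4).
from fractions import Fraction

def superposition_A(k, A_pair, alpha):
    total = Fraction(0)
    for i in range(k):
        for j in range(i + 1, k):
            total += Fraction(A_pair) / Fraction((j - i) ** alpha)
    return total

print(superposition_A(3, 2, 4))   # 33/8,     matching A_tot,s^3 in Table 1
print(superposition_A(4, 2, 4))   # 2033/324, matching A_tot,s^4 in Table 1
```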
From Table 1, we see that the coefficients \(A_{i}\) found using the two methods for the same configurations are different. This difference is not obvious since, in linear elasticity, typically results may be superimposed when analyzing more complicated arrangements. We conclude that the active response of the cells to their neighbors produces non-linear intercellular interactions, even in the case of linear elasticity. Considering that each active cell in the presence of other active cells is performing extra work, one might expect the interaction energy in all cases to be higher in direct method solutions than in superposition method solutions. However, in three- and four-cell configurations, this assumption is correct in the VP but not in the FP regulation scenarios. Namely, for VP, \(A_{\text{tot}}>A_{s}\), while for FP \(A_{\text{tot}}<A_{s}\). This unexpected result follows from canceling the central cell's rigid body motion due to the configuration's symmetry. For FP, a large part of the added work comes from the interaction between the size regulation of the cell (\(n=0\) mode) and the forces that regulate the motion of the neighbor (\(n=1\) mode) as a rigid body. The added work done by the central cell is significant due to its interaction with two neighbors on both sides. The work done by the side cells is small due to the absence of first-mode forces created by the central cell, see Fig. 7(a). Most of the added work done by the side cells follows from their interaction with the relatively distant cells on the opposite sides and is small due to the fourth power of normalized distance \(\widetilde{d}\).
Figure 7: Interactions between three cells in FP (a) and VP (b) regulation cases. For FP, the cells on the sides apply forces to regulate their motion. Due to the symmetry, the central cell does not need to apply forces to stay in place. For VP, no cell regulates its motion. Initial shape (purple), forces applied by each cell (red arrows), corresponding self displacements (dashed black), displacements caused by the other cells (dashed green), total displacement (blue), the center of the interaction-free displacement (black dot), the center of the displacements with the interactions (blue dot). For illustration purposes, the initial distance between the spheres was set to \(d=3R_{0}\), and the self-displacement to \(u_{0}=0.4R_{0}\). The Poisson ratio is \(\nu=0.3\)
In contrast to FP cases, in VP cases position-regulating terms are assumed to vanish and are therefore not affected by symmetry, see Fig. 7(b). The first mode is the only one that includes an antisymmetric function; thus, only this mode will be affected by symmetry.
## 7 Conclusions
We model live cells in the ECM as spherical active force dipoles, which are surrounded by a linear elastic environment. For isotropic active forces and thus isotropic self-displacements, the interaction energy between cells vanishes. Hence, we distinguish between this dead behavior, in which the cells apply constant forces and self-displacements on their surface, and live, regulatory behavior, in which cells adjust their active forces and self-displacements in response to changes that they sense in their environment. This live behavior of cells is similar to interaction between induced electric dipoles on particles with charge regulation.
We examine systems with three, four, and infinite numbers of cells on a line. We solved the interaction energy for these configurations for four different types of self-regulation: on top of preserving their spherical shape, cells can also preserve their volume or their position, or both. Similarly to the interaction between two such shape-regulating cells [23], for fixed position, we found the interaction energy to be inversely proportional to the distance between the cells to the fourth power, and for variable position, to its sixth power. As in the case of two cells, also here, we found that for fixed volume, multiple cells are repelled from each other, and for variable volume they are attracted to each other.
We compared the results of direct computation of the many-body configurations to the sum of all two-cell interactions for the same configurations. A comparison of the results shows that the superposition method does not predict the energy of multiple-cell configurations. We also found that if cells regulate their position, the many-body interaction energy is smaller than the sum of interactions between all pairs of cells in the system, while for cells that do not regulate their position, the many-body interaction is larger than the superposition prediction. We conclude that the active response of cells to their neighbors produces non-linear intercellular connections even in the case of linear elasticity.
We have solved the deformation fields for the case in which the rigidity of the cells is the same as that of their environment. Biological cells, however, are complex entities whose rigidity varies from place to place and from the rigidity of the ECM. To relate our results to live cells, we describe each of them as a mechanism that applies forces on the surface and responds by their variation to the application of external force or displacement. The displacements and forces applied by a cell may be divided into "dead" and "live" parts. While the dead part of the forces or displacements would remain the same if the cells were dead and retained their elastic properties, the live part depends on their programmed behavior and is generated by the contraction of their actomyosin networks. Since the resultant force and displacement are the sum of those two parts, cells may create such a live response so that the resulting forces and displacements will coincide with the case considered here, for which their rigidity is identical to the rigidity of their environment. Even if cells do not behave in this manner, our results highlight the many-body nature of matrix-mediated elastic interactions between cells, and specifically the different behavior for different regulation scenarios.
Following our work on multiple cells along a straight line, it would be interesting to extend our work to two-dimensional arrangements, the simplest of which would be three cells at the corners of a triangle. Since there will be no cylindrical symmetry like in our present study, a more complicated analysis of displacements on the surface of the cells will be needed to model their behavior. It would also be interesting to expand our present work to the case of aspherical cells, for example, oblate spheroids. In this case, the interaction energy between two such cells would depend on the distance between their centers and on the relative angle between their axes. We limited ourselves to cells surrounded by a linearly elastic medium, so that we could exactly solve their interactions analytically. It would be interesting to test our qualitative predictions by solving with numerical simulations situations with nonlinear response of the medium.
## Appendix A The functions
We distinguish between three different cases for the functions \(\big{(}f_{nm}^{Cr}\big{)}_{t}\), \(\big{(}f_{nm}^{Dr}\big{)}_{t}\), \(\big{(}f_{nm}^{C\theta}\big{)}_{t}\) and \(\big{(}f_{nm}^{D\theta}\big{)}_{t}\) appearing in Eqs. (16,17). Thus the index \(t\) in the following expressions may be equal to \(l\), \(r\) or \(o\):
\[\Big{(}f_{nm}^{Cr}(\widetilde{d})\Big{)}_{t} =2\pi\int_{0}^{\pi}\big{[}\big{(}g_{m}^{1}\big{)}_{t}\,Y_{m}\left( \psi\right)\] \[+\big{(}g_{m}^{2}\big{)}_{t}\,Y_{m+1}\left(\psi\right)\big{]}Y_{n }(\theta_{1})d\theta_{1}, \tag{11}\] \[\Big{(}f_{nm}^{Dr}(\widetilde{d})\Big{)}_{t} =2\pi\int_{0}^{\pi}\big{[}\big{(}g_{m}^{3}\big{)}_{t}\,Y_{m}\left( \psi\right)\] \[+\big{(}g_{m}^{4}\big{)}_{t}\,Y_{m+1}\left(\psi\right)\big{]}Y_{n }(\theta_{1})d\theta_{1},\] (12) \[\Big{(}f_{nm}^{C\theta}(\widetilde{d})\Big{)}_{t} =\frac{\sqrt{\pi(2n+1)}}{n(n+1)}\int_{0}^{\pi}\big{[}\big{(}g_{m}^ {5}\big{)}_{t}\,Y_{m}\left(\psi\right)\] \[+\big{(}g_{m}^{6}\big{)}_{t}\,Y_{m+1}\left(\psi\right)\big{]}\] \[\cdot[\cos(\theta_{1})P_{n}(\theta_{1})-P_{n+1}(\theta_{1})]d \theta_{1}],\] (13) \[\Big{(}f_{nm}^{D\theta}(\widetilde{d})\Big{)}_{t} =\frac{\sqrt{\pi(2n+1)}}{n(n+1)}\int_{0}^{\pi}\big{[}\big{(}g_{m}^ {7}\big{)}_{t}\,Y_{m}\left(\psi\right)\] \[+\big{(}g_{m}^{8}\big{)}_{t}\,Y_{m+1}\left(\psi\right)\big{]}\] \[\cdot[\cos(\theta_{1})P_{n}(\theta_{1})-P_{n+1}(\theta_{1})]d \theta_{1}]. \tag{14}\]
We used the identity [28]:
\[\frac{dY_{n}(\theta)}{d\theta} =\sqrt{\frac{2n+1}{4\pi}}(n+1)\] \[\cdot\left[\frac{P_{n+1}(\cos\theta)}{\sin\theta}-\cot\theta P_{ n}(\cos\theta)\right] \tag{15}\]
to rewrite the derivatives \(\frac{dY_{n}(\theta)}{d\theta}\) in terms of \(\theta\), and for the sake of brevity we defined:
\[\big{(}g_{m}^{1}\big{)}_{t}=\left\{\begin{array}{ll}\frac{\big{(}h_{m}^{1}-h_{m}^{2}\big{)}\sin(\theta_{1})}{\zeta_{t}^{m+1}}&for\ t=o,r\\ \frac{\big{(}h_{m}^{1}+h_{m}^{2}\big{)}\sin(\theta_{1})}{\zeta_{t}^{m+1}}&for\ t=l\end{array}\right. \tag{16}\]
\[h_{m}^{1}=\widetilde{d}^{2}\left[m^{2}-m(3-4\nu)-4(1-\nu)\right] \\ +m^{2}+m(3-4\nu) \tag{17}\]
\[h_{m}^{2}=2\widetilde{d}\big{(}m^{2}-2+2\nu\big{)}\cos(\theta_{1}) \tag{18}\]
\[\big{(}g_{m}^{3}\big{)}_{t}=\left\{\begin{array}{ll}\frac{(m+1)\sin(\theta_{1})}{\zeta_{t}^{m+1}}&for\ t=o,l\\ -\frac{(m+1)\sin(\theta_{1})}{\zeta_{t}^{m+1}}&for\ t=r\end{array}\right., \tag{19}\]
\[\big{(}g_{m}^{5}\big{)}_{t}=\left\{\begin{array}{ll}\frac{\big{(}h_{m}^{3}-h_{m}^{4}\big{)}}{\zeta_{t}^{m+1}}&for\ t=o,r\\ \frac{\big{(}h_{m}^{3}+h_{m}^{4}\big{)}}{\zeta_{t}^{m+1}}&for\ t=l\end{array}\right., \tag{20}\]
\[h_{m}^{3}=\left(\widetilde{d}^{2}+1\right)(m\!+\!1)(m\!-\!4\!+\!4\nu)\cos( \theta_{1}) \tag{21}\]
\[h_{m}^{4}=\widetilde{d}[m(m-6+8\nu)-6(1-\nu)\\ +\big{(}m^{2}-2+2\nu\big{)}\cos(2\theta_{1})] \tag{22}\]
\[\big{(}g_{m}^{6}\big{)}_{t}=\left\{\begin{array}{ll}-(\widetilde{d}\cos(\theta_{1})-1)\,\frac{(m+1)(m-4+4\nu)}{\zeta_{t}^{m}}&for\ t=o,l\\ (\widetilde{d}\cos(\theta_{1})-1)\,\frac{(m+1)}{\zeta_{t}^{m+2}}&for\ t=r\end{array}\right. \tag{23}\]
\[\big{(}g_{m}^{7}\big{)}_{t}=-(m+1)\cos(\theta_{1})/\zeta_{t}^{m+1}, \tag{24}\]
\[\big{(}g_{m}^{8}\big{)}_{t}=\left\{\begin{array}{ll}(\widetilde{d}\cos(\theta_{1})-1)\,\frac{(m+1)}{\zeta_{t}^{m+2}}&for\ t=o,l\\ -(\widetilde{d}\cos(\theta_{1})-1)\,\frac{(m+1)}{\zeta_{t}^{m+2}}&for\ t=r\end{array}\right. \tag{25}\]
where
\[\psi_{o} \equiv\left[\widetilde{d}-\cos(\theta_{1})\right]/\zeta_{o}, \tag{26}\] \[\psi_{l} \equiv\left[\widetilde{d}+\cos(\theta_{1})\right]/\zeta_{l},\] (27) \[\psi_{r} \equiv-\left[\widetilde{d}-\cos(\theta_{1})\right]/\zeta_{r},\] (28) \[\zeta_{o}=\zeta_{r} \equiv\sqrt{\widetilde{d}^{2}-2\widetilde{d}\cos(\theta_{1})+1},\] (29) \[\zeta_{l} \equiv\sqrt{\widetilde{d}^{2}+2\widetilde{d}\cos(\theta_{1})+1}. \tag{30}\]
## Appendix B Coefficients \(D_{0}\), \(C_{1}\), \(D_{1}\), \(C_{2}\) and \(D_{2}\) for large distances
Table 1 presents expressions for the coefficients \(D_{0}\), \(C_{1}\), \(D_{1}\), \(C_{2}\), and \(D_{2}\) in the case of three cells in a row. Similarly, we present expressions for the same coefficients for four cells in a row in Table 2. In both tables, the coefficients of side cells are marked by index \(1\) and those of central cells by index \(2\).
## Appendix C The stress tensor
Here, we develop the expressions for the stress tensor in the cases discussed in the paper. The applied displacements are symmetric about an axis passing through the centers of the cells. The expressions are taken from [26] for the case of the displacement field given by Eqs. (5-6), excluding the first term in Eq. (5), which corresponds to volume change. The stress tensor may be written in the following form in this case:
\[\boldsymbol{\tau}=\sum_{n=0}^{\infty}\begin{pmatrix}\tau_{rr}^{(n)}&\tau_{r \theta}^{(n)}&\tau_{r\varphi}^{(n)}\\ \tau_{\theta r}^{(n)}&\tau_{\theta\theta}^{(n)}&\tau_{\theta\varphi}^{(n)}\\ \tau_{\varphi r}^{(n)}&\tau_{\varphi\theta}^{(n)}&\tau_{\varphi\varphi}^{(n)} \end{pmatrix}\] (C22)
The stress tensor is symmetric \(\tau_{ij}=\tau_{ji}\) and thus only six components are to be evaluated. We define the dimensionless stress tensor \(\widetilde{\boldsymbol{\tau}}=\frac{\boldsymbol{\tau}}{G}\frac{R_{0}}{u_{0}}\), the elements of which are given by:
\[\widetilde{\tau}_{RR}^{(n)}= 2\bigg{[}-\frac{C_{n}}{\widetilde{r}_{i}^{\,n+1}}n(n^{2}+3n-2\nu)\] \[+\frac{D_{n}}{\widetilde{r}_{i}^{\,n+3}}(n+1)(n+2)\bigg{]}Y_{n} \left(cos\theta_{i}\right),\] (C23)
\[\widetilde{\tau}_{R\theta}^{(n)}= 2\bigg{[}\frac{C_{n}}{\widetilde{r}_{i}^{\,n+1}}(n^{2}-2+2\nu)-\frac{D_{n}}{\widetilde{r}_{i}^{\,n+3}}(n+2)\bigg{]}\frac{dY_{n}\left(\cos\theta_{i}\right)}{d\theta_{i}},\] (C24) \[\widetilde{\tau}_{\theta\theta}^{(n)}= 2\bigg{\{}\bigg{[}\frac{C_{n}}{\widetilde{r}_{i}^{\,n+1}}n(n^{2}-2n-1+2\nu)-\frac{D_{n}}{\widetilde{r}_{i}^{\,n+3}}(n+1)^{2}\bigg{]}Y_{n}(\cos\theta_{i})+\cdots\bigg{\}},\] (C25) \[\widetilde{\tau}_{\varphi\varphi}^{(n)}= 2\bigg{\{}\bigg{[}\frac{C_{n}}{\widetilde{r}_{i}^{\,n+1}}n(n+3-4n\nu-2\nu)-\frac{D_{n}}{\widetilde{r}_{i}^{\,n+3}}(n+1)\bigg{]}Y_{n}(\cos\theta_{i})+\bigg{[}\frac{C_{n}}{\widetilde{r}_{i}^{\,n+1}}(-n+4-4\nu)+\frac{D_{n}}{\widetilde{r}_{i}^{\,n+3}}\bigg{]}\frac{dY_{n}(\cos\theta_{i})}{d\theta_{i}}\cot\theta_{i}\bigg{\}},\] (C26) \[\widetilde{\tau}_{R\varphi}^{(n)}=\widetilde{\tau}_{\theta\varphi}^{(n)}=0.\] (C27)
After obtaining the coefficients \(C_{n}\) and \(D_{n}\), we compute the extra work \(W_{i}^{k}\) performed by cell \(i\) in a configuration that consists of \(k\) cells, to generate total displacement \(\overrightarrow{u}\) in accordance with Eqs. (14-17) and the type of the regulation:
\[W_{i}^{k}=\frac{1}{2}\int_{S}\left(\overrightarrow{u}\cdot\overrightarrow{F}- \overrightarrow{u_{0}}\cdot\overrightarrow{F_{0}}\right)ds.\] (C28)
Here, the integration is over the spherical surface of the cell \(i\), and \(\overrightarrow{F}\) is the force per unit area applied by it on its environment. Due to force balance, the active forces applied by each cell are equal and opposite to the forces applied on it by the environment: \(\overrightarrow{F}=-\boldsymbol{\tau}\cdot\hat{r}\), where \(\hat{r}\) is the outward pointing unit vector normal to the surface of the cell after its deformation and movement, and \(\boldsymbol{\tau}\) is the stress tensor given above that arises in the elastic environment of each live cell in response to the total displacement \(\overrightarrow{u}\) on its surface. Due to the spherical shape regulation of the cells, there are no displacements in the azimuthal direction in this study. \(\overrightarrow{F_{0}}=\frac{4Gu_{0}}{R_{0}}\hat{r}\) is the force per unit area on the surface of a single cell with known isotropic displacement \(\overrightarrow{u_{0}}=u_{0}\hat{r}\) on its surface and without interactions with other spheres, and thus \(\frac{1}{2}\oint_{S}\overrightarrow{u_{0}}\cdot\overrightarrow{F_{0}}\,ds=8\pi Gu_{0}^{2}R_{0}\).
## Author Contributions
RG and YS formulated the problem, performed the calculations, analyzed the results, and wrote the paper.
## Data Availability Statement
All data is available within the paper.
|
2309.06060 | Maximal regularity under quadratic estimates | In this Short Note we complement the intriguing harmonic analytic perspective
due to P. Auscher and A. Axelsson for the abstract evolution equations. This
concerns a unified approach to temporally weighted estimates for the forward
and backward maximal regularity operators in presence of quadratic estimates
and functional calculi. In particular we provide several invariance properties
for the maximal regularity operators either in evolution form or in balayage
form. | Yi C. Huang | 2023-09-12T08:57:36Z | http://arxiv.org/abs/2309.06060v1 | # Maximal regularity under quadratic estimates
###### Abstract.
In this Short Note we complement the intriguing harmonic analytic perspective due to P. Auscher and A. Axelsson for the abstract evolution equations. This concerns a unified approach to temporally weighted estimates for the forward and backward maximal regularity operators in presence of quadratic estimates and functional calculi. In particular we provide several invariance properties for the maximal regularity operators either in evolution form or in balayage form.
Key words and phrases:Abstract evolution equations, weighted maximal regularity, quadratic estimates, functional calculi, abstract Hardy spaces, balayage operators, invariant subspaces 2020 Mathematics Subject Classification: Primary 35K90; Secondary 47A60 Research of the author is supported by the National NSF grant of China (no. 11801274). The author would like to thank Li Liu (YZU) for calling his attention back to Weighted Maximal Regularity, and Jian-Hua Chen (HNUST) for highlighting quadratic estimates in Control Theory.
## 2. Invariance properties for maximal regularity operators
Our aim is to complement the harmonic analytic perspective due to Auscher and Axelsson [1] in their unified approach to the \(\mathsf{L}^{2}(\mathbb{R}_{+},\mathsf{t}^{\mathrm{s}1}\mathsf{d}\mathsf{t}; \mathbf{H})\) estimates for the maximal regularity operators in presence of quadratic estimates. Weighted maximal regularity estimates are motivated by elliptic boundary value problems [1], and are useful also in the study of abstract evolution equations, see [1, Section 2].
Recall that the forward maximal regularity operator \(\mathcal{M}_{+}\) is defined by
\[\mathcal{M}_{+}(\mathsf{f})(\mathsf{t})=\int_{0}^{\mathsf{t}}\mathsf{A}e^{-( \mathsf{t}-\mathsf{s})\mathsf{A}}\mathsf{f}(\mathsf{s})\mathsf{d}\mathsf{s}, \tag{2.1}\]
while the backward maximal regularity operator \(\mathcal{M}_{-}\) is given by
\[\mathcal{M}_{-}(\mathsf{f})(\mathsf{t})=\int_{\mathsf{t}}^{\infty}\mathsf{A}e ^{-(\mathsf{s}-\mathsf{t})\mathsf{A}}\mathsf{f}(\mathsf{s})\mathsf{d}\mathsf{s}. \tag{2.2}\]
In the notations \(\mathcal{M}_{\pm}\), we omit the dependence in \(\mathsf{A}\). The operators \(\mathcal{M}_{\pm}\) are associated to the evolution equations (1.1)-(1.2) as for appropriate \(\mathsf{f}\), we have
\[\mathsf{A}\mathsf{u}=\mathcal{M}_{+}(\mathsf{f})\quad\text{ and }\quad\mathsf{A} \mathsf{v}=-\mathcal{M}_{-}(\mathsf{f}).\]
Therefore, maximal regularity problems translate into boundedness of \(\mathcal{M}_{\pm}\), which are typical examples of singular integral operators with operator-valued kernels.
Recall that \(\mathsf{A}\) is said to satisfy the quadratic estimate \((\mathsf{Q})_{\mathsf{A}}\) if for all \(\mathsf{h}\in\mathbf{H}\),
\[\big{\|}\big{\|}\big{\|}\mathsf{s}\mathsf{A}e^{-\mathsf{s}\mathsf{A}}\mathsf{h}\big{\|}\big{\|}\big{\|}_{-1}\leq C\,\|\mathsf{h}\|_{\mathbf{H}}, \tag{2.3}\]
where \(\left|\kern-1.075pt\left|\kern-1.075pt\left|\cdot\right|\kern-1.075pt\right| \kern-1.075pt\right|_{-1}=\left|\kern-1.075pt\left|\cdot\right|\kern-1.075pt \right|_{\mathsf{L}^{2}(\mathbb{R}_{+},\mathsf{s}^{-1}\mathsf{d}\mathsf{s}; \mathbf{H})}\). One also refers to \(\left|\kern-1.075pt\left|\kern-1.075pt\left|\mathsf{s}\mathsf{A}e^{-\mathsf{ s}\mathsf{A}}\mathsf{h}\right|\kern-1.075pt\right|\kern-1.075pt\right|_{-1}\) as square function norm of \(\mathsf{h}\), a notion widely used for the abstract Hardy spaces in harmonic analysis and elliptic boundary value problems, see Auscher [1], Auscher-McIntosh-Russ [1], Hofmann-Mayboroda-McIntosh [13], and Auscher-Stahlhut [1].
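For orientation, in the scalar model case \(\mathbf{H}=\mathbb{C}\) and \(\mathsf{A}=\lambda>0\), one computes directly
\[\int_{0}^{\infty}\big{\|}\mathsf{s}\lambda e^{-\mathsf{s}\lambda}\mathsf{h}\big{\|}_{\mathbf{H}}^{2}\,\frac{\mathsf{d}\mathsf{s}}{\mathsf{s}}=\|\mathsf{h}\|_{\mathbf{H}}^{2}\int_{0}^{\infty}\mathsf{s}\lambda^{2}e^{-2\mathsf{s}\lambda}\,\mathsf{d}\mathsf{s}=\frac{\|\mathsf{h}\|_{\mathbf{H}}^{2}}{4},\]
so in this model case the quadratic estimate (2.3) holds with \(C=\frac{1}{2}\).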
The main results of this Short Note can be summarized as below.
**Theorem 2.1**.: _Let \(\mathsf{N}\in\mathbb{N}_{+}=\{1,2,\cdots\}\) and \(\mathcal{M}_{\pm}\) be given as in (2.1)-(2.2). We have_
_i) (Evolution Formulae) Suppose that \((\mathsf{Q})_{\mathsf{A}}\) holds. For \(\mathsf{f}_{0}\in\mathbf{H}\), then_
\[\mathcal{M}_{+}\left((\mathsf{s}\mathsf{A})^{\mathsf{N}}\mathsf{e}^{-\mathsf{ s}\mathsf{A}}\mathsf{f}_{0}\right)(\mathsf{t})=\frac{1}{\mathsf{N}+1}(\mathsf{t} \mathsf{A})^{\mathsf{N}+1}\mathsf{e}^{-\mathsf{t}\mathsf{A}}\mathsf{f}_{0}, \tag{2.4}\]
\[\mathcal{M}_{-}\left(\mathsf{A}e^{-\mathsf{N}\mathsf{s}\mathsf{A}}\mathsf{f}_ {0}\right)(\mathsf{t})=\frac{1}{\mathsf{N}+1}\mathsf{A}e^{-\mathsf{N}\mathsf{ t}\mathsf{A}}\mathsf{f}_{0}, \tag{2.5}\]
_and the two formulae hold respectively in \(\mathsf{L}^{2}(\mathbb{R}_{+},\mathsf{t}^{-1}\mathsf{d}\mathsf{t};\mathbf{H})\) and \(\mathsf{L}^{2}(\mathbb{R}_{+},\mathsf{t}\mathsf{d}\mathsf{t};\mathbf{H})\)._
_ii) (Balayage Formulae) Suppose that \((\mathsf{Q})_{\mathsf{A}^{*}}\) holds. For \(\mathsf{f}\in\mathsf{L}^{2}(\mathbb{R}_{+},\mathsf{t}^{-1}\mathsf{d}\mathsf{t} ;\mathbf{H})\), then_
\[\int_{0}^{\infty}\mathsf{A}e^{-\mathsf{N}\mathsf{t}\mathsf{A}}\mathcal{M}_{+}( \mathsf{f})(\mathsf{t})\mathsf{d}\mathsf{t}=\frac{1}{\mathsf{N}+1}\int_{0}^{ \infty}\mathsf{A}e^{-\mathsf{N}\mathsf{s}\mathsf{A}}\mathsf{f}(\mathsf{s}) \mathsf{d}\mathsf{s}; \tag{2.6}\]
_and for \(\mathsf{f}\in\mathsf{L}^{2}(\mathbb{R}_{+},\mathsf{t}\mathsf{d}\mathsf{t}; \mathbf{H})\), then_
\[\int_{0}^{\infty}(\mathsf{t}\mathsf{A})^{\mathsf{N}}\mathsf{e}^{-\mathsf{t} \mathsf{A}}\mathcal{M}_{-}(\mathsf{f})(\mathsf{t})\mathsf{d}\mathsf{t}=\frac{1 }{\mathsf{N}+1}\int_{0}^{\infty}(\mathsf{s}\mathsf{A})^{\mathsf{N}+1}e^{- \mathsf{s}\mathsf{A}}\mathsf{f}(\mathsf{s})\mathsf{d}\mathsf{s}; \tag{2.7}\]
_and both formulae hold weakly in \(\mathbf{H}\)._
_iii) (Endpoint Balayage Formulae) Suppose that \((Q)_{A^{*}}\) holds. For \(f\in L^{2}(\mathbb{R}_{+},tdt;\mathbf{H})\), if in addition \(\int_{0}^{\infty}e^{-sA}f(s)ds\) converges weakly in \(\mathbf{H}\), then_
\[\int_{0}^{\infty}e^{-NtA}\mathcal{M}_{+}(f)(t)dt=\frac{1}{N+1}\int_{0}^{\infty }e^{-NsA}f(s)ds; \tag{2.8}\]
_and for \(f\in L^{2}(\mathbb{R}_{+},t^{-1}dt;\mathbf{H})\), if in addition \(\int_{0}^{\infty}Ae^{-sA}f(s)ds=0\), then_
\[\int_{0}^{\infty}t^{N-1}A^{N}e^{-tA}\mathcal{M}_{-}(f)(t)dt=\frac{1}{N}\int_{0 }^{\infty}s^{N}A^{N+1}e^{-sA}f(s)ds; \tag{2.9}\]
_and both formulae hold weakly in \(\mathbf{H}\)._
Proof.: By formal computations plus Fubini theorem.
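For instance, the evolution formulae follow from the formal computation
\[\mathcal{M}_{+}\big{(}(\mathsf{s}\mathsf{A})^{\mathsf{N}}e^{-\mathsf{s}\mathsf{A}}\mathsf{f}_{0}\big{)}(\mathsf{t})=\int_{0}^{\mathsf{t}}\mathsf{A}e^{-(\mathsf{t}-\mathsf{s})\mathsf{A}}(\mathsf{s}\mathsf{A})^{\mathsf{N}}e^{-\mathsf{s}\mathsf{A}}\mathsf{f}_{0}\,\mathsf{d}\mathsf{s}=\mathsf{A}^{\mathsf{N}+1}e^{-\mathsf{t}\mathsf{A}}\mathsf{f}_{0}\int_{0}^{\mathsf{t}}\mathsf{s}^{\mathsf{N}}\,\mathsf{d}\mathsf{s}=\frac{1}{\mathsf{N}+1}(\mathsf{t}\mathsf{A})^{\mathsf{N}+1}e^{-\mathsf{t}\mathsf{A}}\mathsf{f}_{0},\]
\[\mathcal{M}_{-}\big{(}\mathsf{A}e^{-\mathsf{N}\mathsf{s}\mathsf{A}}\mathsf{f}_{0}\big{)}(\mathsf{t})=\int_{\mathsf{t}}^{\infty}\mathsf{A}^{2}e^{-((\mathsf{N}+1)\mathsf{s}-\mathsf{t})\mathsf{A}}\mathsf{f}_{0}\,\mathsf{d}\mathsf{s}=\frac{1}{\mathsf{N}+1}\mathsf{A}e^{-\mathsf{N}\mathsf{t}\mathsf{A}}\mathsf{f}_{0},\]
using the semigroup law and the commutation of \(\mathsf{A}\) with \(e^{-\mathsf{t}\mathsf{A}}\); the balayage formulae follow in the same way after an application of Fubini's theorem.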
The above formulae are best understood as invariant subspace properties of \(\mathcal{M}_{\pm}\) either in evolution form or in balayage form. By _evolution_, we mean an extension \(E\) mapping boundary elements in \(\mathbf{H}\) to \(L^{2}_{\mathrm{loc}}(\mathbb{R}_{+};\mathbf{H})\), while by _balayage_ (or, _sweeping_), we mean the dual mapping of \(E\) that sends elements from \(L^{2}_{\mathrm{loc}}(\mathbb{R}_{+};\mathbf{H})\) to boundary elements in \(\mathbf{H}\). Both mappings are extremely useful in the abstract Hardy space theory, see [1, 1]. For concrete balayage operators in connection with \(\overline{\delta}\)-equation and probability, see Amar-Bonami [1] and Labeye-Voisin [13].
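In the scalar model case \(\mathbf{H}=\mathbb{R}\), \(\mathsf{A}=\lambda>0\), the identities can also be checked numerically; the following small sketch (parameter values chosen arbitrarily for illustration) verifies (2.4):

```python
# Numerical check of the evolution formula (2.4) for H = R, A = lambda > 0.
import numpy as np
from scipy.integrate import quad

lam, N, f0 = 1.3, 2, 1.0             # arbitrary illustrative values
t = 0.7

# Left-hand side: M_+((s*lam)^N e^{-s*lam} f0)(t)
integrand = lambda s: lam * np.exp(-(t - s) * lam) * (s * lam) ** N * np.exp(-s * lam) * f0
lhs, _ = quad(integrand, 0.0, t)

# Right-hand side: (t*lam)^{N+1} e^{-t*lam} f0 / (N+1)
rhs = (t * lam) ** (N + 1) * np.exp(-t * lam) * f0 / (N + 1)

print(lhs, rhs)                      # the two values agree up to quadrature error
```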
## 3. Illustrations and further remarks
Note that (2.3) implies (see Albrecht-Duong-McIntosh [1]) that \(\forall\,h\in\mathbf{H}\),
\[\big{\|}\big{\|}\big{\|}\psi_{1}(sA)h\big{\|}\big{\|}\big{\|}_{-1}:=\big{\|}\big{\|}\big{\|}(sA)^{N}e^{-sA}h\big{\|}\big{\|}\big{\|}_{-1}\leq C\|h\|_{\mathbf{H}} \tag{3.1}\]
and
\[\big{\|}\big{\|}\big{\|}\psi_{2}(sA)h\big{\|}\big{\|}\big{\|}_{-1}:=\big{\|}\big{\|}\big{\|}e^{-sA}\left(I-e^{-NsA}\right)h\big{\|}\big{\|}\big{\|}_{-1}\leq C\|h\|_{\mathbf{H}}, \tag{3.2}\]
where \(N\in\mathbb{N}_{+}=\{1,2,\cdots\}\). The functions \(\psi_{1}\) and \(\psi_{2}\) in (3.1)-(3.2) decay at \(0\) and \(\infty\).
For convenience, we also introduce weighted spaces \(\mathfrak{H}_{\alpha}=L^{2}\big{(}\mathbb{R}_{+},t^{\alpha}dt;\mathbf{H})\), \(\alpha\in\mathbb{R}\). Recall the following weighted de Simon's theorem [1] for (1.1)-(1.2): \(\forall\,\alpha<1\),
\[\|Au\|_{\mathfrak{H}_{\alpha}}\leq C\big{\|}f\|_{\mathfrak{H}_{\alpha}}\quad \text{ and }\quad\|Av\|_{\mathfrak{H}_{-\alpha}}\leq C\big{\|}f\|_{\mathfrak{H}_{- \alpha}},\]
which, as we mentioned before, are equivalent to
\[\|\mathcal{M}_{+}(f)\|_{\mathfrak{H}_{\alpha}}\leq C\|f\|_{\mathfrak{H}_{ \alpha}}\quad\text{ and }\quad\|\mathcal{M}_{-}(f)\|_{\mathfrak{H}_{-\alpha}}\leq C \|f\|_{\mathfrak{H}_{-\alpha}}. \tag{3.3}\]
For the endpoint weights, we have the following beautiful reduction [1]:
\[\bigg{\|}\mathcal{M}_{+}(f)-Ae^{-tA}\int_{0}^{\infty}e^{-sA}f(s)\bigg{\|}_{ \mathfrak{H}_{1}}\leq C\|f\|_{\mathfrak{H}_{1}}, \tag{3.4}\]
\[\bigg{\|}\mathcal{M}_{-}(f)-e^{-tA}\int_{0}^{\infty}Ae^{-sA}f(s)\bigg{\|}_{ \mathfrak{H}_{-1}}\leq C\|f\|_{\mathfrak{H}_{-1}}. \tag{3.5}\]
Proofs of (3.3)-(3.5) in [1, Section 1] are independent of quadratic estimates.
Now we illustrate the convergence issues in Theorem 2.1: applying (3.1) and (3.3) we see that the formulae (2.4)-(2.5) hold respectively in \(\mathfrak{H}_{-1}=L^{2}\big{(}\mathbb{R}_{+},t^{-1}dt;\mathbf{H})\) and \(\mathfrak{H}_{1}=L^{2}\big{(}\mathbb{R}_{+},tdt;\mathbf{H})\); using a duality argument, (3.1) for \(A^{*}\), and (3.3) we see that the formulae (2.6)-(2.7) hold weakly in \(\mathbf{H}\). For (2.8), note that (3.2) for \(A^{*}\) implies
\[\int_{0}^{\infty}e^{-sA}f(s)ds-\int_{0}^{\infty}e^{-NsA}f(s)ds\] \[=\int_{0}^{\infty}e^{-sA}\left(I-e^{-(N-1)sA}\right)f(s)ds\]
converges weakly in \(\mathbf{H}\). Hence for the family of integrals \(\int_{0}^{\infty}\mathbf{e}^{-N\mathsf{s}A}\mathsf{f}(\mathsf{s})\mathsf{d}\mathsf{s}\), the weak convergence in \(\mathbf{H}\) for \(\mathsf{N}=1\) encompasses the convergence for other \(\mathsf{N}\geq 2\). Note that by [1, Remark 1.7] or (3.4), the assumptions for (2.8) guarantee \(\mathcal{M}_{+}(\mathsf{f})\in\mathsf{L}^{2}(\mathbb{R}_{+},\mathsf{t} \mathsf{d}\mathsf{t};\mathbf{H})\). For (2.9), by [1, Proposition 1.6] or (3.5), the trace condition
\[\lim_{\tau\to 0}\frac{1}{\tau}\int_{\tau}^{2\tau}\mathcal{M}_{-}( \mathsf{f})(\mathsf{t})\mathsf{d}\mathsf{t}\] \[=\int_{0}^{\infty}\mathsf{A}\mathbf{e}^{-\mathsf{s}A}\mathsf{f}( \mathsf{s})\mathsf{d}\mathsf{s}=0\quad\text{in}\quad\mathbf{H}\]
plus \(\mathsf{f}\in\mathsf{L}^{2}(\mathbb{R}_{+},\mathsf{t}^{-1}\mathsf{d}\mathsf{t };\mathbf{H})\) guarantee \(\mathcal{M}_{-}(\mathsf{f})\in\mathsf{L}^{2}(\mathbb{R}_{+},\mathsf{t}^{-1} \mathsf{d}\mathsf{t};\mathbf{H})\). The weak convergence in \(\mathbf{H}\) for (2.9) follows upon using a duality argument together with (3.1) for \(\mathsf{A}^{*}\).
**Remark 3.1**.: Independent of \((\mathsf{Q})_{\mathsf{A}}\), (2.4)-(2.5) also hold strongly in \(\mathbf{H}\) for all \(\mathsf{t}>0\).
**Remark 3.2**.: We formulated the formulae (2.4)-(2.9) for \(\mathfrak{H}_{\pm 1}\) under a parameter \(\mathsf{N}\in\mathbb{N}_{+}\). This uses (3.1)-(3.2). However, by McIntosh's bounded holomorphic functional calculus (see e.g. Haase [10]), more general holomorphic functions other than \(\uppsi_{1}\) and \(\uppsi_{2}\) can be considered. We refrain from such generalizations, and in particular from considering \((\mathsf{s}\mathsf{A})^{|\alpha|}\mathbf{e}^{-\mathsf{s}\mathsf{A}}\) and \(\mathfrak{H}_{\alpha}\) for \(0<|\alpha|<1\) (see Rosen1[11]).
Footnote 1: Axelsson becomes Rosen in publications later than [1].
**Remark 3.3**.: In view of the neat decomposition (3.4), it was observed in [1, Remark 1.7] that under \((\mathsf{Q})_{\mathsf{A}^{*}}\), \(\mathcal{M}_{+}\) extends to a bounded operator on the subspace of those \(\mathsf{f}\in\mathsf{L}^{2}(\mathbb{R}_{+},\mathsf{t}\mathsf{d}\mathsf{t}; \mathbf{H})\) so that \(\int_{0}^{\infty}\mathbf{e}^{-\mathsf{s}A}\mathsf{f}(\mathsf{s})\mathsf{d} \mathsf{s}\) converges weakly in \(\mathbf{H}\). On one hand, _there is no simple description of this subspace_ (see [1, Remark 1.7]), if no further information on \(\mathbf{H}\) and \(\mathbf{e}^{-\mathsf{t}\mathsf{A}}\) is provided. On the other hand, this puzzle of subspace description can be connected to the Carleson duality in harmonic analysis, see Hytonen and Rosen [12, 13]. Our formula (2.8) gives an indirect description of this subspace: it is an invariant subspace of \(\mathsf{L}^{2}(\mathbb{R}_{+},\mathsf{t}\mathsf{d}\mathsf{t};\mathbf{H})\) under \(\mathcal{M}_{+}\).
### Compliance with ethical standards
**Conflict of interest** The author has no known competing financial interests or personal relationships that could have appeared to influence this reported work.
**Availability of data and material** Not applicable.
|
2302.14716 | Lagrangians for variational formulations of the Navier-Stokes equation | Variational formulations for viscous flows which lead to the Navier-Stokes
equation are examined. Since viscosity leads to dissipation and, therefore, to
the irreversible transfer of mechanical energy to heat, thermal degrees of
freedom have been included in the construction of viscous dissipative
Lagrangians, by embedding of thermodynamics aspects of the flow, such as
thermasy and flow exergy. Another approach is based on the presumption that the
pressure gradient force is a constrained force, whose sole role is to maintain
the continuity constraint, with a magnitude that is minimum at every instant.
From these considerations, Lagrangians based on the minimal energy dissipation
principal have been constructed from which the application of the
Euler-Lagrange equation leads to the standard form of the Navier-Stokes
equation directly, or at least they are capable of generating the same
equations of motion for simple steady and unsteady 1 D viscous flows. These
efforts show that there is equivalence between Lagrangian, Hamiltonian, and
Newtonian mechanics as far as the derivation of the Navier-Stokes equation is
concerned. However, one of the conclusions is that the attractiveness of the
variational approach in more complex situations is still an open question for
the applied fluid mechanician. | Sylvio R. Bistafa | 2023-02-28T16:30:06Z | http://arxiv.org/abs/2302.14716v3 | ###### Abstract
###### Abstract
Variational formulations for viscous flows which lead to the Navier-Stokes equation are examined. Since viscosity leads to dissipation and, therefore, to the irreversible transfer of mechanical energy to heat, thermal degrees of freedom have been included in the construction of viscous dissipative Lagrangians, by embedding of thermodynamics aspects of the flow, such as thermasy and flow exergy. Another approach is based on the presumption that the pressure gradient force is a constrained force, whose sole role is to maintain the continuity constraint, with a magnitude that is minimum at every instant. From these considerations, Lagrangians based on the minimal energy dissipation principal have been constructed from which the application of the Euler-Lagrange equation leads to the standard form of the Navier-Stokes equation directly, or at least they are capable of generating the same equations of motion for simple steady and unsteady 1 Dimensional viscous flows. These efforts show that there is equivalence between Lagrangian, Hamiltonian, and Newtonian mechanics as far as the derivation of the Navier-Stokes equation is concerned. However, one of the conclusions is that the attractiveness of the variational approach in more complex situations is still an open question for the applied fluid mechanician.
**Keywords**: Variational formulations for viscous flows, variational formulation of the Navier-Stokes equation, viscous dissipation, viscous dissipative Lagrangians, flow thermodynamics, thermasy, exergy.
Sylvio R. Bistafa
[email protected]
Retired, Polytechnic School, University of Sao Paulo
Sao Paulo, SP, Brazil
## 1 Introduction
The Navier-Stokes equation [1] arises from applying Newton's second law to fluid motion, together with the assumption that the stress in the fluid is the sum of a diffusing viscous term (proportional to the gradient of velocity) and a pressure term--hence describing viscous flow. The difference between the closely related Euler equation and the Navier-Stokes equation is that the latter takes viscosity into account while the former only models inviscid flow.
Variational approaches to model viscous flows in fluid mechanics are based on the principle of minimum energy dissipation. This principle states that, for a given flow field, the rate of energy dissipation should be minimized. This can be achieved by finding the flow field that minimizes the dissipation rate, which is given by the rate of viscous dissipation in the fluid.
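In the incompressible Newtonian case, a standard choice for this functional is the viscous dissipation
\[\Phi[\mathbf{u}]=2\mu\int_{V}\mathrm{tr}\,\mathbb{D}^{2}\,dV,\qquad\mathbb{D}=\tfrac{1}{2}\big{(}\nabla\mathbf{u}+\nabla\mathbf{u}^{T}\big{)},\]
to be minimized over divergence-free velocity fields satisfying the prescribed boundary conditions, where \(\mu\) is the dynamic viscosity and \(\mathbb{D}\) the rate-of-strain tensor.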
In order to do this, a functional is defined that describes the total energy dissipation in the fluid, and the flow field is found by minimizing this functional using techniques from the calculus of variations. This results in a set of partial differential equations that describe the flow. The dominant variational principle in physics is Hamilton's principle of least action, which does not directly allow for non-conservative forces (forces that do not come from a scalar work function). Therefore, most of the variational principles of fluid mechanics found in the literature have been developed for inviscid fluids governed by Euler's equations (e.g., [2, 3]), which do not take into consideration important features (e.g., viscosity, turbulence, and other irreversible phenomena). Nonetheless, the Hamiltonian structure has been mathematically manipulated to include artificial variables to recover the already known governing equations (e.g., [4, 5, 6]) but,
without providing new insights on the physics of the fluid responsible for dissipation. Moreover, these derivations are often characterized by deep mathematical treatments, which tend to be rather detached from the physical aspects of the problem, and where unorthodox symbology and notation represents a barrier for the applied fluid mechanician. Also, it is often not clear at all how one can use any of these variational formulations to solve (even simple) viscous fluid mechanics problems.
Therefore, it is considered that a variational formulation for real flows must objectively include the dissipative nature of the viscous forces and its manifestations. To this end, as we shall see below, the formulations that have been proposed are in one form or another related to the principle that the viscous fluid motion has the minimal energy dissipation of any other motion consistent with the same boundary conditions -- this is generally known as the Helmholtz minimum dissipation theorem.
In fluid mechanics, Helmholtz minimum dissipation theorem (named after Hermann von Helmholtz who published it in 1868) states that the steady Stokes flow motion of an incompressible fluid has the smallest rate of dissipation than any other incompressible motion with the same velocity on the boundary. The theorem also has been studied by Lord Rayleigh in 1913. This theorem is, in fact, true for any fluid motion where the nonlinear term of the incompressible Navier-Stokes equations can be neglected or equivalently when \(\nabla\times\nabla\times\omega=0\), where \(\omega\) is the vorticity vector. For example, the theorem also applies to unidirectional flows such as Couette flow and the Hagen-Poiseuille flow, where the nonlinear terms disappear automatically. The Poiseuille flow theorem is a consequence of the Helmholtz theorem and states that _the steady laminar flow of an incompressible viscous fluid down a straight pipe of arbitrary cross-section is characterized by the property that its energy dissipation is least among all laminar (or spatially periodic) flows down the pipe which have the same total flux_ (from the Wikipedia: [https://en.wikipedia.org/wiki/Helmholtz](https://en.wikipedia.org/wiki/Helmholtz) minimum dissipation theorem).
In the present paper, Lagrangian formulations for incompressible viscous flows that we have been able to identify from the point of view mentioned above are examined, to show the ingenuity of the proposals, to see how they are applied to solve classical viscous flow problems, and at the same time checking their attractiveness for further developments.
## 2 The calculus of variations and the Euler-Lagrange equation
The calculus of variations is a line of research that grew out of two challenge problems posed by the Bernoulli brothers: the catenary problem of 1690, in which Jacob Bernoulli (1655-1705) asked for the profile of a hanging flexible cord, and the brachistochrone problem of 1696, in which Johann Bernoulli (1667-1748) asked for the curve along which a bead slides down a wire under gravity (without friction) from one point to the other in the shortest time. The curve solving the latter problem is known as the curve of fastest descent or the brachistochrone curve.
These problems were studied by several investigators, most notably Leonhard Euler (1707-1783), who is considered the father of the variational calculus as laid out in his landmark 1744 book on variational techniques [7]. In this book, Euler also derives from geometrical arguments the so-called Euler-Lagrange equation in the form
\[\frac{\partial Z}{\partial y}-\frac{d}{dx}\left(\frac{\partial Z}{\partial y^ {\prime}}\right)=0, \tag{1}\]
from the requirement that the integral \(\int\,Zdx\) must be a minimum, where \(Z=Z(x,y,y^{\prime})\) is the integrand of this functional and plays the role of the _Lagrangian_ \(L\). The solution of this differential equation gives the curve that minimizes the integral.
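As an elementary illustration in the spirit of the Poiseuille flow theorem quoted in the Introduction, consider minimizing the dissipation \(\int_{0}^{h}\mu(u^{\prime})^{2}\,dy\) of a unidirectional profile \(u(y)\) between two fixed walls, \(u(0)=u(h)=0\), at prescribed flux \(\int_{0}^{h}u\,dy=Q\). Introducing a Lagrange multiplier \(\lambda\) for the flux constraint and taking \(Z=\mu(u^{\prime})^{2}-\lambda u\), Eq. (1) (with \(x\) replaced by \(y\)) gives
\[-\lambda-\frac{d}{dy}\big{(}2\mu u^{\prime}\big{)}=0\quad\Rightarrow\quad u^{\prime\prime}=-\frac{\lambda}{2\mu}=\mathrm{const}\quad\Rightarrow\quad u(y)=\frac{6Q}{h^{3}}\,y(h-y),\]
i.e., the parabolic Poiseuille profile, with \(\lambda\) fixed by the flux.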
A particular example of the use of the Euler-Lagrange equation applied to the problem at hand may be taken from Seliger & Whitham [8], who considered that thermal degrees of freedom must be included in a variational formulation of fluid flow; these enter the formalism through the Lagrange multiplier \(\vartheta\) corresponding to the entropy-conservation constraint. The proposed Lagrangian is then written as
\[\mathcal{L}=-\rho\left[\frac{D\zeta}{Dt}+\alpha\frac{D\beta}{Dt}-s\frac{D \vartheta}{Dt}-\frac{\mathbf{u}^{2}}{2}+e(\rho,s)\right] \tag{2}\]
where
\[\mathbf{u}=\nabla\zeta+\alpha\nabla\beta-s\nabla\vartheta, \tag{3}\]
which depends on the specific internal energy \(e(\rho,s)\), given in terms of density \(\rho\) and entropy \(s\), on the three so-called Clebsch potentials \(\zeta,\alpha,\beta\)[9], and on an additional potential field \(\vartheta\). For inviscid flows, the potential representation of the velocity field is given by \(\mathbf{u}=\nabla\zeta+\alpha\nabla\beta\).
To reveal the meaning of the thermal potential \(\vartheta\) it is possible to apply the Euler-Lagrange equation with respect to variations in \(s\), for which, \(\frac{\partial Z}{\partial y}=\frac{\partial\mathcal{L}}{\partial s}=\frac{D \vartheta}{Dt}-\frac{\partial e}{\partial s}=0\), giving
\[\frac{D\vartheta}{Dt}=\frac{\partial e}{\partial s}=T \tag{4}\]
or
\[\vartheta=\int_{t_{0}}^{t}\!Tdt \tag{5}\]
where the integration is carried out along a particle trajectory. Because of its definition as given by Eq. (5), \(\vartheta\) is often called the temperature integral, whereas Van Dantzig [10] called it the _thermasy_. From Eq. (5), it is seen that thermasy has units of temperature times time.
As pointed out by Scholle and Marner [11], although this approach is still restricted to adiabatic and therefore reversible processes, the above Lagrangian represents a momentous step forward because it is the first attempt of embedding of thermodynamics aspects of the flow. It is then seen that the most relevant difference to the classical potential theory is the appearance of an additional field, the thermasy \(\vartheta\). According to these same authors, its physical meaning is not obvious: there are attempts in literature to relate this additional degree of freedom to a deviation from local thermodynamic equilibrium, however, there are still many open questions.
## 3 Construction of viscous dissipative Lagrangians
We examine next three attempts that have been proposed for the variational derivation of the Navier-Stokes equation from viscous dissipative Lagrangians. Two of these attempts may be called "thermal" approaches in which thermodynamic quantities are introduced into the Lagrangians to model the thermal dissipation of the viscous forces. Another attempt directly linked to the viscous force effect is based on the presumption that the pressure gradient force is a constrained force, which should be minimized at every instant.
### Thermasy-based extended Lagrangian
Following the introduction of thermasy in the previous section, we shall first present Scholle and Marner's [11] extended Lagrangian proposal. Indeed, this has been accomplished by introducing terms modifying the entropy balance which results from the variation with respect to the thermasy \(\vartheta\).
The production of entropy is considered through the dissipation of heat \(\phi_{d}\) as \(\vartheta\phi_{d}/T\), by assuming a quadratic dependence on \(\frac{\partial u_{i}}{\partial x_{j}}\) according to the classical approach applied to viscous flows. The factor \(1/T\) represents the character of the entropy as 'weighted heat' according to \(\delta Q=Tds\). The extended Lagrangian in the absence of external forces and without considering the conduction of heat is then written as
\[\mathcal{L}=-\rho\left[\frac{D\zeta}{Dt}+\alpha\frac{D\beta}{Dt}-s\frac{D \vartheta}{Dt}-\frac{\mathbf{u}^{2}}{2}+e(\rho,s)\right]+\frac{\vartheta}{T} \left[\eta\;\mathrm{tr}\mathbb{D}^{2}+\frac{\eta^{\prime}}{2}(\nabla\cdot\mathbf{u} )^{2}\right], \tag{6}\]
where \(\eta\) is the shear viscosity, \(\eta^{\prime}\) is the volume viscosity of the fluid, \(\mathbb{D}\) is the shear rate tensor, and \(\mathrm{tr}\) denotes the trace.
From the variation with respect to the thermasy \(\vartheta\), arises the entropy-balance equation given by
\[\frac{\partial(\rho s)}{\partial t}+\nabla\cdot(\rho s\mathbf{u})=\frac{\eta}{T} \;\mathrm{tr}\mathbb{D}^{2}+\frac{\eta^{\prime}}{2T}(\nabla\cdot\mathbf{u})^{2}, \tag{7}\]
it is seen that the right-hand side of this equation represents the entropy production rate due to viscous dissipation, which implies that the second term in the right-hand side of Eq. (6) is the entropy production during the time interval \(t-t_{0}\).
Momentum is of course a vector quantity whose density \(p\) (momentum per unit volume) is \(\rho\mathbf{u}\). Scholle and Marner [11] revealed that the momentum density is now not simply equal to \(\rho\mathbf{u}\), the difference being attributed to a quantity called the _quasi-momentum density_ \(p^{*}\), given by
\[p^{*}=-2\eta\nabla\cdot\left(\frac{\vartheta}{T}\mathbb{D}\right)-\eta^{ \prime}\nabla\left(\frac{\vartheta}{T}\nabla\cdot\mathbf{u}\right). \tag{8}\]
It is speculated that the quasi-momentum density could be due to the system's momentum balance beyond the scope of the continuum hypothesis on a molecular scale, e.g., Brownian motion.
The Euler-Lagrange equation is then applied by Scholle [12] to calculate the variations of the quantities that appear in the Lagrangian (Eq. 6), namely: \(\mathbf{u},\vartheta,s,\rho,\alpha,\beta,\zeta\), which, after mathematical manipulations and other considerations, finally results in the equation of motion for the incompressible case given by
\[D_{t}\mathbf{u}=-\frac{\nabla p}{\rho}+\nu\{D_{t}+\nabla\otimes\mathbf{u}\}\left[2\mathbb{D}\nabla\left(\frac{\vartheta}{T}\right)+\frac{\vartheta}{T}\Delta\mathbf{u}\right]-\nu\,\mathrm{tr}\mathbb{D}^{2}\,\nabla\left(\frac{\vartheta}{T}\right), \tag{9}\]
where \(D_{t}=\frac{\partial}{\partial t}+\mathbf{u}\cdot\nabla\), \(\nu\) is the kinematic viscosity \(\frac{\eta}{\rho}\), \(\nabla\otimes\mathbf{u}\) is the velocity gradient tensor, and \(\Delta\mathbf{u}\) is the Laplacian of the velocity vector \(\mathbf{u}\).
It is seen that this equation of motion is not a reproduction of Navier-Stokes equation due to the occurrence of third-order derivatives instead of second-order terms, in a different form of the viscous terms, but also in an additional field, the thermasy \(\vartheta\), appearing explicitly. The additional terms and degrees of freedom in Eq. (9) may represent an extension of the classical theory towards non-equilibrium thermodynamics.
Three benchmark tests of Eq. (9) have been performed by Scholle and Marner [11], which were earlier discussed in more detail by Scholle [12], against exact solutions of the Navier-Stokes equation: (i) Stokes' first problem, (ii) plane Couette flow, and (iii) the Lamb-Oseen vortex diffusion. For these flows, a particular form of Eq. (9) was used
\[D_{t}\mathbf{u}=-\frac{\nabla p}{\rho}+\nu\{D_{t}+\nabla\mathbf{\otimes}\mathbf{u}\}\left[ \frac{\partial}{T}\Delta\mathbf{u}\right]. \tag{10}\]
which considers a particular solution of Eq. (5) given by \(\frac{\vartheta}{\tau}=t-t_{0}\), and then, \(D_{t}\left(\frac{\vartheta}{\tau}\right)=1\).
Stokes' first problem is a transient flow for which \(\mathbf{u}=u(y,t)\), and by considering that \(\nabla p=0\), the \(x\)-component of Eq. (10) reads (noting also that \(\nabla\mathbf{\otimes}\mathbf{u}\) reduces to \(\frac{\partial u}{\partial x}=0\))
\[\frac{\partial u}{\partial t}=\nu\frac{\partial}{\partial t}\left[(t-t_{0}) \frac{\partial^{2}u}{\partial y^{2}}\right]. \tag{11}\]
This third-order partial differential equation allows for integration with respect to time, leading to the equation
\[\frac{(u-u_{0})}{(t-t_{0})}=\nu\frac{\partial^{2}u}{\partial y^{2}}, \tag{12}\]
whereas the Navier-Stokes equation leads to
\[\frac{\partial u}{\partial t}=\nu\frac{\partial^{2}u}{\partial y^{2}}. \tag{13}\]
Although both equations are different, they imply a qualitatively similar evolution of velocity profiles. Eq. (13) is a classical diffusion equation whereas Eq. (12) is obtained by replacing the time derivative by a finite difference [13].
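The similarity can be made concrete: assuming an impulsively started plate, \(u(0,t)=U\), and a quiescent initial state (\(u_{0}=0\), \(t_{0}=0\)), Eq. (12) gives the profile \(u=U\exp\!\left(-y/\sqrt{\nu t}\right)\), whereas the exact solution of Eq. (13) is \(u=U\,\mathrm{erfc}\!\left(y/2\sqrt{\nu t}\right)\). A short numerical comparison (with illustrative parameter values):

```python
# Compare the profile implied by Eq. (12), u = U exp(-y/sqrt(nu t)), with the
# classical solution of Eq. (13), u = U erfc(y/(2 sqrt(nu t))), for Stokes' first problem.
import numpy as np
from scipy.special import erfc

U, nu, t = 1.0, 1.0e-6, 10.0          # plate speed [m/s], kinematic viscosity [m^2/s], time [s]
delta = np.sqrt(nu * t)               # diffusion length scale
y = np.linspace(0.0, 10.0 * delta, 11)

u_eq12 = U * np.exp(-y / delta)       # solution of (u - 0)/t = nu u'' with u(0) = U
u_eq13 = U * erfc(y / (2.0 * delta))  # classical similarity solution of du/dt = nu u''

for yi, a, b in zip(y, u_eq12, u_eq13):
    print(f"y/delta = {yi/delta:4.1f}   Eq.(12): {a:6.3f}   Eq.(13): {b:6.3f}")
```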
In the case of the plane Couette flow, we have additionally that the steady state solution is obtained in the limit \(t\to\infty\), which transforms Eq. (12) into
\[0=\frac{\partial^{2}u}{\partial y^{2}}. \tag{14}\]
This is the same equation that would have been obtained by the direct application of the Navier-Stokes equation to the plane Couette flow [13].
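Indeed, integrating Eq. (14) twice and imposing \(u(0)=0\), \(u(h)=U\) for a channel of gap \(h\) with the upper wall moving at speed \(U\) yields the familiar linear Couette profile
\[u(y)=U\,\frac{y}{h}.\]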
Another example of transient flow is the Lamb-Oseen vortex diffusion where the velocity in cylindrical coordinates is \(\mathbf{u}=u(r,t)\), and for which the vorticity \(\omega\) is given by \(\omega=\frac{1}{r}\frac{\partial(ru)}{\partial r}\); then from Eq. (11) we have that
\[\frac{\partial u}{\partial t}=\nu\frac{\partial}{\partial t}\left[(t-t_{0}) \frac{\partial\omega}{\partial r}\right]. \tag{15}\]
The integration of this equation with respect to time leads to
\[\frac{(u-u_{0})}{(t-t_{0})}=\nu\frac{\partial\omega}{\partial r}. \tag{16}\]
If at the initial time \(t_{0}=0\) the velocity is that of a potential vortex, \(u_{0}=\frac{\Gamma}{2\pi r}\), where \(\Gamma\) denotes its circulation, then, by applying the operator \(\frac{1}{r}\frac{\partial(r\,\cdot)}{\partial r}\) to Eq. (16), we will have that
\[\frac{\omega}{t}=\nu\left(\frac{\partial^{2}\omega}{\partial r^{2}}+\frac{1}{r} \frac{\partial\omega}{\partial r}\right). \tag{17}\]
In this equation, \(\frac{\omega}{t}\) appears in place of the term \(\frac{\partial\omega}{\partial t}\) that would have been obtained by the direct application of the Navier-Stokes equation, which yields the well-known equation for the diffusion of a vortex [14].
Scholle [12] then showed that there is a qualitative agreement between the solution given by Eq. (17), and the solution of the original vortex diffusion equation as given by the Navier-Stokes equation, despite quantitative differences in the respective flow profiles.
The main finding, however, is that the Lagrangian (Eq. 6) is able to capture the main features of these test flows.
Additional tests were performed by Scholle and Marner [11] for pressure-driven flows, for which no adequate solutions of the field equations could be constructed that simultaneously fulfil the pressure boundary conditions. The conclusion was that the variational principle based on the proposed Lagrangian (Eq. 6) does not recover the dynamics of viscous flow in a proper way, since its applicability seems to be restricted to special flow problems only.
From these examples another issue arose: the explicitly appearing weighted thermasy \(\frac{\vartheta}{T}\) turns out to grow without bound, either spatially or temporally, which also prohibits its interpretation in connection with non-equilibrium thermodynamics. To overcome the above-highlighted anomalies, a modification of the proposed Lagrangian has been provided by Scholle and Marner [11], which is not considered further here.
#### Flow exergy-based Lagrangian
Sciubba [15] proposed a Lagrangian based on the exergy variation of the flow, to obtain the Navier-Stokes equation from the minimization of the exergy destruction. Exergy is a thermodynamic concept, used for many years within engineering analyses of chemical and mechanical processes and systems. Exergy is defined as _the maximum useful work which can be extracted from a system as it reversibly comes into equilibrium with its environment._ In other words, it is the capacity of energy to do the actual physical work.
The flow of a viscous fluid is driven by a set of well-defined external fields (pressure, external force, and temperature) and by the inertia of the mass under consideration, and it is affected by dissipative effects related to the viscosity and thermal conductivity of the fluid. Dissipation is associated with entropy production or to the exergy destruction of the flow. Flow exergy \(e\) is an extensive thermodynamic state function defined as
\[e=h-h_{0}-T_{0}(s-s_{0}), \tag{18}\]
where \(h\) is the enthalpy, \(s\) is the entropy and \(T_{0}\) is a reference state temperature.
A representation of the work and heat interactions of a system in terms of exergy has the advantage of unifying both work/heat interactions and dissipative effects into a unified framework. Thus, for any dissipative system, a theorem of "exergy destruction" applies, which states that if the system undergoes an irreversible process, its specific exergy content is destroyed (annihilated) at a rate given by
\[\dot{e}_{\lambda}=T_{0}\dot{s}_{irr}, \tag{19}\]
which states that every real (irreversible) process destroys exergy at a rate proportional to the irreversible entropy generation \(\dot{s}_{irr}\).
In an element of time \(dt\) the exergetic content per unit of mass is modified by four different contributions:
* an exergy variation rate equal to the reversible exchanged mechanical power \(\dot{e}_{Wrev}\) \[\dot{e}_{Wrev}=\mathbf{u}\cdot\frac{D\mathbf{u}}{Dt}+\mathbf{u}\cdot\frac{\nabla p }{\rho}+\mathbf{u}\cdot\mathbf{B},\] (20) where \(\mathbf{B}\) is the body force.
* an exergy destruction rate proportional to the viscous dissipation function \(D_{visc}\) \[\dot{e}_{\lambda_{visc}}=-\nu D_{visc},\] (21)
* an exergy variation rate proportional to the reversible thermal entropy exchange \(\dot{s}_{rev}\) \[\dot{e}_{0rev}=(T-T_{0})\dot{s}_{rev},\] (22)
* and an exergy destruction rate proportional to the irreversible thermal entropy production \(\dot{s}_{irr,therm}\) \[\dot{e}_{\lambda_{therm}}=-(T-T_{0})\dot{s}_{irr,therm}.\] (23)
Thus, the total exergy variation per unit mass of the fluid \(\Delta e_{fluid}\) in time \(dt\) is

\[\Delta e_{fluid}=dt\left[\mathbf{u}\cdot\frac{D\mathbf{u}}{Dt}+\mathbf{u}\cdot\frac{\nabla p}{\rho}+\mathbf{u}\cdot\mathbf{B}-\nu D_{visc}+(T-T_{0})\dot{s}_{rev}-(T-T_{0})\dot{s}_{irr,therm}\right]. \tag{24}\]
Once the flow variables are exactly known at each instant of time \(t\) and at each point in the flow domain, the quantity defined by Eq. (24) can be computed exactly locally and, if necessary, integrated over the entire domain to yield the global variation of the exergy of the flow.
Sciubba [15] then considers that if at every instant of time, the fluid motion is governed by the minimization of the exergy destruction given by Eq. (24), the resulting equation of motion is indeed the Navier-Stokes equation.
From Eq. (24), for an isothermal flow of a viscous homogeneous fluid with constant properties the Lagrangian can be written as
\[\mathcal{L}=\mathbf{u}\cdot\frac{D\mathbf{u}}{Dt}+\mathbf{u}\cdot\frac{\nabla p}{\rho}+\mathbf{u}\cdot\mathbf{B}-\nu D_{visc}, \tag{25}\]
which in index notation reads
\[\mathcal{L}=u_{j}\frac{\partial u_{j}}{\partial t}+u_{j}u_{k} \frac{\partial u_{j}}{\partial x_{k}}+\frac{1}{\rho}u_{j}\frac{\partial p}{ \partial x_{j}}+u_{j}B_{j}-\nu D_{visc}, \tag{26}\]
where, as usual,
\[D_{visc}=\frac{1}{2}\nu\left(\frac{\partial u_{i}}{\partial x_{ j}}+\frac{\partial u_{j}}{\partial x_{i}}\right)^{2}. \tag{27}\]
Imposing the condition of constrained minimum exergy destruction is equivalent to searching for the minimization of a functional whose integrand is the total exergy variation of the unit fluid mass given by Eq. (24); that is, \(\int\mathcal{L}dV\) must be a minimum in \(V\).
From Eq. (1), the Euler-Lagrange equation in index notation for the problem so posed reads
\[\frac{\partial\mathcal{L}}{\partial u_{j}}-\frac{\partial}{\partial x_{i}}\left[ \frac{\partial\mathcal{L}}{\partial\big{(}\partial u_{j}/\partial x_{i}\big{)} }\right]=0. \tag{28}\]
where
\[\frac{\partial\mathcal{L}}{\partial u_{j}}=\frac{\partial u_{j}}{\partial t}+u _{k}\frac{\partial u_{j}}{\partial x_{k}}+\frac{1}{\rho}\frac{\partial p}{ \partial x_{j}}+B_{j}. \tag{29}\]
\[\frac{\partial\mathcal{L}}{\partial\big{(}\frac{\partial u_{j}}{\partial x_{i} }\big{)}}=\frac{\partial D_{visc}}{\partial\big{(}\frac{\partial u_{j}}{ \partial x_{i}}\big{)}}=\nu\left(\frac{\partial u_{i}}{\partial x_{j}}+\frac{ \partial u_{j}}{\partial x_{i}}\right), \tag{30}\]
from which
\[\frac{\partial}{\partial x_{i}}\left[\frac{\partial\mathcal{L}}{\partial \big{(}\frac{\partial u_{j}}{\partial x_{i}}\big{)}}\right]=\frac{\partial} {\partial x_{i}}\left[\nu\left(\frac{\partial u_{i}}{\partial x_{j}}+\frac{ \partial u_{j}}{\partial x_{i}}\right)\right]=\nu\left[\frac{\partial}{ \partial x_{j}}\left(\frac{\partial u_{i}}{\partial x_{i}}\right)+\frac{ \partial^{2}u_{j}}{\partial x_{i}\partial x_{i}}\right]=\nu\,\frac{\partial^ {2}u_{j}}{\partial x_{i}\partial x_{i}}, \tag{31}\]
since for incompressible flows \(\left(\frac{\partial u_{i}}{\partial x_{i}}\right)=0\).
A direct substitution of Eqs. (29) and (31) into Eq. (28) results in
\[\frac{\partial u_{j}}{\partial t}+u_{k}\frac{\partial u_{j}}{\partial x_{k}}+ \frac{1}{\rho}\frac{\partial p}{\partial x_{j}}+B_{j}-\nu\left(\frac{\partial ^{2}u_{j}}{\partial x_{i}\partial x_{i}}\right)=0, \tag{32}\]
which is indeed the Navier-Stokes equation for an incompressible isothermal flow.
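The vector identity used in Eq. (31) can also be checked symbolically. The sketch below is a minimal SymPy verification using a hypothetical two-dimensional divergence-free velocity field (a Taylor-Green-type pattern); the field and the check are illustration choices, not part of the cited derivation.

```python
import sympy as sp

x, y, nu = sp.symbols('x y nu')
# hypothetical divergence-free velocity field (Taylor-Green-type pattern)
u = [sp.sin(x) * sp.cos(y), -sp.cos(x) * sp.sin(y)]
X = [x, y]

# incompressibility: du_i/dx_i = 0
assert sp.simplify(sum(sp.diff(u[i], X[i]) for i in range(2))) == 0

for j in range(2):
    # divergence of the symmetric velocity-gradient term appearing in Eq. (31)
    div_sym = sum(sp.diff(nu * (sp.diff(u[i], X[j]) + sp.diff(u[j], X[i])), X[i])
                  for i in range(2))
    laplacian = nu * sum(sp.diff(u[j], X[i], 2) for i in range(2))
    assert sp.simplify(div_sym - laplacian) == 0

print("identity of Eq. (31) verified for a divergence-free field")
```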
The method is considered "restricted" by Sciubba [15] in the sense that the minimization was performed in space but not in time, i.e., the time derivative of the velocity is not subjected to the variation, which corresponds to steady flow.
#### Minimum pressure gradient-based Lagrangian
Taha and Gonzalez [16] have sought to transform any fluid mechanics problem into an optimization one with no need to apply the Navier-Stokes equation. With this goal in mind, they have applied Gauss's principle of least constraint5 to find the equation of motion of incompressible viscous flows. This principle is similar to Hamilton's principle which states that the true path taken by a mechanical system is an extremum of the action.
Footnote 5: The principle of least constraint is one variational formulation of classical mechanics enunciated by Carl Friedrich Gauss in 1829, equivalent to all other formulations of analytical mechanics. Intuitively, it says that the acceleration of a constrained physical system will be as similar as possible to that of the corresponding unconstrained system (from Wikipedia: [https://en.wikipedia.org/wiki/Gauss%27s_principle_of_least_constraint](https://en.wikipedia.org/wiki/Gauss%27s_principle_of_least_constraint)).
The principle of least constraint is a least squares principle stating that the true accelerations of a mechanical system of \(n\) masses is the minimum of the quantity \(Z\) given by
\[Z=\sum_{i=1}^{n}m_{i}\left(\mathbf{a}_{i}-\frac{\mathbf{F}_{i}}{m_{i}}\right)^{2}, \tag{33}\]
where \(m_{i}\) is the mass of the \(i\)th particle, \(\mathbf{a}_{i}\) is the corresponding acceleration, which satisfies the imposed constraints and in general depends on the current state of the system, and \(\mathbf{F}_{i}\) is the non-constraint force applied to the \(i\)th particle.
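To illustrate how minimizing \(Z\) in Eq. (33) reproduces constrained dynamics, the following sketch applies the principle to a single point mass on a circular wire under gravity and compares the result with the standard pendulum acceleration. All numerical values and the choice of constraint are assumptions made for illustration; the example is not taken from [16].

```python
import numpy as np
from scipy.optimize import minimize

# Gauss's principle (Eq. 33) for one point mass on a circular wire under gravity.
m, g, R = 1.0, 9.81, 1.0
theta, theta_dot = 0.6, 0.8                  # state: angle from the downward vertical, angular rate
pos = np.array([R * np.sin(theta), -R * np.cos(theta)])
vel = np.array([R * np.cos(theta), R * np.sin(theta)]) * theta_dot
F = np.array([0.0, -m * g])                  # impressed (non-constraint) force: gravity

def Z(a):                                    # Gauss's quantity of Eq. (33) for a single particle
    return m * np.sum((a - F / m) ** 2)

# constraint x^2 + y^2 = R^2 differentiated twice:  pos . a + |vel|^2 = 0
constraint = {"type": "eq", "fun": lambda a: pos @ a + vel @ vel}
a_gauss = minimize(Z, x0=np.zeros(2), constraints=[constraint]).x

# reference: pendulum dynamics theta'' = -(g/R) sin(theta), written in Cartesian form
theta_ddot = -(g / R) * np.sin(theta)
a_newton = np.array([R * np.cos(theta) * theta_ddot - R * np.sin(theta) * theta_dot ** 2,
                     R * np.sin(theta) * theta_ddot + R * np.cos(theta) * theta_dot ** 2])
print("Gauss  :", a_gauss)
print("Newton :", a_newton)                  # the two accelerations coincide
```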
These authors showed that the pressure gradient force is a constrained force, whose sole role is to maintain the continuity constraint, with a magnitude that is minimum at every instant. Then, by considering that the pressure gradient is a constraint force, and the viscous force is an impressed force, the action \(\mathcal{A}\) (Gauss' \(Z\) quantity) was written as
\[\mathcal{A}=\frac{1}{2}\int\rho\left(\frac{D\mathbf{u}}{Dt}-\nu\Delta\mathbf{u}\right)^ {2}\mathbf{dx}, \tag{34}\]
where \(\frac{D\mathbf{u}}{Dt}=\frac{\partial\mathbf{u}}{\partial t}+\mathbf{u}\cdot\nabla\mathbf{u}\).
These authors pointed out that "...\(\mathcal{A}\) is simply equal to \(\int(\nabla p)^{2}\mathbf{dx}\), and since the pressure force is a constraint force (enforcing the continuity constraint), the flow field will deviate from the motion dictated by the inertia \(\mathbf{u}\cdot\nabla\mathbf{u}\) and viscous \(\nu\Delta\mathbf{u}\) forces only by the amount to satisfy continuity; no larger pressure gradient will be generated than that necessary to maintain continuity. Nature will not overdo it. This new principle is what we call _The Principle of Minimum Pressure Gradient_ (PMPG)." Therefore, it is seen that the action \(\mathcal{A}\) seeks automatically the minimization of the pressure gradient.
The variational principle of minimum pressure gradient (PMPG) was then applied by Taha and Gonzalez [16] to solve two classical viscous flow problems in fluid mechanics, namely channel flow and Stokes' second problem, performing pure optimization without resorting to the Navier-Stokes equation. The Euler-Lagrange equation (Eq. 1) comes from the requirement that \(\int Zdx\) must be a minimum, where from Eq. (33)
\[Z=\tfrac{1}{2}\rho\left(\frac{Du}{Dt}-\nu\Delta\mathbf{u}\right)^{2}. \tag{35}\]
For the channel flow, \(u=u(y)\), the functional \(Z\) given by Eq. (35) reduces to the Lagrangian \(\mathcal{L}=\frac{1}{2}\left(\mu\frac{\partial^{2}u}{\partial y^{2}}\right)^{2}\). Since this Lagrangian contains second-order derivatives, the _Euler-Poisson_ equation applies, which here is written as
\[\frac{\partial\mathcal{L}}{\partial u}-\frac{\partial}{\partial y}\left[\frac{ \partial\mathcal{L}}{\partial\left(\frac{\partial u}{\partial y}\right)} \right]+\frac{\partial^{2}}{\partial y^{2}}\left[\frac{\partial\mathcal{L}}{ \partial\left(\frac{\partial^{2}u}{\partial y^{2}}\right)}\right]=0 \tag{36}\]
giving
\[\mu\frac{\partial^{2}u}{\partial y^{2}}=constant. \tag{37}\]
Also, according to PMPG, we also have that
\[\int(\nabla p)^{2}\mathbf{dx}=0, \tag{38}\]
which from Euler-Lagrange equation results in
\[\frac{\partial}{\partial y}\left[\left(\frac{\partial p}{\partial x}\right)^{ 2}\right]=0\Rightarrow\frac{\partial p}{\partial x}=constant. \tag{39}\]
The equality between Eqs. (37) and (39) then gives
\[\mu\frac{\partial^{2}u}{\partial y^{2}}=\frac{\partial p}{\partial x}, \tag{40}\]
which is the same equation that is obtained by the direct application of the Navier-Stokes equation to the channel flow of the incompressible viscous fluid [13].
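As a numerical check on Eq. (40), the sketch below solves \(\mu\,\partial^{2}u/\partial y^{2}=\partial p/\partial x\) with no-slip walls by finite differences and compares the result with the parabolic Poiseuille profile; the channel height, viscosity, and pressure gradient are assumed illustration values, not taken from [16].

```python
import numpy as np

# finite-difference check of Eq. (40) for plane channel flow
h, mu, dpdx, N = 1.0, 1.0e-3, -1.0e-3, 101
y = np.linspace(0.0, h, N)
dy = y[1] - y[0]

# tridiagonal operator for d^2u/dy^2 with no-slip walls u(0) = u(h) = 0
A = np.zeros((N, N))
b = np.full(N, dpdx / mu)
A[0, 0] = A[-1, -1] = 1.0
b[0] = b[-1] = 0.0
for i in range(1, N - 1):
    A[i, i - 1], A[i, i], A[i, i + 1] = 1.0 / dy**2, -2.0 / dy**2, 1.0 / dy**2

u_num = np.linalg.solve(A, b)
u_exact = dpdx / (2.0 * mu) * (y**2 - h * y)        # analytic Poiseuille profile
print("max deviation from the parabolic profile:", np.max(np.abs(u_num - u_exact)))
```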
Stokes' second problem is the unsteady flow above a harmonically oscillating, infinitely long plate, where \(u=u(y,t)\), for which the functional \(Z\) given by Eq. (35) reduces to the Lagrangian \(\mathcal{L}=\frac{1}{2}\rho\left(\frac{\partial u}{\partial t}-\nu\frac{\partial^{2}u}{\partial y^{2}}\right)^{2}\). This is an unsteady problem, for which the variation in the PMPG is taken with respect to the local acceleration, and the Euler-Lagrange equation is written as
\[\frac{\partial}{\partial t}\left[\frac{\partial\mathcal{L}}{\partial\left( \frac{\partial u}{\partial t}\right)}\right]=0 \tag{41}\]
giving
\[\frac{\partial u}{\partial t}-\nu\frac{\partial^{2}u}{\partial y^{2}}=constant. \tag{42}\]
From Eq. (39) above, \(\frac{\partial p}{\partial x}=constant\), where now the constant should be equal to zero because there is no pressure gradient in the \(x\)-direction for this flow, therefore,
\[\frac{\partial u}{\partial t}=\nu\frac{\partial^{2}u}{\partial y^{2}}, \tag{43}\]
which is recognized as the same equation that is obtained by the direct application of the Navier-Stokes equation [13]. It should be noted that Eq. (43) is equally applicable to Stokes' first problem, which is another type of unsteady flow, in which the plate is suddenly jerked into motion in its own plane with a constant velocity.
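The step from Eq. (41) to Eq. (42) can be reproduced symbolically. The following SymPy sketch writes the Lagrangian in terms of stand-in symbols for \(\partial u/\partial t\) and \(\partial^{2}u/\partial y^{2}\), takes the variation with respect to the local acceleration, and applies \(\partial/\partial t\); it is only a check of the algebra, not the authors' derivation.

```python
import sympy as sp

y, t, nu, rho = sp.symbols('y t nu rho', positive=True)
u = sp.Function('u')(y, t)
ut, uyy = sp.symbols('u_t u_yy')     # stand-ins for du/dt and d^2u/dy^2

# Lagrangian of Stokes' second problem, written with the stand-in symbols
L = sp.Rational(1, 2) * rho * (ut - nu * uyy)**2

# variation with respect to the local acceleration, dL/d(u_t), then substitute back
dL_dut = sp.diff(L, ut).subs({ut: sp.diff(u, t), uyy: sp.diff(u, y, 2)})

# Eq. (41): d/dt of that quantity must vanish
eq41 = sp.Eq(sp.diff(dL_dut, t), 0)
print(eq41)  # rho * d/dt (u_t - nu*u_yy) = 0, i.e. u_t - nu*u_yy = constant in time (Eq. 42)
```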
Although not within the context of viscous flows, and to show the attractiveness of variational formulations, Taha and Gonzalez [16] also addressed the so-called 'airfoil problem' showing that the inviscid version of PMPG is also capable of providing a unique solution for the flow over arbitrarily smooth shapes. As is well known, the potential flow over a two-dimensional object does not have a unique solution. As they put it, the only theoretical fix available is the so-called _Kutta condition_, which dictates the amount of circulation necessary to remove the singularity such as a sharp trailing edge in a conventional airfoil. However, for singularity free shapes (e.g., ellipse, circle), the Kutta condition is not applicable; and there is no theoretical model that can predict circulation and lift over these shapes. To circumvent this problem, these authors have proposed a novel variational formulation to Euler's equation for the dynamics of inviscid fluids where the _Appellian_ (integral of squared acceleration) is minimized.
For the inviscid incompressible steady flow, the Appellian \(\mathcal{A}\) may be written as
\[\mathcal{A}(\Gamma)=\frac{1}{2}\rho\int[\mathbf{u}(\mathbf{x},\Gamma)\cdot\mathbf{\nabla u }(\mathbf{x},\Gamma)]^{2}d\mathbf{x}. \tag{44}\]
Since the velocity field \(\mathbf{u}(\mathbf{x},\Gamma)\) is known except for the unknown parameter \(\Gamma\), given an assumed value of \(\Gamma\), one can compute the flow field and, consequently, the scalar Appellian integral \(\mathcal{A}\) in Eq. (44).
The ArgMin of Eq. (44) is then used to find the minimizing circulation \(\Gamma^{*}\) for the given constraints, providing a generalization of the Kutta-Zhukovsky condition that is, unlike
the latter, derived from first principles. The PMPG allows, for the first time, computation of lift over smooth shapes without sharp edges where the Kutta condition fails.
Next, a family of airfoils parameterized by \(D\) has been considered by Taha and Gonzalez [17], in which \(D\) controls the smoothness of the trailing edge: \(D=0\) results in the classical Zhukovsky airfoil with a sharp trailing edge, and \(D=1\) results in a circular cylinder. From an involved calculation procedure, a plot of the locus of the minimizing circulation \(\Gamma^{*}\) was eventually generated, showing that for \(D=0\) (sharp-edged airfoil) the minimizing circulation \(\Gamma^{*}\) coincides with Kutta's circulation, and for \(D=1\) (circular cylinder) the minimizing circulation vanishes, implying no inviscid lifting capability of this purely symmetric shape.
Finally, in a recent publication, Taha and Gonzalez [18] claim to have provided a theorem that establishes connection between the PMPG and Navier-Stokes equation in the general case, not only for specific examples, which is not considered further here.
## 4 Summary and Discussion
Three attempts to construct viscous dissipative Lagrangians to recover the Navier-Stokes equation from variational principles have been examined. In one form or another, these attempts are all related to the principle that the viscous fluid motion has the least energy dissipation of any motion consistent with the same boundary conditions, a principle that appears in the literature under different statements: the Helmholtz minimum dissipation theorem, the Poiseuille flow theorem, and Gauss's principle of least constraint.
Since viscosity leads to dissipation and therefore to the irreversible transfer of mechanical energy to heat, thermal degrees of freedom have been included in the construction of viscous dissipative Lagrangians. One of these attempts adds to the potential representation of the Lagrangian a rather unknown thermodynamic quantity called thermasy (or temperature integral), whose physical meaning seems to be related to a deviation from local thermodynamic equilibrium. It is shown that the variation with respect to the thermasy results in an entropy balance equation where the entropy production rate is due to dissipation, as expected. Three benchmark tests have been performed with this extended Lagrangian for simple steady and unsteady one-dimensional flows, showing that it could lead to differential equations of the same character as those obtained from the direct application of the Navier-Stokes equation. However, additional tests performed by the authors led to the conclusion that the variational principle based on the proposed Lagrangian is not a general one, since it does not recover the dynamics of the other viscous flows tested in a proper way and, therefore, its applicability seems to be restricted to special flow problems only.
In another "thermal" approach, the Lagrangian density is obtained from the exergy balance equation written for the isothermal incompressible flow. The exergy of a fluid mass, composed of a kinetic, pressure work and body force work, reversible and irreversible thermal entropy exchange, and a dissipative portion, the latter being the result of viscous irreversibility is derived first, and it is then shown that a formal minimization of the exergy variation (i.e., destruction) generates a Lagrangian which, by applying the Euler-Lagrange equation to it, leads directly to the standard form of the Navier-Stokes equation for the isothermal incompressible flow of a viscous homogeneous fluid. However, the author of this approach considers that new applications will have to be developed for specific cases and tested experimentally, analytically, and numerically in unknown flow fields, and that an even more important step would be that of extending its validity to nonisothermal and compressible flows.
Finally, an approach was examined which is based on the presumption that the pressure gradient force is a constraint force, whose sole role is to maintain the continuity constraint with a magnitude that is minimum at every instant (Principle of Minimum Pressure Gradient, PMPG). A Lagrangian was then proposed, considering that the pressure gradient is a constraint force and that the viscous force is an impressed force. From the Lagrangian expressing the minimum of the pressure gradient, it is then shown by the application of the Euler-Lagrange equation that the same differential equations are generated as those obtained from the direct application of the Navier-Stokes equation to simple steady and unsteady one-dimensional flows. The authors consider that the PMPG could shed light on the problem of existence of solutions of the Navier-Stokes equation, because variational principles have usually been useful in studying existence of solutions of partial differential equations.
## 5 Conclusions
The proposals examined here show that a description of viscous flows is possible within the framework of the Lagrangian formalism, which leads to the standard form of the Navier-Stokes equation directly or is at least capable of generating the same equations of motion for simple steady and unsteady one-dimensional flows.
Perhaps, the most important revelation of the present paper is to show the power of the Euler-Lagrange equation in generating the Navier-Stokes equation once the key physical phenomenon involved (here viscous dissipation and its manifestations) has been properly modeled and included in the Lagrangians. However, these may lead to third-order derivatives instead of second-order terms, which have posed no additional difficulties for the simple flow problems considered here, but that may not be so in more complex situations.
As far as the derivation of Navier-Stokes equation is concerned, it is shown that there is equivalence between Lagrangian, Hamiltonian, and Newtonian mechanics, which, however, does not imply that every formulation of a physical problem is equally tractable in each framework.
Although the variational approaches examined here have proven to be capable of solving simple flow problems, their attractiveness in more complex situations is still an open question for the applied fluid mechanician.
|
2303.18205 | SimTS: Rethinking Contrastive Representation Learning for Time Series
Forecasting | Contrastive learning methods have shown an impressive ability to learn
meaningful representations for image or time series classification. However,
these methods are less effective for time series forecasting, as optimization
of instance discrimination is not directly applicable to predicting the future
state from the history context. Moreover, the construction of positive and
negative pairs in current technologies strongly relies on specific time series
characteristics, restricting their generalization across diverse types of time
series data. To address these limitations, we propose SimTS, a simple
representation learning approach for improving time series forecasting by
learning to predict the future from the past in the latent space. SimTS does
not rely on negative pairs or specific assumptions about the characteristics of
the particular time series. Our extensive experiments on several benchmark time
series forecasting datasets show that SimTS achieves competitive performance
compared to existing contrastive learning methods. Furthermore, we show the
shortcomings of the current contrastive learning framework used for time series
forecasting through a detailed ablation study. Overall, our work suggests that
SimTS is a promising alternative to other contrastive learning approaches for
time series forecasting. | Xiaochen Zheng, Xingyu Chen, Manuel Schürch, Amina Mollaysa, Ahmed Allam, Michael Krauthammer | 2023-03-31T16:59:40Z | http://arxiv.org/abs/2303.18205v1 | # SimTS: Rethinking Contrastive Representation Learning for Time Series Forecasting
###### Abstract
Contrastive learning methods have shown an impressive ability to learn meaningful representations for image or time series classification. However, these methods are less effective for time series forecasting, as optimization of instance discrimination is not directly applicable to predicting the future state from the history context. Moreover, the construction of positive and negative pairs in current technologies strongly relies on specific time series characteristics, restricting their generalization across diverse types of time series data. To address these limitations, we propose SimTS, a simple representation learning approach for improving time series forecasting by learning to predict the future from the past in the latent space. SimTS does not rely on negative pairs or specific assumptions about the characteristics of the particular time series. Our extensive experiments on several benchmark time series forecasting datasets show that SimTS achieves competitive performance compared to existing contrastive learning methods. Furthermore, we show the shortcomings of the current contrastive learning framework used for time series forecasting through a detailed ablation study. Overall, our work suggests that SimTS is a promising alternative to other contrastive learning approaches for time series forecasting.
Machine Learning, ICML
## 1 Introduction
The field of time series forecasting has experienced significant progress in recent years with a wide range of practical applications across different sectors such as finance (Sezer et al., 2020), traffic (Zheng and Huang, 2020), and clinical practice (Johnson et al., 2016). The availability of large volumes of data is one of the key factors behind these advancements. In particular, self-supervised learning approaches such as contrastive learning (Yue et al., 2022; Woo et al., 2022; Yeche et al., 2021) have shown promise in exploiting these datasets and have continually outperformed supervised approaches (Bai et al., 2018; Salinas et al., 2020; Zhou et al., 2021) in time series forecasting tasks. Self-supervised contrastive approaches learn representations by mapping similar instances (i.e., positive pairs) to similar representations while pushing dissimilar instances (i.e., negative pairs) apart. Most contrastive learning approaches rely on instance discrimination (Wu et al., 2018). The resulting representations contain information that can discriminate well between different instances of time series, making them informative for downstream tasks such as time series _classification_. However, in time series _forecasting_, the goal is to predict the future based on past time windows rather than discriminating between instances. Consequently, features learned by instance discrimination may not be sufficient for accurate forecasting.
Additionally, identifying positive and negative pairs for time series forecasting is challenging. Contrastive learning relies on data augmentations to generate positive pairs. While it is possible to find semantic preserving augmentations for time series classification (Ye and Keogh, 2009; Yeche et al., 2021; Nonnenmacher et al., 2022), it is more difficult to identify augmentation methods that can be generalized to time series forecasting. Besides, most existing methods for constructing negative pairs depend heavily on the individual characteristics of the time series, making them not applicable to other types of time series. Yue et al. (2022); Kiyasseh et al. (2021); Yeche et al. (2021); Hyvarinen and Morioka (2016); Tonekaboni et al. (2021) propose methods based on the assumptions that (1) the similarity between segments of the same time series decreases as the time lag increases, and (2) segments of distinctive time series are dissimilar. However, particular time series do not adhere to these assumptions, resulting in unsatisfactory representations (Eldele et al., 2021; Nonnenmacher et al., 2022) in other time series. For instance, in a time series with a strong periodicity, similar patterns exist between or within instances. As illustrated
in Figure 1, selecting time windows randomly may result in selecting inappropriate negative pairs (Tian et al., 2020), leading to false repulsion, where the model incorrectly discriminates representations of similar samples. Other recent approaches are based on disentanglement (Woo et al., 2022; Wang et al., 2022) or fusion (Yang and Hong, 2022; Zhang et al., 2022), assuming that a time series can be represented by trend and seasonality components. As a result, these approaches may not generalize well across various forecasting datasets, since real-world data often lack consistent seasonality. In general, this reliance on specific characteristics limits their generalizability when applied to different types of time series data, which will be demonstrated through detailed experiments in Section 4.
To address these limitations in contrastive representation learning for time series forecasting, the paper aims to answer the following key question: "what is important for time series forecasting with contrastive learning, and how can we adapt contrastive ideas more effectively to time series forecasting tasks?" Beyond contrastive learning, we propose a _Simple Representation Learning Framework for **T**ime **S**eries Forecasting_ (**SimTS**), which is inspired by predictive coding (Oord et al., 2018): we learn a representation such that the latent representation of the future time windows can be predicted from the latent representation of the history time windows. In particular, we build upon a siamese network structure (Bromley et al., 1993; Chen and He, 2021) and propose key refinements that enable better prediction performance with a simpler model structure compared to state-of-art methods. First, we divide a given time series into _history_ and _future_ segments and then use an encoding network to map them to their latent space. Second, we use a predictive layer to predict the latent representation of the _future_ segment from the _history_ segment. We regard the predicted representation (from the _history_ segment) and the encoded representation of the _future_ segment as positive pairs. The representations learned in this way encode features that are useful for forecasting tasks.
Moreover, the paper questions existing assumptions and techniques used for constructing positive and negative pairs. We provide a detailed discussion and several experiments showing their shortcomings when applied to various time series. Specifically, inspired by (Chen and He, 2021; Grill et al., 2020; Tian et al., 2021), we question the proposed usage of negative pairs for time series forecasting and the idea of augmenting the data to generate positive pairs, which is empirically investigated in several experiments with different contrastive methods. As a consequence, our model does not use negative pairs to avoid false repulsion. We hypothesize that the most important mechanism behind representation learning for time series forecasting is maximizing the shared information between representations of _history_ and _future_ time windows. In our proposed model, we explicitly impose a constraint that the learned representation of history should encode as much information as possible by predicting the latent representation of the future from the latent representation of history. This mechanism simplifies several existing approaches and leads to state-of-the-art forecasting results, as thoroughly demonstrated in this paper.
Our contributions can be summarized as follows:
* We propose a novel method (SimTS) for time series forecasting, which employs a siamese structure and a simple convolutional encoder to learn representations in latent space without requiring negative pairs.
* We demonstrate the effectiveness and generalizability of SimTS across various types of time series through experiments on multiple types of benchmark datasets. Our method outperforms state-of-the-art methods for multivariate time series forecasting.
* We conduct extensive ablation experiments to assess and evaluate the effectiveness of various assumptions that are widely used in current state-of-the-art contrastive learning frameworks. This provides insights into the key factors that contribute to the performance of time series forecasting and sheds light on potential areas for improvement in future research.
## 2 Related Works
Researchers have recently developed numerous deep learning models to address the challenges of time series forecasting. Traditional models for time series prediction, such as ARIMA (Liu et al., 2016), SVM (Han et al., 2012), and VAR (Box et al., 2015), have been outperformed on many datasets by deep learning models, including RNN (Wen et al., 2017), CNN (Bai et al., 2018) and transformers (Vaswani et al., 2017). TCN (Bai et al., 2018) introduces dilated convolutions (Oord et al., 2016) for time series forecasting, which incorporates dilation factors into conventional CNNs to increase the receptive field significantly. To improve the effectiveness of long-term time series
Figure 1: Problems with selecting negative pairs based on methods proposed in (Yéche et al., 2021; Yue et al., 2022; Woo et al., 2022) when cross-instance and cross-time repeated patterns exist.
forecasting, the conventional transformer is modified and applied to time series: LogTrans (Li et al., 2019) suggests the _LogSparse_ attention; Informer (Zhou et al., 2021) develops the _ProbSparse_ self-attention mechanism to reduce the computational cost of long-term forecasting.
Recent developments in self-supervised learning have successfully discovered meaningful representations for images (He et al., 2020; Chen et al., 2020) with InfoNCE loss (Oord et al., 2018). To get reliable time-series representations, several approaches have been investigated. Some studies focus on formulating time segments as contrastive pairs: ICA (Hyvarinen and Morioka, 2016) investigates non-stationarity in temporal data to find a representation that allows optimal time segment discrimination; TNC (Tonekaboni et al., 2021) establishes a temporal neighborhood to contrast between neighboring segments and learn the underlying temporal dependency of non-stationary time series. However, these methods do not perform as well in forecasting tasks since they focus on extracting neighborhood features and fail to capture global patterns in time series. Furthermore, some methods utilize more complicated contrastive learning approaches to learn effective representations for time series. For example, (Franceschi et al., 2019) learns scalable representations for various time series lengths using contrasting positive, negative, and reference pairs with an innovative triplet loss. TS2Vec (Yue et al., 2022) employs hierarchical contrastive learning over time series augmentations, generating representations for each time step. However, these approaches formulate contrastive learning frameworks as classification tasks, which try to learn representations by discriminating time series from different classes and therefore ignore learning predictive features. Additionally, as time series can be (re-)constructed by combining trend, season, and noise components (Shumway et al., 2000), there is growing research that uses time series decomposition in unsupervised learning. CoST (Woo et al., 2022) encodes disentangled trend and seasonal representations using contrastive learning. BTSF (Yang and Hong, 2022) aggregates time and spectral domain to extract global information and refine representations. While decomposition-related methods may exhibit robust performance in certain datasets, they heavily rely on underlying assumptions about the data's characteristics and tend to fail when dealing with datasets that lack specific seasonality or trend.
## 3 Methods
### Motivation
In this work, we rethink "what is important for time series forecasting with contrastive learning?" Firstly, we observe that the existing methods might (1) ignore the possibility that repeated patterns exist within a time series, even though they may be located far apart from one another, and (2) disregard the possibility that distinct time series may contain similar patterns. We aim to identify a more suitable design that considers the inherent nature of time series forecasting and adheres to necessary assumptions for effective representation learning. We argue that a good representation should effectively capture the temporal dependencies between past segments and future predictions in forecasting tasks, emphasizing that the temporal differences hold greater significance than the similarity between positive and negative pairs. Thus, we design predictive positive pairs that can learn more flexible and adaptive representations.
Secondly, current approaches require sufficient negative pairs to avoid collapsing (Chen et al., 2020; Chen and He, 2021; Zhang et al., 2022). Collapsing happens in Siamese networks (Bromley et al., 1993) where the model produces a constant representation regardless of the input. Although the introduction of negative pairs constrains the solution space and prevents collapsing, it might also induce the issue of false repulsion. Simultaneously, identifying suitable augmentation methods and negative pairs for forecasting tasks can be challenging, especially when repeated patterns exist across different samples. Such challenges motivate us to explore alternative approaches that circumvent negative pairs and implement stop-gradient solutions.
Furthermore, we contend that real-world data often lack distinct seasonality, making it difficult for models to learn irregular temporal information using abstract features. Our experiments demonstrate that learned representations, which discard some additional model components, yield better forecasting performance than the state-of-the-art contrastive model CoST (Woo et al., 2022), as shown in Table 5. These results suggest that current methods may not generalize well to diverse time series datasets. Finally, it leads us to the central motivation of our SimTS model: we train an encoder to learn time series representations by predicting its future from historical segments in the latent space. SimTS achieves the best performance in time series forecasting benchmark datasets with a relatively simpler design compared to other contrastive learning frameworks.
### SimTS: Simple Representation Learning for Time Series Forecasting
Given a time series \(X=[x_{1},x_{2},\ldots,x_{T}]\in\mathbb{R}^{C\times T}\), where \(C\) is the number of features (i.e., variables) and \(T\) denotes the sequence length. Our objective is to learn a latent representation of the _history_ segment \(X^{h}=[x_{1},x_{2},\ldots,x_{K}]\), where \(0<K<T\), such that our model can predict the _future_ segment \(X^{f}=[x_{K+1},x_{K+2},\ldots,x_{T}]\) from it.
Inspired by well-developed contrastive learning frameworks (Oord et al., 2018; Chen et al., 2020; Grill et al., 2020; Chen and He, 2021), SimTS learns time series representations by maximizing the similarity between predicted
and encoded latent features for each timestamp. The approach involves designing an encoder network, denoted as \(F_{\theta}\), which maps historical and future segments to their corresponding latent representations, \(Z^{h}\) and \(Z^{f}\), respectively. The encoder's objective is to learn an informative latent representation \(Z^{h}=F_{\theta}(X^{h})=[z_{1}^{h},z_{2}^{h},...,z_{K}^{h}]\in\mathbb{R}^{C^{ \prime}\times K}\) that can be used to predict the latent representation of the future through a prediction network. The SimTS model consists of four main parts:
* A siamese neural network architecture (Bromley et al., 1993; Chen and He, 2021) consisting of two identical networks that share parameters. The time series is divided into the _history_ segment \(X^{h}\), and _future_ segment \(X^{f}\), and given as inputs to the siamese network. The siamese network learns to map them to their latent representations \(Z^{h},Z^{f}\).
* A multi-scale encoder consisting of a projection layer that projects raw features into a high dimensional space and multiple CNN blocks with different kernel sizes.
* A predictor network \(G_{\phi}\) that takes the last column of the encoded _history_ view as input and predicts the _future_ in latent space.
* A cosine similarity loss that only takes positive samples into account.
Figure 2 depicts the overall architecture of SimTS. Our model architecture consists of two paths: the _history_ encoding path and the _future_ encoding path. The _history_ encoding path takes the _history_ view \(X^{h}\) and outputs \(Z^{h}=F_{\theta}(X^{h})\). The _future_ encoding path takes the _future_ view \(X^{f}\) and outputs the encoded latent representation of the future \(Z^{f}=F_{\theta}(X^{f})=[z_{K+1}^{f},z_{K+2}^{f},...,z_{T}^{f}]\in\mathbb{R}^{C ^{\prime}\times(T-K)}\). As proposed in (Grill et al., 2020; Zeng et al., 2022), we apply a predictive MLP network \(G_{\phi}\) on the last column of \(Z^{h}\), denoted as \(z_{K}^{h}\), to predict the _future_ latent representations: \(\hat{Z}^{f}=G_{\phi}(z_{K}^{h})=[\hat{z}_{K+1}^{f},\hat{z}_{K+2}^{f},...,\hat{ z}_{T}^{f}]\in\mathbb{R}^{C^{\prime}\times(T-K)}\). Intuitively, the last column allows the encoder to condense the history information into a summary by properly choosing the kernel size. The training objective is to attract the _predicted future_ and _encoded future_ timestamps in representation space without introducing the negative pairs. As the predicted future latent representation is learned from the latent representation of the history, by forcing the predicted latent representation of the future to be close to the encoded latent representation of the future, we are forcing the model to learn a representation of the history that is informative for the future. Therefore, we regard the encoded \(Z^{f}\) and the predicted _future_ representations \(\hat{Z}^{f}\) as the positive pair and calculate the negative cosine similarity between them:
\[Sim(\hat{Z}^{f},Z^{f})=-\frac{1}{T-K}\sum_{i=K+1}^{T}\frac{\hat{z}_{i}^{f}}{ \parallel\hat{z}_{i}^{f}\parallel_{2}}\cdot\frac{z_{i}^{f}}{\parallel z_{i}^{ f}\parallel_{2}}, \tag{1}\]
where \(\parallel\cdot\parallel_{2}\) is \(l_{2}\)-norm and \(Sim(\cdot)\) is the average cosine similarity of all time steps. Algorithm 1 summarises the proposed SimTS.
### Multi-Scale Encoder
To learn a meaningful representation, the structure of the encoder network \(F_{\theta}\) plays a vital role. Given the nature of time series, we would like our base encoder \(F_{\theta}\) to extract temporal (inter-time) dependency from local and global patterns. For short-term forecasting, shorter local patterns (i.e., motifs) are ideal, whereas, for long-term forecasting, longer sets of global patterns are preferred. Therefore, we propose to use a convolutional network with multiple filters
Figure 3: Multi-scale encoder. Composed of a projection layer and a set of parallel 1d convolutions with kernel size \(2^{i}\), for \(i\in\{0,1,...,m\}\). An averaged pooling layer is added on the top of convolutions.
Figure 2: Illustration of our proposed SimTS.
that have various kernel sizes, which can extract both global and local patterns.
Figure 3 illustrates the details of the encoder \(F_{\theta}\). First, each time series input is passed through a convolutional projection layer. The projection layer enables us to project time series into a latent space (Yue et al., 2022; Woo et al., 2022; Wang et al., 2022). We aim to capture abstract information and consistent intra-time relationships between features that may not be immediately apparent from the raw data, so that the model can potentially learn more informative and abstract representations of the raw inputs. Second, for a time series \(X\) with length \(K\), we have \(m=[\log_{2}K]+1\) parallel convolution layers on top of the projection layer, and the \(i\)th convolution has kernel size \(2^{i}\), where \(i\in\{0,1,...,m\}\). These different kernel sizes can extract corresponding local/global patterns. Each convolution \(i\) takes the latent features from the projection layer and generates a representation \(\hat{Z}_{(i)}\). The final multi-scale representation \(Z\) is obtained by averaging across \(\hat{Z}_{(0)},\hat{Z}_{(1)},...,\hat{Z}_{(m)}\).
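The encoder described above can be sketched in PyTorch as follows; the padding and trimming choices, as well as the default dimensions, are assumptions made for illustration rather than the authors' exact implementation.

```python
import math
import torch
import torch.nn as nn

class MultiScaleEncoder(nn.Module):
    """Projection layer followed by parallel 1D convolutions with kernel sizes 2^i."""
    def __init__(self, in_dim: int, proj_dim: int = 64, out_dim: int = 320, max_len: int = 201):
        super().__init__()
        self.project = nn.Conv1d(in_dim, proj_dim, kernel_size=1)        # projection layer
        m = int(math.log2(max_len)) + 1
        self.convs = nn.ModuleList(
            [nn.Conv1d(proj_dim, out_dim, kernel_size=2**i, padding=2**i - 1) for i in range(m)]
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        h = self.project(x)                                            # x: (batch, C, K) raw segment
        outs = [conv(h)[..., : x.shape[-1]] for conv in self.convs]    # trim each scale back to length K
        return torch.stack(outs, dim=0).mean(dim=0)                    # average across scales

encoder = MultiScaleEncoder(in_dim=7)               # e.g. seven variables, as in the ETT datasets
z = encoder(torch.randn(8, 7, 201))                 # latent representation of shape (8, 320, 201)
```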
### Stop-gradient Operation
We apply a stop-gradient operation (Chen and He, 2021) to the _future_ encoding path in our model. Considering that we learn to encode both _history_ and _future_ using the same encoder, the model may optimize the encoder by pushing encoded _future_\(Z^{f}\) towards the predicted _future_\(\hat{Z}^{f}\). As the encoder should constrain the latent of the past to be predictive of the latent of the _future_, only \(\hat{Z}^{f}\) can only move towards \(Z^{f}\) in the latent space, not vice versa (Zhang et al., 2022). Due to the stop-gradient operation on \(Z^{f}\), our encoder cannot receive updates from _future_ representations \(Z^{f}\) and is constrained to only optimize the _history_ representation and its prediction \(\hat{Z}^{f}\). With stop-gradient (sg), the loss is:
\[\begin{array}{l}\mathcal{L}_{\theta,\phi}(X^{h},X^{f})=Sim\left(G_{\phi} \left(F_{\theta}(X^{h})\right),F_{\texttt{sg}(\theta)}(X^{f})\right)\\ =Sim(\hat{Z}^{f},\texttt{sg}(Z^{f}))\end{array} \tag{2}\]
The loss in definition (2) is for one sample \(X=[X^{h},X^{f}]\). The loss for a mini-batch \(\mathcal{D}=\{X^{h}_{i},X^{f}_{i}\}_{i\in[1:N]}\) can be written as
\[\mathcal{L}_{\theta,\phi}(\mathcal{D})=\frac{1}{N}\sum_{i=1}^{N}\mathcal{L}_{ \theta,\phi}(X^{h}_{i},X^{f}_{i}), \tag{3}\]
which corresponds to the average loss across all samples in the mini-batch.
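In code, the stop-gradient \(\texttt{sg}(\cdot)\) of Eq. (2) amounts to detaching the future path from the computation graph. The following minimal training-step sketch uses placeholder networks for \(F_{\theta}\) and \(G_{\phi}\) and assumed shapes; it illustrates the mechanism only and is not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# One SimTS-style training step with the stop-gradient of Eq. (2).
C, C_latent, K = 7, 320, 201
encoder = nn.Conv1d(C, C_latent, kernel_size=3, padding=1)          # placeholder for F_theta
predictor = nn.Sequential(nn.Linear(C_latent, C_latent), nn.ReLU(),
                          nn.Linear(C_latent, C_latent * K))        # placeholder for G_phi

x_h, x_f = torch.randn(8, C, K), torch.randn(8, C, K)               # history / future views

z_h = encoder(x_h)                         # (8, C', K)
z_f = encoder(x_f).detach()                # stop-gradient on the future path (sg in Eq. 2)
z_hat_f = predictor(z_h[..., -1]).view(8, C_latent, K)   # predict future latents from the last column

loss = -F.cosine_similarity(z_hat_f, z_f, dim=1).mean()  # Eq. (1)
loss.backward()                            # gradients flow only through the history path
```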
## 4 Experiments
As our goal is to learn a meaningful representation for various types of time series data for forecasting tasks, we focus on experimental settings where we can test the representation power of our model on various forecasting benchmark datasets. To keep a fair comparison, we follow the exact same setup as in CoST and TS2Vec. We first use our trained model to obtain the latent representation of the time series, then train a ridge regression model on the learned latent representation for forecasting, i.e., predicting future \(L\) time steps.
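Concretely, this evaluation protocol can be sketched with scikit-learn as follows; the representations, targets, and regularisation grid shown are placeholders, and the exact feature construction and validation procedure used in our experiments may differ.

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import GridSearchCV

# placeholders for the frozen representations of history windows and the
# corresponding future values (here L = 24 steps, C = 7 variables, flattened)
Z_train = np.random.randn(1000, 320)
Y_train = np.random.randn(1000, 24 * 7)

# the regularisation strength is selected on held-out data; cross-validation is
# used here for brevity instead of a fixed validation split
search = GridSearchCV(Ridge(), {"alpha": [0.1, 1.0, 10.0, 100.0]}, cv=3)
search.fit(Z_train, Y_train)

Y_pred = search.predict(Z_train)      # in practice: predict on the test-set representations
mse = np.mean((Y_pred - Y_train) ** 2)
mae = np.mean(np.abs(Y_pred - Y_train))
```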
### Datasets and Baselines
We compare our method to the most recent state-of-the-art two-stage representation learning methods for time series, as well as to end-to-end learning methods in which the representation learning part and the forecasting part are trained jointly. The two-stage representation learning approaches include TS2Vec (Yue et al., 2022), CoST (Woo et al., 2022) and TNC (Tonekaboni et al., 2021), and the end-to-end models include Informer (Zhou et al., 2021) and LogTrans (Li et al., 2019). The details and implementations of the baselines are provided in the appendix. Our model was tested for both univariate and multivariate forecasting. In the case of a dataset with \(C\) features, we either predict the future values for all \(C\) features (i.e., multivariate forecasting) or only focus on forecasting the future values of one specific feature (univariate forecasting).
Our experiments are carried out on six real-world public benchmark datasets. **Electricity Transformer Temperature (ETT)**(Zhou et al., 2021) measures long-term deployment of electric power. It consists of two hourly-sampled datasets (ETTh) and two 15-minute-sampled datasets (ETTm), which are collected for 2 years and from
two different Chinese provinces. ETT datasets contain one oil temperature feature and six power load features. In univariate forecasting, we only take oil temperature to train and forecast. In multivariate forecasting, we employ all features in our training and prediction. **Exchange-Rate1**[11] contains contains the daily exchange rates of eight foreign countries from 1990 to 2016, including Australia, Britain, Canada, Switzerland, China, Japan, New Zealand, and Singapore. We consider the values of Singapore for univariate forecasting and all countries' value for multivariate forecasting. **Weather2** consists of local climatological data for almost 1,600 U.S. areas for 4 years. The data is collected every 10 minutes. Each time step contains 11 weather variables and one target feature, 'Wet Bulb Celsius.' In univariate forecasting, we only consider the feature 'Wet Bulb Celsius'; in multivariate forecasting, all features are included. The detailed statistics of the datasets are in the appendix (Table 6).
Footnote 1: [https://github.com/laiguokun/multivariate-time-series-data](https://github.com/laiguokun/multivariate-time-series-data)
Footnote 2: [https://www.bgc-jena.mpg.de/wetter/](https://www.bgc-jena.mpg.de/wetter/)
### Experimental setup
We divide all datasets into training, validation, and test sets in the ratio of 6:2:2. Throughout the evaluation stage, the model parameters are frozen to output representations.
The input time series are projected to a 64-dimensional latent space using a convolutional projector. The multi-scale convolutions further encode the projected vectors into a 320-dimensional latent space (i.e., \(C^{\prime}\) = 320). We cut the original time series into sub-sequences of length T = 402, where each sub-sequence serves as a training sample. Within each sample, the first 201 timestamps correspond to its _history_ view and the subsequent 201 timestamps to its _future_ view. The cosine similarity loss is optimized using a stochastic gradient descent (SGD) optimizer with a learning rate of 0.001, a momentum of 0.9, and a weight decay of 0.0001. We trained for 500 epochs on all datasets with a batch size of 8.
We set the predicted horizons \(L\in\{24,48,168,336,720\}\) for dataset ETTh1, ETTh2, Exchange, and Weather. For dataset ETTm1 and ETTm2, we set \(L\in\{24,48,96,288,672\}\). We select the best ridge regression model using the validation set and then use it to report the forecasting error on the test set. Mean-squared-error (MSE) and mean-absolute-error (MAE) are used to evaluate our results. More details about the experimental setup and training process are included in the appendix, and codes for reproducing the results will be available upon acceptance.
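For reference, the hyperparameters listed in this section can be collected in one place as follows; this is a convenience sketch, not the authors' configuration file.

```python
from dataclasses import dataclass

@dataclass
class SimTSConfig:
    proj_dim: int = 64        # dimension of the convolutional projection
    latent_dim: int = 320     # C', output dimension of the multi-scale encoder
    seq_len: int = 402        # length T of each training sub-sequence
    history_len: int = 201    # K, length of the history view (the rest is the future view)
    lr: float = 1e-3          # SGD learning rate
    momentum: float = 0.9
    weight_decay: float = 1e-4
    epochs: int = 500
    batch_size: int = 8

cfg = SimTSConfig()
```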
### Results
Table 1 summarizes the average results of multivariate forecasting with five runs. Overall, our model, SimTS, outperforms all the representation learning baselines in the multivariate setting on most of the datasets by a large margin. When looking at the average performance across six datasets, SimTS outperforms TS2Vec by 18.8% (MSE) and 11.1%(MAE), TNC by 36.9% (MSE) and 16.0%(MAE), and CoST by 11.0% (MSE) and 6.8%(MAE). Additionally, when examining the performance on each dataset individually, SimTS outperforms TS2Vec on all six datasets and outperforms TNC and CoST on five out of six datasets while performing comparably or slightly worse on one of the six datasets. We believe one probable explanation is that the Exchange dataset is less stationary, and the pattern of data adjacent in time (i.e., in a neighborhood) can be discriminated from the pattern of data far away. Such neighborhood patterns can be found via TNC, which leads to better performance. On the other hand, the weather dataset is more stationary, which means CoST can use season-trend disentanglement to extract useful information and thus achieves better performance.
Although CoST and TNC perform better in some datasets, SimTS achieves overall state-of-the-art performance across all datasets. This suggests that our approach is general and robust across a wide range of time series datasets.
## 5 Ablation Study
In this section, we present a systematic ablation study to examine the different components and assumptions in our model. We also investigate assumptions in the baseline models to assess their influence on forecasting performance.
### Backbones
First, we examine the importance of our encoder network structure design. To test the contribution of the convolutional network structure as our encoding network, we substitute the convolutional layers with the TCN [1] and LSTM [16] networks with comparable parameter sizes. Table 2 shows the forecasting results on ETT datasets. In both univariate and multivariate forecasting, the convolutional layer in our model performs better than TCN and LSTM, demonstrating the efficiency of our encoder for encoding time series representations.
### Negative Samples
Negative pairs, if not constructed carefully, could degrade the model performance in terms of representation power. TS2Vec (Yue et al., 2022) and CoST (Woo et al., 2022) use sub-sequences of other instances or various timestamps as the negative pairs for contrastive learning. However, our model SimTS outperforms them in the absence of negative pairs, implying that the selection of negative pairs in CoST and TS2Vec may be inaccurate and result in sub-optimal performance. To further demonstrate the influence of neg
\begin{table}
\begin{tabular}{c|c c c c c c c c|c c c c} \hline \hline \multirow{2}{*}{Methods} & \multicolumn{6}{c}{Unsupervised Representation Learning} & \multicolumn{6}{c}{End-to-end Forecasting} \\ \cline{3-14} & \multicolumn{3}{c}{Ours} & \multicolumn{3}{c}{TS2Vec} & \multicolumn{3}{c}{TNC} & \multicolumn{3}{c}{CoST} & \multicolumn{3}{c}{Informer} & \multicolumn{3}{c}{TCN} \\ \hline & L & MSE & MAE & MSE & MAE & MSE & MAE & MSE & MAE & MSE & MAE \\ \hline \multirow{5}{*}{\begin{tabular}{c} Contrastive \\ \end{tabular} } & 24 & **0.377** & **0.422** & 0.590 & 0.531 & 0.708 & 0.592 & 0.386 & 0.429 & 0.577 & 0.549 & 0.583 & 0.547 \\ & 48 & **0.427** & **0.454** & 0.624 & 0.555 & 0.749 & 0.619 & 0.437 & 0.464 & 0.685 & 0.625 & 0.670 & 0.606 \\ & 168 & **0.638** & **0.577** & 0.762 & 0.639 & 0.884 & 0.699 & 0.643 & 0.582 & 0.931 & 0.752 & 0.811 & 0.680 \\ & 336 & 0.815 & **0.685** & 0.931 & 0.728 & 1.020 & 0.768 & **0.812** & 0.679 & 1.128 & 0.873 & 1.132 & 0.815 \\ & 720 & **0.956** & **0.771** & 1.063 & 0.799 & 1.157 & 0.830 & 0.970 & 0.771 & 1.215 & 1.869 & 1.165 & 0.813 \\ \hline \multirow{5}{*}{\begin{tabular}{c} Contrastive \\ \end{tabular} } & 24 & **0.336** & **0.434** & 0.424 & 0.489 & 0.612 & 0.595 & 0.447 & 0.502 & 0.720 & 0.665 & 0.935 & 0.754 \\ & 48 & **0.564** & **0.571** & 0.619 & 0.605 & 0.840 & 0.716 & 0.699 & 0.637 & 1.457 & 1.001 & 1.300 & 0.911 \\ & 168 & **1.407** & **0.926** & 1.845 & 1.074 & 2.359 & 1.213 & 1.549 & 0.982 & 3.489 & 1.515 & 4.017 & 1.579 \\ & 336 & **1.640** & **0.996** & 2.194 & 1.197 & 2.782 & 1.349 & 1.749 & 1.042 & 2.723 & 1.340 & 3.460 & 1.456 \\ & 720 & **1.878** & **1.065** & 2.636 & 1.370 & 2.753 & 1.394 & 1.971 & 1.092 & 3.467 & 1.473 & 3.106 & 1.381 \\ \hline \multirow{5}{*}{\begin{tabular}{c} Contrastive \\ \end{tabular} } & 24 & **0.232** & **0.314** & 0.453 & 0.444 & 0.522 & 0.472 & 0.246 & 0.329 & 0.323 & 0.369 & 0.522 & 0.472 \\ & 48 & **0.311** & **0.368** & 0.592 & 0.521 & 0.695 & 0.567 & 0.381 & 0.386 & 0.494 & 0.503 & 0.542 & 0.508 \\ & 96 & **0.360** & **0.402** & 0.635 & 0.554 & 0.731 & 0.595 & 0.378 & 0.419 & 0.678 & 0.614 & 0.666 & 0.578 \\ & 288 & **0.450** & **0.467** & 0.693 & 0.597 & 0.818 & 0.649 & 0.472 & 0.486 & 1.056 & 0.786 & 0.991 & 0.735 \\ & 672 & **0.612** & **0.563** & 0.782 & 0.653 & 0.932 & 0.712 & 0.620 & 0.574 & 1.192 & 0.926 & 1.032 & 0.756 \\ \hline \multirow{5}{*}{\begin{tabular}{c} Contrastive \\ \end{tabular} } & 24 & **0.108** & **0.223** & 0.180 & 0.293 & 0.185 & 0.297 & 0.122 & 0.244 & 0.173 & 0.301 & 0.180 & 0.324 \\ & 48 & **0.164** & **0.285** & 0.244 & 0.350 & 0.264 & 0.360 & 0.183 & 0.305 & 0.303 & 0.409 & 0.204 & 0.327 \\ & 96 & **0.271** & **0.376** & 0.360 & 0.427 & 0.389 & 0.458 & 0.294 & 0.394 & 0.365 & 0.453 & 3.041 & 1.330 \\ & 288 & **0.716** & **0.646** & 0.723 & 0.639 & 0.920 & 0.788 & 0.723 & 0.652 & 1.047 & 0.804 & 3.162 & 1.337 \\ & 672 & **1.600** & **0.979** & 1.753 & 1.007 & 2.164 & 1.135 & 1.899 & 1.073 & 3.126 & 1.302 & 3.624 & 1.484 \\ \hline \multirow{5}{*}{
\begin{tabular}{c} Contrastive \\ \end{tabular} } & 24 & **0.059** & **0.172** & 0.108 & 0.252 & 0.105 & 0.236 & 0.136 & 0.291 & 0.611 & 0.626 & 2.483 & 1.327 \\ & 48 & **0.135** & **0.265** & 0.200 & 0.341 & 0.162 & 0.270 & 0.250 & 0.387 & 0.680 & 0.644 & 2.328 & 1.256 \\ \cline{1-1} & 168 & 0.713 & 0.635 & 0.412 & 0.492 & **0.397** & **0.480** & 0.924 & 0.762 & 1.097 & 0.825 & 2.372 & 1.279 \\ \cline{1-1} & 336 & 1.409 & 0.938 & 1.339 & 0.901 & **1.008** & **0.866** & 1.774 & 1.063 & 1.672 & 1.036 & 3.113 & 1.459 \\ \cline{1-1} & 720 & **1.628** & **1.056** & 2.114 & 1.125 & 1.989 & 1.063 & 2.160 & 1.209 & 2.478 & 1.310 & 3.150 & 1.458 \\ \hline \multicolumn{11}{c}{- The results of TS2Vec and CoST on ETTm2, Exchange, and Weather datasets are implemented by us.} \\ \end{tabular}
\end{table}
Table 1: Multivariate forecasting results. The best results are highlighted in bold, and the second-best results are highlighted with an underline. \(L\) denotes the predicted horizons of datasets.The performance is measured in mean-squared error (MSE) and mean-absolute error (MAE).
\begin{table}
\begin{tabular}{l|c c c c c c} \hline \hline Backbones & \multicolumn{2}{c}{Ours} & \multicolumn{2}{c}{TCN} & \multicolumn{2}{c}{LSTM} \\ \hline & MSE & MAE & MSE & MAE & MSE & MAE \\ \hline Multivariate & **0.688** & **0.601** & 0.912 & 0.674 & 2.124 & 0.827 \\ Univariate & & & & & & \\ \hline \hline \end{tabular}
\end{table}
ative samples, we construct negative pairs by following SimCLR (Chen et al., 2020) to test our model result with and without negative pairs. We replace the cosine similarity loss in Equation (2) with the loss that is used in Chen et al. (2020) and Oord et al. (2018) to consider the negative pairs together with the positive pairs. Given a mini-batch \(\mathcal{D}=\{X_{1},X_{2},...,X_{N}\}\) of \(N\) samples with length \(T\) and its encoded representations \(\{Z_{1},Z_{2},...,Z_{N}\}\), we optimize:
\[\mathcal{L}_{\theta,\phi}^{\text{NCE}}(\mathcal{D})=-\frac{1}{T}\sum_{t=1}^{T}\left[\log\frac{\exp(\hat{z}_{t}^{f}\cdot z_{t,+}^{f})}{\sum_{i=1}^{N}\exp(\hat{z}_{t}^{f}\cdot z_{t,i}^{f})}\right], \tag{4}\]
where \(\hat{z}_{t}^{f}=G_{\theta}(z_{t}^{h})\) is the predicted future representation, \(z_{t,+}^{f}\) is the encoded future representation of the same sample, and \(z_{t,i}^{f}\) denotes the latent representation of the \(t\)-th timestamp of the \(i\)-th sample from \(\mathcal{D}\). The numerator calculates the similarity between predicted and encoded future representations, which is the positive pair in our framework. The denominator calculates the similarities between the negative pairs, which are the predicted future representation and encoded representations from other samples within \(\mathcal{D}\). Table 3 shows the forecasting results with and without including the negative samples. In particular, it demonstrates that negative samples generally decrease performance in most of the datasets we tested. These results confirm that adding negative pairs to our proposed method leads to suboptimal performance. However, this does not mean that including negative pairs overall is not useful; it simply implies that the current approaches to constructing negative pairs are inefficient. Thus, future research should be dedicated to coming up with better ways to construct negative pairs.
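As a concrete illustration, the following PyTorch-style sketch shows one way the objective in Equation (4) could be computed; the function name, tensor shapes, and batching conventions are our own assumptions rather than the released code.

```python
import torch
import torch.nn.functional as F

def infonce_loss(z_pred, z_future):
    """Sketch of the InfoNCE objective in Equation (4).

    z_pred:   predicted future representations (z-hat^f), shape (N, T, C)
    z_future: encoded future representations (z^f),       shape (N, T, C)
    For each timestamp t, the positive is the encoded future of the same
    sample; the encoded futures of the other N-1 samples act as negatives.
    """
    N, T, C = z_pred.shape
    # sim[t, i, j] = dot product of prediction for sample i and encoding of sample j at time t
    sim = torch.einsum('itc,jtc->tij', z_pred, z_future)
    # the positive pair for sample i sits on the diagonal j = i
    labels = torch.arange(N, device=sim.device).expand(T, N)
    # cross-entropy over the softmax of candidates reproduces the -log ratio,
    # here averaged over both timestamps and samples in the mini-batch
    return F.cross_entropy(sim.reshape(T * N, N), labels.reshape(T * N))
```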
### Stop-Gradient Operation
In SimTS, we apply a stop-gradient operation on the _future_ encoding path during the optimization. To test the effect of this operation on the overall model performance, we conducted an ablation study in which we either remove this operation or apply it on the history encoding path instead of the _future_ encoding path, see Figure 4. When we apply the stop-gradient on the history encoding path, as shown in Figure 4(c), the model optimizes the loss by pushing _future_ representations \(Z^{f}\) towards the _future_ predictions \(\hat{Z}^{f}\). We refer to this model as RevSimTS (SimTS with reverse stop-gradient, Figure 4(c)). As shown in Table 4, we observe that either the removal of the stop-gradient on the _future_ encoding path (Figure 4(b)) or moving the stop-gradient to the history encoding path causes a significant decrease in performance, supporting our argument that the stop-gradient operation on the future encoding path leads to optimal performance.
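The variants compared in Table 4 differ only in where gradients are blocked. A minimal PyTorch-style sketch of this choice is given below; the names and shapes are assumptions, and the negative cosine similarity stands in for the similarity loss referenced in Equation (2).

```python
import torch.nn.functional as F

def simts_similarity_loss(z_pred, z_future, reverse=False):
    """Sketch of the cosine-similarity loss with a stop-gradient (detach).

    z_pred:   predictions of future representations from the history path
    z_future: representations from the future encoding path
    reverse=False: stop-gradient on the future path (SimTS, Figure 4(a))
    reverse=True:  stop-gradient on the prediction path (RevSimTS, Figure 4(c))
    """
    if reverse:
        z_pred = z_pred.detach()      # gradients flow only into the future encoder
    else:
        z_future = z_future.detach()  # gradients flow only into the history path
    return -F.cosine_similarity(z_pred, z_future, dim=-1).mean()
```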
### Disentanglement Assumption
To demonstrate that the season-trend disentanglement as proposed in CoST (Woo et al., 2022) may not work well for different types of datasets, especially on the less stationary data, we conduct an ablation study by removing the season disentanglement in CoST. The original season-trend disentanglement is performed by applying Fourier transform to the data and using an affine transformation to extract feature
\begin{table}
\begin{tabular}{l|c c c c c c} \hline \hline Model & \multicolumn{2}{c}{SimTS} & \multicolumn{2}{c}{SimTS w/o SG\({}^{\dagger}\)} & \multicolumn{2}{c}{RevSimTS} \\ \hline & MSE & MAE & MSE & MAE & MSE & MAE \\ \hline ETH1 & **0.642** & **0.582** & 0.783 & 0.663 & 0.762 & 0.634 \\ ETH2 & **1.165** & **0.798** & 2.940 & 1.490 & 3.128 & 1.449 \\ ETTm1 & **0.393** & **0.432** & 0.681 & 0.609 & 0.551 & 0.525 \\ ETTm2 & **0.572** & **0.502** & 1.315 & 0.863 & 1.186 & 0.796 \\ Exchange & **0.789** & **0.613** & 1.808 & 1.062 & 1.398 & 0.900 \\ Weather & **0.424** & **0.458** & 0.605 & 0.592 & 0.485 & 0.512 \\ \hline \hline \end{tabular} \({}^{\dagger}\) SimTS without stop-gradient operation
\end{table}
Table 4: Ablation study of stop-gradient operation on multivariate forecasting across ETT datasets.
\begin{table}
\begin{tabular}{l c c c c c c c c c c c c} \hline \hline Datasets & \multicolumn{2}{c}{ETTh1} & \multicolumn{2}{c}{ETTh2} & \multicolumn{2}{c}{ETTm1} & \multicolumn{2}{c}{ETTm2} & \multicolumn{2}{c}{Exchange} & \multicolumn{2}{c}{Weather} \\ \hline & MSE & MAE & MSE & MAE & MSE & MAE & MSE & MAE & MSE & MAE & MSE & MAE \\ \hline SimTS & 0.642 & 0.582 & 1.165 & 0.798 & 0.393 & 0.423 & 0.572 & 0.502 & 0.789 & 0.613 & 0.424 & 0.458 \\ SimTS w/ neg\({}^{\ddagger}\) & 0.685 & 0.632 & 1.544 & 0.938 & 0.392 & 0.441 & 0.747 & 0.572 & 1.405 & 0.769 & 0.434 & 0.468 \\ \hline \hline \end{tabular} \({}^{\ddagger}\) With negative samples and InfoNCE loss in Equation 4.
\end{table}
Table 3: Ablation study of negative samples on multivariate forecasting across ETT datasets.
\begin{table}
\begin{tabular}{l|c c c c c} \hline \hline Datasets & Exchange & ETTm2 & ETTm1 & Weather \\ \hline ADF Test Stat. & -1.889 & -6.225 & -14.985 & -26.661 \\ \hline CoST & 0.975 & 0.822 & 0.492 & 0.439 \\ CoST w/o SD\({}^{\dagger}\) & 0.899 & 0.754 & 0.466 & 0.440 \\ CoST w/o aug.\({}^{\ddagger}\) & 0.865 & 0.986 & 0.493 & 0.462 \\ CoST w/ mask\({}^{\ddagger}\) & 1.223 & 0.664 & 1.041 & 0.502 \\ \hline Diff. w/o SD & 0.076 \(\uparrow\) & 0.068 \(\uparrow\) & 0.026 \(\uparrow\) & 0.001 \(\downarrow\) \\ Diff. w/o aug. & 0.119 \(\uparrow\) & 0.164 \(\downarrow\) & 0.001 \(\downarrow\) & 0.023 \(\uparrow\) \\ Diff. w/ mask & 0.248 \(\downarrow\) & 0.158 \(\uparrow\) & 0.549 \(\downarrow\) & 0.063 \(\downarrow\) \\ \hline \hline \end{tabular} \({}^{\dagger}\) Seasonal disentanglement
\({}^{\ddagger}\) Discard the augmentations proposed in (Woo et al., 2022)
\({}^{\S}\) Timestamp masking proposed in (Yue et al., 2022)
\(\uparrow\)/\(\downarrow\) indicates performance increase/decrease
\end{table}
Table 5: The average multivariate forecasting results from changing the season-trend disentanglement and data augmentation modules in CoST.
correlations in the frequency domain. We substitute this process by performing the same affine transformation on the original data without applying the Fourier transform. The results are shown in Table 5, where "w/o SD" denotes CoST without the seasonal disentanglement. Besides, we adopt the Augmented Dickey-Fuller (ADF) test statistic (Elliott et al., 1996) proposed in (Liu et al., 2022) to measure the degree of stationarity. A smaller ADF score indicates higher stationarity. We observe that seasonal disentanglement can improve the forecasting outcomes for the Weather dataset, which exhibits significant stationarity. However, the seasonal disentanglement impairs predictive ability in less stationary datasets like Exchange and ETTm2, supporting our claim that the seasonal disentanglement assumption is misleading in some datasets and lacks generality.
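For reference, ADF statistics like the ones in Table 5 can be reproduced in spirit with the standard statsmodels implementation; the per-channel averaging below is our own assumption about how a single score per dataset is obtained.

```python
import numpy as np
from statsmodels.tsa.stattools import adfuller

def adf_stationarity_score(series: np.ndarray) -> float:
    """Average Augmented Dickey-Fuller test statistic over channels.

    series: array of shape (T, C) with T timestamps and C variables.
    More negative values indicate stronger evidence of stationarity,
    matching the ordering of the ADF row in Table 5.
    """
    stats = [adfuller(series[:, c], autolag="AIC")[0] for c in range(series.shape[1])]
    return float(np.mean(stats))
```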
### Data Augmentation for Constructing Views
Data augmentation is a common method to generate positive pairs in contrastive learning. However, current augmentation methods for time series may impair forecasting performance. We conduct ablation studies to demonstrate the influence of data augmentations. CoST uses three types of data augmentation: scaling, shifting, and jittering. On the other hand, TS2Vec randomly masks timestamps in a sample to construct views. Therefore, we implement two ablation experiments for CoST: (1) eliminating data augmentation and (2) adding random masks. Table 5 shows the results of the two experiments, where "w/o aug" denotes CoST without its original augmentation methods and "w/ mask" denotes CoST using random masks as augmentation. Our experiments show that the original data augmentation in CoST can potentially result in lower performance, and adding random masks impairs performance for most datasets. These findings do not imply that data augmentation is not effective in general; rather, they demonstrate that finding efficient augmentation techniques applicable to various time series is challenging, and better methods for augmenting time series data need to be developed.
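To make the compared augmentations concrete, a rough PyTorch-style sketch is given below; the parameter values and the exact form of each transform are illustrative assumptions, not the settings used by CoST or TS2Vec.

```python
import torch

def augment(x, sigma_scale=0.1, max_shift=0.1, sigma_jitter=0.03, mask_ratio=0.0):
    """Sketch of scaling, shifting, jittering, and optional timestamp masking.

    x: batch of series with shape (N, T, C).
    """
    scale = 1.0 + sigma_scale * torch.randn(x.size(0), 1, x.size(2), device=x.device)  # per-sample scaling
    shift = max_shift * torch.randn(x.size(0), 1, x.size(2), device=x.device)          # per-sample offset
    jitter = sigma_jitter * torch.randn_like(x)                                         # pointwise noise
    out = x * scale + shift + jitter
    if mask_ratio > 0:  # TS2Vec-style random timestamp masking
        mask = torch.rand(x.size(0), x.size(1), 1, device=x.device) < mask_ratio
        out = out.masked_fill(mask, 0.0)
    return out
```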
## 6 Conclusion
This paper proposes SimTS, a simple representation learning framework based on contrastive learning that does not require negative pairs. We conducted an extensive study to test our proposed model and compared it to other existing representation learning models for time series forecasting. Our general aim was to challenge the assumptions and components that are widely used in these models. Our study reveals that current representation learning methods are not universally applicable to different types of time series data. Some of the components used in these models might be unnecessary and can even negatively impact performance in some cases. This means that existing models based on contrastive learning for time series forecasting are highly dependent on the specific dataset being used, and careful consideration is necessary when deploying them.
Our proposed model, however, addresses some of the limitations by providing a simplified and robust contrastive learning model achieving better performance across different datasets compared to state-of-the-art methods. Moving forward, we plan to extend our framework to handle more challenging data such as irregular time series and explore efficient data augmentation methods for time series forecasting.
## 7 Acknowledgements
This work is supported by the Swiss National Science Foundation (project 201184).
Figure 4: Ablation study of stop-gradient operation. (a) SimTS architecture. (b) SimTS without stop-gradient operation. (c) RevSimTS with stop-gradient on the _history_ encoding path. |
2309.05569 | ITI-GEN: Inclusive Text-to-Image Generation | Text-to-image generative models often reflect the biases of the training
data, leading to unequal representations of underrepresented groups. This study
investigates inclusive text-to-image generative models that generate images
based on human-written prompts and ensure the resulting images are uniformly
distributed across attributes of interest. Unfortunately, directly expressing
the desired attributes in the prompt often leads to sub-optimal results due to
linguistic ambiguity or model misrepresentation. Hence, this paper proposes a
drastically different approach that adheres to the maxim that "a picture is
worth a thousand words". We show that, for some attributes, images can
represent concepts more expressively than text. For instance, categories of
skin tones are typically hard to specify by text but can be easily represented
by example images. Building upon these insights, we propose a novel approach,
ITI-GEN, that leverages readily available reference images for Inclusive
Text-to-Image GENeration. The key idea is learning a set of prompt embeddings
to generate images that can effectively represent all desired attribute
categories. More importantly, ITI-GEN requires no model fine-tuning, making it
computationally efficient to augment existing text-to-image models. Extensive
experiments demonstrate that ITI-GEN largely improves over state-of-the-art
models to generate inclusive images from a prompt. Project page:
https://czhang0528.github.io/iti-gen. | Cheng Zhang, Xuanbai Chen, Siqi Chai, Chen Henry Wu, Dmitry Lagun, Thabo Beeler, Fernando De la Torre | 2023-09-11T15:54:30Z | http://arxiv.org/abs/2309.05569v1 | # ITI-Gen: Inclusive Text-to-Image Generation
###### Abstract
Text-to-image generative models often reflect the biases of the training data, leading to unequal representations of underrepresented groups. This study investigates **inclusive** text-to-image generative models that generate images based on human-written prompts and ensure the resulting images are **uniformly distributed** across attributes of interest. Unfortunately, directly expressing the desired attributes in the prompt often leads to sub-optimal results due to linguistic ambiguity or model misrepresentation. Hence, this paper proposes a drastically different approach that adheres to the maxim that **"a picture is worth a thousand words"**. We show that, for some attributes, images can represent concepts more expressively than text. For instance, categories of skin tones are typically hard to specify by text but can be easily represented by example images. Building upon these insights, we propose a novel approach, **ITI-Gen1**, that leverages readily available reference images for **I**nclusive **T**ext-to-**I**mage **GEN**eration. The key idea is learning a set of prompt embeddings to generate images that can effectively represent all desired attribute categories. More importantly, ITI-Gen requires no model fine-tuning, making it computationally efficient to augment existing text-to-image models. Extensive experiments demonstrate that ITI-Gen largely improves over state-of-the-art models to generate inclusive images from a prompt.
Footnote 1: Project page: [https://czhang0528.github.io/iti-gen](https://czhang0528.github.io/iti-gen)
## 1 Introduction
In recent years we have witnessed a remarkable leap in text-based visual content creation, driven by breakthroughs in generative modeling [70, 28, 60, 59, 64] and the access to large-scale multimodal datasets [68, 36]. Particularly, publicly released models, such as Stable Diffusion [64], have matured to the point where they can produce highly realistic images based on human-written prompts.
However, one major drawback of existing text-to-image models is that they inherit biases from the training data [6, 59, 64, 12, 5] and thus have yet to exhibit _inclusiveness_ -- the generated images based on the input text may reflect stereotypes, leading to the exclusion of certain attributes or minority groups. For instance, given the prompt "a headshot of a person", Figure 1(a) shows how a state-of-the-art system generates about 92\(\%\) images of subjects without eyeglasses, and only \(8\%\) with eyeglasses, showing a clear bias towards people without eyeglasses. Alternatively, as shown in Figure 1(b), one could specify the attribute in the prompt, resulting in better outcomes; however, this will still result in a sub-optimal solution due to linguistic ambiguity. While inclusiveness has been critical to responsible AI, existing text-to-image models are still lagging [12, 5, 56, 54, 47]. In this work, we propose a new method that achieves inclusive
Figure 1: **(a)** Given a human-written prompt (“_a headshot of a person_”), existing text-to-image models [64] can hardly synthesize pictures representing minority groups (_i.e._, people with eyeglasses in this example). **(b)** Conventional hard prompt searching [19] is sub-optimal due to linguistic ambiguity. **(c)** We address these problems by leveraging a small set of reference images for inclusive text-to-image generation (ITI-Gen).
ness2 in text-to-image generation using only a few example images, as illustrated in Figure 1(c).
Footnote 2: Few works [12, 5] have studied fairness issues in text-to-image generation but mainly focused on social biases (_e.g_., perceived gender, ethnicity). This paper incorporates a broader spectrum of attributes.
To advance inclusive generation, a straightforward way is to retrain or fine-tune the model upon request, using _truly_ inclusive training data [18, 84]. Doing so, however, is insurmountably challenging as collecting large-scale training data that is balanced/inclusive across all attributes of interest is impractical, and training generative models is highly compute-intensive [68, 66, 18]. Another principled approach towards inclusiveness is to specify or enumerate each category in natural language (_i.e_., hard prompt searching) [19, 56]. However, many categories are difficult to specify with natural language (_e.g_., skin tone) or cannot be well synthesized by the existing models due to linguistic ambiguity or model misrepresentation [30].
At first glance, these seem to paint a grim picture for inclusive text-to-image generation. However, we argue that instead of specifying attributes explicitly using descriptive natural language, images can represent specific concepts or attributes more efficiently. Observing the availability of a shared vision-language embedding in many multimodal generative models [57], we raise the question: _can we learn inclusive prompt embeddings using images as guidance?_
To achieve this goal, we introduce **ITI-Gen**, a novel and practical framework that creates discriminative prompts based on readily available reference images for **I**nclusive **T**ext-to-**I**mage **GEN**eration. Concretely, we leverage the vision-language pre-trained CLIP model [57] to obtain the embeddings of the reference images and learnable prompts. In the joint embedding space, we design a new training objective to align the directions of the image and prompt features. The core idea is to translate the visual attribute differences into natural language differences such that the generated images based on the learned prompts can effectively represent all desired categories. By equalizing the sampling process over the learned prompts, our method guarantees inclusiveness for text-to-image generation.
We validate our framework with Stable Diffusion [64]. ITI-Gen can leverage reference images from different domains, including human faces [44, 35, 21] and scenes [69], to achieve inclusive generation in single or multiple attributes of interest. ITI-Gen needs neither prompt specification nor model fine-tuning, bypassing the problems of linguistic ambiguity as well as computational complexity. Moreover, ITI-Gen is compatible with the existing text-based image generation models (, ControlNet [83] and instruction-based image editing models [7]) in a plug-and-play manner. To the best of our knowledge, this is the first method that allows inclusive text-to-image generation over a frozen model and obtains competitive results throughout.
## 2 Related Work
**Text-to-Image Generative Models.** Text-based image generation has been widely studied with numerous model architectures and learning paradigms [49, 63, 72, 60, 24, 81, 19, 20, 9, 70, 80, 16, 18, 39]. Recently, the overwhelming success of diffusion-based text-to-image models [59, 67, 59, 52] has attracted significant attention. A key factor to this success is their ability to deal with large-scale multimodal datasets [68, 36, 11]. Thus, questions concerning inclusiveness while learning with biased datasets remain a crucial open problem [12, 5, 3].
**Bias Mitigation in Text-to-Image Generation.** While fairness has been studied extensively in discriminative models [75, 76, 77, 43], research on developing fair generative models is limited [85, 31, 23, 14, 47]. Most efforts focus on GAN-based models [13, 58, 32, 61, 82, 37, 79, 71, 34, 48], restricting their applicability to the emerging diffusion-based text-to-image models. Recently, there have been some efforts to address this limitation. For instance, Bansal [5] proposed to diversify model outputs by ethical intervention3. Ding [19] proposed to directly add attribute words to the prompt. However, these hard prompt searching methods have limitations such as being opaque and laborious [5], and not always generating diverse images reliably [30, 5]. In this work, we incorporate a broad spectrum of attributes beyond social groups. Moreover, we learn inclusive prompts in the continuous embedding space, requiring no hard prompt specification.
Footnote 3: _e.g_., appending "irrespective of their gender" to the end of a neutral prompt "a photo of a lawyer" for generating diverse pictures w.r.t. genders.
To learn a fair generative model, Wu [78] employed off-the-shelf models, such as CLIP [57] and pre-trained classifiers, as guidance. Choi [13] used a reference dataset to train the model via sample re-weighting. In contrast, we use reference data in a drastically different way -- treating the images as proxy signals to guide prompt learning but without retraining the text-to-image model.
**Image-Guided Prompt Tuning.** Our method is inspired by Prompt Tuning (PT) [42, 33]. Typically, PT methods insert small learnable modules (_e.g_., tokens) into the pre-trained models and fine-tune these modules with downstream tasks while freezing the model parameters. Recently, PT has been leveraged in personalized text-to-image generation [25, 65, 40]. By providing several reference images with the customized subject, they use a special token to represent the object by optimizing the token embedding [25, 40] or the diffusion models [65, 40]. This motivates us to learn the specific token embedding for each attribute category for inclusiveness. However, we note that the previously mentioned methods for personalization do not effectively capture the attributes in the images. Thus, we propose to optimize the directions of the attribute-specific
prompts in the joint vision-language embedding space, bypassing training text-to-image generative models.
## 3 Inclusive Text-to-Image Generation
To drive the progress of Inclusive Text-to-Image Generation, we propose ITI-Gen, which creates inclusive prompts that represent various attributes and their combinations. This is particularly challenging for attributes that are difficult to describe in language or underrepresented. To address this, ITI-Gen uses readily available reference images as guidance, enabling unambiguous specification of different attributes. Figure 2 illustrates the overall framework. In this section, we first introduce the framework of ITI-Gen in Section 3.1, then describe the details of the learning strategy in Section 3.2, and finally discuss the key properties of ITI-Gen in Section 3.3.
### Overview
**Problem Statement.** Given a pre-trained text-to-image generative model \(G\) and a human-written prompt (_e.g_., "_a headshot of a person_") tokenized as \(\mathbf{T}\in\mathbb{R}^{p\times e}\), where \(p\) is the number of tokens and \(e\) is the dimension of the embedding space, we aim to sample equal (or controllable) numbers of images that can represent any _category_ combination given the _attribute_ set \(\mathbf{A}\). Formally,
\[\mathbf{A}=\{\mathcal{A}_{m}|1\leq m\leq M\};\mathcal{A}_{m}=\{a_{k}^{m}|1\leq k \leq K_{m}\} \tag{1}\]
contains \(M\) different attributes (_e.g_., perceived gender, skin tone, _etc_.), where \(a_{k}^{m}\) records a mutually exclusive category (_e.g_., a specific type of skin tone) in attribute \(\mathcal{A}_{m}\) and \(K_{m}\) denotes the number of categories in \(\mathcal{A}_{m}\). Note that \(K_{m}\) may vary among different attributes.
**Inclusive Prompt Set.** Inspired by [42, 33], we propose prompt tuning for inclusive generation. Specifically, for a given category \(a_{k}^{m}\) within attribute \(\mathcal{A}_{m}\), we inject \(q\) _learnable_ tokens \(\mathbf{S}_{k}^{m}\in\mathbb{R}^{q\times e}\) after the original \(\mathbf{T}\) to construct a new prompt \(\mathbf{P}_{k}^{m}=[\mathbf{T};\mathbf{S}_{k}^{m}]\in\mathbb{R}^{(p+q)\times e}\). By querying the model \(G\) with \(\mathbf{P}_{k}^{m}\), we can generate images exhibiting the characteristics of the corresponding category \(a_{k}^{m}\). To differentiate the new tokens \(\mathbf{S}_{k}^{m}\) from the original prompt \(\mathbf{T}\), we refer to them as _inclusive tokens_.
When jointly considering \(M\) attributes, we aggregate \(M\) separate inclusive tokens \(\mathbf{S}_{o_{1}}^{1},\mathbf{S}_{o_{2}}^{2},\ldots,\mathbf{S}_{o_{M}}^{M}\) to represent a specific category combination \((a_{o_{1}}^{1},a_{o_{2}}^{2},\ldots,a_{o_{M}}^{M})\), _e.g_., the concept of ("woman", "dark skin",..., "young"). We thus expect to create a unique \(\mathbf{S}_{o_{1}o_{2}\ldots o_{M}}\),
\[\mathbf{S}_{o_{1}o_{2}\ldots o_{M}}=f(\mathbf{S}_{o_{1}}^{1},\mathbf{S}_{o_{2}}^{2},\ldots,\mathbf{S}_{o_{M}}^{M}) \tag{2}\]
that can be injected after \(\mathbf{T}\) to generate images for this particular category combination. The aggregation function \(f\) in Equation 2 should be able to take various numbers of attributes while maintaining the permutation invariant property4 with respect to attributes. Common options include element-wise average, sum, and max operations. Following [50], we adopt element-wise sum to preserve the text semantics without losing information5. Finally, we define the _inclusive prompt set_ as follows:
Footnote 4: That is, the output of \(f\) should be the same even if we permute the indices \(m\) of the attributes in \(\mathbf{A}\) (cf. Equation 1).
Footnote 5: Please see Appendix E.2 for more analysis and other options for aggregating multiple tokens, _e.g_., concatenation.
\[\mathcal{P}_{\text{total}}=\{\mathbf{P}_{o_{1}o_{2}\ldots o_{M}}=[\mathbf{T};\sum_{m= 1}^{M}S_{o_{m}}^{m}]\in\mathbb{R}^{(p+q)\times e}\mid\]
\[1\leq o_{1}\leq K_{1},\ldots,1\leq o_{M}\leq K_{M}\}. \tag{3}\]
Figure 2: **Illustration of Inclusive Text-to-Image GENeration (ITI-Gen) with the example of two binary attributes: _perceived gender_ and _skin tone_. (a) Given an input prompt, (b) ITI-Gen learns discriminative token embeddings to represent each category of every target attribute. (c) By injecting the learned tokens after the original input prompt, ITI-Gen synthesizes an inclusive prompt set that can be used to (d) sample equal (or controllable) numbers of images for any category combination. Further, our framework can be easily extended to multi-category multi-attribute scenarios of inclusive text-to-image generation. Note that, in practice, multi-category skin tones beyond (“light”, “dark”) as in this example may be challenging to specify with language (see Figure 3). Please see Section 3.1 for details.**
By uniformly sampling the prompts from \(\mathcal{P}_{\text{total}}\) as the conditions to generate images using the generative model \(G\), we achieve inclusiveness across all attributes (see Figure 2). _More generally speaking, the distribution of the generated data is directly correlated to the distribution of the prompts, which can be easily controlled._
In contrast to specifying the category name in discrete language space [5, 19], we optimize prompts entirely in the _continuous_ embedding space. Additionally, we only update the attribute-specific embeddings -- the colors \(\bullet\) and \(\bullet\) in Equation 3 indicate frozen and learnable parameters, respectively. This decoupled optimization mechanism thus provides the advantage of using the learned inclusive tokens in a plug-and-play manner across various applications, as will be demonstrated in Section 3.3 and Section 4.3. We elaborate on the learning process in the following section.
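As a rough sketch of how this prompt set could be organized in code, the snippet below keeps one learnable token block per category and builds a prompt by summing the selected blocks and appending them to the frozen text embedding, as in Equations (2)-(3). The class and method names are our own; the sizes \(q=3\) and \(e=768\) and the zero initialization follow the implementation details reported in Section 4.1.

```python
import torch
import torch.nn as nn

class InclusiveTokens(nn.Module):
    """Sketch of the inclusive prompt set in Equation (3)."""

    def __init__(self, category_counts, q=3, e=768):
        super().__init__()
        # one learnable block of shape (K_m, q, e) per attribute, zero-initialized
        self.tokens = nn.ParameterList(
            [nn.Parameter(torch.zeros(k, q, e)) for k in category_counts]
        )

    def build_prompt(self, text_emb, combo):
        # text_emb: frozen embedding of the human-written prompt T, shape (p, e)
        # combo:    one category index per attribute, e.g. (o_1, ..., o_M)
        s = sum(self.tokens[m][o] for m, o in enumerate(combo))  # element-wise sum, shape (q, e)
        return torch.cat([text_emb, s], dim=0)                   # prompt of shape (p + q, e)
```

Sampling a category combination uniformly at random and calling `build_prompt` for each draw then yields the uniform prompt distribution described above.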
### Learning Inclusive Prompts
**Reference Image Set.** We propose using reference images to guide prompt learning, as they can provide more expressive signals to describe attributes that may be challenging to articulate through language. Specifically, we assume the availability of a reference image set \(\mathcal{D}_{\text{ref}}^{m}=\{(\mathbf{x}_{n}^{m},y_{n}^{m})\}_{n=1}^{N_{m}}\) for a target attribute \(\mathcal{A}_{m}\), where \(N_{m}\) is the dataset size and \(y_{n}^{m}\in\mathcal{A}_{m}\) (defined in Equation 1) indicates the category to which \(\mathbf{x}_{n}\) belongs. When considering multiple attributes, we only need a reference dataset for each attribute, rather than one large balanced dataset with all attribute labels. _This property is extremely beneficial, as it is much easier to obtain a dataset that captures only the distribution of one attribute (i.e., the marginal distribution) rather than one that captures the joint distribution of all attributes_.
**Aligning Prompts to Images with CLIP.** Given reference image sets for the target attributes, can we learn prompts that align the attributes in the images? Recently, pre-trained large-scale multimodal models have demonstrated strong capabilities in connecting vision and language. One such model is CLIP [57], which aligns visual concepts with text embeddings by jointly training a text encoder \(E_{\text{text}}\) and an image encoder \(E_{\text{img}}\). The output of the pre-trained CLIP text encoder has also been used as the condition for text-guided image generation [64, 59], opening up an opportunity to align prompts to reference images without the need to modify the text-to-image models.
One straightforward solution is to maximize the similarity between the prompt and the reference image embeddings in the CLIP space, as suggested by [57]. However, we found it deficient for two reasons. First, this objective forces the prompt to focus on the overall visual information in the images, rather than the specific attribute of interest. Second, the generated images from the learned prompt often exhibit adversarial effects or significant quality degradation, potentially due to image features distorting the prompt embedding. To address these, we propose direction alignment and semantic consistency losses, as described below.
**Direction Alignment Loss.** Instead of directly maximizing the similarity between the prompts and the images, we draw inspiration from [55, 26] to induce the direction between the prompt \(\mathbf{P}_{i}^{m}\) and \(\mathbf{P}_{j}^{m}\) to be aligned with the direction between the averaged embeddings of the reference images corresponding to _a pair of categories_\(a_{i}^{m}\) and \(a_{j}^{m}\) in \(\mathcal{A}_{m}\). This alignment of pairwise categories direction serves as a proxy task for guiding the prompts to learn the visual difference among images from category \(a_{i}^{m}\) and \(a_{j}^{m}\) (Figure 3).
Specifically, we define the direction alignment loss \(\mathcal{L}_{\text{dir}}\) to maximize the cosine similarity between the image direction and the prompt direction as follows:
\[\mathcal{L}_{\text{dir}}^{m}(\mathbf{S}_{i}^{m},\mathbf{S}_{j}^{m})=1-\big{\langle} \Delta_{\mathbf{I}}^{m}(i,j),\Delta_{\mathbf{P}}^{m}(i,j)\big{\rangle}. \tag{4}\]
Here, the image direction \(\Delta_{\mathbf{I}}\) is defined as the difference of the averaged image embeddings between two categories of the attribute \(\mathcal{A}_{m}\). Let \(\mathfrak{X}_{k}^{m}=\frac{1}{|\mathcal{B}_{k}|}\sum_{y_{n}^{m}=a_{k}^{m}}E_{ \text{img}}(\mathbf{x}_{n}^{m})\) be the averaged image embedding for category \(a_{k}^{m}\); \(|\mathcal{B}_{k}|\) is the number of images from category \(a_{k}^{m}\) in each mini-batch. We denote the image direction as follows:
\[\Delta_{\mathbf{I}}^{m}(i,j)=\mathfrak{X}_{i}^{m}-\mathfrak{X}_{j}^{m}. \tag{5}\]
Similarly, the prompt direction \(\Delta_{\mathbf{P}}\) is defined as the difference of the averaged prompt embeddings between two categories. Let \(\mathfrak{P}_{k}^{m}=\frac{1}{|\mathcal{P}_{k}^{m}|}\sum_{\mathbf{P}\in\mathcal{P} _{k}^{m}}E_{\text{text}}(\mathbf{P})\) be the averaged prompt embedding for attribute \(a_{k}^{m}\). Specifically,
Figure 3: **Translating visual differences into text embedding differences.** Given reference images of a multi-category attribute (, skin tone), we learn the inclusive tokens by direction alignment between images and prompts, ensuring that the visual difference matches the learned language description. In addition, we propose semantic consistency loss to address language drift. Images are from FAIR benchmark [21]. Details are in Section 3.2.
\(\mathcal{P}^{m}_{k}=\{\mathbf{P}\in\mathcal{P}_{\text{total}}\mid o_{m}=k\}\) is a collection of prompts containing all the category combinations for other attributes given the category \(a^{m}_{k}\) for attribute \(\mathcal{A}_{m}\) (cf. Equation 3). Finally, we denote the prompt direction as follows:
\[\Delta^{m}_{\mathbf{P}}(i,j)=\mathfrak{P}^{m}_{i}-\mathfrak{P}^{m}_{j}. \tag{6}\]
By inducing the direction alignment, we aim to facilitate the prompt learning of more meaningful and nuanced differences between images from different categories.
**Semantic Consistency Loss.** We observe that direction alignment loss alone may result in language drift [46, 41, 65] -- the prompts slowly lose syntactic and semantic properties of language as they only focus on solving the alignment task. To resolve this issue, we design a semantic consistency objective to regularize the training by maximizing the cosine similarity between the learning prompts and the original input prompt (see Figure 3):
\[\mathcal{L}^{m}_{\text{sem}}(\mathbf{S}^{m}_{i},\mathbf{S}^{m}_{j})=\text{max}\Big{(} 0,\lambda-\big{\langle}E_{\text{text}}(\mathbf{P}),E_{\text{text}}(\mathbf{T}) \big{\rangle}\Big{)} \tag{7}\]
where \(\mathbf{P}\in\mathcal{P}^{m}_{i}\cup\mathcal{P}^{m}_{j}\) and \(\lambda\) is a hyperparameter (see an analysis in Section 4.3). This loss is crucial for generating high-quality images that remain faithful to the input prompt.
**Optimization.** Building upon \(\mathcal{L}^{m}_{\text{dir}}\) and \(\mathcal{L}^{m}_{\text{sem}}\), our total training loss for learning the inclusive tokens of a pair of categories in attribute \(\mathcal{A}_{m}\) is written as follows:
\[\mathcal{L}^{m}_{\text{pair}}(\mathbf{S}^{m}_{i},\mathbf{S}^{m}_{j})=\mathcal{L}^{m}_ {\text{dir}}(\mathbf{S}^{m}_{i},\mathbf{S}^{m}_{j})+\mathcal{L}^{m}_{\text{sem}}(\mathbf{ S}^{m}_{i},\mathbf{S}^{m}_{j}). \tag{8}\]
At each iteration, we update the embeddings of inclusive tokens of all the categories from _only one attribute_ but freeze the parameters of inclusive tokens for all other attributes. The final objective during the whole learning process is:
\[\mathcal{L}_{\text{total}}=\sum_{m=1}^{M}\sum_{1\leq i<j\leq K_{m}}\mathcal{L}^{m}_{\text{pair}}(\mathbf{S}^{m}_{i},\mathbf{S}^{m}_{j}), \tag{9}\]
where the inner summation enumerates all pairwise categories for one attribute \(\mathcal{A}_{m}\) at each iteration, while the outer summation alters the attribute across the iteration.
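The following PyTorch-style sketch illustrates how \(\mathcal{L}_{\text{pair}}\) in Equation (8) could be computed for one pair of categories from pre-computed CLIP embeddings; the function name, tensor shapes, and the averaging of the hinge term over prompts are our own assumptions, while the default \(\lambda=0.8\) follows Section 4.1.

```python
import torch
import torch.nn.functional as F

def pair_loss(img_i, img_j, prompt_i, prompt_j, base_prompt, lam=0.8):
    """Sketch of L_pair in Equation (8) for categories i and j of one attribute.

    img_i, img_j:       CLIP image embeddings of reference images, shape (B, D)
    prompt_i, prompt_j: CLIP text embeddings of the prompts in P_i / P_j, shape (Q, D)
    base_prompt:        CLIP text embedding of the original prompt T, shape (D,)
    """
    # direction alignment loss, Equations (4)-(6)
    delta_img = img_i.mean(dim=0) - img_j.mean(dim=0)
    delta_prompt = prompt_i.mean(dim=0) - prompt_j.mean(dim=0)
    l_dir = 1.0 - F.cosine_similarity(delta_img, delta_prompt, dim=0)

    # semantic consistency loss, Equation (7), averaged over the involved prompts
    prompts = torch.cat([prompt_i, prompt_j], dim=0)
    sim = F.cosine_similarity(prompts, base_prompt.unsqueeze(0), dim=-1)
    l_sem = torch.clamp(lam - sim, min=0.0).mean()

    return l_dir + l_sem
```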
### Key Properties of ITI-Gen
**Generalizability.** Unlike personalization methods that train the embeddings for a specific model (because they use diffusion losses [25, 40, 65]), _the tokens learned by_ ITI-Gen _are transferable between different models._ We highlight two use cases for these tokens. (1) _In-domain generation._ We use the user-specified prompt \(\mathbf{T}\) to learn the inclusive tokens and then apply them back to \(\mathbf{T}\) to generate inclusive images. (2) _Train-once-for-all._ As shown in Equation 3, the newly introduced inclusive tokens do not change the original prompt \(\mathbf{T}\), which implies that the learned tokens can be compatible with a different human-written prompt. For human face images, an example \(\mathbf{T}\) for training can be any neutral prompt, _e.g._, _"a headshot of a person"_. After training, inclusive tokens can be used to handle out-of-domain prompts (_e.g._, _"a photo of a doctor"_) or facilitate different models [83, 7] in a plug-and-play manner, justifying the generalizability of our approach.
**Data, Memory, and Computational Efficiency.** ITI-Gen uses averaged image features to guide prompt learning, indicating that (1) only a few dozen images per category are sufficient, and (2) a balanced distribution across categories within an attribute is _not_ required. ITI-Gen keeps the text-to-image model intact and only updates the inclusive tokens, allowing it to circumvent the costly back-propagation step in the diffusion model. Training with a single attribute takes approximately 5 minutes (1 A4500 GPU). In practice, we set the length6 (\(q\) in Equation 3) of inclusive tokens to \(3\) (which is less than 10KB) for all attribute categories of interest in our study. Hence, when scaling up to scenarios with multiple attributes, ITI-Gen always has low memory requirements for both training and storing inclusive tokens.
Footnote 6: The token length used here is generalizable across the attributes we studied in this paper. See Appendix E.1 for a detailed ablation study.
**Comparison to Image Editing Methods.** Our direction alignment loss may be reminiscent of the directional CLIP loss employed in image editing methods [26, 38]. However, they are fundamentally different. First, our ITI-Gen is designed to promote the inclusiveness, while image editing methods focus on single image manipulation. Second, image editing methods modify the source image according to the change in texts (from source to target), whereas ITI-Gen learns prompts by leveraging changes in images from one category to another. This key difference suggests a significant distinction: the two methods are learning the task from completely different directions.
## 4 Experiments
We validate ITI-Gen for inclusive text-to-image generation on various attributes and scenarios. We begin by introducing the experimental setup in Section 4.1, then present the main results in Section 4.2, and finally, show detailed ablation studies and applications in Section 4.3. Please see Appendix for additional details, results, and analyses.
### Setup
**Datasets.** We construct reference image sets and investigate a variety of attributes based on the following datasets. **(1) CelebA**[44] is a face attribute dataset in which each image carries \(40\) binary attribute annotations. We experiment with these binary attributes and their combinations. **(2) FAIR benchmark (FAIR)**[21] is a recently proposed synthetic face dataset used for skin tone estimation. Following [21],
we use the ground-truth albedos to classify each facial crop into one of six skin tone levels [22] and use FAIR for inclusiveness on skin tone type. **(3) FairFace**[35]7 contains face images with annotations for \(2\) perceived gender and \(9\) perceived age categories. **(4) Landscapes HQ (LHQ)**[69] provides unlabeled natural scene images. With the annotation tool from [74], each image can be labeled with \(6\) quality (_e.g_., colorfulness, brightness) and \(6\) abstraction (_e.g_., scary, aesthetic) attributes. Figure 5 shows example images.
Footnote 7: We note that, while the FairFace dataset contains race categories, we focus instead on skin tone in this study. This is because skin tone is more readily inferable from pixels, whereas racial identities are better understood as social concepts that are neither immutable nor biological in nature [8, 15, 62, 4]; furthermore, phenotypic variation of skin tone within racial identification groups is well documented [51].
**Experimental Protocols.** We only require that a reference image set captures a marginal distribution for each attribute (cf. Section 3.2). Note that, while images from CelebA and FairFace are annotated with multiple attributes, we use only the attribute label for each target category but not others. We randomly select \(25\) reference images per category as our default setting (and ablate it in Section 4.3). For attribute settings, we consider _single binary attribute_, _multi-category attributes_, and _multiple attributes_ in the domains of human faces and scenes. We study both in-domain and train-once-for-all generations (cf. Section 3.3) and further provide qualitative and quantitative analyses for each setup.
**Quantitative Metrics.** We use two metrics to quantify distribution diversity and image quality. (1) _Distribution Discrepancy_ (\(\mathbb{D}_{\text{KL}}\)). Following [12, 14], we use the CLIP model to predict the attributes in the images. For attributes on which CLIP might be erroneous, we leverage pre-trained classifiers [35] combined with human evaluations. Specifically, for skin tone, for which it is extremely difficult to obtain an accurate scale [1, 2, 29], we adopt the most commonly used Fitzpatrick skin type [10] combined with off-the-shelf
\begin{table}
\begin{tabular}{l|c c c c c c|c c c} \hline \hline \multirow{2}{*}{**Method**} & \multicolumn{8}{c|}{**(a) Single Attribute**} & \multicolumn{8}{c}{**(b) Multiple Attributes**} \\ \cline{2-10} & \(\mathbb{D}_{\text{KL}}^{\text{male}}\downarrow\) & \(\mathbb{D}_{\text{KL}}^{\text{young}}\downarrow\) & \(\mathbb{D}_{\text{KL}}^{\text{male}\text{ skin}}\downarrow\) & \(\mathbb{D}_{\text{KL}}^{\text{male}}\downarrow\) & \(\mathbb{D}_{\text{KL}}^{\text{englass}}\downarrow\) & \(\mathbb{D}_{\text{KL}}^{\text{male}\times\text{young}}\downarrow\) & \(\mathbb{D}_{\text{KL}}^{\text{male}\times\text{young}}\times\text{eglass} \downarrow\) & \(\mathbb{D}_{\text{KL}}^{\text{male}\times\text{young}}\times\text{eglass} \downarrow\) & \(\mathbb{D}_{\text{KL}}^{\text{male}\times\text{young}}\times\text{eglass} \downarrow\) \\ \hline SD [64] & 0.343 & 0.578 & 0.308 & 0.375 & 0.111 & 0.134 & 0.882 & 1.187 & 1.406 \\ EI [5] & 0.143 & 0.423 & 0.644 & 0.531 & 0.693 & 0.189 & 0.361 & 1.054 & 1.311 \\ HPS [19] & 1\(\times 10^{-5}\) & 0.027 & 2.8 \(\times 10^{-3}\) & 0.371 & 0.241 & 4.4 \(\times 10^{-3}\) & 3.5 \(\times 10^{-3}\) & 0.399 & 0.476 \\ PD [14] & 0.322 & 0.131 & 0.165 & 0.272 & 0.063 & 0.146 & – & – & – \\ CD [40] & 0.309 & 0.284 & 0.074 & 0.301 & 0.246 & 0.469 & – & – & – \\ \hline
**ITI-Gen** & **2\(\times\)10\({}^{-6}\) & 2\(\times\)10\({}^{-4}\)** & **0** & **2\(\times\)10\({}^{-4}\)** & **4.5 \(\times\)10\({}^{-4}\)** & **2.5 \(\times\)10\({}^{-3}\)** & **1.3 \(\times\)10\({}^{-4}\)** & **0.061** & **0.094** \\ \hline \hline \end{tabular}
\end{table}
Table 1: **Comparison with baseline methods with (a) single attribute and (b) multiple attributes.** Reference images are from CelebA. We use CLIP [57] as the attribute classifier [12, 14]. **ITI-Gen** achieves competitive results for both settings. **SD**: vanilla stable diffusion. **EI**: ethical intervention. **HPS**: hard prompt searching. **PD**: prompt debiasing. **CD**: custom diffusion. See Appendix F for full results.
Figure 4: **Qualitative results of the combination of four binary attributes** (the last column in Table 1). The input prompt (\(\mathbf{T}\)) is _“a headshot of a person”_. By using the learned inclusive tokens (cf. Equation 3), **ITI-Gen** can inclusively generate images with all attribute combinations. Images across each tuple are sampled using the same random seed. More examples are included in Appendix F.
Figure 5: **Examples of reference images.** CelebA [44] and FairFace [35] are real-face datasets with different resolutions and focuses. FAIR benchmark [21] is a synthetic dataset used for skin tone estimation. Landscape (LHQ) [69] contains images from natural scenes. **ITI-Gen** can leverage various image sources to benefit inclusive text-to-image generation for various attributes.
models [21] for evaluation. (2) _FID_. We report the FID score [27, 53] (FFHQ [36]) to measure the image quality. Please see Appendix E for more details.
**Baselines.** We compare ITI-Gen to the following methods. (1) _Stable Diffusion_ (SD) [64] without any modification. (2) _Ethical Intervention_ (EI) [5] that edits the prompt by adding attribute-related interventions. (3) _Hard Prompt Searching_ (HPS) [19] that directly expresses the desired attribute category in the prompt. (4) _Prompts Debiasing_ (PD) [14] that calibrates the bias in the text embedding by using the attribute category names. (5) _Custom Diffusion_ (CD) [40] that fine-tunes the text-to-image model with reference images based on Textual Inversion [25, 65].
**Implementation Details.** We use Stable Diffusion [64] (sd-v1-4) as the base model for all methods and show compatibility with ControlNet [83] and InstructPix2Pix [7]. ITI-Gen is model agnostic as long as they take token embeddings as the inputs. We set \(\lambda=0.8\) in \(\mathcal{L}_{\text{sem}}\) across all experiments and show that \(\lambda\) can be robustly selected according to the prior knowledge (see Section 4.3). All the inclusive tokens are initiated as zero vectors8. We set the length of the inclusive tokens to \(3\) in all experiments. _There is no additional hyper-parameter in our framework._ The total number of the parameters for the inclusive tokens that need to be optimized is \(\sum_{m=1}^{M}K_{m}\times 3\times 768\), where \(M\) is the number of attributes, \(K_{m}\) is the category number for attribute \(m\), and \(768\) is the dimension of the embedding (\(e\) in Equation 3). We train the models with \(30\) epochs on a batch size of 16 and a learning rate of \(0.01\). During training, we leverage image augmentations used in the CLIP image encoder.
Footnote 8: We investigated other options such as random initialization but did not see notable differences in both generation quality and training speed.
### Main Results
**Single Binary Attribute.** To demonstrate the capability of ITI-Gen to sample images with a variety of face attributes, we construct \(40\) distinct reference image sets based on attributes from CelebA [44]. Each represents a specific binary attribute and contains an equal number of images (\(50\%\)) for the positive and negative categories9. Table 1(a) shows a comparison to state-of-the-art methods. We evaluate \(5\) text prompts -- _"a headshot of a {person, professor, doctor, worker, firefighter}"_ -- and sample \(200\) images per prompt for each attribute, resulting in \(40\)K generated images. We highlight the averaged results across \(5\) prompts of \(6\) attributes. We provide complete results in Appendix F.2. ITI-Gen achieves near-perfect performance on balancing each binary attribute, justifying our motivation: using separate inclusive tokens is beneficial in generating images that are uniformly distributed across attribute categories.
Footnote 9: We found that different ratios do not lead to notable differences. We provide an analysis of learning with imbalanced data in Appendix F.3.
**Multiple Attributes.** Given multiple reference image sets (each captures the marginal distribution for an attribute), can ITI-Gen generate diverse images across any category combination of the attributes? We provide an affirmative answer and present results in Table 1(b) and Figure 4. As we observe, ITI-Gen produces diverse and high-quality images with significantly lower distribution discrepancies compared to baseline methods. We attribute this to the aggregation operation of inclusive tokens (Equation 3), allowing ITI-Gen to disentangle the learning of different inclusive tokens with images in marginal distributions.
**Multi-Category Attributes.** We further investigate multi-category attributes including perceived age and skin tone. Specifically, we consider two challenging settings: (1) Perceived Gender \(\times\) Age (Figure 6(a)), and (2) Perceived Gender \(\times\) Skin Tone (Figure 6(b)). ITI-Gen achieves inclusiveness across all setups, especially on extremely under-represented categories for age (\(<\) 10 and \(>50\) years old in Figure 6(a)). More surprisingly (Figure 6(b)), ITI-Gen can leverage synthetic images (from FAIR) and jointly learn
Figure 6: **Multi-category distribution** with “_a headshot of a person_”. For a reliable evaluation, the results of (a) are evaluated using classifiers in [35], and (b) are evaluated using existing models [10, 21]. The generated images from ITI-Gen are more uniformly distributed across different sub-groups than the baseline Stable Diffusion. See Figure 7 for qualitative results.
Figure 7: **Results of ITI-Gen on multi-category attributes** for Gender\(\times\)Age (Figure 6(a)) and Gender\(\times\)Skin Tone (Figure 6(b)). Examples are randomly picked with “_a headshot of a person_”.
from different data sources (CelebA for gender and FAIR for skin tone), demonstrating great potential for bootstrapping inclusive data generation with graphics engines.
**Other Domains.** Besides human faces, we apply ITI-Gen to another domain: scene images. We claim that the inclusive text-to-image generation accounts for attributes from not only humans but also scenes, objects, or even environmental factors. Specifically, we use images from LHQ [69] as guidance to learn inclusive tokens and generate images with diverse subjective perception attributes. As illustrated in Figure 8, ITI-Gen can enrich the generated images to multiple levels of colorfulness10, justifying the generalizability of our method to the attributes in different domains.
Footnote 10: Note that the subjective attributes we explore here are different from artistic styles (_e.g_., painting, cartoon) in image-to-image translation (_e.g_., [26]). Understanding the attributes related to _quality_ and _look_ of images may be intuitive for humans but remain non-trivial for generative models.
### Ablations and Applications
**Reference Images.** Figure 9 illustrates the impact of the _quantity_ of reference images per attribute category, showing that ITI-Gen can produce high-quality images using very few reference images without sacrificing inclusiveness (KL). In addition, as indicated in Table 2, ITI-Gen consistently generates realistic images regardless of the reference source (see examples in Figure 4 and Figure 7). More interestingly, we found that using synthetic images (_i.e_., FAIR [21]) is slightly better than real data [44, 35]. We hypothesize that the background noise in real images degrades the quality.
**Semantic Consistency Loss \(\mathcal{L}_{\text{sem}}\).** Again in Table 2, we compare ITI-Gen with and without \(\mathcal{L}_{\text{sem}}\). With the help of the semantic constraint (Figure 3), we regularize the learned embeddings not too far from the original prompt. We show evidence to verify this insight: the averaged CLIP similarity scores of text features between the hard prompts of \(40\) attributes in CelebA and the original prompt is \(0.8\) (the \(\lambda\) we used), suggesting that the hyper-parameter can be robustly chosen based on prior linguistic knowledge.
**Train-once-for-all Generalization.** As shown in Figure 8, inclusive tokens can be applied to user-specified prompts in a plug-and-play manner (Section 3.2). In Figure 10, we provide more examples of professional prompts to demonstrate the ability of train-once-for-all generation.
**Compatibility with ControlNet**[83]. ITI-Gen achieves inclusiveness by learning attribute-specific prompts without modifying the original text-to-image model, potentially benefiting various downstream vision-language tasks. In Figure 11, we demonstrate its compatibility with ControlNet [83], a state-of-the-art model capable of conditioning
\begin{table}
\begin{tabular}{c c c c} \hline \hline
**Method** & **Source** & \(\mathcal{L}_{\text{sem}}\) & **FID\(\downarrow\)** \\ \hline Baseline [64] & – & – & 67.40 \\ \hline \multirow{6}{*}{ITTI-Gen} & \multirow{2}{*}{CelebA [44]} & ✓ & **60.38** \\ & & ✗ & (+17.40) \\ \cline{1-1} & & & ✓ & **55.10** \\ \cline{1-1} & & ✗ & (+9.01) \\ \cline{1-1} \cline{2-5} & & & ✓ & **51.83** \\ \cline{1-1} & & ✗ & (+10.86) \\ \hline \hline \end{tabular}
\end{table}
Table 2: **Ablation on reference image sources and \(\mathcal{L}_{\text{sem}}\).** ITI-Gen produces lower FID than the baseline Stable Diffusion. Semantic consistency loss \(\mathcal{L}_{\text{sem}}\) plays a key role in quality control.
Figure 8: **ITI-Gen with perception attributes on scene images.** The tokens of “colorfulness” are trained with “_a photo of a natural scene_” and applied to “_a castle on the cliff_” in this example (_train-once-for-all_ in Section 3.3). ITI-Gen (right) enables the baseline Stable Diffusion (left) to generate images with different levels of colorfulness. Same seed for each row. Better viewed in color. See Appendix F.5 for results of other attributes, _e.g_., scary, brightness.
Figure 10: **Train-once-for-all generalization.** Inclusive tokens of ITI-Gen trained with a neutral prompt (“_a headshot of a person_”) can be applied to out-of-domain prompts in these two examples to alleviate stereotypes. See Appendix F.6 for more results.
Figure 9: **Ablation on the quantity of reference images.** More reference images (\(>10\)) help possibly due to more diversity and less noise. ITI-Gen is robust in the low data regime (Section 3.3).
on a variety of inputs beyond text. Interestingly, we observe an intriguing feature where the newly introduced tokens may implicitly entangle other biases or contrasts inherent in the reference image sets, such as clothing style. Nevertheless, we emphasize that disentanglement of attributes is not the primary concern of this study. ITI-Gen achieves competitive results in distributional control for the _intended_ attributes (_e.g_., skin tone in Figure 11) -- aggregating tokens learned from marginal distributions implicitly disentangles the _known_ attributes of interest.
**Compatibility with InstructPix2Pix (IP2P)**[7]. Note that achieving fully unsupervised disentanglement is a challenging task [45]. Previous attempts in image generation often resort to additional supervision, either through the use of reference data [13], classifiers learned from a joint distribution [71], or even more robust controls such as instruction-based image editing [7]. Here, we show that ITI-Gen can potentially disentangle the target attribute by incorporating InstructPix2Pix [7] -- to improve the inclusiveness of IP2P on the target attribute, while ensuring minimal changes to other features such as clothing and background. Results are shown in Figure 12, demonstrating that ITI-Gen can be an effective method to condition diffusion on contrastive image sets, _e.g_., images taken by different cameras, art by unknown artists, and maybe even different identities of people.
## 5 Conclusion and Discussion
We present a new method for inclusive text-to-image generation. Our main contribution lies in a new direction: _leveraging readily available reference images to improve the inclusiveness of text-to-image generation_. This problem is timely and challenging [6, 5, 14, 23, 12]. Our key insight is learning separate token embeddings to represent different attributes of interest via image guidance. The proposed ITI-Gen method is simple, compact, generalizable, and effective on various applications. Specifically, ITI-Gen has several advantages: (1) scalable to multiple attributes and different domains using relatively small numbers of images; (2) can be used in a plug-and-play manner to out-of-distribution, relatively complex prompts; (3) efficient in both training and inference; (4) compatible with the text-to-image generative models that support additional conditions or instructions. We conduct extensive experiments to verify the effectiveness of the proposed method on multiple domains, offering insights into various modeling choices and mechanisms of ITI-Gen. We incorporate a broad spectrum of attributes in both human faces and scenes. We hope that our results and insights can encourage more future works on exploring inclusive data generation.
**Limitations.** ITI-Gen can handle a wide range of general attributes, such as perceived gender and skin tone, and excels in cases where "Hard Prompt" struggles. However, there remain several limitations. First, ITI-Gen does not always provide optimal results for very subtle facial attributes (Appendix F.2) or for the combinations of highly entangled attributes (Appendix F.3). Second, ITI-Gen still requires dozens of reference images for each category as guidance. It is possible that the reference images may introduce biases or inaccuracies. One mitigation strategy is to integrate ITI-Gen with models that offer robust controls [7], such as the one highlighted in Figure 12.
**Acknowledgments.** We thank Oliver Wang, Jianjin Xu, and Or Patashnik for their feedback on the drafts of this paper.
Figure 11: **Compatibility with models using additional conditions**, _e.g_., human pose (left). ITI-Gen promotes inclusiveness of ControlNet [83] by using the inclusive tokens of six skin tone types (right). The tokens are trained with “_a headshot of a person_” guided by images from the FAIR dataset [21], and applied here in a _train-once-for-all_ manner (Section 3.3). See Appendix F.7 for additional results on versatile conditions, _e.g_., depth, segmentation.
Figure 12: **Compatibility with instruction-based image editing methods.** Given an image and a written instruction (top-left), InstructPix2Pix (IP2P) [7] follows the instruction to edit the image (bottom-left). ITI-Gen (right) enables inclusive instruction-based image editing. Similar to Figure 11, the inclusive tokens used in this example are trained in a train-once-for-all manner. |
2309.12015 | Sharp semiclassical spectral asymptotics for Schrödinger operators
with non-smooth potentials | We consider semiclassical Schr\"odinger operators acting in
$L^2(\mathbb{R}^d)$ with $d\geq3$. For these operators we establish a sharp
spectral asymptotics without full regularity. For the counting function we
assume the potential is locally integrable and that the negative part of the
potential minus a constant is one time differentiable and the derivative is
H\"older continues with parameter $\mu\geq1/2$. Moreover we also consider sharp
Riesz means of order $\gamma$ with $\gamma\in(0,1]$. Here we assume the
potential is locally integrable and that the negative part of the potential
minus a constant is twice differentiable and the second derivative is
H\"older continues with parameter $\mu$ that depends on $\gamma$. | Søren Mikkelsen | 2023-09-21T12:34:40Z | http://arxiv.org/abs/2309.12015v2 | # Sharp semiclassical spectral asymptotics for Schrodinger operators with non-smooth potentials
###### Abstract
We consider semiclassical Schrodinger operators acting in \(L^{2}(\mathbb{R}^{d})\) with \(d\geq 3\). For these operators we establish sharp spectral asymptotics without full regularity. For the counting function we assume the potential is locally integrable and that the negative part of the potential minus a constant is once differentiable and the derivative is Holder continuous with parameter \(\mu\geq 1/2\). Moreover we also consider sharp Riesz means of order \(\gamma\) with \(\gamma\in(0,1]\). Here we assume the potential is locally integrable and that the negative part of the potential minus a constant is twice differentiable and the second derivative is Holder continuous with parameter \(\mu\) that depends on \(\gamma\).
## 1 Introduction
Consider a semiclassical Schrodinger operator \(H_{h}=-\hbar^{2}\Delta+V\) acting in \(L^{2}(\mathbb{R}^{d})\), where \(-\Delta\) is the positive Laplacian and \(V\) is the potential. For the Schrodinger operator \(H_{h}\) the Weyl law states that
\[\operatorname{Tr}\big{[}\mathbf{1}_{(-\infty,0]}(H_{h})\big{]}=\frac{1}{(2\pi \hbar)^{d}}\int_{\mathbb{R}^{2d}}\mathbf{1}_{(-\infty,0]}(p^{2}+V(x))\,dpdx+o( \hbar^{-d}), \tag{1.1}\]
where \(\mathbf{1}_{\Omega}(t)\) is the characteristic function of the set \(\Omega\). It has recently been proven by Frank [6] that (1.1) is valid under the condition that \(d\geq 3\), \(V\in L^{1}_{loc}(\mathbb{R}^{d})\) and \(V_{-}\in L^{\frac{d}{2}}(\mathbb{R}^{d})\), where \(V_{-}=\max(0,-V)\). These conditions are the minimal conditions such that both sides of the equality are well defined and finite. For a brief historical description of the development on establishing (1.1) under minimal assumptions see the introduction of [6].
Under additional assumptions on the potential \(V\) it was established by Helffer and Robert in [7] that
\[\operatorname{Tr}\big{[}\mathbf{1}_{(-\infty,0]}(H_{h})\big{]}=\frac{1}{(2\pi \hbar)^{d}}\int_{\mathbb{R}^{2d}}\mathbf{1}_{(-\infty,0]}(p^{2}+V(x))\,dpdx+ \mathcal{O}(\hbar^{1-d}) \tag{1.2}\]
for all \(\hbar\in(0,\hbar_{0}]\), \(\hbar_{0}\) sufficiently small. They proved this under the condition that \(V\in C^{\infty}(\mathbb{R}^{d})\), satisfies some regularity condition at infinity and \(V(x)\geq c>0\) for all \(x\in\Omega^{c}\), where \(\Omega\subset\mathbb{R}^{d}\) is some open bounded set. Moreover, they assumed a non-critical condition on the energy surface \(\{(x,p)\in\mathbb{R}^{2d}\,|\,p^{2}+V(x)=0\}\). The non-critical condition can afterwards be removed see e.g. [18]. The error estimate in (1.2) is the best generic error estimate one can obtain. As
an example, one can consider the operator \(H_{\hbar}=-\hbar^{2}\Delta+x^{2}-\lambda\), for some \(\lambda>0\). For this operator we can explicitly find all eigenvalues and check by hand that (1.2) is valid with an explicit error of order \(\hbar^{1-d}\).
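To indicate the computation behind this example (reading \(x^{2}\) as \(|x|^{2}\)): the eigenvalues of \(-\hbar^{2}\Delta+|x|^{2}\) are \(\hbar(2|n|+d)\) with \(n\in\mathbb{N}_{0}^{d}\), so that

\[\operatorname{Tr}\big{[}\mathbf{1}_{(-\infty,0]}(H_{\hbar})\big{]}=\#\big{\{}n\in\mathbb{N}_{0}^{d}\,\big{|}\,\hbar(2|n|+d)\leq\lambda\big{\}}=\frac{1}{d!}\Big{(}\frac{\lambda}{2\hbar}\Big{)}^{d}+\mathcal{O}(\hbar^{1-d}),\]

while the phase-space term in (1.2) is \((2\pi\hbar)^{-d}\) times the volume of the ball \(\{p^{2}+|x|^{2}\leq\lambda\}\) in \(\mathbb{R}^{2d}\), which equals exactly \(\frac{1}{d!}(\frac{\lambda}{2\hbar})^{d}\).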
When comparing the two results in dimensions \(d\geq 3\) it does raise the question: Is the formula (1.2) valid under less smoothness? Could it even be valid for all \(V\) satisfying the assumptions of the result by Frank? A positive answer to the last part of the question seems currently out of reach and to the author's knowledge there does not yet exist a counterexample. However, for the first part of the question we will give positive answers.
We will in fact not just consider the Weyl law but also Riesz means. That is for \(\gamma\in[0,1]\) we will consider traces of the form
\[\operatorname{Tr}\big{[}g_{\gamma}(H_{\hbar})\big{]}, \tag{1.3}\]
where the function \(g_{\gamma}\) is given by
\[g_{\gamma}(t)=\begin{cases}\mathbf{1}_{(-\infty,0]}(t)&\gamma=0\\ (t)_{-}^{\gamma}&\gamma\in(0,1].\end{cases} \tag{1.4}\]
Frank also considered traces of the form (1.3) in [6]. Helffer and Robert only considered Weyl asymptotics in [7], but proved the sharp estimate for Riesz means in [8]. For later comparison and use we recall the exact statement of the results obtained by Frank in [6].
**Theorem 1.1**.: _Let \(\gamma\geq 1/2\) if \(d=1\), \(\gamma>0\) if \(d=2\) and \(\gamma\geq 0\) if \(d\geq 3\). Let \(\Omega\subset\mathbb{R}^{d}\) be an open set and let \(V\in L^{1}_{loc}(\Omega)\) with \(V_{-}\in L^{\gamma+d/2}(\Omega)\). Then_
\[\operatorname{Tr}\big{[}g_{\gamma}(H_{\hbar})\big{]}=\frac{1}{(2\pi\hbar)^{d} }\int_{\mathbb{R}^{2d}}g_{\gamma}(p^{2}+V(x))\,dxdp+o(\hbar^{-d}) \tag{1.5}\]
_as \(\hbar\to 0\), where \(H_{\hbar}=-\hbar^{2}\Delta+V(x)\) is considered in \(L^{2}(\Omega)\) with Dirichlet boundary conditions._
One thing to observe here is that this theorem is also valid, when we are on bounded domains. We will only discuss the case, where the domain is the whole space \(\mathbb{R}^{d}\), \(d\geq 3\). For results on sharp Weyl laws without full regularity and on bounded domains we refer the reader to the works by Ivrii [11] and [12, Vol 1].
We will set up some notation and recall a definition before we give the assumptions for our main theorem and state it.
**Definition 1.2**.: Let \(f:\mathbb{R}^{d}\mapsto\mathbb{R}\) be a measurable function. For each \(\nu\in\mathbb{R}\) we define the set
\[\Omega_{\nu,f}\coloneqq\big{\{}x\in\mathbb{R}^{d}\,|\,f(x)<\nu\big{\}}.\]
**Definition 1.3**.: For \(k\) in \(\mathbb{N}\) and \(\mu\) in \([0,1]\) and \(\Omega\subset\mathbb{R}^{d}\) open we denote by \(C^{k,\mu}(\Omega)\) the subspace of \(C^{k}(\Omega)\) defined by
\[C^{k,\mu}(\Omega)=\big{\{}f\in C^{k}(\Omega)\,\big{|}\,\exists C >0: |\partial_{x}^{\alpha}f(x)-\partial_{x}^{\alpha}f(y)|\leq C|x-y|^{\mu}\] \[\forall\alpha\in\mathbb{N}^{d}\text{ with }\left|\alpha \right|=k\text{ and }\forall x,y\in\Omega\big{\}}.\]
These definitions are here to clarify notation. We are now ready to state our assumptions on the potential \(V\).
**Assumption 1.4**.: Let \(V\in L^{1}_{loc}(\mathbb{R}^{d})\) be a real function. Suppose there exist numbers \(\nu>0\), \(k\in\mathbb{N}_{0}\) and \(\mu\in[0,1]\) such that the set \(\Omega_{4\nu,V}\) is open and bounded and \(V\in C^{k,\mu}(\Omega_{4\nu,V})\).
With our assumptions on the potential \(V\) in place we can now state the main theorem.
**Theorem 1.5**.: _Let \(H_{\hbar}=-\hbar^{2}\Delta+V\) be a Schrodinger operator acting in \(L^{2}(\mathbb{R}^{d})\) and let \(\gamma\in[0,1]\). If \(\gamma=0\) we assume \(d\geq 3\) and if \(\gamma\in(0,1]\) we assume \(d\geq 4\). Suppose that \(V\) satisfies Assumption 1.4 with the number \(\nu>0\) and \(k=1\), \(\mu\geq\frac{1}{2}\) if \(\gamma=0\) and \(k=2\), \(\mu\geq\max(\frac{3}{2}\gamma-\frac{1}{2},0)\) if \(\gamma>0\). Then it holds that_
\[\Big{|}\operatorname{Tr}\big{[}g_{\gamma}(H_{\hbar})\big{]}-\frac{1}{(2\pi \hbar)^{d}}\int_{\mathbb{R}^{2d}}g_{\gamma}(p^{2}+V(x))\,dxdp\Big{|}\leq C\hbar ^{1+\gamma-d} \tag{1.6}\]
_for all \(\hbar\) sufficiently small. The constant \(C\) depends on the number \(\nu\) and the potential \(V\)._
When comparing the assumptions for our main theorem and Theorem 1.1 we have that in both we assume the potential to be in \(L^{1}_{loc}(\mathbb{R}^{d})\). But in Theorem 1.1 the additional assumptions on the potential are on the negative part of \(V\), whereas we need to assume regularity for the negative part of \(V-4\nu\) for some \(\nu>0\). One could have hoped to only have an assumption on the negative part of \(V\). However, this does not seem obtainable with the methods we use here. Firstly, the way we prove the theorem requires us to have control of the potential just outside the classically allowed zone (\(\{x\in\mathbb{R}^{d}\,|\,V(x)\leq 0\}\)). Secondly, we have that the constant in (1.6) will diverge to infinity as \(\nu\) tends to zero. Hence, we cannot hope to do an approximation argument.
The assumptions on dimensions are needed to ensure integrability of some integrals. In the case of Theorem 1.1 there are counterexamples to Weyl asymptotics for \(V\in L^{\frac{d}{2}}(\mathbb{R}^{d})\) for \(d=1,2\); for details see [1, 15].
This is not the first work considering sharp Weyl laws without full regularity. The first results in a semiclassical setting were obtained by Ivrii in [10], where he also considered higher order differential operators acting in \(L^{2}(M)\), where \(M\) is a compact manifold without boundary. In this work the coefficients are assumed to be differentiable with a Holder continuous first derivative. This was a generalisation of works by Zielinski, who previously had obtained sharp Weyl laws in high energy asymptotics in [19, 20, 21, 22]. The results by Ivrii were generalised by Bronstein and Ivrii in [2], where they reduced the assumptions further by assuming the first derivative to have modulus of continuity \(\mathcal{O}(|\log(x-y)|^{-1})\), and then again by Ivrii in [11] to also include boundaries and to remove the non-critical condition. The non-critical condition, used in cases without full regularity, for a semiclassical pseudo-differential operator \(\operatorname{Op}_{\hbar}^{\mathrm{w}}(a)\) is
\[|\nabla_{p}a(x,p)|\geq c>0\qquad\text{for all }(x,p)\in a^{-1}(\{0\}). \tag{1.7}\]
In [23] Zielinski considers the semiclassical setting with differential operators acting in \(L^{2}(\mathbb{R}^{d})\) and proves an optimal Weyl law under the assumption that all coefficients are once differentiable with a Holder continuous derivative. Moreover, it is assumed that the coefficients and the derivatives are bounded. In [23] it is remarked that it should be possible to consider unbounded coefficients in a framework of tempered variation models. This was generalised by the author in [13] to allow for the coefficients to be unbounded. Moreover, more general operators were also considered in [13]. Both of these works assumed a non-critical condition (1.7). This assumption makes the results of [13] and [23] not valid for Schrodinger operators, since the assumption is equivalent to assuming that
\[|V(x)|\geq c>0\qquad\text{for all }x\in\mathbb{R}^{d}. \tag{1.8}\]
The author recently established sharp local spectral asymptotics for magnetic Schrodinger operators in [14]. The techniques used to establish those will be crucial for the results obtained here. The assumptions we make on regularity here are "lower" than the regularity assumptions made in [14].
The results obtained by Bronstein and Ivrii [2] and Ivrii [10, 11] do assume less regularity than we do in the present work. However, the techniques used in these works seem to not translate well to a non-compact setting.
The methods we use to establish Theorem 1.5 can also be used in cases where we have less regularity than we assume in the statement of the theorem. However, if we assume less regularity, we cannot obtain sharp remainder estimates. The results we can obtain are in the following two theorems.
**Theorem 1.6**.: _Let \(H_{h}=-\hbar^{2}\Delta+V\) be a Schrodinger operator acting in \(L^{2}(\mathbb{R}^{d})\) with \(d\geq 3\). Suppose that \(V\) satisfies Assumption 1.4 with the number \(\nu>0\), \(k=1\) and \(0\leq\mu\leq 1\). Then it holds that_
\[\Big{|}\operatorname{Tr}\big{[}g_{0}(H_{\hbar})\big{]}-\frac{1}{(2\pi\hbar)^{ d}}\int_{\mathbb{R}^{2d}}g_{0}(p^{2}+V(x))\,dxdp\Big{|}\leq C\hbar^{\kappa-d} \tag{1.9}\]
_for all \(\hbar\) sufficiently small, where \(\kappa=\min[\frac{2}{3}(1+\mu),1]\). The constant \(C\) depends on the number \(\nu\) and the potential \(V\)._
One can see that for \(\mu\geq\frac{1}{2}\) we are in the setting of Theorem 1.5 and recover the sharp estimate. For the cases where \(\mu<\frac{1}{2}\) we cannot currently get the optimal error. However, the "worst" error we can obtain is \(\hbar^{\frac{2}{3}-d}\). This is still a significant improvement over the estimate \(\hbar^{-d}\). Moreover, since a globally Lipschitz function is almost everywhere differentiable we can with these methods obtain the error \(\hbar^{\frac{2}{3}-d}\), when the potential \(V\) satisfies Assumption 1.4 with the number \(\nu>0\), \(k=0\) and \(\mu=1\). The author believes that this case should also have sharp estimates.
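Indeed, both claims can be read off directly from the expression for \(\kappa\) in Theorem 1.6:

\[\tfrac{2}{3}(1+\mu)\geq 1\iff\mu\geq\tfrac{1}{2},\qquad\text{while}\qquad\tfrac{2}{3}(1+\mu)\geq\tfrac{2}{3}\quad\text{for every }\mu\in[0,1],\]

so the remainder is sharp precisely when \(\mu\geq\frac{1}{2}\) and is never worse than \(\hbar^{\frac{2}{3}-d}\).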
**Theorem 1.7**.: _Let \(H_{\hbar}=-\hbar^{2}\Delta+V\) be a Schrodinger operator acting in \(L^{2}(\mathbb{R}^{d})\) with \(d\geq 4\) and let \(\gamma\in(0,1]\). Suppose that \(V\) satisfies Assumption 1.4 with the numbers \(\nu>0\), \(k=2\) and \(0\leq\mu\leq 1\). Then it holds that_
\[\Big{|}\operatorname{Tr}\big{[}g_{\gamma}(H_{\hbar})\big{]}-\frac{1}{(2\pi \hbar)^{d}}\int_{\mathbb{R}^{2d}}g_{\gamma}(p^{2}+V(x))\,dxdp\Big{|}\leq C \hbar^{\kappa-d} \tag{1.10}\]
_for all \(\hbar\) sufficiently small where \(\kappa=\min[\frac{2}{3}(2+\mu),1+\gamma]\). The constant \(C\) depends on the number \(\nu\) and the potential \(V\)._
Again we have that for \(\mu\geq\max(\frac{3}{2}\gamma-\frac{1}{2},0)\) we recover the sharp estimates from Theorem 1.5. When considering the result we have obtained here, we have sharp error terms for the case \(\gamma=1\) under a \(C^{3}\) assumption, and even under a \(C^{2}\) assumption we obtain an error of the form \(\hbar^{\frac{4}{3}-d}\).
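The same kind of bookkeeping applies to the Riesz means: from the expression for \(\kappa\) in Theorem 1.7 we have

\[\tfrac{2}{3}(2+\mu)\geq 1+\gamma\iff\mu\geq\tfrac{3}{2}\gamma-\tfrac{1}{2},\]

so the sharp remainder \(\hbar^{1+\gamma-d}\) is recovered exactly under the assumption of Theorem 1.5, while for \(\gamma=1\) and \(\mu=0\) (a \(C^{2}\) assumption) we get \(\kappa=\frac{4}{3}\).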
The current paper is structured as follows. In Section 2 we specify our notation and construct approximating/framing operators. Inspired by these framing operators we define operators that locally behave as rough Schrodinger operators in Section 3. For these operators we establish a sharp Weyl law at the end of the section. This result relies heavily on the results obtained in [14]. In Section 4 we first establish a result on localisations of the traces and a comparison of phase-space integrals. We end the section with a proof of the main theorems.
### Acknowledgement
The author is grateful to the Leverhulme Trust for their support via Research Project Grant 2020-037.
Preliminaries
We will for an operator \(A\) acting in a Hilbert space \(\mathscr{H}\) denote the operator norm by \(\|A\|_{\mathrm{op}}\) and the trace norm by \(\|A\|_{1}\). Moreover, we will in the following use the convention that \(\mathbb{N}\) is the strictly positive integers and \(\mathbb{N}_{0}=\mathbb{N}\cup\{0\}\).
Next we will describe the operators we are working with. Under Assumption 1.4 we can define the operator \(H_{\hbar}=-\hbar^{2}\Delta+V\) as the Friedrichs extension of the quadratic form given by
\[\mathfrak{h}[f,g]=\int_{\mathbb{R}^{d}}\hbar^{2}\sum_{i=1}^{d}\partial_{x_{i}} f(x)\overline{\partial_{x_{i}}g(x)}+V(x)f(x)\overline{g(x)}\ dx,\qquad f,g\in\mathcal{D}(\mathfrak{h}),\]
where
\[\mathcal{D}(\mathfrak{h})=\left\{f\in L^{2}(\mathbb{R}^{d})|\int_{\mathbb{R}^ {d}}|p|^{2}\left|\hat{f}(p)\right|^{2}\ dp<\infty\ \text{and}\ \int_{\mathbb{R}^{d}}|V(x)|\left|f(x)\right|^{2}\ dx<\infty\right\}.\]
In this set up the Friedrichs extension will be unique and self-adjoint, see e.g. [16]. We will in our analysis use the Helffer-Sjostrand formula. Before we state it we will recall a definition of an almost analytic extension.
**Definition 2.1** (Almost analytic extension).: For \(f\in C_{0}^{\infty}(\mathbb{R})\) we call a function \(\tilde{f}\in C_{0}^{\infty}(\mathbb{C})\) an almost analytic extension if it has the properties
\[|\bar{\partial}\tilde{f}(z)| \leq C_{n}|\operatorname{Im}(z)|^{n},\qquad\text{for all}\ n\in \mathbb{N}_{0}\] \[\tilde{f}(t) =f(t)\qquad\text{for all}\ t\in\mathbb{R},\]
where \(\bar{\partial}=\frac{1}{2}(\partial_{x}+i\partial_{y})\).
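For orientation, a finite-order version of such an extension can be written down explicitly; obtaining the bound for every \(n\in\mathbb{N}_{0}\) requires a further summation argument, for which we refer to the references below. For \(f\in C_{0}^{\infty}(\mathbb{R})\), \(N\in\mathbb{N}\) and a cut-off \(\tau\in C_{0}^{\infty}(\mathbb{R})\) with \(\tau=1\) in a neighbourhood of \(0\), one may take

\[\tilde{f}_{N}(x+iy)=\tau(y)\sum_{k=0}^{N}\frac{f^{(k)}(x)}{k!}(iy)^{k},\]

for which a direct computation gives

\[\bar{\partial}\tilde{f}_{N}(x+iy)=\frac{\tau(y)}{2}\frac{f^{(N+1)}(x)}{N!}(iy)^{N}+\frac{i\tau^{\prime}(y)}{2}\sum_{k=0}^{N}\frac{f^{(k)}(x)}{k!}(iy)^{k}.\]

The first term is \(\mathcal{O}(|\operatorname{Im}(z)|^{N})\) and the second term is supported away from the real axis, so \(\tilde{f}_{N}\) satisfies the bounds of Definition 2.1 for all \(n\leq N\), and clearly \(\tilde{f}_{N}(t)=f(t)\) for \(t\in\mathbb{R}\).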
For how to construct the almost analytic extension for a given \(f\in C_{0}^{\infty}(\mathbb{R})\) see e.g. [24, 4]. The following theorem is a simplified version of a theorem in [3].
**Theorem 2.2** (The Helffer-Sjostrand formula).: _Let \(H\) be a self-adjoint operator acting on a Hilbert space \(\mathscr{H}\) and \(f\) a function from \(C_{0}^{\infty}(\mathbb{R})\). Then the bounded operator \(f(H)\) is given by the equation_
\[f(H)=-\frac{1}{\pi}\int_{\mathbb{C}}\bar{\partial}\tilde{f}(z)(z-H)^{-1}\,L(dz),\]
_where \(L(dz)=dxdy\) is the Lebesgue measure on \(\mathbb{C}\) and \(\tilde{f}\) is an almost analytic extension of \(f\)._
### Construction of framing operators and auxiliary asymptotics
The crucial part in this construction is Proposition 2.3, for which a proof can be found in either [2, Proposition 1.1] or [12, Proposition 4.A.2].
**Proposition 2.3**.: _Let \(f\) be in \(C^{k,\mu}(\mathbb{R}^{d})\) for a \(\mu\) in \([0,1]\). Then for every \(\varepsilon>0\) there exists a function \(f_{\varepsilon}\) in \(C^{\infty}(\mathbb{R}^{d})\) such that_
\[|\partial_{x}^{\alpha}f_{\varepsilon}(x)-\partial_{x}^{\alpha}f(x)|\leq C_{\alpha}\varepsilon^{k+\mu-|\alpha|}\qquad|\alpha|\leq k, \tag{2.1}\] \[|\partial_{x}^{\alpha}f_{\varepsilon}(x)|\leq C_{\alpha}\varepsilon^{k+\mu-|\alpha|}\qquad|\alpha|\geq k+1,\]
_where \(C_{\alpha}\) is independent of \(\varepsilon\), but depends on \(f\) for all \(\alpha\in\mathbb{N}_{0}^{d}\)._
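One way such an approximation can be obtained, sketched here under the assumption that one uses a mollifier \(\psi\in C_{0}^{\infty}(\mathbb{R}^{d})\) with \(\int\psi\,dx=1\) whose moments of orders \(1\) up to \(k\) vanish, is by convolution:

\[f_{\varepsilon}(x)=\int_{\mathbb{R}^{d}}f(x-y)\psi_{\varepsilon}(y)\,dy,\qquad\psi_{\varepsilon}(y)=\varepsilon^{-d}\psi(y/\varepsilon).\]

For \(|\alpha|\leq k\) one Taylor expands \(\partial_{x}^{\alpha}f(x-y)\) to order \(k-|\alpha|\); the vanishing moments remove the polynomial terms and the Holder continuity of the \(k\)-th order derivatives bounds the remainder by \(C|y|^{k+\mu-|\alpha|}\), which gives the first estimate in (2.1). For \(|\alpha|\geq k+1\) one instead writes \(\partial_{x}^{\alpha}f_{\varepsilon}=(\partial^{\beta}f)*\partial^{\alpha-\beta}\psi_{\varepsilon}\) with \(|\beta|=k\) and uses that \(\partial^{\alpha-\beta}\psi_{\varepsilon}\) has mean zero and \(L^{1}\)-norm of order \(\varepsilon^{-(|\alpha|-k)}\).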
**Lemma 2.4**.: _Let \(H_{\hbar}=-\hbar^{2}\Delta+V\) be a Schrodinger operator acting in \(L^{2}(\mathbb{R}^{d})\) and suppose that \(V\) satisfies Assumption 1.4 with the numbers \((\nu,k,\mu)\). Then for all \(\varepsilon>0\) there exists two framing operators \(H^{\pm}_{\hbar,\varepsilon}\) such that_
\[H^{-}_{\hbar,\varepsilon}\leq H_{\hbar}\leq H^{+}_{\hbar,\varepsilon} \tag{2.2}\]
_in the sense of quadratic forms. The operators \(H^{\pm}_{\hbar,\varepsilon}\) are explicitly given by \(H^{\pm}_{\hbar,\varepsilon}=-\hbar^{2}\Delta+V^{\pm}_{\varepsilon}\), where_
\[V^{\pm}_{\varepsilon}(x)=V^{1}_{\varepsilon}(x)+V^{2}(x)\pm C\varepsilon^{k+ \mu}, \tag{2.3}\]
_where the function \(V^{1}_{\varepsilon}(x)\) is the smooth function from Proposition 2.3 associated to \(V^{1}=V\varphi\) and \(V^{2}=V(1-\varphi)\). The function \(\varphi\) is chosen such that \(\varphi\in C^{\infty}_{0}(\mathbb{R}^{d})\) with \(\varphi(x)=1\) for all \(x\in\Omega_{3\nu,V}\) and \(\operatorname{supp}(\varphi)\subset\Omega_{4\nu,V}\). Moreover for all \(\varepsilon>0\) sufficiently small there exists a \(\tilde{\nu}>0\) such that_
\[\Omega_{4\tilde{\nu},V^{+}_{\varepsilon}}\cap\operatorname{supp}(V^{2})= \emptyset\quad\text{and}\quad\Omega_{4\tilde{\nu},V^{-}_{\varepsilon}}\cap \operatorname{supp}(V^{2})=\emptyset. \tag{2.4}\]
Proof.: We start by letting \(\varphi\) be as given in the statement of the lemma and set \(V^{1}=V\varphi\) and \(V^{2}=V(1-\varphi)\). By assumption we have that \(V^{1}\in C^{k,\mu}_{0}(\mathbb{R}^{d})\). Hence for all \(\varepsilon>0\) we get from Proposition 2.3 the existence of \(V^{1}_{\varepsilon}(x)\) such that (2.1) is satisfied with \(f\) replaced by \(V^{1}\). We now let
\[H_{\hbar,\varepsilon}=-\hbar^{2}\Delta+V^{1}_{\varepsilon}+V^{2}.\]
This operator is well defined and selfadjoint since both potentials are in \(L^{1}_{loc}(\mathbb{R}^{d})\). Moreover we have that \(H_{\hbar,\varepsilon}\) and \(H_{\hbar}\) will have the same domains. For \(f\in\mathcal{D}[H_{\hbar}]\) we then have that
\[\begin{split}\big{|}\langle H_{\hbar}f,f\rangle-\langle H_{\hbar,\varepsilon}f,f\rangle\big{|}&=\big{|}\langle(V^{1}-V^{1}_{ \varepsilon})f,f\rangle\big{|}\\ &\leq\|V^{1}-V^{1}_{\varepsilon}\|_{L^{\infty}(\mathbb{R}^{d})}\| f\|^{2}_{L^{2}(\mathbb{R}^{d})}\leq c\varepsilon^{k+\mu}\|f\|^{2}_{L^{2}(\mathbb{R}^{d})}. \end{split} \tag{2.5}\]
From choosing a sufficiently large constant \(C\) we get from (2.5) that by letting \(H^{\pm}_{\hbar,\varepsilon}=-\hbar^{2}\Delta+V^{\pm}_{\varepsilon}\) with \(V^{\pm}_{\varepsilon}(x)=V^{1}_{\varepsilon}(x)+V^{2}(x)\pm C\varepsilon^{k+\mu}\) we have that (2.2) is satisfied with this choice of operators.
What remains is to establish (2.4). We have by construction that
\[\|V-V^{\pm}_{\varepsilon}\|_{L^{\infty}(\mathbb{R}^{d})}\leq C\varepsilon^{k+ \mu}. \tag{2.6}\]
Hence if we choose \(\tilde{\nu}\leq\frac{\nu}{2}\) and \(\varepsilon\) is sufficiently small we can ensure that \(\Omega_{4\tilde{\nu},V^{+}_{\varepsilon}}\subset\Omega_{3\nu,V}\) and \(\Omega_{4\tilde{\nu},V^{-}_{\varepsilon}}\subset\Omega_{3\nu,V}\). Since we have that \(\operatorname{supp}(V^{2})\subset\Omega^{c}_{3\nu,V}\) by construction it follows that with such a choice of \(\tilde{\nu}\) and for \(\varepsilon\) sufficiently small (2.4) holds. This concludes the proof.
_Remark 2.5_.: We will in what follows for \(\varepsilon>0\) call a potential \(V_{\varepsilon}\in C^{\infty}_{0}(\mathbb{R}^{d})\) a rough potential of regularity \(\tau\geq 0\) if
\[\sup_{x\in\mathbb{R}^{d}}\big{|}\partial^{\alpha}_{x}V_{\varepsilon}(x)\big{|} \leq C_{\alpha}\varepsilon^{\min(0,\tau-|\alpha|)}\quad\text{for all }\alpha\in\mathbb{N}^{d}_{0}\]
where the constants \(C_{\alpha}\) are independent of \(\varepsilon\). Moreover, we denote by a rough Schrodinger operator of regularity \(\tau\geq 0\) an operator of the form
\[H_{\hbar,\varepsilon}=-\hbar^{2}\Delta+V+V_{\varepsilon},\]
where \(V\in L^{1}_{loc}(\mathbb{R}^{d})\) and \(V_{\varepsilon}\) is a rough potential of regularity \(\tau\).
_Remark 2.6_.: Assume we are in the setting of Lemma 2.4; then it follows from Theorem 1.1 that there exists a constant \(C>0\) such that
\[\operatorname{Tr}\left[g_{\gamma}(H^{+}_{h,\varepsilon})\right]\leq \operatorname{Tr}\left[g_{\gamma}(H_{h})\right]\leq\operatorname{Tr}\left[g_{ \gamma}(H^{-}_{h,\varepsilon})\right]\leq C\hbar^{-d}\]
for \(\hbar>0\), \(\varepsilon>0\) sufficiently small. The constant \(C\) only depends on the dimension, the set \(\Omega_{4\nu,V}\) and \(\min(V)\). The first two inequalities follow from the min-max principle. For the third inequality we can choose a potential \(V^{min}\) such that
\[V^{min}(x)=\begin{cases}\min(V)-1&\text{if }x\in\Omega_{4\nu,V}\\ 0&\text{if }x\notin\Omega_{4\nu,V}.\end{cases}\]
Then when we consider the operator \(H^{min}_{h}=-\hbar^{2}\Delta+V^{min}\), defined as a Friedrichs extension of the associated form, we have that
\[H^{min}_{h}\leq H^{-}_{h,\varepsilon}\]
in the sense of quadratic forms. Hence using the min-max principle and Theorem 1.1 we obtain that
\[\begin{split}\operatorname{Tr}\left[g_{\gamma}(H^{-}_{h, \varepsilon})\right]&\leq\,\operatorname{Tr}\left[g_{\gamma}(H^{ min}_{h})\right]\\ &\leq\frac{1}{(2\pi\hbar)^{d}}\int_{\mathbb{R}^{2d}}g_{\gamma}(p^{ 2}+V^{min}(x))\,dxdp+\tilde{C}\hbar^{-d}\leq C\hbar^{-d},\end{split} \tag{2.7}\]
where the constant \(C\) only depends on the dimension, the set \(\Omega_{4\nu,V}\) and \(\min(V)\).
## 3 Auxiliary results and model problem
Inspired by the form of the framing operators we will make the following assumption. This assumption is essentially the assumption that appears in [18] but with a rough potential and no magnetic field.
**Assumption 3.1**.: Let \(\mathcal{H}_{h,\varepsilon}\) be an operator acting in \(L^{2}(\mathbb{R}^{d})\), where \(\hbar,\varepsilon>0\). Suppose that
* \(\mathcal{H}_{h,\varepsilon}\) is self-adjoint and lower semibounded.
* Suppose there exists an open set \(\Omega\subset\mathbb{R}^{d}\) and a rough potential \(V_{\varepsilon}\in C_{0}^{\infty}(\mathbb{R}^{d})\) of regularity \(\tau\geq 0\) such that \(C_{0}^{\infty}(\Omega)\subset\mathcal{D}(\mathcal{H}_{h,\varepsilon})\) and \[\mathcal{H}_{h,\varepsilon}\varphi=H_{h,\varepsilon}\varphi\quad\text{for all }\varphi\in C_{0}^{\infty}(\Omega),\] where \(H_{h,\varepsilon}=-\hbar^{2}\Delta+V_{\varepsilon}\).
For these operators we will establish our model problem. The first auxiliary result we will need was established in [14], where it is Lemma 4.6. It is almost the full model problem except we consider only the operator \(H_{h,\varepsilon}\) and not the general operator \(\mathcal{H}_{h,\varepsilon}\).
**Lemma 3.2**.: _Let \(\gamma\in[0,1]\) and \(H_{h,\varepsilon}=-\hbar^{2}\Delta+V_{\varepsilon}\) be a rough Schrodinger operator acting in \(L^{2}(\mathbb{R}^{d})\) of regularity \(\tau\geq 1\) if \(\gamma=0\) and regularity \(\tau\geq 2\) if \(\gamma>0\) with \(\hbar\in(0,\hbar_{0}]\), \(\hbar_{0}\) sufficiently small. Assume that \(V_{\varepsilon}\in C_{0}^{\infty}(\mathbb{R}^{d})\) and there exists a \(\delta\in(0,1]\) such that \(\varepsilon\geq\hbar^{1-\delta}\). Suppose there is an open set \(\Omega\subset\operatorname{supp}(V_{\varepsilon})\) and a \(c>0\) such that_
\[|V_{\varepsilon}(x)|+\hbar^{\frac{2}{3}}\geq c\qquad\text{for all }x\in\Omega.\]
_Then for \(\varphi\in C_{0}^{\infty}(\Omega)\) it holds that_
\[\Big{|}\operatorname{Tr}\big{[}\varphi g_{\gamma}(H_{h,\varepsilon})\big{]}- \frac{1}{(2\pi\hbar)^{d}}\int_{\mathbb{R}^{2d}}g_{\gamma}(p^{2}+V_{\varepsilon }(x))\varphi(x)\,dxdp\Big{|}\leq C\hbar^{1+\gamma-d},\]
_where the constant \(C\) depends only on the dimension and the numbers \(\|\partial^{\alpha}\varphi\|_{L^{\infty}(\mathbb{R}^{d})}\) and \(\varepsilon^{-\min(0,\tau-|\alpha|)}\|\partial^{\alpha}V_{\varepsilon}\|_{L^ {\infty}(\Omega)}\) for all \(\alpha\in N_{0}^{d}\)._
What remains in order for us to be able to prove our model problem is to establish that \(\operatorname{Tr}\big{[}\varphi g_{\gamma}(H_{h,\varepsilon})\big{]}\) and \(\operatorname{Tr}\big{[}\varphi g_{\gamma}(\mathcal{H}_{h,\varepsilon})\big{]}\) are close. To do this we will need some additional notation and results.
_Remark 3.3_.: In order to prove Lemma 3.2 as done in [14] one needs to understand the Schrodinger propagator \(e^{i\hbar^{-1}tH_{h,\varepsilon}}\) associated to \(H_{h,\varepsilon}\). Under the assumptions of the lemma we can find an operator with an explicit kernel that locally approximates \(e^{i\hbar^{-1}tH_{h,\varepsilon}}\) in a suitable sense. This local construction is only valid for times of order \(\hbar^{1-\frac{\delta}{2}}\). But if we locally have a non-critical condition the approximation can be extended to a small time interval \([-T_{0},T_{0}]\), where \(T_{0}\) is independent of \(\hbar\). For further details see [13]. In the following we will reference this remark and the number \(T_{0}\).
_Remark 3.4_.: Let \(T\in(0,T_{0}]\) and \(\hat{\chi}\in C_{0}^{\infty}((-T,T))\) be a real valued function such that \(\hat{\chi}(s)=\hat{\chi}(-s)\) and \(\hat{\chi}(s)=1\) for all \(s\in(-\frac{T}{2},\frac{T}{2})\). Here \(T_{0}\) is the number from Remark 3.3. Define
\[\chi_{1}(t)=\frac{1}{2\pi}\int_{\mathbb{R}}\hat{\chi}(s)e^{ist}\,ds.\]
We assume that \(\chi_{1}(t)\geq 0\) for all \(t\in\mathbb{R}\) and there exist \(T_{1}\in(0,T)\) and \(c>0\) such that \(\chi_{1}(t)\geq c\) for all \(t\in[-T_{1},T_{1}]\). We can guarantee these assumptions by (possibly) replacing \(\hat{\chi}\) by \(\hat{\chi}*\hat{\chi}\). We will by \(\chi_{\hbar}(t)\) denote the function
\[\chi_{\hbar}(t)=\tfrac{1}{\hbar}\chi_{1}(\tfrac{t}{\hbar}).\]
Moreover for any function \(g\in L^{1}_{loc}(\mathbb{R})\) we will use the notation
\[g^{(\hbar)}(t)=g*\chi_{\hbar}(t)=\int_{\mathbb{R}}g(s)\chi_{\hbar}(t-s)\,ds.\]
Before we proceed we will just recall the following classes of functions. These first appeared in [17].
**Definition 3.5**.: A function \(g\in C^{\infty}(\mathbb{R}\setminus\{0\})\) is said to belong to the class \(C^{\infty,\gamma}(\mathbb{R})\), \(\gamma\in[0,1]\), if \(g\in C(\mathbb{R})\) for \(\gamma>0\), for some constants \(C>0\) and \(r>0\) it holds that
\[g(t) =0,\qquad\text{for all }t\geq C\] \[|\partial_{t}^{m}g(t)| \leq C_{m}|t|^{r},\qquad\text{for all }m\in\mathbb{N}_{0}\text{ and }t\leq-C\] \[|\partial_{t}^{m}g(t)| \leq\begin{cases}C_{m}&\text{if }\gamma=0,1\\ C_{m}|t|^{\gamma-m}&\text{if }\gamma\in(0,1)\end{cases},\qquad\text{for all }m\in \mathbb{N}\text{ and }t\in[-C,C]\setminus\{0\}.\]
A function \(g\) is said to belong to \(C_{0}^{\infty,\gamma}(\mathbb{R})\) if \(g\in C^{\infty,\gamma}(\mathbb{R})\) and \(g\) has compact support.
With this notation set up we recall the following Tauberian type result, which can be found in [18], where it is Proposition 2.8.
**Proposition 3.6**.: _Let \(A\) be a selfadjoint operator acting in a Hilbert space \(\mathscr{H}\) and \(g\in C^{\infty,\gamma}_{0}(\mathbb{R})\). Let \(\chi_{1}\) be defined as in Remark 3.4. If for a Hilbert-Schmidt operator \(B\)_
\[\sup_{t\in\mathcal{D}(\delta)}\|B^{*}\chi_{h}(A-t)B\|_{1}\leq Z(\hbar), \tag{3.1}\]
_where \(\mathcal{D}(\delta)=\{t\in\mathbb{R}\,|\,\operatorname{dist}(\operatorname{supp}(g),t)\leq\delta\}\), \(Z(\hbar)\) is some positive function and \(\delta\) is some strictly positive number. Then it holds that_
\[\|B^{*}(g(A)-g^{(\hbar)}(A))B\|_{1}\leq C\hbar^{1+\gamma}Z(\hbar)+C^{\prime}_ {N}\hbar^{N}\|B^{*}B\|_{1}\quad\text{for all }N\in\mathbb{N}, \tag{3.2}\]
_where the constants \(C\) and \(C^{\prime}\) depend on the number \(\delta\) and the functions \(g\) and \(\chi_{1}\) only._
The following two lemmas can be found in [14], where they are Lemma 4.3 and Lemma 4.5 respectively. In the first lemma we state here we have included an additional estimate. This estimate is established as part of the proof of Lemma 4.3 in [14].
**Lemma 3.7**.: _Let \(\mathcal{H}_{h,\varepsilon}\) be an operator acting in \(L^{2}(\mathbb{R}^{d})\) which satisfies Assumption 3.1 with the open set \(\Omega\) and let \(H_{h,\varepsilon}=-\hbar^{2}\Delta+V_{\varepsilon}\) be the associated rough Schrodinger operator of regularity \(\tau\geq 1\). Assume that \(\hbar\in(0,\hbar_{0}]\), with \(\hbar_{0}\) sufficiently small. Then for \(f\in C^{\infty}_{0}(\mathbb{R})\) and \(\varphi\in C^{\infty}_{0}(\Omega)\) we have for any \(N\in\mathbb{N}_{0}\) that_
\[\|\varphi[f(\mathcal{H}_{h,\varepsilon})-f(H_{h,\varepsilon})]\|_{1} \leq C_{N}\hbar^{N}, \tag{3.3}\] \[\left\|\varphi[(z-\mathcal{H}_{h,\varepsilon})^{-1}-(z-H_{h, \varepsilon})^{-1}]\right\|_{1} \leq C_{N}\frac{\langle z\rangle^{N+\frac{d+1}{2}}\hbar^{2N-d}}{| \operatorname{Im}(z)|^{2N+2}},\qquad z\in\mathbb{C}\setminus\mathbb{R} \tag{3.4}\]
_and_
\[\|\varphi f(\mathcal{H}_{h,\varepsilon})\|_{1} \leq C\hbar^{-d}, \tag{3.5}\]
_The constant \(C_{N}\) depends only on the numbers \(N\), \(\|f\|_{L^{\infty}(\mathbb{R})}\), \(\|\partial^{\alpha}\varphi\|_{L^{\infty}(\mathbb{R}^{d})}\) for all \(\alpha\in\mathbb{N}_{0}^{d}\) and the constant \(c\)._
**Lemma 3.8**.: _Let \(H_{h,\varepsilon}=-\hbar^{2}\Delta+V_{\varepsilon}\) be a rough Schrodinger operator acting in \(L^{2}(\mathbb{R}^{d})\) of regularity \(\tau\geq 1\) with \(\hbar\in(0,\hbar_{0}]\), \(\hbar_{0}\) sufficiently small. Assume that \(V_{\varepsilon}\in C^{\infty}_{0}(\mathbb{R}^{d})\) and there exists a \(\delta\in(0,1]\) such that \(\varepsilon\geq\hbar^{1-\delta}\). Suppose there is an open set \(\Omega\subset\operatorname{supp}(V_{\varepsilon})\) and a \(c>0\) such that_
\[|V_{\varepsilon}(x)|+\hbar^{\frac{2}{3}}\geq c\qquad\text{for all }x\in\Omega.\]
_Let \(\chi_{h}(t)\) be the function from Remark 3.4, \(f\in C^{\infty}_{0}(\mathbb{R})\) and \(\varphi\in C^{\infty}_{0}(\Omega)\) then it holds for \(s\in\mathbb{R}\) that_
\[\|\varphi f(H_{h,\varepsilon})\chi_{h}(H_{h,\varepsilon}-s)f(H_{h,\varepsilon })\varphi\|_{1}\leq C\hbar^{-d}.\]
_The constant \(C\) depends only on the dimension and the numbers \(\|f\|_{L^{\infty}(\mathbb{R})}\), \(\|\partial^{\alpha}\varphi\|_{L^{\infty}(\mathbb{R}^{d})}\) and \(\varepsilon^{-\min(0,\tau-|\alpha|)}\|\partial^{\alpha}V_{\varepsilon}\|_{L^{\infty}(\Omega)}\) for all \(\alpha\in\mathbb{N}_{0}^{d}\)._
The following two lemmas are similar to Lemma 4.7 and Lemma 4.8 from [14]. However due to some differences in the assumptions and the proofs we will give the proofs here.
**Lemma 3.9**.: _Let \(\mathcal{H}_{h,\varepsilon}\) be an operator acting in \(L^{2}(\mathbb{R}^{d})\) satisfying Assumption 3.1 with the open set \(\Omega\) and let \(H_{h,\varepsilon}=-\hbar^{2}\Delta+V_{\varepsilon}\) be the associated rough Schrodinger operator of regularity \(\tau\geq 1\). Assume that \(\hbar\in(0,\hbar_{0}]\), with \(\hbar_{0}\) sufficiently small and there exists a \(\delta\in(0,1]\) such that \(\varepsilon\geq\hbar^{1-\delta}\). Moreover, let \(\chi_{\hbar}(t)\) be the function from Remark 3.4, \(f\in C^{\infty}_{0}(\mathbb{R})\) and \(\varphi\in C^{\infty}_{0}(\Omega)\); then it holds for \(s\in\mathbb{R}\) and \(N\in\mathbb{N}\) that_
\[\|\varphi f(\mathcal{H}_{h,\varepsilon})\chi_{h}(\mathcal{H}_{h,\varepsilon}-s )f(\mathcal{H}_{h,\varepsilon})\varphi-\varphi f(H_{h,\varepsilon})\chi_{h}(H_ {h,\varepsilon}-s)f(H_{h,\varepsilon})\varphi\|_{1}\leq C_{N}\hbar^{N}. \tag{3.6}\]
_Moreover, suppose there exists some \(c>0\) such that_
\[|V_{\varepsilon}(x)|+\hbar^{\frac{2}{3}}\geq c\qquad\text{for all $x\in\Omega$.}\]
_Then it holds that_
\[\|\varphi f(\mathcal{H}_{h,\varepsilon})\chi_{\hbar}(\mathcal{H}_{h, \varepsilon}-s)f(\mathcal{H}_{h,\varepsilon})\varphi\|_{1}\leq C\hbar^{-d}. \tag{3.7}\]
_The constants \(C_{N}\) and \(C\) depend only on the dimension and the numbers \(\|f\|_{L^{\infty}(\mathbb{R})}\), \(\|\partial^{\alpha}\varphi\|_{L^{\infty}(\mathbb{R}^{d})}\) and \(\varepsilon^{-\min(0,\tau-|\alpha|)}\|\partial^{\alpha}V_{\varepsilon}\|_{L^{\infty}(\Omega)}\) for all \(\alpha\in\mathbb{N}_{0}^{d}\)._
Proof.: We start by making the following estimate
\[\begin{split}&\|\varphi f(\mathcal{H}_{h,\varepsilon})\chi_{ \hbar}(\mathcal{H}_{h,\varepsilon}-s)f(\mathcal{H}_{h,\varepsilon})\varphi- \varphi f(H_{h,\varepsilon})\chi_{\hbar}(H_{h,\varepsilon}-s)f(H_{h, \varepsilon})\varphi\|_{1}\\ &\leq C\hbar^{-d}\|\varphi f(\mathcal{H}_{h,\varepsilon})\chi_{ \hbar}(\mathcal{H}_{h,\varepsilon}-s)-\varphi f(H_{h,\varepsilon})\chi_{ \hbar}(H_{h,\varepsilon}-s)\|_{\mathrm{op}}+C\hbar^{-1}\|f(\mathcal{H}_{h, \varepsilon})\varphi-f(H_{h,\varepsilon})\varphi\|_{1}\\ &\leq C\hbar^{-d}\|\varphi f(\mathcal{H}_{h,\varepsilon})\chi_{ \hbar}(\mathcal{H}_{h,\varepsilon}-s)-\varphi f(H_{h,\varepsilon})\chi_{ \hbar}(H_{h,\varepsilon}-s)\|_{\mathrm{op}}+C\hbar^{N},\end{split} \tag{3.8}\]
where we in the first inequality have added and subtracted the term \(\varphi f(H_{h,\varepsilon})\chi_{\hbar}(H_{h,\varepsilon}-s)f(\mathcal{H}_{ h,\varepsilon})\varphi\), used the triangle inequality, used estimate (3.5) from Lemma 3.7 and that \(\sup_{t\in\mathbb{R}}\chi_{\hbar}(t)\leq C\hbar^{-1}\). In the second inequality we have used estimate (3.3) from Lemma 3.7. We observe that \(\chi_{\hbar}(z-s)\) is the inverse Fourier transform of a smooth compactly supported function. Hence it is holomorphic in \(z\). Using the Helffer-Sjostrand formula (Theorem 2.2) we get that
\[\begin{split}&\varphi f(\mathcal{H}_{h,\varepsilon})\chi_{ \hbar}(\mathcal{H}_{h,\varepsilon}-s)-\varphi f(H_{h,\varepsilon})\chi_{ \hbar}(H_{h,\varepsilon}-s)\\ &\qquad=-\frac{1}{\pi}\int_{\mathbb{C}}\bar{\partial}\tilde{f}(z) \chi_{\hbar}(z-s)\varphi[(z-\mathcal{H}_{h,\varepsilon})^{-1}-(z-H_{h, \varepsilon})^{-1}]\,L(dz),\end{split} \tag{3.9}\]
where \(\tilde{f}\) is an almost analytic extension of \(f\). Estimate (3.4) of Lemma 3.7 gives us that
\[\left\|\varphi[(z-\mathcal{H}_{h,\varepsilon})^{-1}-(z-H_{h,\varepsilon})^{-1}]\right\|_{\mathrm{op}}\leq C_{N}\frac{\langle z\rangle^{N+\frac{d+1}{2}}\hbar^{2N-d}}{|\operatorname{Im}(z)|^{2N+2}}. \tag{3.10}\]
Here we have used that the trace norm dominates the operator norm. Combining (3.9), (3.10) and using the properties of \(\tilde{f}\) and \(\chi_{\hbar}\) we obtain that
\[\|\varphi f(\mathcal{H}_{h,\varepsilon})\chi_{\hbar}(\mathcal{H}_{h,\varepsilon}-s)-\varphi f(H_{h,\varepsilon})\chi_{\hbar}(H_{h,\varepsilon}-s)\|_{\mathrm{op}}\leq C_{N}\hbar^{N}, \tag{3.11}\]
where \(C_{N}\) depends on the dimension, the number \(N\) and the functions \(f\), \(\varphi\). Combining the estimates in (3.8) and (3.11) we obtain that
\[\|\varphi f(\mathcal{H}_{h,\varepsilon})\chi_{\hbar}(\mathcal{H}_{h, \varepsilon}-s)f(\mathcal{H}_{h,\varepsilon})\varphi-\varphi f(H_{h,\varepsilon })\chi_{\hbar}(H_{h,\varepsilon}-s)f(H_{h,\varepsilon})\varphi\|_{1}\leq C \hbar^{N}. \tag{3.12}\]
This concludes the first part of the proof. By combining the estimate in (3.6) with Lemma 3.8 we obtain the estimate (3.7). This concludes the proof.
**Lemma 3.10**.: _Let \(\gamma\in[0,1]\) and \(\mathcal{H}_{h,\varepsilon}\) be an operator acting in \(L^{2}(\mathbb{R}^{d})\). Suppose \(\mathcal{H}_{h,\varepsilon}\) satisfies Assumption 3.1 with the open set \(\Omega\) and let \(H_{h,\varepsilon}=-\hbar^{2}\Delta+V_{\varepsilon}\) be the associated rough Schrodinger operator of regularity \(\tau\geq 1\) if \(\gamma=0\) and \(\tau\geq 2\) if \(\gamma>0\). Assume that \(\hbar\in(0,\hbar_{0}]\), with \(\hbar_{0}\) sufficiently small and there exists a \(\delta\in(0,1]\) such that \(\varepsilon\geq\hbar^{1-\delta}\). Moreover, suppose there exists some \(c>0\) such that_
\[|V_{\varepsilon}(x)|+\hbar^{\frac{2}{3}}\geq c\qquad\text{for all $x\in\Omega$}\]
_Then for \(\varphi\in C_{0}^{\infty}(\Omega)\) it holds that_
\[\Big{|}\operatorname{Tr}\big{[}\varphi g_{\gamma}(\mathcal{H}_{h,\varepsilon}) \big{]}-\operatorname{Tr}\big{[}\varphi g_{\gamma}(H_{h,\varepsilon})\big{]} \Big{|}\leq C\hbar^{1+\gamma-d}+C^{\prime}_{N}\hbar^{N}. \tag{3.13}\]
_The constants \(C\) and \(C^{\prime}_{N}\) depend on the dimension and the numbers \(\|f\|_{L^{\infty}(\mathbb{R})}\), \(\|\partial^{\alpha}\varphi\|_{L^{\infty}(\mathbb{R}^{d})}\) and \(\varepsilon^{-\min(0,\tau-|\alpha|)}\|\partial^{\alpha}V_{\varepsilon}\|_{L^{\infty}(\Omega)}\) for all \(\alpha\in\mathbb{N}_{0}^{d}\)._
Proof.: Since both operators are lower semi-bounded we may assume that \(g_{\gamma}\) is compactly supported. Let \(f\in C_{0}^{\infty}(\mathbb{R})\) such that \(f(t)g_{\gamma}(t)=g_{\gamma}(t)\) for all \(t\in\mathbb{R}\) and let \(\varphi_{1}\in C_{0}^{\infty}(\Omega)\) such that \(\varphi(x)\varphi_{1}(x)=\varphi(x)\) for all \(x\in\mathbb{R}^{d}\). Moreover, let \(\chi_{\hbar}(t)\) be the function from Remark 3.4 and set \(g_{\gamma}^{(\hbar)}(t)=g_{\gamma}\ast\chi_{\hbar}(t)\). With this notation set up we have that
\[\begin{split}&\Big{|}\operatorname{Tr}\big{[}\varphi g_{\gamma}( \mathcal{H}_{h,\varepsilon})\big{]}-\operatorname{Tr}\big{[}\varphi g_{\gamma }(H_{h,\varepsilon})\big{]}\Big{|}\\ &\leq\|\varphi\varphi_{1}f(\mathcal{H}_{h,\varepsilon})(g_{\gamma }(\mathcal{H}_{h,\varepsilon})-g_{\gamma}^{(\hbar)}(\mathcal{H}_{h, \varepsilon}))f(\mathcal{H}_{h,\varepsilon})\varphi_{1}\|_{1}\\ &\qquad+\|\varphi\varphi_{1}f(H_{h,\varepsilon})(g_{\gamma}(H_{h,\varepsilon})-g_{\gamma}^{(\hbar)}(H_{h,\varepsilon}))f(H_{h,\varepsilon}) \varphi_{1}\|_{1}+\|\varphi\|_{L^{\infty}(\mathbb{R}^{d})}\int_{\mathbb{R}}g_ {\gamma}(s)\,ds\\ &\qquad\times\sup_{s\in\mathbb{R}}\|\varphi\varphi_{1}f(\mathcal{ H}_{h,\varepsilon})\chi_{\hbar}(\mathcal{H}_{h,\varepsilon}-s)f(\mathcal{H}_{h, \varepsilon})\varphi_{1}-\varphi_{1}f(H_{h,\varepsilon})\chi_{\hbar}(H_{h, \varepsilon}-s)f(H_{h,\varepsilon})\varphi_{1}\|_{1}.\end{split} \tag{3.14}\]
Lemma 3.9 and Lemma 3.8 give us that the assumptions of Proposition 3.6 are fulfilled with \(B\) equal to \(\varphi_{1}f(\mathcal{H}_{h,\varepsilon})\) and \(\varphi_{1}f(H_{h,\varepsilon})\) respectively. Hence we have that
\[\|\varphi\varphi_{1}f(\mathcal{H}_{h,\varepsilon})(g_{\gamma}(\mathcal{H}_{h, \varepsilon})-g_{\gamma}^{(\hbar)}(\mathcal{H}_{h,\varepsilon}))f(\mathcal{H} _{h,\varepsilon})\varphi_{1}\|_{1}\leq C\hbar^{1+\gamma-d} \tag{3.15}\]
and
\[\|\varphi\varphi_{1}f(H_{h,\varepsilon})(g_{\gamma}(H_{h,\varepsilon})-g_{ \gamma}^{(\hbar)}(H_{h,\varepsilon}))f(H_{h,\varepsilon})\varphi_{1}\|_{1} \leq C\hbar^{1+\gamma-d}. \tag{3.16}\]
From applying Lemma 3.9 we get for all \(N\in\mathbb{N}\) that
\[\begin{split}\sup_{s\in\mathbb{R}}&\|\varphi\varphi_{1}f(\mathcal{H}_{h,\varepsilon})\chi_{\hbar}(\mathcal{H}_{h,\varepsilon}-s)f(\mathcal{H}_{h,\varepsilon})\varphi_{1}-\varphi_{1}f(H_{h,\varepsilon})\chi_{\hbar}(H_{h,\varepsilon}-s)f(H_{h,\varepsilon})\varphi_{1}\|_{1}\\ &\leq C_{N}\hbar^{N}.\end{split} \tag{3.17}\]
Finally from combining the estimates in (3.14), (3.15), (3.16) and (3.17) we obtain the desired estimate and this concludes the proof.
For operators that satisfy Assumption 3.1 we can establish the following model theorem. The proof of the theorem is similar to the proof of Theorem 5.1 in [14].
**Theorem 3.11**.: _Let \(\gamma\in[0,1]\) and \(\mathcal{H}_{h,\varepsilon}\) be an operator acting in \(L^{2}(\mathbb{R}^{d})\). Suppose \(\mathcal{H}_{h,\varepsilon}\) satisfies Assumption 3.1 with the open set \(\Omega\) and let \(H_{h,\varepsilon}=-\hbar^{2}\Delta+V_{\varepsilon}\) be the associated rough Schrodinger operator of regularity \(\tau\geq 1\) if \(\gamma=0\) and \(\tau\geq 2\) if \(\gamma>0\). Assume that \(\hbar\in(0,\hbar_{0}]\), with \(\hbar_{0}\) sufficiently small and there exists a \(\delta\in(0,1]\) such that \(\varepsilon\geq\hbar^{1-\delta}\). Moreover, suppose there exists some \(c>0\) such that_
\[|V_{\varepsilon}(x)|+\hbar^{\frac{2}{3}}\geq c\qquad\text{for all }x\in\Omega. \tag{3.18}\]
_Then for any \(\varphi\in C_{0}^{\infty}(\Omega)\) it holds that_
\[\Big{|}\operatorname{Tr}\big{[}\varphi g_{\gamma}(\mathcal{H}_{h,\varepsilon}) \big{]}-\frac{1}{(2\pi\hbar)^{d}}\int_{\mathbb{R}^{2d}}g_{\gamma}(p^{2}+V_{ \varepsilon}(x))\varphi(x)\,dxdp\Big{|}\leq C\hbar^{1+\gamma-d},\]
_where the constant \(C\) depends only on the dimension and the numbers \(\|\partial^{\alpha}\varphi\|_{L^{\infty}(\mathbb{R}^{d})}\) and \(\varepsilon^{-\min(0,\tau-|\alpha|)}\|\partial^{\alpha}V_{\varepsilon}\|_{L^{ \infty}(\Omega)}\) for all \(\alpha\in N_{0}^{d}\)._
Proof.: Firstly observe that under the assumptions of this theorem we have that \(\mathcal{H}_{h,\varepsilon}\) and \(H_{h,\varepsilon}\) satisfy the assumptions of Lemma 3.10. Moreover, we have that \(H_{h,\varepsilon}\) satisfies the assumptions of Lemma 3.2. Hence from applying Lemma 3.10 and Lemma 3.2 we obtain that
\[\Big{|} \operatorname{Tr}\big{[}\varphi g_{\gamma}(\mathcal{H}_{h, \varepsilon})\big{]}-\frac{1}{(2\pi\hbar)^{d}}\int_{\mathbb{R}^{2d}}g_{\gamma} (p^{2}+V_{\varepsilon}(x))\varphi(x)\,dxdp\Big{|}\] \[\leq\Big{|}\operatorname{Tr}\big{[}\varphi g_{\gamma}(\mathcal{H }_{h,\varepsilon})\big{]}-\operatorname{Tr}\big{[}\varphi g_{\gamma}(H_{h,\varepsilon})\big{]}\Big{|}+ \Big{|}\operatorname{Tr}\big{[}\varphi g_{\gamma}(H_{h,\varepsilon})\big{]}- \frac{1}{(2\pi\hbar)^{d}}\int_{\mathbb{R}^{2d}}g_{\gamma}(p^{2}+V_{ \varepsilon}(x))\varphi(x)\,dxdp\Big{|}\] \[\leq C\hbar^{1+\gamma-d}.\]
This concludes the proof.
## 4 Towards a proof of the main theorem
At the end of this section we will prove our main theorem. But before we do this we will first prove some lemmas that we will need in the proof. The first is a lemma that allows us to localise the trace we consider. The second one is a comparison of phase-space integrals.
### Localisation of traces and comparison of phase-space integrals
Before we state the lemma on localisation of the trace we recall the following Agmon type estimate from [5], where it is Lemma A.1.
**Lemma 4.1**.: _Let \(H_{\hbar}=-\hbar^{2}\Delta+V\) be a Schrodinger operator acting in \(L^{2}(\mathbb{R}^{d})\), where \(V\) is in \(L^{1}_{loc}(\mathbb{R}^{d})\) and suppose that there exist a \(\nu>0\) and an open bounded set \(U\) such that_
\[V(x)\geq\nu\quad\text{when $x\in U^{c}$}. \tag{4.1}\]
_Let \(d(x)=\operatorname{dist}(x,U_{a})\), where_
\[U_{a}=\{x\in\mathbb{R}^{d}\,|\,\operatorname{dist}(x,U)<a\}\]
_and let \(\psi\) be a normalised solution to the equation_
\[H_{\hbar}\psi=E\psi,\]
_with \(E<\nu/4\). Then there exists a \(C>0\) such that_
\[\|e^{\delta\hbar^{-1}d}\psi\|_{L^{2}(\mathbb{R}^{d})}\leq C,\]
_for \(\delta=\frac{\sqrt{\nu}}{8}\). The constant \(C\) depends on \(a\) and is uniform in \(V\), \(\nu\) and \(U\) satisfying (4.1)._
In the formulation of the lemma we have presented here, we consider \(U_{a}\) for \(a>0\) and not just \(U_{1}\) as in [5]. There is no difference in the proof. However, one thing to remark is that the constant \(C\) will diverge to infinity as \(a\) tends to \(0\). Moreover, we have in the statement highlighted the uniformity of the constant in the potential \(V\), the number \(\nu\) and the set \(U\). That the constant is indeed uniform in these follows directly from the proof given in [5].
**Lemma 4.2**.: _Let \(\gamma\in[0,1]\) and \(H_{\hbar}=-\hbar^{2}\Delta+V\) be a Schrodinger operator acting in \(L^{2}(\mathbb{R}^{d})\), where \(V\) is in \(L^{1}_{loc}(\mathbb{R}^{d})\) and suppose that there exist a \(\nu>0\) and an open bounded set \(U\) such that \(V(x)\mathbf{1}_{U}(x)\in L^{\gamma+\frac{d}{2}}(\mathbb{R}^{d})\) and_
\[V(x)\geq\nu\quad\text{when $x\in U^{c}$}. \tag{4.2}\]
_Fix \(a>0\) and let \(\varphi\in C_{0}^{\infty}(\mathbb{R}^{d})\) such that \(\varphi(x)=1\) for all \(x\in U_{a}\), where_
\[U_{a}=\{x\in\mathbb{R}^{d}\,|\,\operatorname{dist}(x,U)<a\}.\]
_Then for every \(N\in\mathbb{N}\) it holds that_
\[\operatorname{Tr}\big{[}g_{\gamma}(H_{\hbar})\big{]}=\operatorname{Tr}\big{[} g_{\gamma}(H_{\hbar})\varphi\big{]}+C_{N}\hbar^{N},\]
_where the constant \(C_{N}\) depends on \(a\) and is uniform in \(V\), \(\nu\) and \(U\) satisfying (4.2)._
Proof.: Using linearity of the trace we have that
\[\operatorname{Tr}\big{[}g_{\gamma}(H_{\hbar})\big{]}=\operatorname{Tr}\big{[} g_{\gamma}(H_{\hbar})\varphi\big{]}+\operatorname{Tr}\big{[}g_{\gamma}(H_{ \hbar})(1-\varphi)\big{]}. \tag{4.3}\]
For the second term on the right hand side of (4.3) we calculate the trace in an orthonormal basis of eigenfunctions \(\psi_{n}\) of \(H_{\hbar}\) with corresponding eigenvalues \(E_{n}\).
\[\operatorname{Tr}\big{[}g_{\gamma}(H_{\hbar})(1-\varphi)\big{]}=\sum_{E_{n} \leq 0}\langle g_{\gamma}(H_{\hbar})(1-\varphi)\psi_{n},\psi_{n}\rangle=\sum_{E_ {n}\leq 0}g_{\gamma}(E_{n})\|\sqrt{1-\varphi}\psi_{n}\|_{L^{2}(\mathbb{R}^{d} )}^{2}. \tag{4.4}\]
To estimate the \(L^{2}\)-norms we let \(d(x)=\operatorname{dist}(x,U_{\frac{1}{2}a})\). For all \(x\in\operatorname{supp}(1-\varphi)\) we have that \(d(x)>0\) since \(\varphi(x)=1\) for all \(x\in U_{a}\). We get from Lemma 4.1 that there exists a constant \(C\) depending on \(a\) such that for all normalised eigenfunctions \(\psi_{n}\) with eigenvalue less than \(\frac{\nu}{4}\) we have the estimate
\[\|e^{\tilde{\delta}\hbar^{-1}d}\psi_{n}\|_{L^{2}(\mathbb{R}^{d})}\leq C, \tag{4.5}\]
where \(\tilde{\delta}=\frac{\sqrt{\nu}}{8}\) and \(C\) is uniform in \(V\), \(\nu\) and \(U\) satisfying (4.2). Using this estimate and the observations made for \(d(x)\) we get for all norms in (4.4) and all \(N\in\mathbb{N}\) the estimate
\[\begin{split}\|\sqrt{1-\varphi}\psi_{n}\|_{L^{2}(\mathbb{R}^{d})}^{2}&\leq\|\sqrt{1-\varphi}e^{-\tilde{\delta}\hbar^{-1}d}\|_{L^{\infty}(\mathbb{R}^{d})}^{2}\|e^{\tilde{\delta}\hbar^{-1}d}\psi_{n}\|_{L^{2}(\mathbb{R}^{d})}^{2}\\ &\leq C\big{\|}\sqrt{1-\varphi}\big{(}\tfrac{\hbar}{\tilde{\delta}d}\big{)}^{N}\big{(}\tfrac{\tilde{\delta}d}{\hbar}\big{)}^{N}e^{-\tilde{\delta}\hbar^{-1}d}\big{\|}_{L^{\infty}(\mathbb{R}^{d})}^{2}\leq C_{N}\hbar^{2N}.\end{split} \tag{4.6}\]
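For completeness we note how the last inequality in (4.6) is obtained: since \(\varphi=1\) on \(U_{a}\), we have \(d(x)\geq\frac{a}{2}\) on \(\operatorname{supp}(1-\varphi)\), and hence

\[\big{\|}\sqrt{1-\varphi}\big{(}\tfrac{\hbar}{\tilde{\delta}d}\big{)}^{N}\big{\|}_{L^{\infty}(\mathbb{R}^{d})}\leq\Big{(}\frac{2\hbar}{\tilde{\delta}a}\Big{)}^{N},\qquad\sup_{t\geq 0}t^{N}e^{-t}=N^{N}e^{-N}<\infty.\]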
Combining (4.4) with the estimate obtained in (4.6) we get for all \(N\in\mathbb{N}\) that
\[\operatorname{Tr}\big{[}g_{\gamma}(H_{\hbar})(1-\varphi)\big{]}\leq C_{N} \hbar^{2N}\sum_{E_{n}\leq 0}g_{\gamma}(E_{n})=C_{N}\hbar^{2N}\operatorname{Tr} \big{[}g_{\gamma}(H_{\hbar})\big{]}\leq\tilde{C}_{N}\hbar^{2N-d}, \tag{4.7}\]
where we in the last estimate have used Theorem 1.1. Combining (4.3) and (4.7) we obtain the desired estimate.
_Remark 4.3_.: When we apply the above lemma we need to ensure the constant is the same for the two cases we consider. To ensure this we use Theorem 1.1 as described in Remark 2.6 at the end of the proof.
The next lemma is a result on comparing phase-space integrals. Similar estimates are obtained with different methods in [13]. These are parts of larger proofs and not an independent lemma. The following Lemma is taken from [14], where it is Lemma 5.1.
**Lemma 4.4**.: _Suppose \(\Omega\subset\mathbb{R}^{d}\) is an open set and let \(\varphi\in C_{0}^{\infty}(\Omega)\). Moreover, let \(\varepsilon>0\), \(\hbar\in(0,\hbar_{0}]\) and \(V,V_{\varepsilon}\in L^{1}_{\text{loc}}(\mathbb{R}^{d})\cap C(\Omega)\). Suppose that_
\[\|V-V_{\varepsilon}\|_{L^{\infty}(\Omega)}\leq c\varepsilon^{k+\mu}. \tag{4.8}\]
_Then for \(\gamma\in[0,1]\) and \(\varepsilon\) sufficiently small it holds that_
\[\Big{|}\int_{\mathbb{R}^{2d}}[g_{\gamma}(p^{2}+V_{\varepsilon}(x))-g_{\gamma} (p^{2}+V(x))]\varphi(x)\,dxdp\Big{|}\leq C\varepsilon^{k+\mu}, \tag{4.9}\]
_where the constant \(C\) depends on the dimension and the numbers \(\gamma\) and \(c\) in (4.8)._
### Proof of main theorem
The proof of the main theorem is based on a multi-scale argument. Before we prove the main theorem by using these techniques we will recall the following crucial lemma.
**Lemma 4.5**.: _Let \(\Omega\subset\mathbb{R}^{d}\) be an open set and let \(l\) be a function in \(C^{1}(\bar{\Omega})\) such that \(l>0\) on \(\bar{\Omega}\) and assume that there exists \(\rho\) in \((0,1)\) such that_
\[|\nabla_{x}l(x)|\leq\rho, \tag{4.10}\]
_for all \(x\) in \(\Omega\)._
_Then_
* _There exists a sequence_ \(\{x_{k}\}_{k=0}^{\infty}\) _in_ \(\Omega\) _such that the open balls_ \(B(x_{k},l(x_{k}))\) _form a covering of_ \(\Omega\)_. Furthermore, there exists a constant_ \(N_{\rho}\)_, depending only on the constant_ \(\rho\)_, such that the intersection of more than_ \(N_{\rho}\) _balls is empty._
* _One can choose a sequence_ \(\{\varphi_{k}\}_{k=0}^{\infty}\) _such that_ \(\varphi_{k}\in C_{0}^{\infty}(B(x_{k},l(x_{k})))\) _for all_ \(k\) _in_ \(\mathbb{N}\)_. Moreover, for all multiindices_ \(\alpha\) _and all_ \(k\) _in_ \(\mathbb{N}\)__ \[|\partial_{x}^{\alpha}\varphi_{k}(x)|\leq C_{\alpha}l(x_{k})^{-|\alpha|},\] _and_ \[\sum_{k=1}^{\infty}\varphi_{k}(x)=1,\] _for all_ \(x\) _in_ \(\Omega\)_._
This lemma is taken from [18] where it is Lemma 5.4. The proof is analogous to the proof of [9, Theorem 1.4.10]. We are now ready to prove the main theorem.
Proof of Theorem 1.5.: Let \(H^{\pm}_{\hbar,\varepsilon}\) be the two framing operators constructed in Lemma 2.4, where we choose \(\varepsilon=\hbar^{1-\delta}\). For \(\gamma=0\) we choose \(\delta=\frac{\mu}{1+\mu}\) and if \(\gamma>0\) we choose \(\delta=\frac{1+\mu-\gamma}{2+\mu}\). Note that our assumptions on \(\mu\) will in all cases ensure that \(\delta\geq\frac{1}{3}\). Moreover, we get that
\[\varepsilon^{1+\mu}=\hbar\qquad\text{if }\gamma=0, \tag{4.11}\] \[\varepsilon^{2+\mu}=\hbar^{1+\gamma}\qquad\text{if }\gamma>0.\]
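For the reader's convenience, both claims follow from a direct computation with \(\varepsilon=\hbar^{1-\delta}\):

\[(1-\delta)(1+\mu)=\frac{1+\mu}{1+\mu}=1\quad(\gamma=0),\qquad(1-\delta)(2+\mu)=\frac{(1+\gamma)(2+\mu)}{2+\mu}=1+\gamma\quad(\gamma>0),\]

and \(\delta\geq\frac{1}{3}\) is equivalent to \(\mu\geq\frac{1}{2}\) in the first case and to \(\mu\geq\frac{3}{2}\gamma-\frac{1}{2}\) in the second, which is exactly what Theorem 1.5 assumes.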
Since we have that \(H^{-}_{\hbar,\varepsilon}\leq H_{\hbar}\leq H^{+}_{\hbar,\varepsilon}\) in the sense of quadratic forms, it follows from the min-max theorem that
\[\operatorname{Tr}\big{[}g_{\gamma}(H^{+}_{\hbar,\varepsilon})\big{]}\leq \operatorname{Tr}\big{[}g_{\gamma}(H_{\hbar})\big{]}\leq\operatorname{Tr} \big{[}g_{\gamma}(H^{-}_{\hbar,\varepsilon})\big{]}. \tag{4.12}\]
The aim is now to obtain spectral asymptotics for \(\operatorname{Tr}\big{[}g_{\gamma}(H^{+}_{\hbar,\varepsilon})\big{]}\) and \(\operatorname{Tr}\big{[}g_{\gamma}(H^{-}_{\hbar,\varepsilon})\big{]}\). The arguments will be analogous so we will drop the superscript \(\pm\) for the operator \(H^{\pm}_{\hbar,\varepsilon}\) in what follows. Let \(\varphi\in C_{0}^{\infty}(\mathbb{R}^{d})\) with \(\varphi(x)=1\) for all \(x\in\Omega_{\tilde{\nu},V_{\varepsilon}}\) and \(\operatorname{supp}(\varphi)\subset\Omega_{2\tilde{\nu},V_{\varepsilon}}\). Then applying Lemma 4.2 we obtain for all \(N\in\mathbb{N}\) that
\[\operatorname{Tr}\big{[}g_{\gamma}(H_{\hbar,\varepsilon})\big{]}=\operatorname {Tr}\big{[}g_{\gamma}(H_{\hbar,\varepsilon})\varphi\big{]}+C_{N}\hbar^{N}. \tag{4.13}\]
For the terms \(\operatorname{Tr}\big{[}g_{\gamma}(H_{\hbar,\varepsilon})\varphi\big{]}\) we use a multiscale argument such that we locally can apply Theorem 3.11. Recall that from Lemma 2.4 we have that
\[V_{\varepsilon}(x)=V_{\varepsilon}^{1}(x)+V^{2}(x)\pm C\varepsilon^{\tau+\mu},\]
where \(\mathrm{supp}(V^{2})\cap\Omega_{4\tilde{\nu},V_{\varepsilon}}=\emptyset\) and \(V^{1}_{\varepsilon}\in C^{\infty}_{0}(\mathbb{R}^{d})\). We let \(\varphi_{1}\in C^{\infty}_{0}(\mathbb{R}^{d})\) such that \(\varphi_{1}(x)=1\) for all \(x\in\Omega_{2\tilde{\nu},V_{\varepsilon}}\) and \(\mathrm{supp}(\varphi_{1})\subset\Omega_{4\tilde{\nu},V_{\varepsilon}}\). With this function we have that
\[\varphi_{1}(x)V^{\pm}_{\varepsilon}(x)=\varphi_{1}(x)(V^{1}_{\varepsilon}(x) \pm C\varepsilon^{\tau+\mu}). \tag{4.14}\]
Note that with these assumptions on \(\varphi_{1}(x)\) we have that \(\varphi_{1}(x)\varphi(x)=\varphi(x)\) for all \(x\in\mathbb{R}^{d}\). This observation ensures that when we define our localisation function \(l(x)\) below it is positive on the set \(\mathrm{supp}(\varphi)\). Before we define our localisation functions we remark that due to the continuity of \(V_{\varepsilon}\) on \(\Omega_{4\tilde{\nu},V_{\varepsilon}}\) there exists a number \(\epsilon>0\) such that
\[\mathrm{dist}(\mathrm{supp}(\varphi),\Omega^{c}_{2\tilde{\nu},V_{\varepsilon }})>\epsilon.\]
The number \(\epsilon\) is important for our localisation functions, as we need to ensure the supports are contained in the set \(\Omega_{2\tilde{\nu},V_{\varepsilon}}\). We let
\[l(x)=A^{-1}\sqrt{|\varphi_{1}(x)V_{\varepsilon}(x)|^{2}+\hbar^{\frac{4}{3}}} \quad\text{and}\quad f(x)=\sqrt{l(x)}.\]
where we choose \(A>0\) sufficiently large such that
\[l(x)\leq\frac{\epsilon}{9}\quad\text{and}\quad|\nabla l(x)|\leq\rho<\frac{1}{8} \tag{4.15}\]
for all \(x\in\overline{\mathrm{supp}(\varphi)}\). Note that due to our assumptions on \(V_{\varepsilon}\) we can choose \(A\) independent of \(\hbar\) and uniformly for \(\hbar\in(0,\hbar_{0}]\). Moreover, we have that
\[|\varphi_{1}(x)V_{\varepsilon}(x)|\leq Al(x) \tag{4.16}\]
for all \(x\in\mathbb{R}^{d}\). By Lemma 4.5 with the set \(\mathrm{supp}(\varphi)\) and the function \(l(x)\) there exists a sequence \(\{x_{k}\}_{k=1}^{\infty}\) in \(\mathrm{supp}(\varphi)\) such that \(\mathrm{supp}(\varphi)\subset\cup_{k\in\mathbb{N}}B(x_{k},l(x_{k}))\) and there exists a constant \(N_{\frac{1}{8}}\) such that at most \(N_{\frac{1}{8}}\) of the sets \(B(x_{k},l(x_{k}))\) can have a non-empty intersection. Moreover, there exists a sequence \(\{\varphi_{k}\}_{k=1}^{\infty}\) such that \(\varphi_{k}\in C^{\infty}_{0}(B(x_{k},l(x_{k})))\),
\[\big{|}\partial_{x}^{\alpha}\varphi_{k}(x)\big{|}\leq C_{\alpha}l(x_{k})^{-|\alpha|}\qquad\text{for all $\alpha\in\mathbb{N}_{0}^{d}$}, \tag{4.17}\]
and
\[\sum_{k=1}^{\infty}\varphi_{k}(x)=1\qquad\text{for all $x\in\mathrm{supp}(\varphi)$}.\]
We have that \(\cup_{k\in\mathbb{N}}B(x_{k},l(x_{k}))\) is an open covering of \(\mathrm{supp}(\varphi)\) and since this set is compact there exists a finite subset \(\mathcal{I}^{\prime}\subset\mathbb{N}\) such that
\[\mathrm{supp}(\varphi)\subset\bigcup_{k\in\mathcal{I}^{\prime}}B(x_{k},l(x_{k })).\]
In order to ensure that we have a finite partition of unity over the set \(\mathrm{supp}(\varphi)\) we define the set
\[\mathcal{I}=\bigcup_{j\in\mathcal{I}^{\prime}}\big{\{}k\in\mathbb{N}\,|\,B(x_ {k},l(x_{k}))\cap B(x_{j},l(x_{j}))\neq\emptyset\big{\}}.\]
Then we have that \(\mathcal{I}\) is still finite since at most \(N_{\frac{1}{8}}\) balls can have non-empty intersection. Moreover, we have that
\[\sum_{k\in\mathcal{I}}\varphi_{k}(x)=1\qquad\text{for all $x\in\mathrm{supp}(\varphi)$}.\]
From this we get the following identity
\[\operatorname{Tr}\big{[}\varphi g_{\gamma}(H_{h,\varepsilon})\big{]}=\sum_{k\in\mathcal{I}}\operatorname{Tr}\big{[}\varphi_{k}\varphi g_{\gamma}(H_{h,\varepsilon})\big{]}, \tag{4.18}\]
where we have used linearity of the trace. We will for the remaining part of the proof use the following notation
\[l_{k}=l(x_{k}),\quad f_{k}=f(x_{k}),\quad h_{k}=\frac{\hbar}{l_{k}f_{k}}\quad\text{and}\quad\varepsilon_{k}=h_{k}^{1-\delta}.\]
We have that \(h_{k}\) is uniformly bounded from above since
\[l(x)f(x)=A^{-\frac{3}{2}}(\left|\varphi_{1}(x)V_{\varepsilon}(x)\right|^{2}+ \hbar^{\frac{4}{3}})^{\frac{3}{4}}\geq A^{-\frac{3}{2}}\hbar,\]
for all \(x\). Moreover, since we by assumption have that \(\delta\geq\frac{1}{3}\) and \(l_{k}=f_{k}^{2}\) we obtain that
\[l_{k}\varepsilon^{-1}\leq\varepsilon_{k}^{-1} \tag{4.19}\]
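Indeed, assuming (as we may, by enlarging \(A\) in the definition of \(l\)) that \(l_{k}\leq 1\), we have

\[\varepsilon_{k}^{-1}=\Big{(}\frac{l_{k}f_{k}}{\hbar}\Big{)}^{1-\delta}=l_{k}^{\frac{3}{2}(1-\delta)}\hbar^{-(1-\delta)}\geq l_{k}\hbar^{-(1-\delta)}=l_{k}\varepsilon^{-1},\]

since \(\frac{3}{2}(1-\delta)\leq 1\) when \(\delta\geq\frac{1}{3}\).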
We define the two unitary operators \(U_{l}\) and \(T_{z}\) by
\[U_{l}f(x)=l^{\frac{d}{2}}f(lx)\quad\text{and}\quad T_{z}f(x)=f(x+z)\qquad \text{for }f\in L^{2}(\mathbb{R}^{d}).\]
Moreover we set
\[\tilde{H}_{\varepsilon,h_{k}}=f_{k}^{-2}(T_{x_{k}}U_{l_{k}})H_{\hbar,\varepsilon}(T_{x_{k}}U_{l_{k}})^{*}=-h_{k}^{2}\Delta+\tilde{V}_{\varepsilon}(x),\]
where \(\tilde{V}_{\varepsilon}(x)=f_{k}^{-2}V_{\varepsilon}(l_{k}x+x_{k})\). We will here need to establish that this rescaled operator satisfies the assumptions of Theorem 3.11 with \(h_{k}\), \(\varepsilon_{k}\) and the set \(B(0,8)\). To establish this we firstly observe that by (4.15) we have
\[(1-8\rho)l_{k}\leq l(x)\leq(1+8\rho)l_{k}\qquad\text{for all }x\in B(x_{k},8l_{k}). \tag{4.20}\]
We start by verifying that the operator \(\tilde{H}_{\varepsilon,h_{k}}\) satisfies Assumption 3.1. From Lemma 2.4 it follows that the operator \(\tilde{H}_{\varepsilon,h_{k}}\) is lower semibounded and selfadjoint. By our choice of \(\varphi_{1}\) we have that \(\tilde{H}_{\varepsilon,h_{k}}\) satisfies part two of Assumption 3.1 with the set \(B(0,8)\) and the potential
\[\widetilde{\varphi_{1}}\widetilde{V}_{\varepsilon}(x)=\varphi_{1}(l_{k}x+x_{ k})f_{k}^{-2}V_{\varepsilon}(l_{k}x+x_{k}), \tag{4.21}\]
where by (4.21) we have that \(\widetilde{\varphi_{1}}\widetilde{V}_{\varepsilon}(x)\in C_{0}^{\infty}( \mathbb{R}^{d})\). What remains to verify is that we have obtained the non-critical condition (3.18). Using (4.16) we have, for \(x\) in \(B(0,8)\), that
\[\left|\widetilde{\varphi_{1}}\widetilde{V}_{\varepsilon}(x) \right|+h_{k}^{\frac{2}{3}} =f_{k}^{-2}\left|\varphi_{1}V_{\varepsilon}(l_{k}x+x_{k})\right|+ (\tfrac{\hbar}{f_{k}l_{k}})^{\frac{2}{3}}=l_{k}^{-1}(\left|\varphi_{1}V_{ \varepsilon}(l_{k}x+x_{k})\right|+\hbar^{\frac{2}{3}})\] \[\geq l_{k}^{-1}Al(l_{k}x+x_{k})\geq(1-8\rho)A.\]
Hence we have obtained the non-critical condition on \(B(0,8)\), so all assumptions of Theorem 3.11 are fulfilled. Before applying it we verify that the numbers on which the constant from Theorem 3.11 depends are independent of \(k\) and \(\hbar\). Firstly, for the norm estimate of the potential we have that
\[\big{\|}\widetilde{\varphi_{1}}\widetilde{V}_{\varepsilon}\big{\|}_{L^{\infty}(B(0,8))}=\sup_{x\in B(0,8)}\left|\varphi_{1}(l_{k}x+x_{k})f_{k}^{-2}V_{\varepsilon}(l_{k}x+x_{k})\right|\leq(1+8\rho)A,\]
where we have used (4.16) and (4.20). When considering the derivatives we have for \(\alpha\in\mathbb{N}_{0}^{d}\) with \(|\alpha|\geq 1\) that
\[\varepsilon_{k}^{-\min(0,\tau-|\alpha|)}\|\partial^{\alpha} \widetilde{\varphi_{1}V}_{\varepsilon}\|_{L^{\infty}(\mathbb{R}^{d})}\\ \leq\varepsilon_{k}^{-\min(0,\tau-|\alpha|)}f_{k}^{-2}l_{k}^{| \alpha|}\varepsilon^{\min(0,\tau-|\alpha|)}\sup_{x\in\mathbb{R}^{d}}\sum_{ \beta\leq\alpha}\binom{\alpha}{\beta}|(\partial^{\alpha-\beta}\varphi_{1})( \partial^{\beta}V_{\varepsilon})(l_{k}x+x_{k})| \tag{4.22}\]
\[\leq C_{\alpha},\]
where \(C_{\alpha}\) is independent of \(k\) and \(\hbar\). In the estimate we have used the definition of \(\varepsilon_{k}\), \(f_{k}\), (4.19) and Proposition 2.3. Hence all these estimates are independent of \(\hbar\) and \(k\). The last numbers we check are \(\|\partial_{x}^{\alpha}\widetilde{\varphi_{k}\varphi}\|_{L^{\infty}(\mathbb{ R}^{d})}\) for all \(\alpha\in\mathbb{N}_{0}^{d}\), where \(\widetilde{\varphi_{k}\varphi}=(T_{x_{k}}U_{l_{k}})\varphi_{k}\varphi(T_{x_{ k}}U_{l_{k}})^{*}\). Here, by the construction of \(\varphi_{k}\) in (4.17), we have for all \(\alpha\in\mathbb{N}_{0}^{d}\)
\[\|\partial_{x}^{\alpha}\widetilde{\varphi_{k}\varphi}\|_{L^{ \infty}(\mathbb{R}^{d})} =\sup_{x\in\mathbb{R}^{d}}\left|l_{k}^{|\alpha|}\sum_{\beta\leq \alpha}\binom{\alpha}{\beta}(\partial_{x}^{\beta}\varphi_{k})(l_{k}x+x_{k})( \partial_{x}^{\alpha-\beta}\varphi)(l_{k}x+x_{k})\right|\] \[\leq C_{\alpha}\sup_{x\in\mathbb{R}^{d}}\sum_{\beta\leq\alpha} \binom{\alpha}{\beta}l_{k}^{|\alpha-\beta|}\left|(\partial_{x}^{\alpha-\beta} \varphi)(l_{k}x+x_{k})\right|\leq\widetilde{C}_{\alpha}.\]
With this we have established that all numbers on which the constant from Theorem 3.11 depends are independent of \(\hbar\) and \(k\). Applying Theorem 3.11 we get that
\[\big{|}\operatorname{Tr}\big{[}\varphi g_{\gamma}(H_{\varepsilon,h})\big{]}-\frac{1}{(2\pi\hbar)^{d}}\int_{\mathbb{R}^{2d}}g_{\gamma}(p^{2}+V(x))\varphi(x)\,dxdp\big{|} \tag{4.23}\] \[\leq\,\sum_{k\in\mathcal{I}}f_{k}^{2\gamma}\big{|}\operatorname{Tr}\big{[}g_{\gamma}(\tilde{H}_{\varepsilon,h_{k}})\widetilde{\varphi_{k}\varphi}\big{]}-\frac{1}{(2\pi h_{k})^{d}}\int_{\mathbb{R}^{2d}}g_{\gamma}(p^{2}+\tilde{V}(x))\widetilde{\varphi_{k}\varphi}(x)\,dxdp\big{|}\] \[\leq\,\sum_{k\in\mathcal{I}}f_{k}^{2\gamma}\big{|}\operatorname{Tr}\big{[}g_{\gamma}(\tilde{H}_{\varepsilon,h_{k}})\widetilde{\varphi_{k}\varphi}\big{]}-\frac{1}{(2\pi h_{k})^{d}}\int_{\mathbb{R}^{2d}}g_{\gamma}(p^{2}+\widetilde{\varphi_{1}V_{\varepsilon}}(x))\widetilde{\varphi_{k}\varphi}(x)\,dxdp\big{|}\] \[\quad+\sum_{k\in\mathcal{I}}\frac{f_{k}^{2\gamma}}{(2\pi h_{k})^{d}}\Big{|}\int_{\mathbb{R}^{2d}}\big{[}g_{\gamma}(p^{2}+\widetilde{\varphi_{1}V_{\varepsilon}}(x))\widetilde{\varphi_{k}\varphi}(x)-g_{\gamma}(p^{2}+\tilde{V}(x))\big{]}\widetilde{\varphi_{k}\varphi}(x)\,dxdp\Big{|}\] \[\leq C\sum_{k\in\mathcal{I}}\frac{f_{k}^{2\gamma}}{h_{k}^{d}}\Big{[}h_{k}^{1+\gamma}+\Big{|}\int_{\mathbb{R}^{2d}}\big{[}g_{\gamma}(p^{2}+\widetilde{\varphi_{1}V_{\varepsilon}}(x))\widetilde{\varphi_{k}\varphi}(x)-g_{\gamma}(p^{2}+\tilde{V}(x))\big{]}\widetilde{\varphi_{k}\varphi}(x)\,dxdp\Big{|}\Big{]}\]
To estimate the remaining integrals we will use Lemma 4.4. Combining this Lemma with (4.11) we obtain that
\[\Big{|}\int_{\mathbb{R}^{2d}}\big{[}g_{\gamma}(p^{2}+\widetilde{\varphi_{1}V_ {\varepsilon}}(x))\widetilde{\varphi_{k}\varphi}(x)-g_{\gamma}(p^{2}+\tilde{V} (x))\big{]}\widetilde{\varphi_{k}\varphi}(x)\,dxdp\Big{|}\leq C\hbar^{1+ \gamma}\leq Ch_{k}^{1+\gamma}. \tag{4.24}\]
Hence we obtain from combining (4.23) and (4.24) that
\[\big{|}\operatorname{Tr}\big{[}\varphi g_{\gamma}(H_{\varepsilon,h})\big{]}- \frac{1}{(2\pi\hbar)^{d}}\int_{\mathbb{R}^{2d}}g_{\gamma}(p^{2}+V(x)) \varphi(x)\,dxdp\big{|}\leq C\sum_{k\in\mathcal{I}}f_{k}^{2\gamma}h_{k}^{1+ \gamma-d}. \tag{4.25}\]
Considering the sum over \(k\) on the right-hand side of (4.25) and using (4.20), we have that
\[\sum_{k\in\mathcal{I}}Ch_{k}^{1+\gamma-d}f_{k}^{2\gamma} =\sum_{k\in\mathcal{I}}\tilde{C}\hbar^{1+\gamma-d}\int_{B(x_{k},l_{ k})}l_{k}^{-d}f_{k}^{2\gamma}(l_{k}f_{k})^{d-1-\gamma}\,dx\] \[=\sum_{k\in\mathcal{I}}\tilde{C}\hbar^{1+\gamma-d}\int_{B(x_{k},l _{k})}l_{k}^{\gamma-d}l_{k}^{\frac{3d-3-\gamma}{2}}\,dx \tag{4.26}\] \[\leq\sum_{k\in\mathcal{I}}\hat{C}\hbar^{1+\gamma-d}\int_{B(x_{k}, l_{k})}l(x)^{\frac{d-3-\gamma}{2}}\,dx\leq C\hbar^{1+\gamma-d}\]
where in the last inequality we have used that \(\operatorname{supp}(\varphi)\subset\Omega_{2\tilde{\nu},V_{\varepsilon}}\) and that \(\Omega_{2\tilde{\nu},V_{\varepsilon}}\) is assumed to be compact. This ensures that the constant obtained in the last inequality is finite. Combining the estimates and identities in (4.12), (4.13), (4.25) and (4.26) we obtain that
\[\Big{|}\operatorname{Tr}\big{[}\mathbf{1}_{(-\infty,0]}(H_{h,\varepsilon}) \big{]}-\frac{1}{(2\pi\hbar)^{d}}\int_{\mathbb{R}^{2d}}\mathbf{1}_{(-\infty, 0]}(p^{2}+V_{\varepsilon}(x))\,dxdp\Big{|}\leq C\hbar^{1-d}\]
for all \(\hbar\in(0,\hbar_{0}]\). This concludes the proof.
Proof of Theorem 1.6 and Theorem 1.7.: The proofs are almost analogous to the proof just given for Theorem 1.5. The difference is that \(\delta\) is here always chosen to be \(\frac{1}{3}\) when fixing the scaling of the framing operators \(H_{h,\varepsilon}^{\pm}\) with \(\varepsilon=\hbar^{1-\delta}\). After this choice the remainder of the proof is identical. This concludes the proof.
|
2301.00233 | Lagrangians Manifesting Color-Kinematics Duality in the NMHV Sector of
Yang-Mills | Scattering amplitudes in Yang-Mills theory are known to exhibit kinematic
structures which hint to an underlying kinematic algebra that is dual to the
gauge group color algebra. This color-kinematics duality is still poorly
understood in terms of conventional Feynman rules, or from a Lagrangian
formalism. In this work, we present explicit Lagrangians whose Feynman rules
generate duality-satisfying tree-level BCJ numerators, to any multiplicity in
the next-to-MHV sector of pure Yang Mills theory. Our Lagrangians make use of
at most three pairs of auxiliary fields (2,1,0-forms) -- surprisingly few
compared to previous attempts of Lagrangians at low multiplicities. To restrict
the Lagrangian freedom it is necessary to make several non-trivial assumptions
regarding field content, kinetic terms, and interactions, which we discuss in
some detail. Future progress likely hinges on relaxing these assumptions. | Maor Ben-Shahar, Lucia Garozzo, Henrik Johansson | 2022-12-31T15:55:41Z | http://arxiv.org/abs/2301.00233v2 | # Lagrangians Manifesting Color-Kinematics Duality in the NMHV Sector of Yang-Mills
###### Abstract
Scattering amplitudes in Yang-Mills theory are known to exhibit kinematic structures which hint to an underlying kinematic algebra that is dual to the gauge group color algebra. This color-kinematics duality is still poorly understood in terms of conventional Feynman rules, or from a Lagrangian formalism. In this work, we present explicit Lagrangians whose Feynman rules generate duality-satisfying tree-level BCJ numerators, to any multiplicity in the next-to-MHV sector of pure Yang Mills theory. Our Lagrangians make use of at most three pairs of auxiliary fields (2,1,0-forms) - surprisingly few compared to previous attempts of Lagrangians at low multiplicities. To restrict the Lagrangian freedom it is necessary to make several non-trivial assumptions regarding field content, kinetic terms, and interactions, which we discuss in some detail. Future progress likely hinges on relaxing these assumptions.
## 1 Introduction
Scattering amplitudes offer valuable insight to the mathematical structures of quantum field theory and gravity by uncovering patterns that are not readily apparent through standard Lagrangian techniques. The Bern-Carrasco-Johansson (BCJ) duality between color and kinematics [1; 2; 3] is a clear example where on-shell formulations preceded any Lagrangian understanding. According to the duality, scattering amplitudes in many gauge theories can be represented using cubic diagrams, each consisting of a kinematic numerator and a color factor that obey isomorphic relations. The color factors draw their characteristics from the Lie algebra of the gauge group, whereas the mathematical structure of the numerators is believed to originate from an unknown kinematic Lie algebra. See recent reviews [3; 4; 5; 6; 7; 8; 9].
In massless purely adjoint gauge theories, such as pure Yang-Mills (YM) theory, color-kinematics duality can be rephrased at tree level as the presence of BCJ amplitude relations [1; 10; 11; 12; 13]. The duality and amplitude relations were initially identified in pure YM [1] and its supersymmetric generalizations [10; 11; 14; 2], but have since been discovered in a
range of other gauge theories, including matter representations [15; 16; 17; 18; 19; 20; 21; 22; 23], higher-derivative interactions [24; 25; 26; 27; 28; 29; 30; 31; 32; 33; 34; 35] or Chern-Simons fields [36; 37; 38; 39; 40], and massive gauge theories [18; 19; 21; 41; 42; 43; 44; 45; 46; 47; 48; 49; 50; 51; 52; 53; 54; 55]. Surprisingly, also certain scalar effective field theories [56; 57; 58; 59; 60; 61; 62; 63; 64; 65] obey the duality. The duality has been extended to loop-level amplitudes [66; 67; 68; 69; 70; 71; 72; 73; 74; 75; 76; 77; 78; 79; 80; 81], form factors [92; 93; 94; 95; 96; 97; 98; 99; 100; 101], and even to curved-space correlators [102; 103; 104; 105; 106; 107; 108; 109; 110; 111; 112; 113; 114; 115; 116; 117; 118; 119; 120]. There have been various approaches to understanding the existence of color-kinematics duality and BCJ relations in massless gauge theories, including string theory, scattering equations, positive geometry [121; 122; 123; 124; 125; 126; 127; 128; 129; 130; 140; 15; 160; 17; 18; 19; 200; 201; 202; 203; 204; 205; 206; 207; 208; 209; 210; 211; 212; 213; 214; 215; 216].
Color-kinematics duality is intimately connected to the double-copy structure of gravity [1; 2]. For non-abelian gauge theories that obey color-kinematics duality, and contain physical spin-1 gluons, the double copy gives gravitational interactions after taking each cubic diagram and replacing the color factor by a second numerator copy. The double copy provides a versatile generalization of the Kawai-Lewellen-Tye (KLT) relations [127], which were originally derived for string theory at genus zero (for higher-genus generalizations see e.g. [82; 84; 128; 129; 130; 131; 132; 133; 134]). Through color-kinematics duality the double copy permits the construction of gravity loop amplitudes [135; 136; 137; 138; 139; 140; 141; 142; 143; 144; 145; 146; 147; 148; 149; 2, 75; 140; 144; 145; 146; 147; 148; 149; 2, 76; 150; 151; 152; 153; 154; 155; 156; 157; 158; 159; 2
decade [216; 217; 218; 219; 220; 221; 222; 14; 222] using non-local interaction terms or auxiliary fields. The early attempts produced Feynman rules capable of computing local BCJ numerators up to five [14] and six points [216]. However, the non-uniqueness of the BCJ numerators led to a proliferation of ambiguities in the resulting Lagrangians. In hindsight, it is clear that in a local formalism the ambiguities are absent in the MHV sector and first appear in the NMHV sector [206]. They can be traced back to the so-called generalized gauge freedom of the BCJ numerators [1; 2], which include the standard gauge freedom and non-linear field re-definitions. Understanding how to constrain this freedom and write down broadly valid and practical YM Lagrangians that manifest the duality is the topic of this paper.
Recently there has been an upswing in formulations that attempts to approach the problem of realizing YM color-kinematics duality and double copy off shell [222; 223; 224; 225; 226; 227; 228; 229; 230; 231; 232; 233; 234]. This involves understanding the role of BRST symmetries, homotopy algebras, double field theory, equations of motions and other off-shell Langrangian perspectives. We note that, in principle, the problem has been solved using the 10D pure-spinor formalism [222]; however, the practical usefulness of this duality-satisfying supersymmetric YM Lagrangian for advanced calculations remains to be understood.
In this paper, we approach the problem of finding duality-satisfying YM Lagrangians head on, by making a suitable small ansatz for the NMHV sector Lagrangian where the first ambiguities show up. We work in general dimension using covariant building blocks, and thus we generalize the 4D notion of helicity sectors by grading the tree-level numerators by the "polarization power" [206; 207], i.e. the number of inner products between polarization vectors \(\varepsilon_{i}\cdot\varepsilon_{j}\). Thus the NMHV sector corresponds to numerators with at most two polarization powers \((\varepsilon_{i}\cdot\varepsilon_{j})^{2}\); or, equivalently, one power of momentum inner products \(p_{i}\cdot p_{j}\). The construction of the ansatz is aided by the observation that there exists a bi-scalar subsector1 of the NMHV sector that has particularly simple BCJ numerators. This bi-scalar subsector was also considered in ref. [206] since it generates master numerators from which all other NMHV numerators can be determined. We find that the bi-scalar numerators can be computed using an exact rewriting of the standard YM tree-level Lagrangian, by "integrating in" a pair of auxiliary 2-form fields \(\{B^{\mu\nu},\tilde{B}^{\mu\nu}\}\) with additional simple cubic interactions.
Footnote 1: Terms in the half-ladder BCJ numerator proportional to a fixed polarization product \(\varepsilon_{1}\cdot\varepsilon_{n}\), which effectively becomes two scalars interacting with the YM field.
From the bi-scalar numerators one can obtain the remaining contributions to the NMHV numerators using a simple formula; however, we also demand that those contributions come directly from a Lagrangian. This required us either to use clever inspection and identification of the needed new interaction terms, aided by pictorial diagrams that expose the tensor structures of intermediate fields, or, alternatively, a brute-force Lagrangian ansatz where the assumptions are more clearly spelled out. We find several interesting solutions which involve a pair of auxiliary vector fields \(\{Z^{\mu},\tilde{Z}^{\mu}\}\) and a pair of auxiliary scalar fields \(\{X,\tilde{X}\}\). We show that by considering an extended ansatz the need for scalar auxiliary fields is not obvious, as there exist interesting Lagrangian solutions where they are absent. Further relaxing the assumptions that went into the construction may provide simpler NMHV Lagrangians, but we leave this for future work.
the N\({}^{2}\)MHV sector, and also spell out some details about the possible generalization of the Lagrangian Feynman rules to one-loop calculations.
The paper is organized as follows: In section 2, we introduce the necessary notation used for describing amplitudes and numerators that satisfy color-kinematics duality, including decomposition into partial amplitudes, bi-scalar half-ladder numerators and useful pictorial diagrams. In section 3, we consider a cubic Lagrangian for computing MHV numerators, obtained by truncating the standard YM action. In section 4, we transform the standard YM Lagrangian by integrating in a pair of two-form auxiliary fields and then we add interactions and further auxiliary fields using both diagrammatic and ansatz approaches. In section 5, we briefly discuss consequences for one-loop numerators. Conclusions and outlook are given in section 6.
## 2 Graphs, Numerators and Tree Amplitudes
Here we will set the notation used for the amplitude and numerator building blocks, and discuss the decompositions and diagrammatic notation that are convenient for later sections.
### Color-kinematics duality and double copy
Scattering amplitudes in YM theory can be represented diagrammatically as a sum over cubic Feynman-like graphs [1; 2]; for \(n\)-point tree-level amplitudes this takes the form
\[\mathcal{A}_{n}=g^{n-2}\sum_{\Gamma\in\mathcal{G}_{n}}\frac{C_{\Gamma}N_{ \Gamma}}{D_{\Gamma}}\,. \tag{1}\]
Each cubic graph \(\Gamma\) is associated with a color factor \(C_{\Gamma}\), kinematic numerator \(N_{\Gamma}\) and propagator denominator \(D_{\Gamma}\). Here the set of cubic \(n\)-point graphs is denoted by \(\mathcal{G}_{n}\), and it can be constructed recursively starting from the three-point case, \(\mathcal{G}_{3}\), which contains only one such graph \(\Gamma=[1,2]\). At \(n\) points the set of cubic graphs is given by
\[\mathcal{G}_{n}=\left\{\Gamma|_{\ell\to[\ell,\,n-1]}\ \Big{|}\ \ell\in \Gamma\in\mathcal{G}_{n-1}\right\}, \tag{2}\]
where \(\ell\) denotes a Lie-valued element of the graph \(\Gamma\). For example, the graph \(\Gamma=[1,2]\) has three such elements, the external leg labels \(\ell=1\), \(\ell=2\) and the commutator of the labels \(\ell=[1,2]\). Applying the rule \(\ell\to[\ell,3]\) then gives three four-point graphs represented by nested commutators [207; 235; 206]
\[\mathcal{G}_{4}=\left\{[[1,3],2],\ [1,[2,3]],\ [[1,2],3]\right\}. \tag{3}\]
Each one of the four-point graphs have five Lie-valued elements (three labels, and two commutators), thus the total number of five-point graphs is \(|\mathcal{G}_{5}|=3\times 5\). In general, at \(n\) points, the number of cubic graphs is \(|\mathcal{G}_{n}|=3\times 5\times 7\times\cdots\times(2n-5)=(2n-5)!!\).
With the above graph notation the propagator denominator can be written as
\[D_{\Gamma}=\prod_{\begin{subarray}{c}\ell\in\Gamma\\ \ell\neq\mathbb{I},\ell\neq\Gamma\end{subarray}}p_{\ell}^{2}\,, \tag{4}\]
where \(p_{\ell}\) is the sum of the momentum of all the external legs in the nested commutator \(\ell\). For example, \(p_{[1,2]}=p_{1}+p_{2}\) and \(p_{[[1,3],2]}=p_{1}+p_{2}+p_{3}\), etc. Since the external legs are on shell \(p_{1}^{2},\ldots,p_{n}^{2}=0\), we do not include cases where \(\ell\) is an integer, nor where it is the graph \(\Gamma\), since by momentum conservation \(p_{\Gamma}=-p_{n}\).
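The recursion (2) and the propagator denominators (4) are easy to realize programmatically. The Python sketch below is one possible encoding (graphs as nested tuples and the helper names are illustrative choices, not notation from the text):

```python
from math import prod

def attach(tree, leg):
    """All graphs obtained from `tree` by the rule l -> [l, leg] of eq. (2), applied to
    every Lie-valued element l (external labels and nested commutators, including the
    full tree).  Graphs are nested tuples ('comm', X, Y); integers are external legs."""
    results = [('comm', tree, leg)]                  # replace the element `tree` itself
    if isinstance(tree, tuple):
        _, L, R = tree
        results += [('comm', newL, R) for newL in attach(L, leg)]
        results += [('comm', L, newR) for newR in attach(R, leg)]
    return results

def cubic_graphs(n):
    """The set G_n of cubic n-point tree graphs, built recursively from G_3 = {[1,2]}."""
    graphs = [('comm', 1, 2)]
    for leg in range(3, n):
        graphs = [g2 for g in graphs for g2 in attach(g, leg)]
    return graphs

def leaves(t):
    return {t} if isinstance(t, int) else leaves(t[1]) | leaves(t[2])

def propagator_subsets(graph):
    """Leg subsets S contributing a factor p_S^2 to D_Gamma in eq. (4): internal
    nested commutators, excluding single legs and the full graph itself."""
    subsets = []
    def walk(t, is_root):
        if isinstance(t, tuple):
            if not is_root:
                subsets.append(sorted(leaves(t)))
            walk(t[1], False)
            walk(t[2], False)
    walk(graph, True)
    return subsets

for n in range(3, 8):                                 # counts match (2n-5)!!
    print(n, len(cubic_graphs(n)), prod(range(2*n - 5, 0, -2)))
print(cubic_graphs(5)[0], propagator_subsets(cubic_graphs(5)[0]))
```

Running the sketch reproduces the counting \(|\mathcal{G}_{n}|=(2n-5)!!\) quoted above and, for instance, assigns the five-point half-ladder \([[[1,2],3],4]\) the propagator subsets \(\{1,2\}\) and \(\{1,2,3\}\), i.e. \(D_{\Gamma}=p_{[1,2]}^{2}\,p_{[[1,2],3]}^{2}\).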
The color factors for purely-adjoint YM theory correspond to rank-\(n\) tensors in the gauge-group Lie algebra, which can be constructed through contraction of structure constants, \(f^{abc}\), or as traces of products of generators \(T^{a}\). The latter case is most transparent in our graph notation, and an explicit formula can be given
\[C_{\Gamma}=C_{\Gamma}^{a_{1}a_{2}\cdots a_{n}}=\mathrm{Tr}\big{\{}(\Gamma|_{i \to T^{a_{i}}})\,T^{a_{n}}\big{\}}\,, \tag{5}\]
where the external legs \(i=1,\ldots n-1\) of the graph \(\Gamma\) are replaced by Lie-algebra generators, thus justifying the graph notation as nested commutators of Lie-valued elements. For example, the three-point graph \(\Gamma=[1,2]\) has the color factor
\[C_{[1,2]}=\mathrm{Tr}\big{\{}[T^{a_{1}},T^{a_{2}}]T^{a_{3}}\big{\}}=f^{a_{1}a _{2}a_{3}}\,. \tag{6}\]
Here, and throughout the paper, we use the normalization conventions \([T^{a},T^{b}]=f^{abc}T^{c}\) and \(\mathrm{Tr}\big{\{}T^{a}T^{b}\big{\}}=\delta^{ab}\).
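These conventions can be checked on a small example. The sketch below uses \(SU(2)\) with \(T^{a}=\sigma^{a}/\sqrt{2}\) (an illustrative choice; with Hermitian generators and this bracket normalization the structure constants come out imaginary, a convention artifact that does not affect any of the identities used here) and verifies the trace normalization, the extraction \(f^{abc}=\mathrm{Tr}\{[T^{a},T^{b}]T^{c}\}\) of eq. (6), and the color Jacobi identity underlying eq. (9).

```python
import numpy as np

# su(2) example: Hermitian generators T^a = sigma^a / sqrt(2) give Tr(T^a T^b) = delta^{ab}
sig = [np.array([[0, 1], [1, 0]], dtype=complex),
       np.array([[0, -1j], [1j, 0]], dtype=complex),
       np.array([[1, 0], [0, -1]], dtype=complex)]
T = [s/np.sqrt(2) for s in sig]

def comm(a, b):
    return a @ b - b @ a

gram = np.array([[np.trace(T[a] @ T[b]) for b in range(3)] for a in range(3)])
print(np.allclose(gram, np.eye(3)))                       # normalization Tr(T^a T^b) = delta^{ab}

# structure constants extracted as in eq. (6): f^{abc} = Tr([T^a, T^b] T^c)
f = np.array([[[np.trace(comm(T[a], T[b]) @ T[c]) for c in range(3)]
               for b in range(3)] for a in range(3)])

closure = all(np.allclose(comm(T[a], T[b]), sum(f[a, b, c]*T[c] for c in range(3)))
              for a in range(3) for b in range(3))
print(closure)                                            # [T^a, T^b] = f^{abc} T^c
print(np.allclose(f, -np.transpose(f, (1, 0, 2))))        # antisymmetry in the first pair

# the color Jacobi identity mirrored by eq. (9): f^{abe}f^{ecd} + cyclic(a,b,c) = 0
jac = (np.einsum('abe,ecd->abcd', f, f) + np.einsum('bce,ead->abcd', f, f)
       + np.einsum('cae,ebd->abcd', f, f))
print(np.allclose(jac, 0))                                # True
```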
Finally the numerators \(N_{\Gamma}\) capture all the remaining dependence on the kinematic data of the YM tree amplitude, and in this paper we assume that they are _local_ polynomials of the polarization vectors \(\varepsilon_{i}\) and momenta \(p_{i}\). To describe gluon scattering in pure YM theory, the polynomials must be multi-linear in the individual polarizations and of degree-\((n-2)\) in the momenta. It implies that the contributing monomial terms have the schematic form
\[N_{\Gamma}\ \sim\ \sum_{k=1}^{\lfloor n/2\rfloor}\,(\varepsilon\cdot \varepsilon)^{k}\,(p\cdot p)^{k-1}\,(\varepsilon\cdot p)^{n-2k}\,. \tag{7}\]
For example, for \(n=3\) points the numerator is equivalent to the color-stripped cubic Feynman rule of YM,
\[N_{[1,2]}=\varepsilon_{1}\cdot\varepsilon_{2}\,\varepsilon_{3}\cdot(p_{1}-p_{ 2})+\mathrm{cyclic}(1,2,3)\,. \tag{8}\]
It satisfies the same dihedral permutation symmetries \(N_{[2,1]}=-N_{[1,2]}\), \(N_{[2,3]}=N_{[3,1]}=N_{[1,2]}\) as the color factor \(C_{[1,2]}=f^{a_{1}a_{2}a_{3}}\).
Due to the Jacobi identity of the gauge-group Lie algebra, there are certain cubic relations between color factors. The statement of the color-kinematics duality [1, 2] is that for a large class of gauge theories there exists a choice of so-called BCJ numerators that obey the same relations as the color factors,
\[C_{\cdots[[X,Y],Z]\cdots}+C_{\cdots[[Y,Z],X]\cdots}+C_{\cdots[[Z, X],Y]\cdots}=0\] \[\Leftrightarrow\] \[N_{\cdots[[X,Y],Z]\cdots}+N_{\cdots[[Y,Z],X]\cdots}+N_{\cdots[[Z, X],Y]\cdots}=0\,. \tag{9}\]
A four-point YM numerator that satisfies this property is [3]
\[N_{[[1,2],3]} = \big{(}\varepsilon_{1}\!\cdot\!\varepsilon_{2}p_{1}^{\mu}+2 \varepsilon_{1}\!\cdot\!p_{2}\varepsilon_{2}^{\mu}-(1\leftrightarrow 2)\big{)} \big{(}\varepsilon_{3}\!\cdot\!\varepsilon_{4}p_{3\mu}+2\varepsilon_{3}\! \cdot\!p_{4}\varepsilon_{4\mu}-(3\leftrightarrow 4)\big{)} \tag{10}\] \[+\,s_{12}(\varepsilon_{1}\!\cdot\!\varepsilon_{3}\varepsilon_{2} \!\cdot\!\varepsilon_{4}-\varepsilon_{1}\!\cdot\!\varepsilon_{4}\varepsilon_{2} \!\cdot\!\varepsilon_{3})\,.\]
It is straightforward to check that, subject to on-shell conditions and momentum conservation, the numerator satisfies the Jacobi relation
\[N_{[[1,2],3]}+N_{[[2,3],1]}+N_{[[3,1],2]}=0\,, \tag{11}\]
as well as the permutation symmetries
\[N_{[[2,1],3]}=-N_{[[1,2],3]}\,,\ \ \ \ N_{[[4,3],2]}=N_{[[1,2],3]}\,. \tag{12}\]
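These properties are straightforward to confirm numerically. The sketch below evaluates eq. (10) at a particular massless four-point phase-space point with transverse polarizations (the kinematic point and polarization vectors are illustrative choices) and checks the Jacobi relation (11) and the symmetries (12) to machine precision.

```python
import numpy as np

eta = np.diag([1.0, -1.0, -1.0, -1.0])
def mdot(a, b):
    return a @ eta @ b

# a massless four-point phase-space point (momenta sum to zero, all on shell)
E, th = 1.0, 0.7
p = {1: np.array([E, 0.0, 0.0,  E]),
     2: np.array([E, 0.0, 0.0, -E]),
     3: -np.array([E,  E*np.sin(th), 0.0,  E*np.cos(th)]),
     4: -np.array([E, -E*np.sin(th), 0.0, -E*np.cos(th)])}
# transverse polarization choices, eps_i . p_i = 0 (any such choice works)
eps = {1: np.array([0.0, 1.0, 0.0, 0.0]),
       2: np.array([0.0, 0.0, 1.0, 0.0]),
       3: np.array([0.0, 0.0, 1.0, 0.0]),
       4: np.array([0.0, np.cos(th), 0.0, -np.sin(th)])}

def N(a, b, c, d):
    """The half-ladder numerator N_{[[a,b],c]} of eq. (10), with d the remaining leg."""
    F1 = (mdot(eps[a], eps[b])*p[a] + 2*mdot(eps[a], p[b])*eps[b]
          - mdot(eps[b], eps[a])*p[b] - 2*mdot(eps[b], p[a])*eps[a])
    F2 = (mdot(eps[c], eps[d])*p[c] + 2*mdot(eps[c], p[d])*eps[d]
          - mdot(eps[d], eps[c])*p[d] - 2*mdot(eps[d], p[c])*eps[c])
    s_ab = mdot(p[a] + p[b], p[a] + p[b])
    return mdot(F1, F2) + s_ab*(mdot(eps[a], eps[c])*mdot(eps[b], eps[d])
                                - mdot(eps[a], eps[d])*mdot(eps[b], eps[c]))

print(N(1, 2, 3, 4) + N(2, 3, 1, 4) + N(3, 1, 2, 4))   # Jacobi relation (11): ~0
print(N(2, 1, 3, 4) + N(1, 2, 3, 4))                   # antisymmetry in eq. (12): ~0
print(N(4, 3, 2, 1) - N(1, 2, 3, 4))                   # relabeling symmetry in eq. (12): ~0
```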
In general, it is a difficult task to find YM numerators that satisfy color-kinematics duality. However, efforts during the last decade have led to several different constructions of all-multiplicity numerators with various properties [198, 199, 192, 193, 194, 195, 196, 200, 237]. Nevertheless, until now it has not been known how to construct such numerators directly from a duality-satisfying Lagrangian or Feynman rules. We will attempt to address this problem in section 4.
The color factors in eq. (1) can be substituted by a second copy of color-kinematics satisfying numerators \(C_{\Gamma}\to\tilde{N}_{\Gamma}\), which gives the double-copy formula [1, 2]
\[\mathcal{M}_{n}=\Big{(}\frac{\kappa}{2}\Big{)}^{n-2}\sum_{\Gamma\in\mathcal{ G}_{n}}\frac{N_{\Gamma}\tilde{N}_{\Gamma}}{D_{\Gamma}}\,, \tag{13}\]
where, for dimensional consistency, we also substituted the couplings \(g\to\kappa/2\). The two sets of numerators can belong to either the same or different theories. If both sets of numerators belong to theories that contain physical spin-1 gauge fields, then the double-copy formula gives an amplitude for spin-2 fields in some theory of gravity. This follows from the fact that the double copy automatically gives diffeomorphism invariant amplitudes. To see this, consider a linearized gauge transformation acting on one of the polarizations of the gauge theory amplitude, \(\delta\varepsilon_{i}=p_{i}\). By definition, we expect that \(\delta\mathcal{A}_{n}=0\), from which it follows that it must be true that
\[\sum_{\Gamma\in\mathcal{G}_{n}}\frac{C_{\Gamma}\,\delta N_{\Gamma}}{D_{\Gamma }}=0\,, \tag{14}\]
where \(\delta N_{\Gamma}\) are the gauge transformed numerators. The fact that this vanishes does not depend on the details of the color factors, only on the fact that they come from a Lie algebra and thus satisfy Jacobi identities. If we have kinematic numerators that satisfy the same Jacobi identities, then it also follows that [1, 2, 238]
\[\sum_{\Gamma\in\mathcal{G}_{n}}\frac{\tilde{N}_{\Gamma}\,\delta N_{\Gamma}} {D_{\Gamma}}=0\,, \tag{15}\]
and likewise for \(N_{\Gamma}\) and \(\tilde{N}_{\Gamma}\) swapped. Hence, it follows that \(\delta\mathcal{M}_{n}=0\), and thus the double-copy formula is invariant under linearized diffeomorphisms [238]. Given that all amplitudes have this invariance, it follows that the theory enjoys non-linear diffeomorphism symmetry, so it can be interpreted as a theory of gravity. When both numerators come from four-dimensional pure YM, the double copy gives amplitudes in Einstein-dilaton-axion gravity, with Lagrangian [3]
\[\mathcal{L}=-\frac{\sqrt{-g}}{\kappa^{2}}\Big{(}2R-\frac{\partial_{\mu}\tau \partial^{\mu}\bar{\tau}}{(\text{Im}\,\tau)^{2}}\Big{)}\,, \tag{16}\]
where the complex field \(\tau=ie^{-\phi}+\chi\) contains the dilaton \(\phi\) and axion \(\chi\) scalars. In \(D\) dimensions the axion is promoted to a two-form \(B^{\mu\nu}\) and the Lagrangian is also well known, see e.g. [3].
### Decompositions for amplitudes and numerators
Color-ordered partial amplitudes are defined as the gauge-invariant coefficients of the trace decomposition of the full amplitude,
\[\mathcal{A}_{n}=g^{n-2}\sum_{\sigma\in S_{n-1}}\mathrm{Tr}(T^{a_{1}}T^{a_{ \sigma(2)}}\ldots T^{a_{\sigma(n)}})A_{n}(1,\sigma_{2},\ldots,\sigma_{n})\,, \tag{17}\]
where the sum over permutations \(\sigma\) runs over the symmetric group \(S_{n-1}\) with \((n-1)!\) elements. The partial amplitudes can be computed directly using color-ordered Feynman rules, or from eq. (1) after expanding out the color factors \(C_{\Gamma}\) into the basis of ordered traces of generators.
Alternatively, repeatedly using the Jacobi identity (9) to obtain a smaller basis of color factors, gives the Del-Duca-Dixon-Maltoni (DDM) decomposition [239],
\[\mathcal{A}_{n}=g^{n-2}\sum_{\sigma\in S_{n-2}}C(1,\sigma_{2},\ldots,\sigma_{ n-1},n)A_{n}(1,\sigma_{2},\ldots,\sigma_{n-1},n)\,, \tag{18}\]
which uses a subset of the same partial amplitudes, and the independent color factors are
\[C(1,\sigma_{2},\ldots,\sigma_{n-1},n) \equiv C_{[[\cdots[[1,\rho_{2}],\rho_{3}],\ldots],\rho_{n-1}]}= \mathrm{Tr}([[\cdots[T^{a_{1}},T^{a_{\sigma(2)}}],\ldots],T^{a_{\sigma(n-1)}}] T^{a_{n}}) \tag{19}\] \[=\big{(}f^{a_{\sigma(2)}}\ldots f^{a_{\sigma(n-1)}}\big{)}_{a_{1} a_{n}}\,. \tag{20}\]
The \((n-2)!\)-fold DDM basis of color factors \(C_{\Gamma}\) makes use of the so-called half-ladder graphs
\[\Gamma_{\text{half-ladder}}(\rho)\equiv[[\cdots[[1,\rho_{2}],\rho_{3}],\ldots],\rho_{n-1}]\,, \tag{21}\]
which will serve as our canonical basis choice of graphs. Similarly, repeatedly using the kinematic Jacobi identity (9) on the BCJ numerators gives the decomposition [1, 240]
\[\mathcal{A}_{n}=\sum_{\Gamma\in\mathcal{G}_{n}}\frac{C_{\Gamma}\,N_{\Gamma}}{ D_{\Gamma}}=\sum_{\sigma,\rho\in S_{n-2}}C(1,\sigma_{2},\cdots,\sigma_{n-1},n)m( \sigma|\rho)N(1,\rho_{2}\cdots,\rho_{n-1},n)\,, \tag{22}\]
where we have set \(g=1\) and the independent half-ladder numerators are written as
\[N(1,\rho_{2}\cdots,\rho_{n-1},n)\equiv N_{[[\cdots[[1,\rho_{2}],\rho_{3}], \ldots],\rho_{n-1}]}\,. \tag{23}\]
Assuming that the half-ladder numerators come from a manifestly crossing-symmetric construction, they obey a reflection symmetry \(N(1,2,3\ldots,n)=(-1)^{n}N(n,\ldots,3,2,1)\), as well as anti-symmetry in the first and last pairs of arguments: \(N(2,1\ldots)=-N(1,2\ldots)\) and \(N(\ldots,n,n-1)=-N(\ldots,n-1,n)\).
The matrix \(m(\sigma|\rho)\) in eq. (22) is of size \((n{-}2)!\times(n{-}2)!\) and is built out of linear combinations of the scalar-type propagators, \(1/D_{\Gamma}\), as can be worked out from the above
two DDM decompositions of the BCJ numerators and color factors. The matrix \(m(\sigma|\rho)\) has appeared in many places in the literature, and it goes by a variety of different names, such as "propagator matrix" [240], the "bi-adjoint scalar amplitude" [241], or "inverse of the KLT kernel" [241; 127]. It is also equivalent to the color-stripped partial amplitudes of "dual-scalar theory" [242; 14], "color-scalar theory" [243] or "scalar \(\phi^{3}\) theory" [17; 244].
Using the propagator matrix the color-ordered partial amplitudes can be written as
\[A(1,\sigma,n)=\sum_{\rho\in S_{n-2}}m(\sigma|\rho)N(1,\rho,n)\,, \tag{24}\]
which gives a (non-invertible) map between BCJ numerators and partial amplitudes [1]. The propagator matrix is not invertible for on-shell conserved momenta and hence it has a kernel, or null space. This implies that BCJ numerators contain unphysical contributions that live in this kernel. This explains why BCJ numerators are in general not unique. The ambiguity is called _generalized gauge freedom_[1; 2] and it corresponds to the freedom of shifting the BCJ numerators by _pure gauge_ contributions
\[N(1,\rho,n)\sim N(1,\rho,n)+N^{\rm gauge}(1,\rho,n)\,, \tag{25}\]
where pure gauge numerators are annihilated by the propagator matrix,
\[\sum_{\rho\in S_{n-2}}m(\sigma|\rho)N^{\rm gauge}(1,\rho,n)=0\,. \tag{26}\]
The generalized gauge freedom includes both standard gauge freedom and field redefinitions, and more generally any operation that changes the cubic diagram numerators while leaving the partial amplitudes invariant.
As illustrated in eq. (7), tree-level YM numerators are polynomial functions of the independent Lorentz contractions of polarizations and momenta \(\{\varepsilon_{i}\cdot\varepsilon_{j},\varepsilon_{i}\cdot p_{j},p_{i}\cdot p _{j}\}\). Most numerator operations that we are interested in do not mix terms that contains different powers \(\sim(\varepsilon_{i}\cdot\varepsilon_{j})^{k}\), so it is convenient to decompose the numerators into sectors defined by their _polarization power_\(k\)[206]. Thus from the general structure (7), we decompose the numerators as
\[N(1,\ldots,n)=\sum_{k=1}^{\lfloor n/2\rfloor}N^{(k)}(1,\ldots,n)\,, \tag{27}\]
where the polarization power \(k\) keeps track of how many \(\varepsilon_{i}\cdot\varepsilon_{j}\)-contractions are present in each monomial. This decomposition also makes physical sense since by a suitable choice of the reference momenta \(q_{i}^{\mu}\) in the polarizations \(\varepsilon_{i}=\varepsilon_{i}(p_{i},q_{i})\) one can show [206] that only the numerators \(N^{(1)},N^{(2)},\ldots,N^{(k)}\) contribute to the N\({}^{k-1}\)MHV sector of YM. Therefore when we refer to the N\({}^{k-1}\)MHV sector in this paper, we implicitly mean that we consider the \(N^{(\leq k)}\) numerators that contribute to this sector.
Alternatively, if one considers a dimensional reduction of YM, \(SO(1,D-1)\to SO(1,3)\times SO(D-4)\), then the \(N^{(k)}\) numerators give rise to tree amplitudes with at most \(2k\) external scalars. This can be formalized by considering derivative operators \(\frac{\partial}{\partial\varepsilon_{i}\cdot\varepsilon_{j}}\) that can act on the numerators and convert a pair of gluons to a pair of scalars [209; 210].
We often work with particles \(1\) and \(n\) being scalars, so it is convenient to introduce a bar notation on the half-ladder numerators to indicate that we are considering a bi-scalar YM sector [206],
\[\overline{N}^{(k)}(1,\ldots,n)\equiv\frac{\partial}{\partial\varepsilon_{1}{ \cdot}\varepsilon_{n}}N^{(k)}(1,\ldots,n)\,. \tag{28}\]
The complete bi-scalar sector numerator is then \(\overline{N}(1,\ldots,n)=\sum_{k=1}^{[n/2]}\overline{N}^{(k)}(1,\ldots,n)\).
It is possible to invert the operation in eq. (28) and construct the YM numerators from the bi-scalar numerators [206],
\[N^{(k)}(1,\ldots,n)=\frac{1}{k}\sum_{1\leq i<j\leq n}\varepsilon_{i}{\cdot} \varepsilon_{j}\overline{N}^{(k)}(i,\alpha_{i},i+1,\ldots,j-1,\beta_{j},j)\,, \tag{29}\]
where \(\alpha_{i}=[\cdots[1,2],3],\ldots,i-1]\) and \(\beta_{j}=[j+1,[\ldots,n-2,[n-1,n]]\cdots]\) are nested commutators. Numerators of commutators distribute over their arguments to be consistent with the Lie-algebraic interpretation, \(\overline{N}^{(k)}(\ldots,[X,Y],\ldots)\equiv\overline{N}^{(k)}(\ldots,X,Y, \ldots)-\overline{N}^{(k)}(\ldots,Y,X,\ldots)\). The boundary cases of the sum are handled through the identifications \(\alpha_{2}=[1]=1\), \(\beta_{n-1}=[n]=n\), and when either bracket is empty \(\alpha_{1}=\beta_{n}=[\,]\to(-1)\) the numerator is multiplied by a minus sign.
We can demonstrate eq. (29) using the four-point numerator as an example. The bi-scalar numerator is unique (given that \(\overline{N}(1,2,3,4)=\overline{N}(4,3,2,1)\)),
\[\overline{N}(1,2,3,4)=4\varepsilon_{2}{\cdot}p_{1}\varepsilon_{3}{\cdot}p_{1} +4\varepsilon_{2}{\cdot}p_{1}\varepsilon_{3}{\cdot}p_{2}-\varepsilon_{2}{ \cdot}\varepsilon_{3}s_{12}\,, \tag{30}\]
and can be further decomposed into polarization powers
\[\overline{N}^{(1)}(1,2,3,4) =4\varepsilon_{2}{\cdot}p_{1}\varepsilon_{3}{\cdot}p_{1}+4 \varepsilon_{2}{\cdot}p_{1}\varepsilon_{3}{\cdot}p_{2}\,,\] \[\overline{N}^{(2)}(1,2,3,4) =-\varepsilon_{2}{\cdot}\varepsilon_{3}s_{12}\,. \tag{31}\]
Note that the polarization-power labels also keep track of the overall \(\varepsilon_{1}{\cdot}\varepsilon_{4}\) that was removed by the derivative \(\frac{\partial}{\partial\varepsilon_{1}{\cdot}\varepsilon_{n}}\). Using eq. (29) one can verify that
\[N^{(k)}(1,2,3,4)= \frac{1}{k}\Big{[}(\varepsilon_{1}{\cdot}\varepsilon_{4}) \overline{N}^{(k)}(1,2,3,4)-(\varepsilon_{1}{\cdot}\varepsilon_{3}) \overline{N}^{(k)}(1,2,4,3)-(\varepsilon_{1}{\cdot}\varepsilon_{2}) \overline{N}^{(k)}(1,[3,4],2)\] \[-(\varepsilon_{2}{\cdot}\varepsilon_{4})\overline{N}^{(k)}(2,1,3, 4)+(\varepsilon_{2}{\cdot}\varepsilon_{3})\overline{N}^{(k)}(2,1,4,3)-( \varepsilon_{3}\cdot\varepsilon_{4})\overline{N}^{(k)}(3,[1,2],4)\Big{]} \tag{32}\]
matches the numerator in eq. (10), given that \(N_{[[1,2],3]}=N^{(1)}(1,2,3,4)+N^{(2)}(1,2,3,4)\).
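The same statement can be checked numerically by evaluating the right-hand side of eq. (32), with the bi-scalar numerators (31) written for a generic ordering (which uses transversality \(\varepsilon_{i}\cdot p_{i}=0\)), and comparing against eq. (10) directly; the phase-space point and polarization choices in the sketch are illustrative.

```python
import numpy as np

eta = np.diag([1.0, -1.0, -1.0, -1.0])
def mdot(a, b):
    return a @ eta @ b

# massless four-point phase-space point and transverse polarizations (illustrative choices)
E, th = 1.0, 0.7
p = {1: np.array([E, 0.0, 0.0,  E]),
     2: np.array([E, 0.0, 0.0, -E]),
     3: -np.array([E,  E*np.sin(th), 0.0,  E*np.cos(th)]),
     4: -np.array([E, -E*np.sin(th), 0.0, -E*np.cos(th)])}
eps = {1: np.array([0.0, 1.0, 0.0, 0.0]),
       2: np.array([0.0, 0.0, 1.0, 0.0]),
       3: np.array([0.0, 0.0, 1.0, 0.0]),
       4: np.array([0.0, np.cos(th), 0.0, -np.sin(th)])}

def ee(i, j):
    return mdot(eps[i], eps[j])

def Nbar(order, k):
    """Bi-scalar numerator of eq. (30) for a generic ordering (a,b,c,d), split by
    polarization power k as in eq. (31); eps_i.p_i = 0 has been used to simplify."""
    a, b, c, d = order
    if k == 1:
        return 4*mdot(eps[b], p[a])*mdot(eps[c], p[a] + p[b])
    return -ee(b, c)*mdot(p[a] + p[b], p[a] + p[b])

def Nbar_comm(a, X, Y, d, k):
    # Nbar(a,[X,Y],d) = Nbar(a,X,Y,d) - Nbar(a,Y,X,d), as defined below eq. (29)
    return Nbar((a, X, Y, d), k) - Nbar((a, Y, X, d), k)

def N_power(k):
    """Right-hand side of eq. (32) at fixed polarization power k."""
    total = (ee(1, 4)*Nbar((1, 2, 3, 4), k) - ee(1, 3)*Nbar((1, 2, 4, 3), k)
             - ee(1, 2)*Nbar_comm(1, 3, 4, 2, k) - ee(2, 4)*Nbar((2, 1, 3, 4), k)
             + ee(2, 3)*Nbar((2, 1, 4, 3), k) - ee(3, 4)*Nbar_comm(3, 1, 2, 4, k))
    return total/k

def N_direct():
    """N_{[[1,2],3]} from eq. (10)."""
    F1 = (ee(1, 2)*p[1] + 2*mdot(eps[1], p[2])*eps[2]
          - ee(2, 1)*p[2] - 2*mdot(eps[2], p[1])*eps[1])
    F2 = (ee(3, 4)*p[3] + 2*mdot(eps[3], p[4])*eps[4]
          - ee(4, 3)*p[4] - 2*mdot(eps[4], p[3])*eps[3])
    s12 = mdot(p[1] + p[2], p[1] + p[2])
    return mdot(F1, F2) + s12*(ee(1, 3)*ee(2, 4) - ee(1, 4)*ee(2, 3))

print(N_power(1) + N_power(2) - N_direct())   # expect ~0
```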
For later purposes, it is convenient to introduce shorthand notation for the kinematic variables that frequently appear in the numerators. We define the variables2
Footnote 2: The \(x_{i}\) are sometimes called “region momenta”, and the \(u_{i}\) are equivalent to cubic scalar-gluon vertices.
\[x_{i}^{\mu}\equiv\sum_{j=1}^{i}p_{j}^{\mu}\,,\hskip 28.452756ptu_{i}\equiv 2 \varepsilon_{i}\cdot x_{i}\,, \tag{33}\]
which make the bi-scalar half-ladder numerators simpler to work with. For example, the four-point bi-scalar numerator (30) now takes the simple form
\[\overline{N}(1,2,3,4)=u_{2}u_{3}-\varepsilon_{2}\cdot\varepsilon_{3}x_{2}^{2}\,, \tag{34}\]
and the polarization-power components are then \(\overline{N}^{(1)}=u_{2}u_{3}\) and \(\overline{N}^{(2)}=-\varepsilon_{2}\cdot\varepsilon_{3}x_{2}^{2}\). Note that this notation obscures the relabeling symmetries of the numerators so it should be used with some care, and we will mainly use it for bi-scalar half-ladder numerators.
It is illuminating to attempt to represent the contributions to the numerators using diagrammatic notation. As a general guiding principle, we track inner products between polarization vectors using solid lines, while wavy lines represent other contributions (typically inner products between polarizations and momenta). For example, the four-point bi-scalar numerators are represented by the two diagrams
\[\overline{N}^{(1)}=\big[\text{solid-line diagram}\big]\,,\qquad\overline{N}^{(2)}=\big[\text{double-line diagram}\big]\,. \tag{35}\]

[The diagrammatic content of eq. (35) could not be recovered from the source; it depicts the two four-point bi-scalar diagrams corresponding to \(\overline{N}^{(1)}=u_{2}u_{3}\) and \(\overline{N}^{(2)}=-\varepsilon_{2}\cdot\varepsilon_{3}x_{2}^{2}\). The five-point diagrams of eq. (36), referenced in section 4, are likewise not recoverable here.]
## 3 MHV Lagrangian
Tree-level MHV amplitudes of YM can be computed from the BCJ numerators with the lowest polarization power \(N^{(1)}\)[206]. For example, if legs 1 and 2 carry negative helicity and the remaining legs have positive helicity, then choosing reference null momenta \(q_{i}^{\mu}\) as
\[\varepsilon_{1}^{-}(p_{1},q_{1}=p_{2})\,,\ \ \ \ \ \varepsilon_{2}^{-}(p_{2},q_{2}=p_{1}) \,,\ \ \ \ \ \varepsilon_{i>2}^{+}(p_{i},q_{i}=p_{1}) \tag{18}\]
ensures that the only non-vanishing polarization products are \(\varepsilon_{2}\cdot\varepsilon_{i}\). Because they all contain \(\varepsilon_{2}\) these factors can at most appear linearly in the numerator, hence \(N^{(1)}\) is sufficient.
By dimensional analysis the numerators \(N^{(1)}\) cannot contain products of momenta \(p_{i}\cdot p_{j}\) hence they cannot give rise to inverse propagators \(p^{2}\sim\Box\), and thus there are no hidden contact terms inside the numerators. This implies that \(N^{(1)}\) must originate from a YM Lagrangian that neither has contact terms nor auxiliary fields. Thus we can simply truncate the YM Lagrangian3 to cubic order, which defines the MHV Lagrangian
Footnote 3: We assume Lorenz gauge \(\partial\cdot A=0\) (or, more accurately, Feynman gauge) since the BCJ numerators are defined to sit on top of propagators that are in Feynman gauge, see eq. (4).
\[\mathcal{L}_{\rm MHV}=\mathcal{L}_{\rm YM}\Big{|}_{\rm cubic}={\rm Tr}\, \left(\frac{1}{2}A_{\mu}\Box A^{\mu}-\partial_{\mu}A_{\nu}[A^{\mu},A^{\nu}] \right)\,. \tag{19}\]
For simplicity, we have set the gauge coupling to unity, \(g=1\), which we will do henceforth.
The Lagrangian (19) gives standard cubic YM Feynman rules that produce numerators of all polarization power sectors \(k\geq 1\), but for the purpose of this section we can assume that the resulting numerators are truncated to \(N^{(1)}\) because of the conditions (18) imposed on the polarization vectors. That this Lagrangian is invalid beyond this sector is obvious from the fact that the truncation of the quartic term is not a valid (gauge-fixing) operation.
Let us now show that the numerators \(N^{(1)}\) computed from the Lagrangian (19) obey color-kinematics duality. We use the diagrammatic notation from the previous section to make the argument simpler. The \(N^{(1)}\) numerators are computed from sums of diagrams each having just one solid line (representing the contracted polarization vectors). We can think of the solid line as a scalar line, and we have to sum over all possible pairs of external states that are joined by this line. The scalar line can be identified from the equations of motion of the MHV Lagrangian,
\[\Box A^{\mu}=-2[A_{\nu},\partial^{\nu}A^{\mu}]+[A_{\nu},\partial^{\mu}A^{\nu} ]\,, \tag{20}\]
where the second term produces contractions that ultimately connect external polarization vectors. The first term corresponds to cubic interactions where no scalar line is present, thus we think of it as consisting of only wavy lines. It is then straightforward to study the triplet sum of off-shell numerators obtained using these two interactions. One can show that the equation
\[\big[\text{diagrammatic three-term kinematic Jacobi identity; figure not recoverable from the source}\big] \tag{21}\]
holds up to terms proportional to \(\varepsilon_{2}\!\cdot\!p_{2}\) and \(\varepsilon_{3}\!\cdot\!p_{3}\). Such terms vanish on-shell because of the transversality of \(\varepsilon_{i}\), but we need to show that the above three-term identity holds for a generic off-shell numerator embedded into a larger tree diagram. Thus we need to show that transversality holds for all off-shell wavy lines in this MHV sector.
We can show this using induction, essentially feeding an arbitrary number of wavy-line interactions into eq. (10). Suppose that to some wavy line \(i\) we attach a cubic diagram generated from the first interaction term in eq. (11), producing two new wavy lines \(j,k\), so \(i\to[jk]\). Then its effective polarization vector4 takes the form
Footnote 4: Note that this is not to be confused with an external properly normalized polarization vector.
\[\varepsilon_{i}^{\mu}=-2(\varepsilon_{j}\!\cdot\!p_{k}\varepsilon_{k}^{\mu}- \varepsilon_{k}\!\cdot\!p_{j}\varepsilon_{j}^{\mu})\,, \tag{12}\]
in terms of the effective polarizations of the lines \(j,k\). By the induction hypothesis, we assume that the latter polarizations are transverse, \(\varepsilon_{j}\!\cdot\!p_{j}=\varepsilon_{k}\!\cdot\!p_{k}=0\). Using this, and the antisymmetry in the labels \(j\) and \(k\), eq. (12) implies that \(\varepsilon_{i}\!\cdot\!p_{i}=0\). Thus the wavy-line interaction preserves transversality, and since the wavy lines will eventually terminate in proper external states at tree level, the induction hypothesis is correct. This completes the argument that eq. (10) holds off shell.
We also need to consider kinematic Jacobi identities of diagrams containing only the first term in eq. (11), which is the wavy-line interaction (12). However, note that eq. (12) is essentially a momentum-space Lie bracket of plane-wave vector fields \(A_{i}^{\mu}=\varepsilon_{i}^{\mu}e^{ip_{i}\cdot x}\),
\[A_{i}\cdot\partial=-2\left[A_{j}\cdot\partial,\,A_{k}\cdot\partial\,\right], \tag{13}\]
and so it automatically obeys the Jacobi identity. Since the vector fields are transverse \(\partial\cdot A_{i}=0\), we have thus exposed a kinematic sub-algebra that corresponds to volume-preserving diffeomorphisms (in analogy with the previously found 2D area-preserving [204] and 3D volume-preserving diffeomorphisms [40]). However, we cannot reproduce the full \(N^{(1)}\) numerators from commutators of only these diffeomorphism generators, since the full numerator is a superposition of all diagrams with a single solid line going between pairs of external states. Nevertheless, by linearity of the kinematic Jacobi identity, the superposition of all such diagrams will give \(N^{(1)}\) numerators that obey color-kinematics duality.
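The closure of this bracket can be verified symbolically. The sketch below works with generic symbols in two dimensions (enough to expose the structure) and checks that the commutator of two transverse plane-wave vector fields is again a plane-wave vector field with the polarization of eq. (12), and that transversality is preserved; the explicit factor \(-i/2\) appearing in the check reflects the \(e^{ip\cdot x}\) convention of the sketch and is convention dependent.

```python
import sympy as sp

# two dimensions and generic symbols suffice to see the structure (a minimal sketch)
x = sp.symbols('x0 x1', real=True)
pj, pk = sp.symbols('pj0 pj1', real=True), sp.symbols('pk0 pk1', real=True)
ej, ek = sp.symbols('ej0 ej1', real=True), sp.symbols('ek0 ek1', real=True)
f = sp.Function('f')(*x)

def dot(a, b):
    return sum(ai*bi for ai, bi in zip(a, b))

wave = lambda q: sp.exp(sp.I*dot(q, x))
Aj = [e*wave(pj) for e in ej]            # A_j^mu = eps_j^mu e^{i p_j.x}
Ak = [e*wave(pk) for e in ek]

def act(A, g):                            # the vector field (A . del) acting on g
    return sum(A[m]*sp.diff(g, x[m]) for m in range(2))

comm = sp.expand(act(Aj, act(Ak, f)) - act(Ak, act(Aj, f)))

# polarization of eq. (12) and the corresponding plane-wave vector field
Ei = [-2*(dot(ej, pk)*ek[m] - dot(ek, pj)*ej[m]) for m in range(2)]
Ai = [e*wave(pj)*wave(pk) for e in Ei]

# commutator equals (-i/2) A_i . del in this convention; the factor is convention dependent
print(sp.simplify(sp.expand(comm + sp.I/2*act(Ai, f))))       # 0

# transversality eps_i.p_i = 0 follows once eps_j.p_j = eps_k.p_k = 0 are imposed
sub = {ej[0]: -ej[1]*pj[1]/pj[0], ek[0]: -ek[1]*pk[1]/pk[0]}
ptot = [a + b for a, b in zip(pj, pk)]
print(sp.simplify(dot(Ei, ptot).subs(sub)))                   # 0
```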
To summarize, for tree level diagrams obtained from the standard cubic YM Lagrangian, color-kinematics duality is satisfied at polarization power one, or equivalently, for MHV amplitudes. Note that the off-shell argument does not automatically extend to loop level since it was important that the recursive transversality argument terminates with external on-shell legs, which is always true at tree level, but not at loop level.
Finally, let us give the half-ladder bi-scalar numerators that are generated from the Feynman rules of the MHV Lagrangian (13). They are simply
\[\overline{N}^{(1)}(1,2,\ldots,n-1,n)=u_{2}u_{3}\cdots u_{n-1}\,,\]

[the accompanying half-ladder diagram, with gluons \(2,\ldots,n-1\) attached along the solid scalar line connecting legs \(1\) and \(n\), is not recoverable from the source]
and the remaining contributions to the half-ladder numerator \(N^{(1)}\) can be obtained using eq. (29), or from the Feynman rules of the Lagrangian (20). The non-half-ladder diagrams are similarly computed either from commutators of the half-ladder numerator \(N^{(1)}\), or from the same Feynman rules.
## 4 NMHV Lagrangian
In this section we construct several cubic Lagrangians that produce BCJ numerators at polarization power two, which allow us to compute helicity amplitudes up to the NMHV sector of YM. We begin the construction at four points, and work our way upwards in multiplicity by adding terms to the Lagrangian, until the corrections terminate at seven points.
### The four-point NMHV Lagrangian
Let us start from the standard YM Lagrangian (subject to Lorenz gauge \(\partial\cdot A=0\)),
\[\mathcal{L}_{\text{YM}}=-\frac{1}{4}\text{Tr}\left(F^{\mu\nu}\right)^{2}=\text {Tr}\,\left(\frac{1}{2}A_{\mu}\Box A^{\mu}-\partial_{\mu}A_{\nu}[A^{\mu},A^{ \nu}]-\frac{1}{4}[A_{\mu},A_{\nu}][A^{\mu},A^{\nu}]\right)\,, \tag{21}\]
where the field strength is \(F^{\mu\nu}=\partial^{\mu}A^{\nu}-\partial^{\nu}A^{\mu}+[A^{\mu},A^{\nu}]\). For later purposes, let us also quote the corresponding equations of motion
\[\Box A^{\nu}=-2[A_{\mu},\partial^{\mu}A^{\nu}]+[A_{\mu},\partial^{\nu}A^{\mu} ]-[A_{\mu},[A^{\mu},A^{\nu}]]\,. \tag{22}\]
We seek to find a cubic rewriting of this Lagrangian that generates BCJ numerators at polarization power two. As a first step, we can resolve the quartic interaction by introducing propagating auxiliary fields. There are several ways to do this. We find that an elegant solution is to reinterpret the nested commutator, appearing in the last term of the above equations of motion, to contain a two-form tensor \(B^{\mu\nu}=-1/2[A^{\mu},A^{\nu}]\). In order to allow for a suitable kinetic term of correct mass dimension, we introduce a companion field \(\tilde{B}^{\mu\nu}\) of mass dimension zero, into which \(B^{\mu\nu}\) propagates. The Lagrangian we obtain is
\[\mathcal{L}_{4}=\text{Tr}\,\left(\frac{1}{2}A_{\mu}\Box A^{\mu}-\partial_{\mu }A_{\nu}[A^{\mu},A^{\nu}]+B_{\mu\nu}\Box\tilde{B}^{\mu\nu}+\frac{1}{2}[A_{\mu},A_{\nu}](B^{\mu\nu}+\Box\tilde{B}^{\mu\nu})\right), \tag{23}\]
and the equations of motion for \(B^{\mu\nu}\) and \(\tilde{B}^{\mu\nu}\) are given by
\[B^{\mu\nu}=\Box\tilde{B}^{\mu\nu}=-\frac{1}{2}[A^{\mu},A^{\nu}]\,, \tag{24}\]
which we can plug into the Lagrangian above to recover \(\mathcal{L}_{\text{YM}}\) as in eq. (21): on the solution the \(B\)-dependent terms combine into \(\frac{1}{4}[A_{\mu},A_{\nu}][A^{\mu},A^{\nu}]-\frac{1}{2}[A_{\mu},A_{\nu}][A^{\mu},A^{\nu}]=-\frac{1}{4}[A_{\mu},A_{\nu}][A^{\mu},A^{\nu}]\), which is precisely the quartic term of eq. (21). To compute amplitudes from this Lagrangian one has to restrict the external legs to be vectors \(A^{\mu}\), since the \(B\) and \(\tilde{B}\) fields are composite (the linearized excitations of \(B\) vanish). As discussed in the previous section, the cubic-in-\(A^{\mu}\) interactions of this Lagrangian correctly reproduce the MHV sector of YM, while the double line in the polarization-power two sector (see for example the second graph in eq. (35)) corresponds to propagation of the \(B^{\mu\nu}\) and \(\tilde{B}^{\mu\nu}\) fields. Since these fields can be integrated out yielding the standard YM Lagrangian, the
Feynman rules of eq. (4.3) reproduce standard YM tree amplitudes. At four points, the BCJ numerator is unique [207] (up to an overall normalization) upon imposing relabeling symmetry and reflection symmetry, as given in eq. (10), and it is also correctly reproduced by the Lagrangian (4.3). In fact, any cubic action for YM theory would reproduce the correct four-point numerator, but we find that the above choice of \(\mathcal{L}_{4}\) with both the \(B\) and \(\tilde{B}\) fields allows for the necessary freedom to proceed to generating BCJ numerators at higher points.
### The five-point NMHV Lagrangian
Going to five points, we can immediately see that \(\mathcal{L}_{4}\) is unable to generate half-ladder numerators \(N^{(2)}\) proportional to \(\varepsilon_{1}\!\cdot\!\varepsilon_{5}\,\varepsilon_{2}\!\cdot\!\varepsilon_ {4}\), which is the last diagram pictured in eq. (36), and corresponds to the double-line field emitting a gluon line. To generate this kind of graph we need new interaction terms of the schematic form \(\partial AB\tilde{B}\). In principle, introducing new interactions could change the equations of motion, change the four-point amplitude, and break gauge-invariance of YM theory. Thus one can expect it to be a delicate procedure.
Let us consider adding new linear-in-\(B^{\mu\nu}\) or linear-in-\(\tilde{B}^{\mu\nu}\) terms on the right-hand side of their corresponding equations of motion. By repeated insertion of the equations of motion into themselves, we would generate a set of non-local higher-order \(A^{\mu}\) interactions (similar to refs. [216; 14]). Can we make sure that such new interactions always cancel out in any tree-level amplitude computation? Let us start, for the sake of simplicity, by requiring something more specific, namely that the deformation introduced by the Lagrangian terms \(\partial AB\tilde{B}\) vanish within the subsystem of equations of motion for the auxiliary fields. This can be easily achieved if they take the following form
\[B^{\mu\nu}=-\frac{1}{2}[A^{\mu},A^{\nu}]+\alpha\frac{\partial_{\rho}}{\Box} \left([A^{\mu},B^{\rho\nu}]+\text{cyclic}(\mu\rho\nu)\right), \tag{4.5}\]
where \(\alpha\) is a free parameter for now. When these equations of motion are repeatedly substituted back into themselves \(n\) times, the sum over cyclic permutations of Lorentz indices in the commutator manifestly vanishes by Jacobi identity at any order in \(A\), yielding
\[B^{\mu\nu,(n)}=-\frac{1}{2}[A^{\mu},A^{\nu}]+\mathcal{O}(\alpha^{n})\,. \tag{4.6}\]
where the last term is \(\mathcal{O}(\alpha^{n})\sim\alpha^{n}(A^{\mu})^{n}B^{\nu\rho,(0)}\), and it vanishes in perturbative tree-level computations since we take \(B^{\nu\rho,(0)}=0\) for external states. The equations of motion in eq. (4.5) must arise from introducing the following interactions in the Lagrangian:
\[\Delta\mathcal{L}_{5}=-\alpha\operatorname{Tr}\partial_{\rho}\tilde{B}_{\mu \nu}\Big{(}[A^{\mu},B^{\nu\rho}]+\text{cyclic}(\mu\nu\rho)\Big{)}. \tag{4.7}\]
where \(\mathcal{L}_{5}=\mathcal{L}_{4}+\Delta\mathcal{L}_{5}\) is the total Lagrangian. As argued, the term in parentheses does not contribute to on-shell tree-level scattering amplitudes by the Jacobi identity, but it does alter the individual \(N^{(2)}\) numerators.
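To spell out the Jacobi argument (a schematic one-step illustration, in our notation), insert the leading-order solution \(B^{\rho\nu}\to-\frac{1}{2}[A^{\rho},A^{\nu}]\) into the \(\alpha\)-term of eq. (4.5); the correction is then controlled by the cyclic sum of nested commutators,

\[[A^{\mu},[A^{\rho},A^{\nu}]]+[A^{\rho},[A^{\nu},A^{\mu}]]+[A^{\nu},[A^{\mu},A^{\rho}]]=0\,,\]

which vanishes identically by the Jacobi identity, so that repeated substitution only ever leaves the \(\mathcal{O}(\alpha^{n})\) remainder of eq. (4.6).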
With the new Lagrangian \(\mathcal{L}_{5}\) the equations of motion for \(\tilde{B}^{\mu\nu}\) now read
\[\tilde{B}^{\mu\nu}=-\frac{1}{2}\frac{1}{\Box}[A^{\mu},A^{\nu}]+\alpha\frac{1} {\Box}\big{[}A_{\rho}\,,\,\partial^{\mu}\tilde{B}^{\rho\nu}+\text{cyclic}(\mu \rho\nu)\big{]}. \tag{4.8}\]
Note that repeated insertion of the \(\tilde{B}\)-field equations of motion into themselves generates an infinite set of contact terms of increasing \(A\) order. However, from the \(B\)-field equations of motion it is clear that, after integrating out the combined pair of auxiliary fields, the standard YM tree-level amplitudes are recovered.
We now need to probe whether the deformed numerators enjoy color-kinematics duality for some choice of \(\alpha\). We can do so by explicitly checking Jacobi relations between the \(N^{(2)}\) numerators, or checking that the DDM-decomposed partial amplitude (24) returns a gauge invariant quantity. At five points, we confirm that BCJ numerators are obtained after fixing the free parameter in \(\Delta\mathcal{L}_{5}\) to the value \(\alpha=2\), giving the duality-satisfying Lagrangian
\[\mathcal{L}_{5} = \text{Tr}\left(\frac{1}{2}A_{\mu}\Box A^{\mu}-\partial_{\mu}A_{ \nu}[A^{\mu},A^{\nu}]+B_{\mu\nu}\Box\tilde{B}^{\mu\nu}+\frac{1}{2}[A_{\mu},A_{ \nu}](B^{\mu\nu}+\Box\tilde{B}^{\mu\nu})\right. \tag{49}\] \[\qquad+4\partial_{\nu}\tilde{B}_{\mu\rho}[A^{\mu},B^{\nu\rho}]-2 \partial_{\rho}\tilde{B}_{\mu\nu}[A^{\rho},B^{\mu\nu}]\right).\]
We thus find the following NMHV contributions to the bi-scalar sector diagrams that we introduced in section 2,
\[\text{[diagram]}\quad=\quad\varepsilon_{3}\cdot\varepsilon_{4}(u_{2}\,x_{3}^{2}-\varepsilon_{2}\!\cdot\!x_{3}\,x_{2}^{2})\,, \tag{50}\]
\[\text{[diagram]}\quad=\quad\frac{1}{2}\varepsilon_{2}\cdot\varepsilon_{4}u_{3}(x_{2}^{2}+x_{3}^{2})\,. \tag{51}\]
As before, the remaining \(N^{(2)}\) contributions to the half-ladder diagrams can be obtained using eq. (29), or from the Feynman rules of the Lagrangian (49). Since we are at five points, all cubic diagrams are of the half-ladder type, and thus this completes the five-point construction.
Before proceeding to discuss the six-point case, let us point out a remarkable fact about the Lagrangian (49). We have checked through multiplicity \(n=11\) that its Feynman rules correctly compute half-ladder BCJ numerators in the bi-scalar sector \(\overline{N}^{(2)}\). Inserting those into eq. (29) gives all half-ladder numerators \(N^{(2)}\), and further use of Jacobi relations gives all non-half-ladder numerators in the NMHV sector, up to the multiplicity we checked. Based on this robust pattern, we conjecture that the Lagrangian (49) correctly computes the bi-scalar NMHV numerators \(N^{(2)}\) to any multiplicity at tree level, and furthermore implicitly provides a good BCJ representation for the remaining diagrams in this sector.
### Closed form representation of the bi-scalar sector numerators
Let us summarize what we have achieved thus far by giving a closed form expression for the bi-scalar numerators. As shown in eq. (37), the MHV bi-scalar numerators take a simple form
\[\overline{N}^{(1)}(1,2,\ldots,n-1,n)=u_{2}u_{3}\ldots u_{n-1}\,. \tag{52}\]
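As an illustrative aside (ours, not part of the original construction), this numerator is trivial to evaluate numerically. The sketch below assumes a mostly-minus metric, the definition \(u_{i}=2\varepsilon_{i}\!\cdot\!x_{i}\) quoted later in the one-loop discussion, and tree-level region momenta \(x_{i}=p_{1}+\cdots+p_{i}\) (our assumption); the function names are ours.

```python
import numpy as np

def mdot(a, b):
    """Minkowski dot product, mostly-minus signature (+,-,...,-)."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    return a[0] * b[0] - a[1:] @ b[1:]

def mhv_biscalar_numerator(eps, p):
    """bar N^(1)(1,...,n) = u_2 u_3 ... u_{n-1}, cf. eq. (52),
    with u_i = 2 eps_i . x_i and region momenta x_i = p_1 + ... + p_i."""
    n = len(p)
    x = np.cumsum(np.asarray(p, float), axis=0)                  # x[i-1] = p_1 + ... + p_i
    u = [2.0 * mdot(eps[i - 1], x[i - 1]) for i in range(2, n)]  # u_2, ..., u_{n-1}
    return float(np.prod(u))
```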
Next, we gave a schematic diagram representation for the bi-scalar sector NMHV numerators in eq. (37), and with the Lagrangian \(\mathcal{L}_{5}\) in eq. (49) we can make this more precise. Since we are considering numerators at polarization power two, each contribution must have exactly one vertex of type \(AAB\) and one of type \(AA\Box\tilde{B}\) in the diagram. Consider a half-ladder diagram where the \(B\) field is first sourced at position \(i\) to the left of the final \(\tilde{B}\) field at position \(k\). Between these two vertices a chain of \(AB\partial\tilde{B}\)-type vertices is inserted, giving the below diagram
\[\mathcal{D}_{ijkn}=\;\text{[half-ladder diagram]}\,, \tag{53}\]
where \(j\) labels a scalar line (which need not be at position \(k\)), and the \(\Box\) indicates the location of the inverse propagator \(x_{k-1}^{2}\). The diagram where \(\tilde{B}\) is sourced first can be obtained from reflection of this diagram. Since a solid line stretches all the way between \(i\) and \(j\), the intermediate vertices must be of the form \(A^{\mu}B_{\nu\rho}\partial_{\mu}\tilde{B}^{\nu\rho}\), which gives a product of \(u_{l}\) variables. Vertices between legs \(j\) and \(k\) can either be of the same type, or of the type \(A^{\mu}B^{\nu\rho}\partial_{\nu}\tilde{B}_{\mu\rho}\). The vertices before \(i\) or after \(k\) are of the type \(AA\partial A\) and can only give rise to \(u_{l}\) variables in order to not increase the polarization power. Thus, all together, each diagram is given by the expression
\[\mathcal{D}_{ijkn}=(-1)^{n}\varepsilon_{i}\cdot\varepsilon_{j}x_{k-1}^{2}U_{2, i-1}U_{i+1,j-1}\big{(}x_{j-1}\cdot V_{j+1}\cdots V_{k-1}\cdot\varepsilon_{k} \big{)}U_{k+1,n-1}\,, \tag{54}\]
where the matrices \(V_{i}^{\mu\nu}\) are given by
\[V_{i}^{\mu\nu}=u_{i}\eta^{\mu\nu}-2\varepsilon_{i}^{\mu}x_{i-1}^{\nu}\,, \tag{55}\]
and the \(U\)'s are products of consecutive \(u_{i}\), defined by
\[U_{i,j}=u_{i}u_{i+1}\cdots u_{j}\,. \tag{56}\]
We set the boundary case \(j=k\) of the expression in parentheses, \((x_{j-1}\cdot V_{j}\cdots V_{k-1}\cdot\varepsilon_{k})\), equal to the constant \(-1/2\), since otherwise the polarization vector \(\varepsilon_{k}\) would be double counted. Finally, using the diagrams \(\mathcal{D}_{ijkn}\) the bi-scalar numerator in this sector is
\[\overline{N}^{(2)}(1,\dots,n)=\sum_{1<i<j\leq k}^{n-1}\mathcal{D}_{ijkn}+ \text{reflection}\,. \tag{57}\]
The reflected diagram is obtained by reversing the labels \(\{1,2,\dots,n\}\) on the momenta \(p_{i}\) and polarizations \(\varepsilon_{i}\), not the region momenta \(x_{i}\). It is interesting to note that the matrices \(V_{i}\) play a similar role to the \(G^{i}{}_{j}\) matrices used for NMHV numerators in ref. [207], except that the former depend on region momenta \(x_{i}\) and the latter on particle momenta \(p_{i}\).
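To make the above prescription concrete, the following sketch (ours, not from the paper) transcribes eqs. (54)-(57) directly into numpy. The mostly-minus metric, the tree-level region-momentum convention \(x_{i}=p_{1}+\cdots+p_{i}\), the mixed-index representation of the \(V_{i}\) matrices, and the reading of the reflection as applying the same formula to the reversed lists of momenta and polarizations are our assumptions; the \(V\)-matrix chain runs from \(V_{j+1}\) to \(V_{k-1}\) as displayed in eq. (54), and all function names are illustrative.

```python
import numpy as np

def minkowski_metric(dim):
    """Mostly-minus metric diag(+, -, ..., -)."""
    return np.diag([1.0] + [-1.0] * (dim - 1))

def kinematics(eps, p):
    """Region momenta x_i = p_1 + ... + p_i and u_i = 2 eps_i . x_i (1-based dicts)."""
    dim, n = len(p[0]), len(p)
    g = minkowski_metric(dim)
    x = {0: np.zeros(dim)}
    for i in range(1, n + 1):
        x[i] = x[i - 1] + np.asarray(p[i - 1], float)
    u = {i: 2.0 * (np.asarray(eps[i - 1], float) @ g @ x[i]) for i in range(1, n + 1)}
    return g, x, u

def U(u, i, j):
    """U_{i,j} = u_i u_{i+1} ... u_j; an empty range (j < i) gives 1 (eq. (56))."""
    out = 1.0
    for l in range(i, j + 1):
        out *= u[l]
    return out

def v_chain(eps, g, x, u, j, k, dim):
    """(x_{j-1} . V_{j+1} ... V_{k-1} . eps_k) with V_i as in eq. (55);
    the boundary case j == k is the constant -1/2."""
    if j == k:
        return -0.5
    M = np.eye(dim)
    for l in range(j + 1, k):
        # mixed-index V_l: u_l * delta^mu_nu - 2 eps_l^mu (x_{l-1})_nu
        M = M @ (u[l] * np.eye(dim) - 2.0 * np.outer(np.asarray(eps[l - 1], float), g @ x[l - 1]))
    return (g @ x[j - 1]) @ M @ np.asarray(eps[k - 1], float)

def biscalar_N2(eps, p):
    """bar N^(2)(1,...,n) = sum_{1<i<j<=k<=n-1} D_{ijkn} + reflection (eqs. (54), (57))."""
    def half(eps, p):
        dim, n = len(p[0]), len(p)
        g, x, u = kinematics(eps, p)
        total = 0.0
        for i in range(2, n):
            for j in range(i + 1, n):
                for k in range(j, n):
                    D = ((-1) ** n
                         * (np.asarray(eps[i - 1], float) @ g @ np.asarray(eps[j - 1], float))
                         * (x[k - 1] @ g @ x[k - 1])
                         * U(u, 2, i - 1) * U(u, i + 1, j - 1)
                         * v_chain(eps, g, x, u, j, k, dim)
                         * U(u, k + 1, n - 1))
                    total += D
        return total
    # reflection: reverse the particle labels on momenta and polarizations
    return half(eps, p) + half(eps[::-1], p[::-1])
```

Adding the MHV piece \(u_{2}\cdots u_{n-1}\), the \(n=6\) output of such a sketch can be compared against the explicit six-point expression given below.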
### NMHV Lagrangian beyond the bi-scalar sector
As detailed in the previous two subsections, we conjectured that the bi-scalar sector numerators (4.17) generated by the five-point Lagrangian \(\mathcal{L}_{5}\) (4.9) give valid BCJ numerators to all multiplicity at tree level. From these we can then uniquely compute all NMHV contributions beyond the bi-scalar sector. We now wish to find a Lagrangian description of such contributions, which starts at six points.
First, let us write down an explicit expression for the six-point bi-scalar numerator (summing over polarization power one and two),
\[\overline{N}(1,2,3,4,5,6) =U_{2,5}+(\mathcal{D}_{2336}+\mathcal{D}_{2346}+\mathcal{D}_{2356 }+\mathcal{D}_{2446}+\mathcal{D}_{2456}+\mathcal{D}_{2556}+\mathcal{D}_{3446}\] \[\quad+\mathcal{D}_{3456}+\mathcal{D}_{3556}+\mathcal{D}_{4556}+ \text{reflections})\] \[=u_{2}u_{3}u_{4}u_{5}-2\varepsilon_{2}\!\cdot\!\varepsilon_{3} \left(2\varepsilon_{4}\!\cdot\!x_{2}\varepsilon_{5}\!\cdot\!x_{3}x_{4}^{2}- \varepsilon_{5}\!\cdot\!x_{2}u_{4}x_{4}^{2}-\varepsilon_{4}\!\cdot\!x_{2}u_{5 }x_{3}^{2}+u_{4}u_{5}x_{2}^{2}\right)\] \[\quad+\varepsilon_{2}\!\cdot\!\varepsilon_{4}\left(2\varepsilon _{5}\!\cdot\!x_{3}u_{3}x_{4}^{2}-(x_{2}^{2}+x_{3}^{2})u_{3}u_{5}\right)\] \[\quad-\varepsilon_{2}\!\cdot\!\varepsilon_{5}\left(x_{2}^{2}+x_{4 }^{2}\right)u_{3}u_{4}\] \[\quad+2\varepsilon_{3}\!\cdot\!\varepsilon_{4}\left(\varepsilon _{5}\!\cdot\!x_{3}u_{2}x_{4}^{2}+\varepsilon_{2}\!\cdot\!x_{3}u_{5}x_{2}^{2}-u _{2}u_{5}x_{3}^{2}\right)\] \[\quad+\varepsilon_{3}\!\cdot\!\varepsilon_{5}\left(2\varepsilon _{2}\!\cdot\!x_{3}u_{4}x_{2}^{2}-(x_{4}^{2}+x_{3}^{2})u_{2}u_{4}\right)\] \[\quad-2\varepsilon_{4}\!\cdot\!\varepsilon_{5}\left(2\varepsilon _{2}\!\cdot\!x_{3}\varepsilon_{3}\!\cdot\!x_{4}x_{2}^{2}-\varepsilon_{3}\! \cdot\!x_{4}u_{2}x_{3}^{2}-\varepsilon_{2}\!\cdot\!x_{4}u_{3}x_{2}^{2}+u_{2}u _{3}x_{4}^{2}\right). \tag{4.18}\]
Plugging this numerator into the formula (2.29) gives the remaining NMHV numerator contributions \(N^{(2)}\) at six points, which do not match (diagram by diagram) the same contributions computed from the Lagrangian \(\mathcal{L}_{5}\). It is clear that the numerators obtained from eq. (2.29) are the ones we need, since they satisfy color-kinematics duality; thus we must modify the Lagrangian \(\mathcal{L}_{5}\) with new six-point contributions.
We proceed by looking at individual kinematic monomials in \(N^{(2)}\) that pinpoint the mismatch, and then infer the simplest interactions that can restore color-kinematics duality. For example, considering the term \(u_{3}\varepsilon_{2}\!\cdot\!\varepsilon_{4}\varepsilon_{5}\!\cdot\!\varepsilon_{6}\varepsilon_{3}\!\cdot\!(p_{5}-p_{6})x_{2}^{2}\), we find that it is necessary to introduce a pair of new vector fields \(Z^{\mu}\) and \(\tilde{Z}^{\mu}\) of mass dimension one, which interact with \(A^{\mu}\), \(B^{\mu\nu}\) and \(\tilde{B}^{\mu\nu}\). To ensure that the new auxiliary vectors do not pollute the four- and five-point construction, we must constrain certain interactions. Specifically, we will only allow an interaction of the form \(AAZ\) but no conjugate \(AA\tilde{Z}\) interaction, thus ensuring that at most the \(Z\) field can be sourced at four points, and hence it cannot propagate to a \(\tilde{Z}\). Furthermore, we require that the \(Z\), \(\tilde{Z}\) fields can only source the linear combination of the tensor field \((B^{\mu\nu}-\Box\tilde{B}^{\mu\nu})\), which is specifically not sourced by \(A^{\mu}\) fields at five points; hence the \(Z\), \(\tilde{Z}\) fields cannot propagate at five points.
Let us see how this example plays out in detail. Adding two interactions of the form \(A^{\mu}\tilde{Z}^{\nu}B_{\mu\nu}\) and \(\partial_{\nu}A^{\mu}A_{\mu}Z^{\nu}\) generates a new diagram contribution of the schematic form
\[u_{3}\,\varepsilon_{2}\!\cdot\!\varepsilon_{4}\,\varepsilon_{5}\!\cdot\!\varepsilon_{6}\,\varepsilon_{1}\!\cdot\!(p_{5}-p_{6})\,x_{2}^{2}\quad\longrightarrow\quad\text{[half-ladder diagram with internal lines }\Box\tilde{B},\;B\tilde{B},\;B\tilde{Z},\;Z\text{]}\]
where we have placed the previously mentioned offending monomial on the left and the corresponding schematic diagram with new vertices \(A\tilde{Z}B\) and \(AAZ\) on the right. The solid line that ends on a derivative indicates the \(\varepsilon_{1}\cdot(p_{5}-p_{6})\) contraction.
Similarly, flipping the tilde and non-tilde fields yields a correspondence between the following offending monomial and new diagram
\[u_{3}\,\varepsilon_{1}\cdot\varepsilon_{4}\,\varepsilon_{5}\cdot\varepsilon_{6}\,\varepsilon_{2}\cdot(p_{5}-p_{6})x_{3}^{2}\quad\longrightarrow\quad\text{[diagram]} \tag{4.20}\]
It is clear that the needed modification to the Lagrangian coming from these terms takes the following form:
\[\Delta{\cal L}_{6}\sim{\rm Tr}\left(Z^{\mu}\Box\tilde{Z}_{\mu}+[\partial_{ \nu}A^{\mu},A_{\mu}]Z^{\nu}+[A^{\mu},\tilde{Z}^{\nu}]\big{(}B_{\mu\nu}-\Box \tilde{B}_{\mu\nu}\big{)}\right). \tag{4.21}\]
where we fixed the relative couplings by using the normalization freedom of the kinetic term, as well as the rescaling freedom \(Z\to\alpha Z\) and \(\tilde{Z}\to\tilde{Z}/\alpha\) that leaves the kinetic term invariant. There is only one free parameter, which we can take to be the overall normalization, and we find that it must be equal to unity in order to match the correct \(N^{(2)}\) contribution.
There are further mismatching monomials that we need to deal with. The \(N^{(2)}\) numerators contain problematic terms such as \(\varepsilon_{2}\cdot\varepsilon_{3}\,\varepsilon_{5}\cdot\varepsilon_{6}\, \varepsilon_{1}\cdot x_{2}\,\varepsilon_{4}\cdot(p_{6}-p_{5})\,x_{2}^{2}\), where no Lorentz indices are crossing the central half-ladder propagator, and also the mass dimensions are unbalanced. This suggests that we need to introduce a pair of scalar fields \(X\) and \(\tilde{X}\) of mass dimension two and zero respectively. Again we need to ensure that the new fields do not modify the five-point numerators. Consider the new interactions \(AA\tilde{X}\) and \(AX\tilde{Z}\) which contribute to the mentioned monomial through the diagram
\[\varepsilon_{2}\cdot\varepsilon_{3}\,\varepsilon_{5}\cdot\varepsilon_{6}\,\varepsilon_{1}\cdot x_{2}\,\varepsilon_{4}\cdot(p_{5}-p_{6})\,x_{2}^{2}\quad\longrightarrow\quad\text{[diagram]} \tag{4.22}\]
As mentioned, there are more derivatives on the left half of the diagram than the right half, thus the imbalance of the dimensions of the \(X\) and \(\tilde{X}\) fields.
Additionally, we add an \(A\tilde{X}\tilde{Z}\) interaction to address the offending term
\[\varepsilon_{1}\cdot\varepsilon_{2}\,\varepsilon_{5}\cdot\varepsilon_{6}\,\varepsilon_{3}\cdot(p_{1}-p_{2})\,\varepsilon_{4}\cdot(p_{6}-p_{5})\,x_{2}^{2}\quad\longrightarrow\quad\text{[diagram]} \tag{4.23}\]
We also need \(\tilde{X}\) to interact with the \(B\) field, to address offending terms of the form
\[\varepsilon_{2}\cdot\varepsilon_{3}\,\varepsilon_{5}\cdot\varepsilon_{6}\,\varepsilon_{1}\cdot x_{3}\,\varepsilon_{4}\cdot(p_{5}-p_{6})\,x_{2}^{2}\quad\longrightarrow\quad\text{[diagram]} \tag{4.24}\]
The pair of scalar fields enjoys the same rescaling freedom as the auxiliary vectors, which we use to fix the coefficient of the \(AA\tilde{X}\) interaction. The above three monomial structures are then enough to constrain all remaining coefficients of the needed interactions, which we find to be
\[\Delta\mathcal{L}_{6}\sim\mathrm{Tr}\left(X\Box\tilde{X}+[A^{\mu},\Box A_{\mu} ]\tilde{X}-[A^{\mu},X]\tilde{Z}_{\mu}+\frac{1}{2}[A^{\mu},\Box\tilde{X}]\tilde{ Z}_{\mu}-2[A_{\mu},B^{\mu\nu}]\partial_{\nu}\tilde{X}\right). \tag{4.25}\]
This completes the construction of interactions that contribute to the six-point half-ladder numerator.
As a side remark that we will come back to later, note that one can rearrange the flow of Lorentz indices in the half-ladder diagrams using conservation of momentum, and it turns out that it is not strictly necessary to introduce a pair of scalar fields. Indeed, in the next subsection we show that it is possible to formulate a completion of the \(\mathcal{L}_{5}\) Lagrangian in the NMHV sector using only the pair of vectors \(Z\) and \(\tilde{Z}\).
Finally, we need to address those diagrams that contribute to the non-half-ladder topology5 appearing at six points. Our Lagrangian does not yet produce contributions to these diagrams that match what is predicted from the \(N^{(2)}\) numerator. Assuming that no new fields are needed, the possible interactions are highly constrained by dimensional analysis and by the requirement that they must involve three auxiliary fields in order to not spoil the half-ladder numerators. Consider the following example of a missing term and corresponding diagram with a new \(B\tilde{Z}\tilde{Z}\) interaction:
Footnote 5: Sometimes called star or Mercedes topology.
\[s_{12}\,\varepsilon_{3}\cdot\varepsilon_{4}\,\varepsilon_{5}\cdot\varepsilon_{6}\,\varepsilon_{1}\cdot(p_{5}-p_{6})\,\varepsilon_{2}\cdot(p_{3}-p_{4})\quad\longrightarrow\quad\text{[diagram]}\,. \tag{4.26}\]
We find that a suitable interaction has the form
\[\mathrm{Tr}\Big{\{}[\tilde{Z}_{\mu},\tilde{Z}_{\nu}]\big{(}\beta B^{\mu\nu}+( 1-\beta)\Box\tilde{B}^{\mu\nu}\big{)}\Big{\}}\,, \tag{4.27}\]
but at this multiplicity we cannot yet fix the free parameter \(\beta\). There is one more needed interaction, which is rather simple and fully constrained at six points; it is
\[4\mathrm{Tr}\Big{\{}[B^{\mu\nu},\partial_{\mu}\tilde{B}_{\nu\rho}]\tilde{Z}^{ \rho}\Big{\}}\,. \tag{4.28}\]
This finishes the construction of the duality-satisfying six-point NMHV Lagrangian \(\mathcal{L}_{6}=\mathcal{L}_{5}+\Delta\mathcal{L}_{6}\), where all the new terms are assembled as
\[\Delta\mathcal{L}_{6} =\mathrm{Tr}\left(Z^{\mu}\Box\tilde{Z}_{\mu}+X\Box\tilde{X}+[ \partial_{\nu}A^{\mu},A_{\mu}]Z^{\nu}+[A^{\mu},\Box A_{\mu}]\tilde{X}+[A_{\mu },\tilde{Z}_{\nu}]\big{(}B^{\mu\nu}-\Box\tilde{B}^{\mu\nu}\big{)}\right.\] \[\quad-[A^{\mu},X]\tilde{Z}_{\mu}+\frac{1}{2}[A^{\mu},\Box\tilde{ X}]\tilde{Z}_{\mu}-2[A_{\mu},B^{\mu\nu}]\partial_{\nu}\tilde{X}+[\tilde{Z}_{ \mu},\tilde{Z}_{\nu}]\big{(}\beta B^{\mu\nu}+(1-\beta)\Box\tilde{B}^{\mu\nu} \big{)}\] \[\quad+4[B^{\mu\nu},\partial_{\mu}\tilde{B}_{\nu\rho}]\tilde{Z}^{ \rho}\Big{)}\,, \tag{4.29}\]
and, as already mentioned, the \(\beta\) parameter is not yet fixed.
Moving on to seven points, interactions of the form \(\partial AZ\tilde{Z}\) and \(\partial AX\tilde{X}\) can now partake in the half-ladder diagrams. These rather simple contributions are needed for generating correct half-ladder factors \(u_{i}\), which in hindsight is not surprising. Their coefficients in the Lagrangian can be fixed by computing the terms \(\varepsilon_{1}\cdot\varepsilon_{2}\varepsilon_{6}\cdot\varepsilon_{7} \varepsilon_{3}\cdot(p_{1}-p_{2})\varepsilon_{4}\cdot(p_{6}-p_{7})u_{5}x_{2}^ {2}\) and \(\varepsilon_{1}\cdot\varepsilon_{2}\varepsilon_{6}\cdot\varepsilon_{7} \varepsilon_{3}\cdot(p_{1}-p_{2})\varepsilon_{5}\cdot(p_{6}-p_{7})u_{4}x_{2}^ {2}\), and comparing the result to the predicted \(N^{(2)}\) numerator. Furthermore, a new interaction \(B\tilde{X}\tilde{Z}\) is required for the non-half-ladder graphs, and finally the unknown parameter \(\beta\) from \(\Delta\mathcal{L}_{6}\) is now constrained to \(\beta=1/2\).
Thus we conclude that the seven-point corrections to the duality-satisfying NMHV Lagrangian consists of the three terms
\[\Delta\mathcal{L}_{7}=\mathrm{Tr}\left(-\,2[A_{\mu},Z_{\nu}]\partial^{\mu} \tilde{Z}^{\nu}-2[A_{\mu},X]\partial^{\mu}\tilde{X}-2[B^{\mu\nu},\partial_{ \nu}\tilde{X}]\tilde{Z}_{\mu}\right). \tag{112}\]
No further corrections are needed at higher multiplicity. We explicitly computed and checked the properties of color-kinematics duality and gauge invariance for all NMHV numerators and amplitudes through ten points, which worked flawlessly.
Based on the robust patterns observed up to multiplicity ten, we conjecture that the following assembled NMHV Lagrangian computes all BCJ numerators and NMHV amplitudes to any multiplicity at tree level:
\[\mathcal{L}=\mathcal{L}_{5} +\mathrm{Tr}\left(Z^{\mu}\Box\tilde{Z}_{\mu}+X\Box\tilde{X}+[ \partial_{\nu}A^{\mu},A_{\mu}]Z^{\nu}+[A^{\mu},\Box A_{\mu}]\tilde{X}-2[A_{\mu },Z_{\nu}]\partial^{\mu}\tilde{Z}^{\nu}\right.\] \[\left.-\,2[A_{\mu},X]\partial^{\mu}\tilde{X}+[A_{\mu},\tilde{Z}_{ \nu}]\big{(}B^{\mu\nu}-\Box\tilde{B}^{\mu\nu}\big{)}-[A^{\mu},X]\tilde{Z}_{ \mu}+\frac{1}{2}[A^{\mu},\Box\tilde{X}]\tilde{Z}_{\mu}\right.\] \[\left.-\,2[A_{\mu},B^{\mu\nu}]\partial_{\nu}\tilde{X}+\frac{1}{2} [\tilde{Z}_{\mu},\tilde{Z}_{\nu}]\big{(}B^{\mu\nu}+\Box\tilde{B}^{\mu\nu} \big{)}+4[B^{\mu\nu},\partial_{\mu}\tilde{B}_{\nu\rho}]\tilde{Z}^{\rho}\right.\] \[\left.-\,2[B^{\mu\nu},\partial_{\nu}\tilde{X}]\tilde{Z}_{\mu} \right). \tag{113}\]
This is the simplest Lagrangian that we have found in this paper. A natural question to ask next is: how unique is it?
### How unique is the NMHV Lagrangian?
The way in which we obtained the NMHV Lagrangian (113) does not give strong evidence for its uniqueness. To check how unique it really is, we constructed a more general ansatz that is vastly larger in complexity compared to the above construction. Again we constrained the results by checking color-kinematics duality and gauge invariance, but we were more meticulous in keeping track of our assumptions. This is useful for future work where these assumptions can be further relaxed.
Our enlarged ansatz for the NMHV Lagrangian is subject to the following assumptions:
1. The simple Lagrangian \(\mathcal{L}_{5}\) (109) is still assumed to give the bi-scalar numerators.
2. No additional fields beyond those in the previous sections are used. That is, only the tensors \(B\), \(\tilde{B}\), vectors \(Z\), \(\tilde{Z}\) and scalars \(X\), \(\tilde{X}\) appear as auxiliary fields.
3. Kinetic terms do not mix fields: \(\mathcal{L}_{2}=\mathrm{Tr}\left(\frac{1}{2}A^{\mu}\Box A_{\mu}+B^{\mu\nu}\Box \tilde{B}_{\mu\nu}+Z^{\mu}\Box\tilde{Z}_{\mu}+X\Box\tilde{X}\right)\).
4. To preserve the four-point numerator, a pair of external \(A\)'s can either source \(Z\) or \(\tilde{Z}\). As before, we choose to exclude \(AA\tilde{Z}\) interactions.
5. Two-derivative interactions always appear as a d'Alembertian \(\square\). This makes it manifest that the interaction contributes at most to polarization-power two.
6. We make an _ad hoc_ simplifying choice to exclude interactions \(AZZ\) and \(A\tilde{Z}\tilde{Z}\), in order to make the ansatz space more manageable.
After taking care of the rescaling freedom of the auxiliary fields, the Lagrangian ansatz we obtain with the above constraints has 174 free parameters. We constrain the numerators generated by the ansatz up to eight points by comparison with the predicted numerators \(N^{(2)}\), as obtained from linear combinations of the bi-scalar numerators coming from \(\mathcal{L}_{5}\). Note that these constraints are non-linear equations in the free parameters of the Lagrangian ansatz; hence the equation system is non-trivial to deal with.
With the constraints imposed up to eight points, only a subset of all parameters is fixed, and we are left with 129 free coefficients. At nine points, we find that all numerator topologies generated by the Lagrangian ansatz are independent of the leftover coefficients. It is difficult to go to higher points due to the non-linearities, and we stop trying to find further constraints. Instead, we now seek solutions that minimize the number of terms in the Lagrangian. We find four solutions which all give 13-term Lagrangians. Provided that \(\partial\cdot A=0\), all solutions coincide (up to total derivatives) to give the same Lagrangian (4.31) we found in the previous section. Thus it seems reasonable to think that this is the simplest NMHV Lagrangian, given the above list of assumptions.
For example, another simple solution gives a 14-term Lagrangian. It is reachable by deforming the 13-term Lagrangian \(\mathcal{L}\) (4.31) by the following interactions:
\[\mathcal{L}-\mathcal{L}_{\text{14-term}}=\text{Tr}\left([A^{\mu},\square A_{ \mu}]\tilde{X}+4[A_{\mu},\partial_{\nu}B^{\mu\nu}]\tilde{X}+[A^{\mu},\tilde{X}] \square\tilde{Z}_{\mu}\right). \tag{4.32}\]
A more interesting scenario would arise if we managed to find a solution that makes use of fewer auxiliary fields. We find no solutions that discard the vectors \(Z\) and \(\tilde{Z}\) but, interestingly, the scalars \(X\) and \(\tilde{X}\) do not appear to be strictly necessary for all solutions. Consider replacing the above _ad hoc_ assumption 6 by the following new assumption:
* Exclude the scalar fields \(X\), \(\tilde{X}\), but now include all interactions \(AZZ\) and \(A\tilde{Z}\tilde{Z}\).
We start with a 95-parameter ansatz and constrain it by comparing to the predicted \(N^{(2)}\) numerators up to eight points. At nine points, there is a single non-half-ladder graph depending on one free parameter. To fix this parameter, we use the unique half-ladder graphs generated by our ansatz and, through Jacobi identities, construct the graph containing the free parameter. At this stage, we have 50 free coefficients. Assuming they do not enter the numerators at higher multiplicity, we look for the smallest possible Lagrangian and find six solutions with 20 terms each. To select one of the solutions, we will assume that
\(\partial\cdot A=0\). The solution is not particularly simple but we report it below for completeness:
\[\begin{split}\mathcal{L}^{\prime}=\mathcal{L}_{5}&+ \operatorname{Tr}\left(Z^{\mu}\Box\tilde{Z}_{\mu}-[A^{\mu},\partial_{\nu}A_{ \mu}]Z^{\nu}+[A^{\mu},Z^{\nu}]\partial_{\nu}Z_{\mu}-\frac{3}{4}[A^{\mu},Z_{\mu }]\partial\cdot Z-[A^{\mu},\tilde{Z}_{\mu}]\partial\cdot\tilde{Z}\\ &-2[A^{\mu},Z^{\nu}]\partial_{\mu}\tilde{Z}_{\nu}+\frac{1}{2}[A^{ \mu},\partial\cdot Z]\tilde{Z}_{\mu}-2[A^{\mu},\partial_{\nu}Z_{\mu}]\tilde{Z}^ {\nu}-\frac{3}{2}[A^{\mu},Z_{\mu}]\partial\cdot\tilde{Z}\\ &-\frac{3}{2}[A^{\mu},B_{\mu\nu}]Z^{\nu}-[A^{\mu},B_{\mu\nu}] \tilde{Z}^{\nu}-\frac{1}{2}[A^{\mu},\Box\tilde{B}_{\mu\nu}]Z^{\nu}+[A^{\mu}, \Box\tilde{B}_{\mu\nu}]\tilde{Z}^{\nu}\\ &+\frac{9}{8}[B^{\mu\nu},Z_{\mu}]Z_{\nu}+\frac{3}{2}[B^{\mu\nu},Z _{\mu}]\tilde{Z}_{\nu}+\frac{1}{2}[B^{\mu\nu},\tilde{Z}_{\mu}]\tilde{Z}_{\nu} +\frac{1}{8}[\Box\tilde{B}^{\mu\nu},Z_{\mu}]Z_{\nu}\\ &-\frac{1}{2}[\Box\tilde{B}^{\mu\nu},Z_{\mu}]\tilde{Z}_{\nu}+ \frac{1}{2}[\Box\tilde{B}^{\mu\nu},\tilde{Z}_{\mu}]\tilde{Z}_{\nu}-2[B^{\mu\nu },\partial_{\mu}\tilde{B}_{\nu\rho}]Z^{\rho}+4[B^{\mu\nu},\partial_{\mu} \tilde{B}_{\nu\rho}]\tilde{Z}^{\rho}\right).\end{split} \tag{103}\]
It would be interesting to more broadly explore the ansatz space of NMHV Lagrangians, with the above six assumptions further relaxed, but we leave it for future work.
### Comments on the N\({}^{2}\)MHV sector
The complete six-point BCJ numerator must also contain terms of polarization power three, which are needed for the N\({}^{2}\)MHV amplitudes. Unfortunately, the Lagrangian ansatz space we have considered does not permit solutions which correctly reproduce this sector. This is not very surprising, since, as a minimal extension of our ansatze, we would need to introduce a three-form auxiliary field (or even rank-three tensors of mixed symmetries). This is clear from considering a six-point BCJ numerator, which necessarily [207] contains terms of a form that requires such higher-rank fields.
### Polarization power zero at one loop
To obtain the one-loop polarization-power zero numerators (valid for all-plus and one-minus helicity YM sectors), we glue legs \(1\) and \(n\) of the tree-level half-ladder numerators, with appropriate contributions from Faddeev-Popov ghosts \(c,\bar{c}\) to remove unphysical degrees of freedom,
\[N_{\text{1-loop}}^{(0)}(1,2,\ldots,n)=N^{(1)}(\ell,1,2,\ldots,n,-\ell)+N(c,1,2, \ldots,n,\bar{c})+N(\bar{c},1,2,\ldots,n,c)\,. \tag{116}\]
It should be understood that the states labeled by the loop momentum \(\ell\) and \(-\ell\) are contracted, and the two ghost numerators come with a minus sign after the sewing of the states since the \(c\) and \(\bar{c}\) fields are fermionic. As we have shown in section 3, the Feynman rules for the bi-scalar sector automatically obey the color-kinematics duality so long as the external vector fields are transverse. This assumption holds for external states as well as subdiagrams that are of polarization power zero, just as argued below eq. (114).
All we have to do is add the ghosts in such a way that they respect the color-kinematics duality, and this is achieved by the standard Faddeev-Popov ghost Lagrangian
\[\mathcal{L}_{\text{ghost}}^{(0)}\,=\,\text{Tr}\left(c\Box\bar{c}+A^{\mu}c \partial_{\mu}\bar{c}\right). \tag{117}\]
The interaction term here is identical to the effective interactions of the bi-scalar sector so color-kinematics duality works out automatically when external states are transverse. Since the diagrams that contribute here come from standard YM Feynman rules, the auxiliary fields are not yet present.
Let us give the duality-satisfying \(n\)-gon master numerators that are valid for the all-plus and one-minus helicity one-loop amplitudes (see also refs. [68; 91]),
\[N_{\text{1-loop}}^{(0)}(1,\ldots,n)=\text{Tr}(W_{1}\cdots W_{n})+\text{Tr}( \widetilde{W}_{1}\cdots\widetilde{W}_{n})-(2+D)U_{1,n}\,, \tag{118}\]
where \(D=\text{Tr}(1)\) is the dimension, \(U_{1,n}\) is defined in eq. (18) and the matrices are
\[(W_{i})^{\mu\nu}=u_{i}\eta^{\mu\nu}+2\varepsilon_{i}^{\mu}p_{i}^{\nu}\,,\ \ \ \ ( \widetilde{W}_{i})^{\mu\nu}=u_{i}\eta^{\mu\nu}-2p_{i}^{\mu}\varepsilon_{i}^{ \nu}\,. \tag{119}\]
The region momentum appearing in \(u_{i}=2\varepsilon_{i}\!\cdot\!x_{i}\) is here defined to include the loop momentum, \(x_{i}=\ell+\sum_{j=1}^{i}p_{j}\). The above numerators can be checked to give the correct maximal [91] and non-maximal cuts, since by construction they make use of the standard YM three-point vertex and there are no contact terms present in the amplitudes.
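As a small numerical sketch (ours; the metric signature and function names are our choices, while the definitions follow eqs. (118)-(119) and the region momenta stated above), the master numerator can be coded as:

```python
import numpy as np

def minkowski_metric(dim):
    """Mostly-minus metric diag(+, -, ..., -)."""
    return np.diag([1.0] + [-1.0] * (dim - 1))

def one_loop_allplus_numerator(eps, p, loop_momentum):
    """N^(0)_{1-loop}(1,...,n) = Tr(W_1...W_n) + Tr(Wt_1...Wt_n) - (2 + D) U_{1,n},
    cf. eqs. (118)-(119), with x_i = l + p_1 + ... + p_i and u_i = 2 eps_i . x_i."""
    eps = [np.asarray(e, float) for e in eps]
    p = [np.asarray(q, float) for q in p]
    dim, n = len(p[0]), len(p)
    g = minkowski_metric(dim)
    # region momenta including the loop momentum
    x, running = [], np.asarray(loop_momentum, float)
    for q in p:
        running = running + q
        x.append(running.copy())
    u = [2.0 * (eps[i] @ g @ x[i]) for i in range(n)]
    # mixed-index matrices W_i^mu_nu and tilde-W_i^mu_nu (eq. (119))
    W  = [u[i] * np.eye(dim) + 2.0 * np.outer(eps[i], g @ p[i]) for i in range(n)]
    Wt = [u[i] * np.eye(dim) - 2.0 * np.outer(p[i], g @ eps[i]) for i in range(n)]
    prodW, prodWt = np.eye(dim), np.eye(dim)
    for i in range(n):
        prodW, prodWt = prodW @ W[i], prodWt @ Wt[i]
    return np.trace(prodW) + np.trace(prodWt) - (2 + dim) * float(np.prod(u))
```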
### Comment on one-loop MHV numerators
As already emphasized, tree-level all-multiplicity color-kinematics duality is in general insufficient to infer that the loop-level duality holds. Indeed, we find obstructions in realizing one-loop BCJ numerators in the MHV sector from the Feynman rules derived in previous sections. At polarization-power one, in particular, the numerators receive contributions from the new fields we added to the Lagrangian. As before, we have contributions from the tree-level polarization-power one sector after gluing half-ladder numerators,
\[N_{\text{1-loop}}^{(1)}(1,2,\ldots,n)\supset N^{(1)}(\ell,1,2,\ldots,n,-\ell )\,, \tag{120}\]
but now we must add the glued half-ladder numerator at polarization power two as well,
\[N^{(1)}_{\text{1-loop}}(1,2,\ldots,n)\supset N^{(2)}(\ell,1,2,\ldots,n,-\ell)\,. \tag{109}\]
In addition, to not break crossing symmetry we should allow for the \(B\), \(Z\) and \(X\) fields to cross the sewn loop line. This makes it possible for the auxiliary fields to propagate all the way around the loop, which is not expected to give reasonable contributions. Thus, we may attempt to project out certain contributions by adding new ghosts for the auxiliary fields. The contributions we need to remove are very similar to the bi-scalar sector contributions (as already noted in the previous section), hence we add new ghost Lagrangian terms corresponding to the \(B,\tilde{B}\) fields,
\[\mathcal{L}^{(1)}_{\text{ghost}} = \operatorname{Tr}\left(c\Box\bar{c}+b^{\mu\nu}\Box\bar{b}_{\mu\nu}+A^{\mu}c\partial_{\mu}\bar{c}+4\partial_{\nu}\bar{b}_{\mu\rho}[A^{\mu},b^{\nu\rho}]-2\partial_{\rho}\bar{b}_{\mu\nu}[A^{\rho},b^{\mu\nu}]\right). \tag{110}\]
As usual, the ghost fields cannot be sourced and so they only contribute through propagating in a complete closed loop.
We performed a few crude tests of the one-loop numerators as obtained from our Lagrangians, and found that for the 13-term Lagrangian in eq. (108) already the three-point one-loop numerators are not well behaved, meaning they do not give gauge-invariant unitarity cuts. For the 14-term Lagrangian obtained by the deformation (109), the maximal cuts work as tested up to four points, but the next-to-maximal cuts do not. For example, the following contribution to the box diagram appears to not be correct:
\[\text{[box-diagram contribution]} \tag{111}\]
Specifically, the term that spoils gauge invariance is proportional to \(\varepsilon_{1}\cdot\varepsilon_{2}\). It is clear that the ghost Lagrangian (110) does not remove this diagram, nor is it removed by introducing further obvious vector-ghost interactions inspired by the fields already present. While it should be possible to introduce fine-tuned ghosts and interactions that precisely cancel this diagram, it is a non-trivial task to ensure the new terms are consistent with color-kinematics duality and gauge invariance for all higher-multiplicity MHV numerators. We leave the problem of formulating duality-satisfying one-loop-compatible Lagrangians to future work.
## 6 Conclusions
In this paper, we considered the problem of constructing Lagrangians that manifest color-kinematics duality for YM theory. Such explicit Lagrangians can be used to compute BCJ numerators, as well as give non-trivial clues to the mathematical structure underlying color-kinematics duality. While duality-satisfying tree-level numerators are known to any multiplicity, finding corresponding Lagrangian descriptions appears to be more challenging.
The problem simplifies by restricting to helicity sectors of YM, and in this paper we fully address the NMHV sector.
As a first step, we found a simple Lagrangian (4.9) that is fully equivalent to the standard YM Lagrangian at tree level, and it computes BCJ numerators in the bi-scalar subsector of the NMHV sector. This Lagrangian was constructed by first resolving the four-gluon contact term using a pair of auxiliary two-form fields, and subsequently deforming with new cubic interactions involving these auxiliary fields. The bi-scalar numerators can be computed to any multiplicity, and they provide complete information via eq. (2.29) to obtain all tree-level NMHV numerators. These provide a clear target for what a complete NMHV Lagrangian should reproduce.
Next we searched for a complete NMHV Lagrangian, and we found that by introducing at most two additional pairs of auxiliary fields, of vector and scalar type, there are several solutions for such Lagrangians. Using a larger ansatz, we found that solutions also exist if the scalar is removed at the cost of additional interactions between the remaining fields. Because of the large freedom of the Lagrangians, we chose to present the simplest solutions, given in eqs. (4.31) and (4.33), which were explicitly tested through ten points and conjectured to work to all multiplicities at tree level. With our current limited Lagrangian ansatz space, we cannot obtain the N\({}^{2}\)MHV and higher-sector contributions to the BCJ numerators. Nor are the presented duality-satisfying NMHV Lagrangians equivalent to YM in the N\({}^{2}\)MHV sector and beyond, unlike the simple Lagrangian (4.9) first found. It would be desirable to revisit this problem in the future, and repeat the ansatz construction of duality-satisfying NMHV Lagrangians while maintaining gauge covariance at intermediate steps, such that the N\({}^{2}\)MHV and higher sectors are not spoiled, even if they might not fully enjoy color-kinematics duality.
We briefly discussed the need for higher-rank tensor auxiliary fields to reproduce N\({}^{2}\)MHV-sector BCJ numerators. It is clear that in a covariant (\(D\)-dimensional) and local formalism it is unavoidable to encounter, at the very minimum, three-form fields in a duality-satisfying N\({}^{2}\)MHV Lagrangian. However, the minimal set of needed auxiliary fields in the N\({}^{2}\)MHV sector is something that needs further study. Currently the main challenge in brute-force constructions of duality-satisfying Lagrangians is to predict the number and types of needed auxiliary fields. Clearly it would be desirable to better understand the general structure and need of such fields so that more refined Lagrangian ansatze can be constructed. Small, finely tuned ansatze would speed up progress, because the equation systems encountered are non-linear in the ansatz parameters, and the solutions contain large redundancy, making them difficult to analyse.
Even in the NMHV sector there are more questions to be answered. Can one remove some of the assumptions that went into our constructions and perhaps obtain much simpler Lagrangians? For example, hints from Chern-Simons-type Lagrangians [40; 222] suggest that it should be beneficial to look for more intricate kinetic terms than the diagonal ones used in this paper. Furthermore, gauge covariance plays no role in the current construction, and this is likely an oversight that should be addressed in more refined attempts. While our NMHV Lagrangians fit on a few lines, it is fair to say that their complexity is likely artificially high compared to more optimal duality-satisfying rewritings of the YM
Lagrangian that might be found in the future. Nevertheless, we have taken critical steps in this research program by finding the first examples of NMHV Lagrangians that: 1) use very few auxiliary fields, 2) have a very simple structure in the bi-scalar subsector, and 3) give local all-multiplicity BCJ numerators.
###### Acknowledgments.
We thank Zvi Bern, Lucile Cangemi, Gang Chen, Paolo Pichini, Oliver Schlotterer, Fei Teng, Tianheng Wang and Maxim Zabzine for enlightening discussions related to this work. This research was supported in part by the Knut and Alice Wallenberg Foundation under grants KAW 2018.0116 (_From Scattering Amplitudes to Gravitational Waves_) and KAW 2018.0162 (_Exploring a Web of Gravitational Theories through Gauge-Theory Methods_), as well as the Ragnar Soderberg Foundation (Swedish Foundations' Starting Grant).
|
2309.12001 | Exploring Human's Gender Perception and Bias toward Non-Humanoid Robots | In this study, we investigate the human perception of gender and bias toward
non-humanoid robots. As robots increasingly integrate into various sectors
beyond industry, it is essential to understand how humans engage with
non-humanoid robotic forms. This research focuses on the role of
anthropomorphic cues, including gender signals, in influencing human robot
interaction and user acceptance of non-humanoid robots. Through three surveys,
we analyze how design elements such as physical appearance, voice modulation,
and behavioral attributes affect gender perception and task suitability. Our
findings demonstrate that even non-humanoid robots like Spot, Mini-Cheetah, and
drones are subject to gender attribution based on anthropomorphic features,
affecting their perceived roles and operational trustworthiness. The results
underscore the importance of balancing design elements to optimize both
functional efficiency and user relatability, particularly in critical contexts. | Mahya Ramezani, Jose Luis Sanchez-Lopez | 2023-09-21T12:16:32Z | http://arxiv.org/abs/2309.12001v4 | To The Effects of Anthropomorphic Cues on Human Perception of Non-Humanoid Robots: The Role of Gender*
###### Abstract
As non-humanoid robots increasingly permeate various sectors, understanding their design implications for human acceptance becomes paramount. Despite their ubiquity, studies on how to optimize their design for better human interaction are sparse. Our investigation, conducted through two comprehensive surveys, addresses this gap. The first survey delineated correlations between robot behavioral and physical attributes, perceived occupation suitability, and gender attributions, suggesting that both design and perceived gender significantly influence acceptance. Survey 2 delved into the effects of varying gender cues on robot designs and their consequent impacts on human-robot interactions. Our findings highlighted that distinct gender cues can bolster or impede interaction comfort.
## I Introduction
In an era characterized by rapid technological evolution, robots are increasingly permeating diverse aspects of human life, transcending traditional roles in industrial settings to leave indelible marks on healthcare, search and rescue, environmental monitoring, and even social companionship [1].
Non-humanoid robots have demonstrated exceptional capabilities in specialized tasks, often surpassing what humanoid robots can achieve [2]. For instance, drones are indispensable for aerial mapping and surveillance, while creature-like robots can navigate rough terrains, showcasing potential applications ranging from geological surveys to agricultural practices [3]. Yet, these robots present unique challenges and opportunities in Human-Robot Interaction (HRI), making it crucial to explore how humans perceive and interact with these specialized entities [4].
Recent advancements in cognitive psychology suggest that humans are naturally inclined to anthropomorphize objects, attributing human characteristics to non-human entities [5]. This psychological tendency has significant implications for the design and deployment of non-humanoid robots, particularly in how they are gendered and subsequently perceived [6].
Our research targets an overlooked yet pivotal aspect of HRI: the attribution and perception of gender in non-humanoid robots. Despite extensive literature on humanoid robots, a glaring gap exists in understanding how gender plays a role in the design and interaction with non-humanoid robots [7]. We aim to fill this research void by scrutinizing how human perceptions, stereotypes, and expectations converge in shaping non-humanoid robot design and utility across diverse sectors [8].
One groundbreaking objective of this study is to explore the extent to which humans attribute gender to non-humanoid robots. Previous studies affirm that familiarity, including gender attributes, influences human comfort and robot interaction [9]. However, the inherent design of non-humanoid robots often lacks explicit human features, complicating the process of gender attribution [10].
Given this, we pose several research questions: How do varying degrees of gender cues in non-humanoid robots affect the dynamics of HRI? Could interactions be more effective, trustworthy, or relatable when specific gender cues are emphasized? Could such cues improve or impair task-specific performance, and how can this knowledge inform robot design [11]? We will conduct exhaustive surveys and experiments to probe these questions, contributing empirical data to a field dominated by theoretical discourse. Our methodological approach promises not only to deepen our understanding of HRI but also to direct future robot design and programming [12].
The real-world applicability of this research cannot be overstated. As robots increasingly share our workspaces, public spaces, and even homes, understanding the subtleties of human-robot interaction is crucial for societal acceptance and ethical considerations [13]. Our study aims to help evaluate non-humanoid robot design, challenging existing norms about gender and thereby cultivating more efficient human-robot collaborations [14].
## II Survey 1: Examination of Gender Attribution in Non-Humanoid Robots
This study examines how people perceive non-humanoid robots, particularly in terms of attributing human traits and gender characteristics to them. We want to know whether people think of these robots as having human qualities and whether they see them as male or female. Gender cues will be systematically manipulated utilizing visual and behavioral characteristics, including size, color, and design elements traditionally associated with masculinity or femininity.
Upon observation of each robot, participants will be required to articulate their perceptions of its gender, classifying it as either more masculine, more feminine, or gender neutral. We propose the following hypotheses:
_Hypothesis 1_: Participants will tend to perceive the Spot robot as more masculine than the Mini-Cheetah. _Hypothesis 2_: Gender attributions for the robots can be extrapolated from attribute ratings. _Hypothesis 3_: There will be an inclination among participants to allocate more male-typed occupations to Spot while potentially designating some feminine-typed occupations to Mini-Cheetah.
### _Participants_
A survey involving 150 participants comprised 80 males (53.3%) and 70 females (46.7%). The participants were diverse in terms of race, with representation from Asian (10%), Middle Eastern (19.3%), European (32%), African (6.7%), and Latino (8%) backgrounds.
Participants' ages ranged from 18 to 60 years, with a mean of 32 (SD = 4.53). Before the study, participants were asked about their fluency in English, as the survey questions were presented in English.
### _Survey Instrument_
The study employed online methods to gather data on participants' perceptions of gender stereotypes in non-humanoid robots. For the online questionnaire, participants accessed a web-based form where they were provided with a video showcasing the Spot and Mini-Cheetah robots. The video included demonstrations of their capabilities, and participants were also shown photographs of the robots.
In addition to the online questionnaire, a subset of participants had the opportunity to interact with the Spot and Mini-Cheetah robots in person for a pilot study. These in-person sessions allowed participants to have hands-on experience with the robots and engage in direct interactions.
The study utilized different robot stimuli, including the Spot and Mini-Cheetah robots and a standard commercial drone (a DJI model). The corresponding photos of these robots are displayed in Figure 1.
### _Pilot study_
Before the main investigation, an initial pilot study was conducted at the University of Luxembourg. This preliminary study was conducted in person and involved participants within the age range of 25 to 55 years. During this preliminary survey, participants were asked to express the gender they attributed to each robot. The gender perception was measured on a scale from 1 to 5, where 1 signified a stronger feminine perception, 5 a stronger masculine perception, and 3 indicated a neutral perception. The primary objective of this pilot study was to determine whether it was feasible to gauge human perceptions of robot gender directly.
As anticipated, the Spot robot received a higher masculinity score, with a mean of 3.72 (Standard Deviation, \(SD=0.8\)), while the Mini-Cheetah robot was perceived as more feminine, with a mean score of 3.08 (\(SD=1.7\)).
Results show that individuals tend to assign gender to robots, even when the robots are non-humanoid. This tendency might suggest that people look for similarities or familiar traits in the robots.
Furthermore, to ensure the appropriateness of our study's chosen attributes and occupations, we asked participants to rate each adjective based on its typical association with either males or females. The results indicated low standard deviations, suggesting a high degree of agreement among participants. In addition, the mean ratings of the participants generally aligned with societal norms and accepted gender stereotypes.
Furthermore, the results indicated that many participants were reluctant to respond directly to the question, often answering in a humorous or dismissive manner.
To mitigate this issue, we took inspiration from prior research by Eyssel et al. [15] and adopted an indirect approach to investigate how individuals attribute gender to non-humanoid robots, particularly non-creature-like robots such as drones. Directly assigning gender to drones can be challenging for individuals to comprehend or envision.
### _Survey Detail_
The initial question of the study prompted participants to assign a name to each robot. This provided insight into how participants perceived the robots, whether as living creatures, human-like entities, or mechanical objects. This approach to naming was inspired by [16], underscoring its efficacy in probing how individuals perceive and designate non-human entities.
To analyze the assigned names, we drew on the classification categories outlined in [17]. The names were sorted into two primary groups: anthropomorphic and non-anthropomorphic. Within the anthropomorphic category, names were further subdivided into male, female, or both-gender-associated names. The non-anthropomorphic category comprised three subcategories: animal-kind, machine-kind, and things-kind. These subcategories were further dissected into male, female, and neutral classifications for both animal-kind and machine-kind.
The analysis entailed utilizing dictionaries and engaging five independent raters to evaluate the names. The evaluators rated each name, and the results were aggregated to determine common usage and associations. This systematic approach allowed for an unbiased and comprehensive understanding of the naming patterns. Furthermore, an examination was conducted to discern if any names bore biases shaped by media sources.
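Purely as an illustration of the procedure (the taxonomy labels follow the description above; the majority-vote rule, the example name judgments, and all identifiers are hypothetical and ours), the five raters' labels per name could be aggregated as follows:

```python
from collections import Counter

def aggregate_name_label(rater_labels):
    """Majority vote over the raters' (category, kind, gender) labels for one name;
    ties are returned as None so they can be resolved by discussion."""
    counts = Counter(rater_labels)
    label, votes = counts.most_common(1)[0]
    return label if votes > len(rater_labels) // 2 else None

# Hypothetical example: five raters classify a name given to Spot.
ratings = [
    ("anthropomorphic", None, "male"),
    ("anthropomorphic", None, "male"),
    ("non-anthropomorphic", "machine-kind", "neutral"),
    ("anthropomorphic", None, "male"),
    ("anthropomorphic", None, "male"),
]
print(aggregate_name_label(ratings))   # ('anthropomorphic', None, 'male')
```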
Participants were then asked to rate specific attributes associated with non-humanoid robots. To accomplish this, we carefully selected 20 adjectives from existing studies [10], encompassing traits conventionally associated with male and female gender attributes. The chosen adjectives included ten behavioral characteristics and ten physical attributes. Participants were asked to assign a rating to each adjective using a scale of 1 to 5. Table I lists the selected adjectives for our study.

Figure 1: A selection of robotic stimuli used in Survey 1. From left to right: Spot, Mini-Cheetah, and DJI UAV
Next, participants were asked to evaluate the suitability of 10 distinct occupations for the robots under consideration. The selection of occupations was based on the gender categories outlined in the framework proposed by Stroessner and Benitez [11]. Participants were instructed to assign a rating on a scale of 1 to 5 for each occupation, where higher values denoted a stronger perceived alignment between the robot and the given occupation. Table II gives the traditionally male and female occupations.
Furthermore, to comprehensively investigate participants' perceptions of the gender associations and biases related to non-humanoid robots, we implemented a ranking system for the selected occupations associated with each robot. Participants were instructed to assign a rank to each occupation. The occupations included male-associated, female-associated, and traditionally neutral occupations, such as security guard, healthcare assistant, and food server.
Finally, participants were directly asked to indicate their perception of the gender of the robots. They were instructed to provide a rating on a scale of 1 to 5, where 1 represented a perception of the robot as more feminine, 5 as more masculine, and 3 as gender-neutral. In addition to the gender perception questions, participants were also asked to provide personal information, including their age, race, and level of education.
### _Results_
The Spot robot, characterized by more masculine attributes, was consistently perceived as more masculine by the participants (\(M=3.95\), \(SD=1.2\)). Conversely, the Mini-Cheetah robot, with more feminine attributes, was perceived as more feminine than Spot, though closer to neutral (\(M=3.1\), \(SD=1.5\)). The UAV received moderate gender-perception scores (\(M=2.89\), \(SD=1.9\)), close to neutral but leaning feminine, with high variance.
Results show that, for Spot, most of the anthropomorphic names were male (68.1%), followed by gender-neutral (21.8%) and female (10.1%) names. In the non-anthropomorphic category, 27.9% were machine-kind names, 55.3% were animal-kind, and 17.8% were things-kind. In addition to the anthropomorphic and non-anthropomorphic classifications, an analysis was conducted to discern whether the names bore influences from media sources: 34% of the names were found to be media-inspired, while the remaining 66% were not influenced by media.
For Mini-Cheetah, neutral anthropomorphic names were the most frequent, comprising 30% of the 150 names. Male-associated names followed at 20%, and female-associated names at 15%.
Non-anthropomorphic names comprised 35% of the total, with machine-like, animal-like, and object-like contributing 10%, 15%, and 10%, respectively. 23 names (or 15% of the total names) were found to be inspired by media sources such as popular culture, movies, and literature.
Fig. 2 shows the mean scores of the different attributes of the robots. The results showed that Spot received significantly higher ratings for male attributes than for female ones (male attributes: \(SD=1.23\); female attributes: \(SD=0.85\)). Mini-Cheetah displayed male attributes, albeit at lower levels than Spot, and female attributes at higher levels than Spot (male attributes: \(SD=1.61\); female attributes: \(SD=0.74\)). The UAV received lower ratings for both male (\(SD=0.44\)) and female (\(SD=0.37\)) attributes. The results are shown in Table III.
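For illustration only (the column names and example rows below are hypothetical placeholders for the adjectives of Table I and the raw responses), the per-robot attribute summaries can be obtained with a standard group-by:

```python
import pandas as pd

# Hypothetical long-format responses: one row per (participant, robot, adjective) rating.
df = pd.DataFrame({
    "robot":          ["Spot", "Spot", "Spot", "Spot",
                       "Mini-Cheetah", "Mini-Cheetah", "Mini-Cheetah", "Mini-Cheetah"],
    "attribute_type": ["male", "male", "female", "female",
                       "male", "male", "female", "female"],  # male- vs female-typed adjectives
    "rating":         [5, 4, 2, 3, 3, 2, 4, 4],              # 1-5 Likert scale
})

# Mean and SD per robot and attribute type, as summarized in Table III.
summary = (df.groupby(["robot", "attribute_type"])["rating"]
             .agg(M="mean", SD="std")
             .round(2))
print(summary)
```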
Participants also rated the three robots (Spot, Mini-Cheetah, and DJI UAV) in roles typically associated with male and female occupations. Spot scored the highest in male-dominated roles, with a mean rating of 3.54, while Mini-Cheetah led in female-dominated roles, with a mean rating of 2.34. The DJI UAV had the lowest mean ratings in both categories, at 1.834 for male and 1.154 for female roles. These findings suggest that Spot is perceived as more aligned with male attributes, while none of the robots significantly resonated with attributes commonly associated with female roles, indicating potential areas for further research in Human-Robot Interaction (HRI). The results are shown in Fig. 3 and Table III.
Furthermore, participants' perceptions of the robots' gender attributes influenced their evaluations of the robots' suitability for different occupations. The association between the behavioral and physical attributes of the robots and specific job roles indicates that humans expect congruence between robot attributes and job requirements. This finding highlights the importance of developing robots with appropriate attributes and capabilities to enhance their acceptance and effectiveness in specific occupational contexts. The results are given in Table IV.
In this survey phase, we employed a ranking system for occupations to explore participants' gendered perceptions of the non-humanoid robots Spot and Mini-Cheetah. Spot was predominantly ranked higher for roles traditionally considered male-dominated, such as security guard, while Mini-Cheetah led in healthcare assistance, a role generally associated with female attributes. Fig. 4 shows the ranking score of each robot for the different occupations.
## III Survey 2: Examination of Different Levels of Gender Cues on Non-Humanoid Robots
This survey examines the impact of gender cues and anthropomorphism in non-humanoid robots on HRI. Specifically, the survey aims to understand how the attribution of gender-related attributes and human-like qualities to non-humanoid robots influences various aspects of HRI, including perceived efficiency in task performance, teammate selection, perception of robot gender, comfort level with the robot, politeness in interaction, and human expectations in HRI. The hypotheses are as follows.
_Hypothesis 1_: Attributing gender-related attributes to non-humanoid robots significantly influences participants' perceptions of their task efficiency and their likelihood of choosing the robot as a teammate over a human.
_Hypothesis 2_: The level of anthropomorphism in non-humanoid robots influences participants' perception of the robots' gender and their comfort level with the robot.
_Hypothesis 3_: The degree of gender symbolism in non-humanoid robots significantly affects participants' politeness in their interactions with the robots.
_Hypothesis 4_: The presence of anthropomorphic gender cues in non-humanoid robots significantly affects participants' expectations and preferences in human-robot interaction.
### _Participants_
We surveyed 120 University of Luxembourg participants aged 20-55 years. Among the participants, 56% identified as male and 44% as female, with a mean age of 32 and a standard deviation of 10.32. Given the diverse composition of the university's student body, the participants represented various racial backgrounds, including Asian (10%), Middle Eastern (27%), European (48%), Black or African American (5%), and Latino (6%). Regarding educational qualifications, we inquired about the participants' levels of education, revealing that 73% were either Ph.D. students or held higher degrees, 20% held master's degrees, and the remaining participants fell into other categories.
### _Survey Instrument_
In this survey, participants were randomly assigned to one of the four categories we designed. Upon accessing the survey, participants were presented with three images related to the topic and watched a 20-second video featuring a robot introducing itself. Following this, participants were directed to complete the survey, which typically took approximately 5 minutes.
In this study, we utilized AI technology to modify the design of Spot. We instructed the model to generate diverse versions of Spot, including more feminine, masculine, and machine-like designs, as well as a dog-shaped variant. Participants were randomly assigned to one of these four categories. Fig. 5 shows the different designs of the Spot.
For each variant, a 20-second video was produced, wherein Spot introduced itself. The dialog used across all designs was, "I am Spot. I can assist you in various applications and possess numerous capabilities." The masculine design utilized a male voice, the feminine design employed a female voice, the dog-like design conveyed its message through barking accompanied by subtitles, and the machine-like design featured a neutral voice. All voiceovers were generated using Siri.
### _Pilot Study_
A preliminary study was conducted with a sample of 20 participants. The participants were asked to rate the perceived gender of various voice samples and Spot robot designs on a 5-point Likert scale, where 1 represented a more feminine perception and 5 a more masculine perception.
The neutral voice sample was perceived as slightly more feminine, with a mean rating of (\(M=3.3\), \(SD=1.3\)). The male voice sample sounds masculine (\(M=4.5\), \(SD=0.7\)), and the female voice sample sounds feminine (\(M=1.4\), \(SD=0.5\)).
Regarding Spot designs, the masculine design was perceived as the most masculine (\(M=4.5\), \(SD=0.4\)), followed by the machine-like design (\(M=3.9\), \(SD=1.4\)). The feminine design was perceived as less masculine (\(M=2.6\), \(SD=1.5\)), and the dog-shaped design received (\(M=3.2\), \(SD=0.8\)). These findings support the validity of the voice and design manipulations for examining gender attributions and anthropomorphic tendencies in non-humanoid robots.
### _Survey Detail_
To assess participants' perceptions of Spot's efficiency in performing various tasks, we asked them to evaluate the likelihood that Spot could complete specific jobs. We presented participants with 10 occupations listed in Table V. Participants were instructed to rate each job on a scale from 1 to 5, with 1 indicating a low likelihood and 5 indicating a high likelihood of Spot's success in that task.
In addition, to investigate the influence of gender cues on the selection of a robot as a teammate, we designed a scenario in which participants had to choose between a gendered humanoid robot teammate and Spot for a competition, in order to examine whether the presence of anthropomorphic gender cues in non-humanoid robots affects participants' expectations and preferences in HRI. We also asked participants to rate, on a scale from 1 to 5, how they perceived the robot's gender.
Furthermore, to assess the impact of gender cues on participants' comfort level with the Spot robot, we asked participants to rate their comfort in being around the robot for an extended period on a scale from 1 to 5.
Finally, we aimed to investigate whether different levels of gender cues in non-humanoid robots can influence participants' behavior towards the robots, specifically focusing on politeness. Participants were asked to rate their likelihood of exhibiting aggressive behavior towards the Spot robot if it made a mistake on a scale from 1 to 5.
### _Results_
A one-way ANOVA revealed a significant effect of gender cues on perceived efficiency in task performance [\(F(3,116)=7.32\), \(p=0.001\)]. Post hoc comparisons indicated that the Masculine Spot design, with an average suitability score of 3.22, was perceived as the most appropriate across all occupations. Conversely, the Machine-like Spot, with the lowest overall average, was deemed the least preferred for general occupations. Fig. 6 and Fig. 7 demonstrate the average suitability score for each design for different occupations.
Participants preferred the Masculine Spot design for the traditionally male scenario, reflecting the perception of masculine robots as more adept in challenging terrains. In the traditionally female scenario, the Feminine Spot design was slightly favored, reflecting societal norms associating caregiving roles with femininity. However, male and female humanoid robots received higher scores, suggesting a preference for humanoid assistance in medical contexts. In the culinary competition, societal biases linking cooking roles with femininity influenced preferences towards the Feminine Spot design and the Humanoid Female Robot. The average scores for the corresponding results are shown in Fig. 8.
There was a significant effect of gender cues on the perception of gender at the \(p<0.05\) level for the four conditions \([F(3,116)=9.14,\,p<0.001]\). Post hoc comparisons using the Tukey HSD test indicated that the mean score for the masculine Spot (\(M=4.6\), \(SD=0.6\)) was significantly different from the feminine Spot (\(M=4.02\), \(SD=1.3\)), the machine-like Spot (\(M=3.37\), \(SD=1.54\)), and the dog-like Spot (\(M=3.7\), \(SD=0.9\)), and that it was perceived as more masculine than feminine.
There was a significant effect of gender cues on comfort level at the \(p<0.05\) level for the four conditions \([F(3,116)=6.85,\,p=0.002]\). Post hoc comparisons using the Tukey HSD test indicated that the mean score for the feminine Spot (\(M=4.3\), \(SD=0.5\)) was significantly different from the masculine Spot (\(M=3.9\), \(SD=0.8\)), the machine-like Spot (\(M=2.8\), \(SD=0.9\)), and the dog-like Spot (\(M=2.9\), \(SD=1.0\)).
There was a significant effect of gender cues on politeness at the \(p<0.05\) level for the four conditions \([F(3,116)=4.32,\,p=0.006]\). Post hoc comparisons using the Tukey HSD test indicated that the mean score for the feminine Spot (\(M=2.3\), \(SD=0.8\)) was significantly different from the masculine Spot (\(M=2.7\), \(SD=0.9\)), the machine-like Spot (\(M=3.2\), \(SD=1.0\)), and the dog-like Spot (\(M=3.7\), \(SD=1.1\)).
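For readers who want to reproduce this style of analysis, the sketch below runs a one-way ANOVA followed by Tukey HSD post hoc comparisons on hypothetical 5-point Likert ratings; the group names, sample sizes, and simulated data are placeholders, not the study's data.

```python
import numpy as np
from scipy.stats import f_oneway
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(0)
# Hypothetical 1-5 Likert ratings, 30 participants per between-subjects condition
conditions = {
    "masculine": rng.integers(3, 6, 30),
    "feminine":  rng.integers(2, 6, 30),
    "machine":   rng.integers(1, 5, 30),
    "dog":       rng.integers(2, 6, 30),
}

# One-way ANOVA across the four groups, reported as F(df_between, df_within)
F, p = f_oneway(*conditions.values())
n_total = sum(len(v) for v in conditions.values())
print(f"F({len(conditions) - 1},{n_total - len(conditions)}) = {F:.2f}, p = {p:.4f}")

# Tukey HSD post hoc pairwise comparisons between conditions
ratings = np.concatenate(list(conditions.values()))
groups = np.repeat(list(conditions.keys()), [len(v) for v in conditions.values()])
print(pairwise_tukeyhsd(ratings, groups, alpha=0.05))
```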
## IV Discussion and conclusion
Surveys 1 and 2 consistently found that gender symbols in robots significantly influenced participants' perceptions and behaviors. Survey 1 found that people tend to anthropomorphize robots and attribute human-like emotions and intentions to them. Also, people tend to perceive non-humanoid robots as more masculine.
This study addressed the gender perception of non-humanoid robots and revealed the tendency of humans to attribute human characteristics to them. For example, the Spot robot is seen as primarily masculine, a view likely influenced by its design and behaviors reminiscent of masculine characteristics. In contrast, the Mini Cheetah was considered more neutral and less masculine than the Spot. Naming patterns further reinforced these perceptions, with participants often giving Spot masculine names. Significantly, the media influenced 34% of the names given to Spot, indicating an external bias. This emphasizes the central role of media in shaping perceptions and potentially enhancing human-robot interaction (HRI).
In addition, this research showed the relationship between the appearance and behavior of the robots and their suitability for specific jobs. It also showed that people tend to draw a connection between the features of robots and the roles attached to them.
Quantifying these attributes can offer more insights into how robots are perceived and can guide their design to align with specific societal roles or expectations. Furthermore, an intriguing observation from Survey 1 was the inclination of participants to assign animal names to robots. This suggests a strong association with physical attributes, reiterating the significance of design parameters. It becomes evident that design elements are not mere aesthetics; they profoundly influence HRI. Survey 2 expanded on this by introducing different levels of anthropomorphism, ranging from the machine-like Spot to a dog-like variant. The findings indicated that the extent of anthropomorphism notably affected participants' comfort levels, gender perceptions, teammate preferences, and expectations when assigning an occupation.
Regarding assigning occupations, people generally believe non-humanoid robots are more suitable for neutral and masculine occupations than feminine ones. Moreover, assigning masculine attributes to non-humanoid robots seems more intuitive, suggesting people can more readily identify these attributes in such robots. While the machine-like robot was deemed an ideal teammate for tasks, the feminine and masculine versions evoked more distinct gendered perceptions due to their pronounced anthropomorphic designs. This indicates that when robots display certain human-like traits, people are more prone to applying gender stereotypes to them. For instance, in Survey 2, the masculine Spot was perceived as more efficient in task performance than its feminine counterpart. This aligns with traditional gender roles in which men are often linked with mechanical and technical tasks.
Additionally, the masculine Spot was preferred over the feminine Spot as a teammate, hinting at a potential bias where masculine traits are associated with competence in specific tasks. However, regarding comfort levels, Survey 2 revealed a twist: Participants felt more at ease with the feminine Spot than with its masculine or machine-like versions. This might be due to societal views associating feminine traits with warmth, friendliness, and approachability. Such results mirror the broader societal stereotypes and biases that frequently link femininity with nurturing roles and masculinity with technical competence.
A fascinating insight from Survey 2 was the role of gender cues in shaping politeness. Participants were least aggressive towards the feminine Spot when it erred, potentially mirroring societal norms that advocate for gentler interactions with females. Conversely, the machine-like or dog-like Spot designs elicited reduced politeness, suggesting that human interactions become less empathetic and courteous as a robot moves away from human-like features (either towards machinery or animals). According to the results, a robot whose design aligns with its intended purpose will likely be accepted and trusted by humans, especially in tasks requiring close cooperation between humans and robots.
Fig. 8: Mean likelihood of choosing the different designs of the Spot for the traditionally male, female, and neutral scenarios |
2309.04663 | FIAT: Fusing learning paradigms with Instruction-Accelerated Tuning | Learning paradigms for large language models (LLMs) currently tend to fall
within either in-context learning (ICL) or full fine-tuning. Each of these
comes with their own trade-offs based on available data, model size, compute
cost, ease-of-use, and final quality with neither solution performing well
across-the-board. In this article, we first describe ICL and fine-tuning
paradigms in a way that highlights their natural connections. Based on these
connections, we propose a new learning paradigm called FIAT that fuses the best
of these paradigms together, enabling prompt-engineered instructions and
chain-of-thought reasoning with the very largest models while also using
similar methods to perform parameter updates on a modestly-sized LLM with
parameter-efficient tuning. We evaluate FIAT's effectiveness on a variety of
multilingual tasks and observe that FIAT performs better than both ICL and
fine-tuning at scales ranging from 100-10,000 training examples. We hope that
FIAT provides a practical way of harnessing the full potential of LLMs without
needing to make a hard choice between learning paradigms. | Xinyi Wang, John Wieting, Jonathan H. Clark | 2023-09-09T02:43:48Z | http://arxiv.org/abs/2309.04663v2 | # Fiat: Fusing Learning Paradigms with Instruction-Accelerated Tuning
###### Abstract
Learning paradigms for large language models (LLMs) currently tend to fall within either in-context learning (ICL) or full fine-tuning. Each of these comes with their own trade-offs based on available data, model size, compute cost, ease-of-use, and final quality with neither solution performing well across-the-board. In this article, we first describe ICL and fine-tuning paradigms in a way that highlights their natural connections. Based on these connections, we propose a new learning paradigm called Fiat1 that fuses2 the best of these paradigms together, enabling prompt-engineered instructions and chain-of-thought reasoning with the very _largest models_ while also using similar methods to perform parameter updates on a _modestly-sized LLM_ with parameter-efficient tuning. We evaluate Fiat's effectiveness on a variety of multilingual tasks3 and observe that Fiat performs better than both ICL and fine-tuning at scales ranging from 100-10,000 training examples. We hope that Fiat provides a practical way of harnessing the full potential of LLMs without needing to make a hard choice between learning paradigms.
Footnote 1: We derive the name Fiat from _F_using Learning Paradigms with _I_nstruction _A_ccelerated _T_uning.
Footnote 2: Fiat fuses not only the learning paradigms but the models themselves.
Footnote 3: We say that these tasks are _naturally_ low-data because no additional data is available for such languages and it’s non-trivial to obtain more; we contrast this with artificially low-data scenarios where large data exists, but is ignored.
## 1 Introduction
Large language models (LLMs) show impressive generalization ability to new tasks and languages. Some of their most exciting capabilities, such as producing logical reasoning to solve a problem, are found to emerge only when the model size is over a certain threshold, often hundreds of billions of parameters (Wei et al., 2022b;a). The impressive capabilities of these models to produce high-quality responses without any task-specific tuning along with the very high cost of further tuning such models has led much recent work to focus on the paradigm of In-Context Learning (ICL)--placing a few task-specific examples and instructions into the model's input (Brown et al., 2020; Chowdhery et al., 2022; Google et al., 2023; OpenAI, 2023).
Although prior work has seen that fine-tuning a model on task data can often lead to superior performance on the downstream task compared to ICL (Scao & Rush, 2021; Schick & Schutze, 2020a;b; Asai et al., 2023), there are significantly fewer recent efforts on fine-tuning models for tasks with limited data, perhaps because the time and compute costs associated with tuning a very large model drives practitioners toward smaller models, abandoning the ability to take advantage of emergent model capabilities.
ICL and model fine-tuning each come with their own trade-offs. ICL does not incur any training cost and it allows one to utilize the most capable LLMs (Schick & Schutze, 2020b; OpenAI, 2023). However, while ICL can achieve competitive performance on many tasks with a handful of annotated examplars, it often requires very large models to work well and it cannot take advantage of additional training examples if they do not fit into the context window. For many tasks, this leads to ignoring a substantial amount of potentially-useful training examples. Fine-tuning, on the other hand, is not constrained by the need to fit training examples into the model's input, and it can be quite effective
even with smaller language models. These trade-offs tend to lead practitioners to arbitrarily pick a paradigm or run costly experiments on these disparate methods in order to choose the best approach.
We instead take the view that these two model learning paradigms are in fact complementary. To this end, we propose Fiat--Fusing Learning Paradigms with Instruction-Accelerated Tuning (Fiat), which utilizes both ICL on very large models and parameter tuning on a moderately-sized LLM while fusing the common techniques associated with each paradigm. Fiat uses hand-engineered instruction prompts that elicit chain-of-thought reasoning from a very large model, while also using the generated reasoning and instruction prompts to tune a moderately-sized LLM with parameter-efficient tuning. Figure 1 shows the workflow of Fiat and how it compares to ICL and fine-tuning.
In the remainder of this article, we formally describe the connections between ICL and fine-tuning, along with the various techniques that have developed within each paradigm (SS2); we propose Fiat, which fuses the best of these together and avoids many of the pitfalls of each individual paradigm (SS2.3); we present experiments demonstrating how Fiat improves over both learning paradigms in data scenarios ranging from 100-10,000 examples along with ablations detailing where these gains come from (SS3).
## 2 Learning Paradigms for LLMs
In this section, we review two popular learning paradigms for LLMs (ICL in SS2.1 and parameter tuning in SS2.2) while considering their strengths and weaknesses, which directly lead to Fiat (SS2.3).
### In-Context Learning
**Instructed ICL** keeps the parameters of the LLM fixed, but it instead selects an instruction prompt (often through manual optimization) to improve the accuracy of the downstream task. Formally, a model prediction is made by sampling4 a very large pre-trained LLM parameterized by fixed \(\theta\) and a textual instruction \(I\):
Footnote 4: Typically, the sampling is a simple argmax with temperature 0, though this isn’t always the case as in techniques such as majority voting.
\[P(y|x;\theta,I) \tag{1}\]
Figure 1: Overall flow of Fiat and how it compares to ICL and fine-tuning. The colored components are updated while building and learning a task-specific instance of Fiat, while other components are fixed. \(\theta_{\beta}\) are the parameters of the larger LLM and \(I_{\beta}\) are the instructions used to induce reasoning; \(\theta_{\tau}\) are the parameters of a moderately-sized LLM to be tuned and \(I_{\tau}\) are its instructions, which help the model predict the correct final answer.
While the instructions \(I\) are prefixed onto the model input \(x\) in practice, we intentionally notate them as an argument of the model, which we argue better reflects how they are conceptualized; we will build on this later.
**Chain-of-thought reasoning** pushes instructed ICL a step further by crafting \(I\) to induce step-by-step _reasoning_ in the output of the model that improves the model's ability to arrive at a correct prediction (Wei et al., 2022b). This allows auto-regressive inference to output observations about the input or solve sub-problems of the overall task that future decoding steps can leverage when predicting the final answer; it may also elicit textual patterns that the model saw during pre-training, that would otherwise be difficult to access in the model's latent feature space (e.g. via fine-tuning).
**Few-shot ICL** Few-shot ICL differs from instructed ICL in that its instructions \(I\) are composed of a small number of exemplars selected among training examples \(\mathcal{D}\) that have been formatted as a textual input to the model via instructions.
**Instruction-tuned Base Models** Instruction-tuned models such as FLAN and T0 (Sanh et al., 2021; Chung et al., 2022; Longpre et al., 2023) often provide significant improvements on ICL compared to using a pre-trained model. This is because instruction-tuning is essentially a second stage pretraining using a set of multitask data whose distribution is closer to the downstream task.
The ICL paradigm achieves competitive results on various tasks with no or only a handful of annotated examples. While it does not incur any additional model tuning cost, ICL often has high inference cost because it requires LLMs over a certain size to work well, especially when using techniques such as chain-of-thought. It also cannot take advantage of additional task data beyond what fits into the context window of the model.
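To make the prompt structure described above concrete, the snippet below assembles an instructed few-shot prompt with a chain-of-thought exemplar in the generic style discussed in this section; the instruction text and exemplar contents are invented placeholders, not the prompts used in this paper.

```python
instructions = "Answer the question. First reason step by step, then give the final answer."

# Hand-written exemplars: each pairs an input with chain-of-thought reasoning and an answer.
exemplars = [
    {"question": "If a train leaves at 3pm and arrives at 5pm, how long is the trip?",
     "reasoning": "From 3pm to 5pm is 2 hours.",
     "answer": "2 hours"},
]

def build_prompt(question: str) -> str:
    """Concatenate instructions, CoT exemplars, and the new question into one ICL prompt."""
    parts = [instructions]
    for ex in exemplars:
        parts.append(f"Q: {ex['question']}\nReasoning: {ex['reasoning']}\nA: {ex['answer']}")
    parts.append(f"Q: {question}\nReasoning:")
    return "\n\n".join(parts)

print(build_prompt("If a flight leaves at 9am and lands at 1pm, how long is the flight?"))
```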
### Parameter Tuning
**Full-Parameter Fine-tuning** Given pre-trained parameters \(\theta\) of a LLM to tune,5 standard fine-tuning simply optimizes all parameters of the model on task-specific supervised training data \(\mathcal{D}\) according to:
Footnote 5: In practice, \(|\theta|\) tends to be much smaller for fine-tuning than for ICL.
\[P(y|x;\theta) \tag{2}\]
The optimization of \(\theta\) is similar in purpose to the process of human prompt engineering of \(I\) in ICL.
Since model fine-tuning does not have to fit training data into the context window of the model, it is more effective when there are slightly more training examples available. Fine-tuning also works well on smaller language models with enough training examples, leading to faster inference. However, fine-tuning incurs additional training cost and requires access to model parameters, while some of the most capable LLMs are available for inference-only API access. The model could also easily overfit to the training examples due to catastrophic forgetting (Goodfellow et al., 2013), especially for tasks with limited data.
**Parameter-efficient Fine-Tuning (PEFT)** improves the tuning procedure by using a learning parameterization \(\theta^{\text{PEFT}}\) where \(|\theta^{\text{PEFT}}|\ll|\theta|\). Besides reducing the danger of overfitting, this learning technique also avoids forgetting features that may be useful for generalization beyond the training set. Similarly, ICL avoids catastrophic forgetting by only modifying the input to the model while keeping the parameters fixed.
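As a concrete illustration of the kind of parameterization PEFT uses, the sketch below wraps a frozen linear layer with a trainable low-rank update in the style of LoRA (Hu et al., 2021), the PEFT method adopted later in this paper; the rank, scaling, and initialization here are illustrative defaults, not the paper's configuration.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """A frozen dense layer plus a trainable low-rank update W + (alpha/r) * B A.

    Only A and B are optimized, so the trainable parameter count is
    r * (d_in + d_out), much smaller than the d_in * d_out of the base weight.
    """
    def __init__(self, base: nn.Linear, r: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():          # keep pre-trained weights fixed
            p.requires_grad = False
        self.A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, r))  # zero init: no change at start
        self.scale = alpha / r

    def forward(self, x):
        return self.base(x) + self.scale * (x @ self.A.T) @ self.B.T

# Example: only 2 * r * 1024 parameters of this wrapped layer are trainable
layer = LoRALinear(nn.Linear(1024, 1024), r=8)
print(sum(p.numel() for p in layer.parameters() if p.requires_grad))
```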
### Fusing learning paradigms with Fiat
In this section, we construct Fiat, motivating the purpose of each design choice in terms of modeling capabilities. ICL and fine-tuning each have compelling strengths along with pitfalls, which we summarize in Table 1. At a high level, we observe that these properties are largely _complementary_.
Reflecting on these abilities of ICL and fine-tuning, we seek an approach that is capable of:
* _Instruction following_: follows human-engineered instructions to achieve high quality predictions;
* _Chain-of-thought reasoning_: produces intermediate text that helps the model toward correct predictions;
* _Parameter tuning_: refines its internal representation to align with a moderate to large number of supervised training examples; and
* _Data scaling_: provides high quality models with data scales from 100 to 1000's of examples.
**Model stacking via CoT-augmented Tuning** We begin with the observation that chain-of-thought prompting is typically _not_ supervised, but rather induced via carefully-written instructions. Motivated by this, we fuse two models for learning and inference: a _big_ model \(\beta\) with all the most powerful emergent capabilities of LLMs, and a _tunable_ model \(\tau\) whose size can be flexibly chosen depending on the capacity needs of the task of interest. We assign the responsibility of chain-of-thought inference to \(\beta\) and then provide its textual predictions \(\hat{y}_{\beta}\) to the tunable model; it can then learn how to best use these inputs (e.g. chain-of-thought explanations) based on how useful they are with regard to predicting the supervised outputs. The parameters \(\theta_{\beta}\) remain fixed as we do not have nor require any directly supervised data for its sub-task.
**Instruction-augmented Tuning** Crafting a good instruction prompt is known to be essential to high-quality ICL performance, and so we naturally include instructions \(I_{\beta}\) to generate reasoning and explanations as a first step. Although instructions are typically not used for the smaller tunable model, we observe that instructions have the potential to benefit tuning as well. We speculate that instructions help better align a task's inputs with the distribution seen during pre-training, allowing the model to not only converge faster but also make fewer parameter updates. This, in turn, avoids the risk of catastrophic forgetting associated with excessive parameter updates. Therefore, Fiat also provides separate instructions \(I_{\tau}\) for the tunable model.6
Footnote 6: In Fiat, instructions can be viewed as serving purpose analogous to a Bayesian prior in earlier statistical learning methods: They allow encoding human knowledge into the learning procedure alongside supervised data that empirically estimates parameters. However, textual instructions are a far more natural way of doing this than the hyperparameters of a Dirichlet.
**Pervasive Instruction-tuned Models** Already, instruction-tuned models have become the standard for ICL; we use such models as \(\theta_{\beta}\) in all of our experiments. However, given Fiat's use of Instruction-augmented Tuning, we also depart from the common practice of fine-tuning starting from models pre-trained primarily on span corruption objectives and instead initialize with an instruction-tuned checkpoint (Longpre et al., 2023). This makes optimization easier since the model is already expecting instructions; this can be especially beneficial in limited training data scenarios.
**Parameter-efficient Tuning** So far, we have added chain-of-thought reasoning, instruction following in tuning, and instruction-tuned initialization to Fiat's design, all of which move the pre-tuning model and the task definition toward each other in terms of increasing the probability of the desired output. We hypothesize that parameter-efficient tuning is a particularly good fit for optimizing \(\theta_{\tau}\) in Fiat over the training data, because large changes to the model parameters \(\theta_{\tau}\) should not be
| | **ICL** | **Fine-tuning** |
| --- | --- | --- |
| _Strengths_ | | |
| Works well with small model | No | Yes |
| Supports large training data | No | Yes |
| Supports chain-of-thought reasoning | Yes | No |
| Usage of instruction prompts | Yes | No |
| _Challenges_ | | |
| No parameter updates | Yes | No |
| Avoids catastrophic forgetting | Yes | No |

Table 1: Comparison of the ICL and fine-tuning learning paradigms, according to common usage patterns.
necessary given a good initialization.7 Formalizing all the above modifications, we arrive at the final formulation of Fiat used for fine-tuning and inference in Alg. 1 and Alg. 2.
Footnote 7: In Fiat, we use LoRA (Hu et al., 2021) to parameterize the tuning procedure because it does not induce additional inference cost. Future work should consider other methods such as soft prompt tuning (Lester et al., 2021).
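The sketch below illustrates the resulting two-stage inference flow (the analogue of Alg. 2 and the right side of Figure 2): the frozen large model generates chain-of-thought reasoning under \(I_{\beta}\), and the tuned model consumes the input plus that reasoning together with \(I_{\tau}\). The `generate` helper and the prompt layouts are placeholders for whatever LLM interface is available, not the paper's exact algorithm.

```python
def generate(model, prompt: str, temperature: float = 0.0) -> str:
    """Placeholder for a call to an actual LLM (API or local); returns generated text."""
    raise NotImplementedError

def fiat_inference(x: str,
                   big_model,      # theta_beta: large, frozen, instruction-tuned LLM
                   tuned_model,    # theta_tau: moderately-sized, parameter-efficiently tuned LLM
                   I_beta: str,    # reasoning instructions + hand-written CoT exemplars
                   I_tau: str      # task instructions for the tuned model
                   ) -> str:
    # Step 1: the frozen large model produces chain-of-thought reasoning for the input.
    reasoning = generate(big_model, f"{I_beta}\n\nInput: {x}\nReasoning:")
    # Step 2: the tuned model sees the input plus the generated reasoning and predicts the answer.
    answer = generate(tuned_model, f"{I_tau}\n\nInput: {x}\nReasoning: {reasoning}\nAnswer:")
    return answer
```

During training, the same reasoning augmentation is applied to each training example, and only the parameter-efficient parameters of the tuned model are updated against the gold answers.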
## 3 Experiments
**Datasets** One of our primary objectives is to select datasets that naturally cover a broad variety of training data sizes. We consider tasks ranging from classification to exercising a model's ability to generate short answers, and we include a large number and variety of languages to evaluate the generality of the method.
First, we use Xor-AttriQA (Muller et al., 2023), a classification task where the model is asked to predict whether the provided answer to the question is supported by the given passage context, which includes 5 languages with 262 examples total. We refer to this as the \(\mathcal{O}(100)\) data scenario.
We also study Fiat's behavior on the Cross-lingual QA task of Xtreme-Up (Ruder et al., 2023). This data is an expansion of the XOR QA8 dataset (Asai et al., 2020), a cross-lingual variant of the TyDi QA (Clark et al., 2020) dataset. This task asks a model to predict the correct English answer span given a non-English question and an English answer passage; this task also includes the possibility that the passage does not contain a correct answer, making it more challenging. Cross-lingual QA is a particularly important task for languages that have very little answer content as it enables providing answers to questions that would otherwise be unanswerable using only in-language content. We provide results on two focus sets. First, we use the subset of 20 Indic languages in Xtreme-Up Cross-lingual QA where each language has about 300 examples, to allow for studying a scenario with
Figure 2: Model building and inference with Fiat. **Left:** Model building with Fiat begins with interactive prompt engineering of the instructions \(I\). \(I_{\beta}\) specifies how to perform reasoning using few-shot exemplars on \(\theta_{\beta}\)—i.e. behaviors for which we have no large-scale annotations, while \(I_{\tau}\) specifies guidance to the tuned model \(\theta_{\tau}\) for using the generated reasoning and input to produce a final output. Both \(\theta_{\beta}\) and \(\theta_{\tau}\) are instruction-tuned models and only \(\theta_{\tau}\) is updated during training via parameter-efficient tuning. **Right:** Inference with Fiat is very simple, requiring only: (1) a call to the large generative model using the fixed pre-trained parameters \(\theta_{\beta}\) and the reasoning instructions \(I_{\beta}\); and (2) a call to the tuned model \(\theta_{\tau}\) along with the associated task instructions \(I_{\tau}\).
moderate data; we refer to this as the \(\mathcal{O}(1000)\) data scenario. We also study the full Xtreme-Up Cross-lingual QA task which has 22,500 examples across 27 languages where the 5 high-resource languages have more than 2500 examples each; we refer to this as the \(\mathcal{O}\)(10,000) data scenario.9 Together, these tasks allow us to test our methods on three different data size scenarios ranging from a few hundred to over 20,000 training examples. Details of the languages and the dataset size can be found in App. A.1.
Footnote 9: We report the average result on the under-represented languages, following the recommendations of the Xtreme-Up benchmark.
**Models** We use PaLM-2 (Google et al., 2023) as our base model, and we experiment with instruction-tuned models using the FLAN mixture (Chung et al., 2022). We use PaLM-2 L as \(\mathcal{M}_{\beta}\) and we use PaLM-2 XS and S for \(\mathcal{M}_{\tau}\).
**Baselines** We compare to both ICL and fine-tuning baselines. For ICL, we use PaLM-2 L with chain-of-thought reasoning (Wei et al., 2022b). We include 4 few-shot exemplars with hand-written chain-of-thought explanations in English for _each_ of the 5 languages in the Xor-AttriQA Attribution task,10 for a total of 20 exemplars. However, for Xtreme-Up cross-lingual QA, it was not feasible to hand-engineer prompts for each of the 27 languages. Therefore, we hand-write 4 chain-of-thought explanations based on Bengali exemplars,11 and use the same ICL examples for all 20 languages.
Footnote 10: During manual prompt engineering, we used Google Translate to assist with explanation annotation.
Footnote 11: Note that while the exemplars have Bengali questions, we instruct the model to carry out its reasoning in English.
### Results
We present the performance of the baselines (ICL and fine-tuning) and our Fiat framework for all three data settings in Table 2. We show the average scores across all languages in each dataset for simplicity, and we provide the result for each language in App. A.2. Looking at the baselines, we find that few-shot ICL using PaLM-2 L model is quite competitive without any additional model tuning, but still lags behind PaLM-2 S fine-tuned on a relatively small amount of task data. However, we find that the best baseline differs between ICL and fine-tuning PaLM-2 XS across different tasks and data size settings. If one were choosing between just ICL or fine-tuning, this inconsistency makes it difficult to determine the best course of action without empirical comparisons. On the other hand, Fiat offers the best performance by combining the strengths of both ICL and fine-tuning.
## 4 Ablations and Analysis
In this section, we study the effect of individual design decisions within Fiat, present the results in Table 3, and draw conclusions from them below. In the end, we find that while certain design
| \(\theta_{\tau}\) | \(\theta_{\beta}\) | Method | Xor-AttriQA \(\mathcal{O}(100)\): Acc / AUC-PR | Xtreme-Up \(\mathcal{O}(1000)\): F1 | Xtreme-Up \(\mathcal{O}(10{,}000)\): F1 |
| --- | --- | --- | --- | --- | --- |
| XS | L | ICL | 78.6 / —\({}^{\dagger}\) | 68.9 | 69.2 |
| XS | L | Fine-tune | 90.5 / 52.1 | 63.5 | 75.5 |
| XS | L | Fiat | 94.0 / 78.1 | 73.6 | 77.8 |
| S | L | Fine-tune | 90.6 / 54.5 | 67.1 | 77.8 |
| S | L | Fiat | 93.9 / 77.5 | 77.3 | 79.3 |
| | | _Gain over best baseline_ | | | |

Table 2: Overall results of Fiat and typical baselines. While we provide improvements with regard to the best baseline, we also point out that the best baseline often differs between ICL and fine-tuning, especially at smaller model sizes; this leaves practitioners to empirically determine the best course of action. \({}^{\dagger}\)AUC-PR is not computed for the ICL because outputs are text-only.
choices tend to have a larger effect on some settings than others, each tends to have substantial contributions in some area, and together the overall modeling recipe is very effective as a whole.
**Instruction-tuned base models improve final quality of fine-tuned models.** The instruction-tuned Flan XS model improves over the base model on all datasets, especially on Xor-AttriQA and Xtreme-Up Cross-lingual QA Indic, where the total amount of task data is around \(O(100)\) to \(O(1000)\). This indicates that instruction-tuned models are not only beneficial for ICL, but can also be beneficial for fine-tuning on limited data (Longpre et al., 2023). However, the advantage of the instruction-tuned model on Xtreme-Up Cross-lingual QA decreases from the Indic (\(O(1000)\) training examples) to the Full (\(O(10000)\) training examples) setting, indicating that an instruction-tuned model is less helpful when the fine-tuning dataset is large.
**Instruction-augmented Tuning generally leads to significant improvements.** Adding an appropriate prompted format to the task data is generally beneficial for all tasks. This result indicates that prompt engineering is not only helpful for direct few-shot ICL, but also has a positive impact on model fine-tuning. Prompted tuning is especially helpful for Xor-AttriQA and Xtreme-Up Cross-lingual QA Indic, where the amount of task data is very limited. This is because the prompt format aligns the distribution of the downstream task closer to the model pretraining distribution, which allows the pretrained model to generalize to the downstream task with a small amount of task examples.
**CoT-augmented Tuning is helpful for most tasks.** Our CoT-augmented Tuning can lead to large improvement for the Xtreme-Up Cross-lingual QA Indic task. Surprisingly, it does not help Xor-AttriQA, which is contradictory to findings from prior works which show that explanations can be especially helpful for classification tasks (Hsieh et al., 2023; Zhou et al., 2023). We hypothesize that this is because the model already performs quite well on Xor-AttriQA without having access to the explanations (over 90 percent accuracy) and this task may be reaching its saturation point.
**CoT-augmented Tuning is even more helpful for tasks and languages with lower performance.** We analyze the relationship between the gains brought by CoT-augmented Tuning on the Xtreme-Up Cross-lingual QA tasks. Figure 3 shows the improvement in F1 score of different languages versus a baseline model's F1 score that lacks CoT-augmented Tuning. We can see that there is an inverse relationship between the benefit of CoT-augmented Tuning and the baseline model score, indicating that CoT is more beneficial for harder tasks or languages where the model could not perform well without the help of the CoT augmentation. This means that while we see meaningful gains in aggregate, for individual languages (or, more generally, individual tasks and use cases), CoT can have an out-sized impact on quality.
| \(\theta_{\tau}\) | \(\theta_{\beta}\) | Method | Xor-AttriQA \(O(100)\): Acc / AUC-PR | Xtreme-Up Cross-lingual QA: Indic \(O(1000)\): F1 | Xtreme-Up Cross-lingual QA: Full \(O(10{,}000)\): F1 |
| --- | --- | --- | --- | --- | --- |
| XS | L | Few-shot ICL | 78.6 / — | 68.9 | 69.2 |
| | L | Fiat | 94.0 / 78.1 | 73.6 | 77.8 |
| | — | w/o CoT-augmented tuning | 94.0 / 80.3 | 70.7 | 76.0 |
| | — | w/o Instruction-augmented tuning | 93.5 / 72.4 | 69.8 | 76.4 |
| | — | w/o Parameter-efficient tuning | 93.7 / 69.8 | 67.8 | 75.8 |
| | — | w/o Instruction-tuned base model | 90.5 / 52.1 | 63.5 | 75.5 |
| S | L | Fiat | 93.9 / 77.5 | 77.3 | 79.3 |
| | — | w/o CoT-augmented tuning | 94.7 / 80.7 | 76.7 | 79.8 |
| | — | w/o Instruction-augmented tuning | 94.7 / 76.2 | 75.3 | 79.1 |
| | — | w/o Parameter-efficient tuning | 94.7 / 76.2 | 72.3 | 78.5 |
| | — | w/o Instruction-tuned base model | 90.6 / 54.5 | 67.1 | 77.8 |

Table 3: Ablations showing the contribution of each modification within the Fiat recipe; each removal is cumulative with the one above. We observe that each modification tends to make a substantial positive impact on at least one scenario. The bottom line in each block is equivalent to traditional fine-tuning.
**CoT-augmented Tuning leads to better quality than CoT distillation.** Recent work proposed distilled CoT, which uses the explanation as a multitask output target, so that the model does not need to generate additional explanations at test time (Hsieh et al., 2023). Here we compare the performance of these two different ways of using the CoT explanations and list the performance on cross-lingual QA tasks in Figure 4. Despite incurring higher inference cost, our CoT augmentation method further outperforms the distilled CoT by a large margin on the harder Xtreme-Up Cross-lingual QA Indic task. In general, we view distillation as an orthogonal technique to Fiat, which is aimed at efficiency over quality.
**Adding instructions to tuning helps from beginning to end.** In Figure 6, we plot the training curves of the Flan PaLM-2 S model with and without Instruction-augmented Tuning. We can see that adding instructions to tuning leads to much better performance at step 0, before any model optimization. This indicates that adding the instructions to the task data _during fine-tuning12_ can significantly improve the _zero-shot_ performance of the model, probably because it makes the task
Figure 4: Performance on Xtreme-Up Cross-lingual QA Indic compared to the baseline without CoT. Our CoT-augmented Tuning method significantly outperforms previous methods on distilling CoT.
data more similar to the data used in the instruction tuning stage. Importantly, this also implies that the model parameters don't need to move as far away from their starting point in order to achieve the same level of quality, reducing the risk of catastrophic forgetting. However, the model not only reaches the same level of quality with fewer steps, but also manages to exceed the quality of a model without instructions.
**Instruction-augmented Tuning helps more with an instruction-tuned base model.** We compare the effect of prompted tuning on models with and without instruction tuning. Figure 6 shows that prompted tuning generally brings improvements for both the base model without instruction tuning and the Flan model with instruction tuning, while the gains on the instruction-tuned Flan model tend to be slightly larger and more consistent. This is likely because the data format we used for prompted tuning (task instructions followed by the input) is more similar to the Flan data mixture used for instruction tuning.
## 5 Related Work
**Instruction Tuning** Instruction-tuned models (Wei et al., 2021; Longpre et al., 2023) often have better performance for few-shot ICL tasks than base language models since they are already primed to follow instructions due to being fine-tuned on a diverse set of tasks. Using instruction-tuned models is a key component of Fiat.
**In-Context Learning** In in-context learning, the parameters of the LLM remain fixed and a prompt containing a few examples along with reasoning steps is used to prime the model for solving similar tasks (Nye et al., 2021; Wei et al., 2022b). In-context learning works best for large language models. Fiat uses this capability of large language models, along with fine-tuning, to power small language models in the low-data regime.
**Knowledge Transfer from Larger to Smaller LLMs** A popular prior method for transferring knowledge from large models to smaller ones is model distillation (Hinton et al., 2015), where the outputs of a larger model are used as a training signal for a smaller one. Other approaches include using the larger language model to generate data and then using this data to train smaller models. More recently, the latter approach has been extended to generate reasoning steps which are provided as fine-tuning data for the smaller language model (Magister et al., 2022; Huang et al., 2022; Li et al., 2022; Ho et al., 2023; Hsieh et al., 2023; Fu et al., 2023; Zhu et al., 2023; Li et al., 2023).
**Under-represented Languages** Most work that trains large language models and uses them for downstream tasks focuses on English or the collection of 100 or so languages where there are large, easily available corpora (ImaniGoogbari et al., 2023). Tail languages have often been ignored by language technologies due to a lack of available corpora (Nayak and Joshi, 2022). Recent work has focused on tail languages outside of these head languages (Bapna et al., 2022; Ruder et al., 2023). In this work, we make the low-data regime the focus of our efforts, which is especially useful for tail languages.
**Fine-tuning smaller LLMs** While fine-tuning with prompts has been studied for encoders pretrained with masked language modeling objectives (Scao and Rush, 2021), we show that it is also important for fine-tuning generative language models. For example, some works show that fine-tuning a smaller language model is a more competitive and efficient method for practical low-data learning problems than few-shot ICL (Asai et al., 2023; Ruder et al., 2023). Agrawal et al. (2022) propose using synthetic QA data generated from a very large LLM to improve the performance of a smaller model.
## 6 Conclusion
We have presented Fiat, a method that fuses the ICL and fine-tuning learning paradigms and leads to improved model predictions across a variety of data scenarios, ranging from 100-10,000 training examples. We hope Fiat provides a practical way of harnessing the full potential of LLMs without needing to make a hard choice between learning paradigms. |
2309.10616 | Enhancing quantum state tomography via resource-efficient
attention-based neural networks | Resource-efficient quantum state tomography is one of the key ingredients of
future quantum technologies. In this work, we propose a new tomography protocol
combining standard quantum state reconstruction methods with an attention-based
neural network architecture. We show how the proposed protocol is able to
improve the averaged fidelity reconstruction over linear inversion and
maximum-likelihood estimation in the finite-statistics regime, reducing at
least by an order of magnitude the amount of necessary training data. We
demonstrate the potential use of our protocol in physically relevant scenarios,
in particular, to certify metrological resources in the form of many-body
entanglement generated during the spin squeezing protocols. This could be
implemented with the current quantum simulator platforms, such as trapped ions,
and ultra-cold atoms in optical lattices. | Adriano Macarone Palmieri, Guillem Müller-Rigat, Anubhav Kumar Srivastava, Maciej Lewenstein, Grzegorz Rajchel-Mieldzioć, Marcin Płodzień | 2023-09-19T13:46:21Z | http://arxiv.org/abs/2309.10616v2 | # Enhancing quantum state tomography via resource-efficient attention-based neural networks
###### Abstract
Resource-efficient quantum state tomography is one of the key ingredients of future quantum technologies. In this work, we propose a new tomography protocol combining standard quantum state reconstruction methods with an attention-based neural network architecture. We show how the proposed protocol is able to improve the averaged fidelity reconstruction over linear inversion and maximum-likelihood estimation in the finite-statistics regime, reducing at least by an order of magnitude the amount of necessary training data. We demonstrate the potential use of our protocol in physically relevant scenarios, in particular, to certify metrological resources in the form of many-body entanglement generated during the spin squeezing protocols. This could be implemented with the current quantum simulator platforms, such as trapped ions, and ultra-cold atoms in optical lattices.
Footnote †: These authors contributed equally.
## I Introduction
Modern quantum technologies are fuelled by resources such as coherence, quantum entanglement, or Bell nonlocality. Thus, a necessary step to assess the advantage they may provide is the certification of the above features [1; 2; 3; 4; 5; 6; 7; 8; 9; 10]. The resource content of a preparation is revealed from the statistics (e.g. correlations) the device is able to generate. Within the quantum formalism, such data is encoded in the density matrix, which can only be reconstructed based on finite information from experimentally available probes - a process known as quantum state tomography (QST) [11; 12; 13; 14; 15; 16; 17; 18; 19; 20; 21]. Hence, QST is a prerequisite to any verification task. On the other hand, the second quantum revolution brought new experimental techniques to generate and control massively correlated states in many-body systems [22; 23; 24; 25; 26], challenging established QST protocols. Both reasons elicited an extremely active field of research that throughout the years offered a plethora of algorithmic techniques [27; 28; 29; 30; 31; 32; 33; 34; 35; 36].
Over the recent years, machine learning (ML), artificial neural networks (NN), and deep learning (DL) entered the field of quantum technologies [37], offering many new solutions to quantum problems, also in the context of QST [38; 39; 40; 41; 42; 43]. Supervised DL approaches were already successfully tested on experimental data [44; 45; 46; 47], while a study of neural network quantum states beyond the shot noise appeared only recently [48]. Nevertheless, neural network-based methods are not exempted from limitations, i.e. NN can provide non-physical outcomes [44; 46; 49], and are limited in their ability to learn from a finite number of samples [50].
In this work, we address the abovementioned drawback by offering a computationally fast general protocol to back up any finite-statistic QST algorithm within a DL-supervised approach, greatly reducing the necessary number of measurements performed on the unknown quantum state. We treat the finite-statistic QST reconstructed density matrix as a noisy version of the target density matrix, and we employ a deep neural network as a denoising filter, which allows us to reconstruct the target density matrix. Furthermore, we connect the measured error loss function with the quantum fidelity between the target and the reconstructed state. We show that the proposed protocol greatly reduces the required number of prepared measurements, allowing for high-fidelity reconstruction \(\sim 98\%\) of both mixed and pure states. We also provide an illustration of the power of our method in a physically useful, many-body spin-squeezing experimental protocol.
The paper is organized as follows: in Sec. II we introduce the main concepts behind the QST; in Sec. III we introduce the data generation protocol and neural network architecture, as well as define QST as a denoising task. Sec. IV is devoted to benchmarking our method against known approaches, and we test it on quantum states of physical interest. In Sec. V, we provide practical instructions to implement our algorithm in an experimental setting. We conclude in Sec. VI with several future research directions.
## II Preliminaries
Let us consider a \(d\)-dimensional Hilbert space. A set of informationally complete (IC) measurement operators \(\hat{\mathbf{\pi}}=\{\hat{\pi}_{i}\}\), \(i=1,\ldots,d^{2}\), in principle allows one to unequivocally reconstruct the underlying target quantum state \(\hat{\tau}\in\mathbb{C}^{d\times d}\) in the limit of an infinite number of ideal measurements [51; 52]. After infinitely many measurements,
one can infer the mean values:
\[p_{i}=\mathrm{Tr}[\hat{\tau}\hat{\pi}_{i}], \tag{1}\]
and construct a valid vector of probabilities \(\mathbf{p}=\{p_{i}\}\) for any proper state \(\hat{\tau}\in\mathcal{S}\), where by \(\mathcal{S}\) we denote the set of \(d\)-dimensional quantum states, i.e. the set containing all unit-trace, positive semi-definite (PSD) \(d\times d\) Hermitian matrices. Moreover, \(\hat{\mathbf{\pi}}\) can be considered as a set of operators spanning the space of Hermitian matrices. In such a case, \(\mathbf{p}\) can be evaluated from multiple measurement settings (e.g., Pauli basis) and is generally no longer a probability distribution. In any case, there exists a one-to-one mapping \(Q\) from the mean values \(\mathbf{p}\) to the target density matrix \(\hat{\tau}\):
\[Q: \mathcal{F}_{\mathcal{S}}\longrightarrow\mathcal{S} \tag{2}\] \[\mathbf{p}\longmapsto Q[\mathbf{p}]=\hat{\tau},\]
where \(\mathcal{F}_{\mathcal{S}}\) is the space of accessible probability vectors. In particular, by inverting the Born's rule Eq. (1), elementary linear algebra allows us to describe the map \(Q\) as,
\[Q[\mathbf{p}]=\mathbf{p}^{T}\mathrm{Tr}[\hat{\mathbf{\pi}}\hat{\mathbf{ \pi}}^{T}]^{-1}\hat{\mathbf{\pi}}. \tag{3}\]
The inference of the mean values \(\mathbf{p}\) is only perfect in the limit of an infinite number of measurement shots, \(N\rightarrow\infty\).
In a realistic scenario, with a finite number of experimental runs \(N\), we have access to frequencies of relative occurrence \(\mathbf{f}=\{f_{i}:=n_{i}/N\}\), where \(n_{i}\) is the number of times the outcome \(i\) is observed. Such counts allow us to estimate \(\mathbf{p}\) within an unavoidable error dictated by the shot noise and of amplitude typically scaling as \(1/\sqrt{N}\)[53]. With only the frequencies \(\mathbf{f}\) available, we can use the mapping \(Q\) for an estimate \(\hat{\rho}\) of the target density matrix \(\hat{\tau}\), i.e.,
\[\hat{\rho}=Q[\mathbf{f}], \tag{4}\]
In the limit of an infinite number of trials \(N\rightarrow\infty\), \(f_{i}=p_{i}\) and \(\hat{\rho}=\hat{\tau}\). Yet, in the finite-statistics regime, as considered in this work, the application of the mapping as defined in Eq. (3) to the frequency vector \(\mathbf{f}\) will generally lead to nonphysical results (i.e. \(\hat{\rho}\) not PSD). In such a case, as examples of a proper mapping \(Q\), we can consider different methods for standard tomography tasks, such as Linear Inversion (LI) or Maximum Likelihood Estimation (MLE), see Appendix A. As operators \(\hat{\mathbf{\pi}}\), we consider positive operator-valued measures (POVMs) and the more experimentally appealing Pauli basis (check Appendix B).
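As a minimal illustration of the mapping in Eq. (3), the sketch below implements it with NumPy for a single qubit, using the identity-extended Pauli basis as the operator set; exact mean values recover the target state, while plugging in finite-statistics frequencies generally yields a non-positive matrix, as discussed above. The single-qubit setting and this particular basis are chosen only for illustration.

```python
import numpy as np

# Single-qubit Hermitian operator basis spanning all 2x2 Hermitian matrices (d^2 = 4 operators)
basis = np.array([np.eye(2),
                  [[0, 1], [1, 0]],      # X
                  [[0, -1j], [1j, 0]],   # Y
                  [[1, 0], [0, -1]]],    # Z
                 dtype=complex)

def Q(p, ops):
    """Eq. (3): map mean values p_i = Tr[rho pi_i] to a matrix via the Gram matrix Tr[pi_i pi_j]."""
    gram = np.einsum('iab,jba->ij', ops, ops).real
    coeffs = np.linalg.solve(gram, p)
    return np.einsum('i,iab->ab', coeffs, ops)

# Sanity check with exact mean values of the pure state |0><0|
tau = np.array([[1, 0], [0, 0]], dtype=complex)
p = np.array([np.trace(tau @ op).real for op in basis])
print(np.allclose(Q(p, basis), tau))   # True: exact means reproduce the target state
```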
## III Methods
This section describes our density matrix reconstruction protocol, data generation, neural network (NN) training, and inference procedure. In Fig. 1, we show how such elements interact within the data flow. In the following paragraphs, we elaborate on the proposed protocol in detail.
The first step in our density matrix reconstruction protocol, called _pre-processing_, is a reconstruction of the density matrix \(\hat{\rho}\) via finite-statistic QST with frequencies \(\mathbf{f}\) obtained from measurements prepared on the target state \(\hat{\tau}\). Next, we feed forward the reconstructed density matrix \(\hat{\rho}\) through our neural network acting as a noise filter - we call this stage _post-processing_. In order to enforce the positivity of the neural network output, we employ the so-called Cholesky decomposition of the density matrices, i.e. \(\hat{\rho}=C_{\rho}C_{\rho}^{\dagger}\), and \(\hat{\tau}=C_{\tau}C_{\tau}^{\dagger}\), where \(C_{\rho,\tau}\) are lower-triangular matrices. Such a decomposition is unique, provided that \(\hat{\rho}\), \(\hat{\tau}\) are positive [127]. We treat the Cholesky matrix \(C_{\rho}\) obtained from the finite-statistic QST protocol as a _noisy_ version of the target Cholesky matrix \(C_{\tau}\) computed from \(\hat{\tau}\). The aim of the proposed neural network architecture is to learn a _denoising filter_ allowing reconstruction of the target \(C_{\tau}\) from the _noisy_ matrix \(C_{\rho}\) obtained via finite-statistic QST. Hence, we cast the neural network training process as a supervised denoising task.
Figure 1: Schematic representation of the data pipeline of our QST hybrid protocol. Panel (a) shows data acquisition from a generic experimental set-up, during which the frequencies \(\mathbf{f}\) are collected. Next, panel (b) presents standard density matrix reconstruction; in our work, we test the computationally cheap LI method together with the expensive MLE, in order to better analyse the network reconstruction behaviour and ability. Panel (c) depicts the matrix-to-matrix deep-learning strategy for Cholesky matrix reconstruction. The architecture herein considered is a combination of convolutional layers for input and output and a transformer model in between. Finally, we compare the reconstructed state \(\hat{\rho}\) with the target \(\hat{\tau}\).
### Training data generation
To construct the training data set, we start by generating \(N_{\text{train}}\) Haar-random \(d\)-dimensional target density matrices, \(\{\hat{\tau}_{m}\}\), where \(m=1,\ldots,N_{\text{train}}\). Next, we simulate experimental measurement outcomes \(\mathbf{f}_{m}\), for each \(\hat{\tau}_{m}\), in one of the two ways:
1. _Directly_ : When the measurement operators \(\mathbf{\hat{\pi}}\) form an IC-POVM, we can take into account the noise by simply simulating the experiment and extracting the corresponding frequency vector \(\mathbf{f}_{m}=\{n_{i}/N\}_{m}\), where \(N\) is the total number of shots (i.i.d. trials) and the counts \(\{n_{i}\}_{m}\) are sampled from the multinomial distribution.
2. _Indirectly_ : As introduced in the preliminaries (Sec. II), if a generic basis is used \(\mathbf{\hat{\pi}}\), \(\mathbf{p}_{m}\) is no longer necessarily a probability distribution. This is the case with the Pauli basis (as defined in Appendix B), that we exploit in our examples. Then, we can add a similar amount of noise, obtaining \(\mathbf{f}_{m}=\mathbf{p}_{m}+\delta\mathbf{p}_{m}\), where \(\delta\mathbf{p}_{m}\) is sampled from the multi-normal distribution \(\mathcal{N}(\mathbf{0},\sim 1/(2\sqrt{N}))\) of mean zero and isotropic variance, saturating the shot noise limit.
Having prepared frequency vectors \(\{\mathbf{f}_{m}\}\), we apply QST via mapping \(Q\), Eq. (4), obtaining set of reconstructed density matrices \(\{\hat{\rho}_{m}\}\). We employ the most rudimentary and scalable method, i.e. the linear inversion QST, however, other QST methods can be utilized as well. Finally, we construct the training dataset as \(N_{\text{train}}\) pairs \(\big{\{}\vec{C}_{\rho},\vec{C}_{\tau}\big{\}}\), where we use \(\vec{C}\) to indicate the vectorization (flattening) of the Cholesky matrix \(C\) (see Appendix C for definition).
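A minimal single-qubit sketch of this data-generation pipeline is given below: it draws a random mixed state, simulates shot-noise-sized errors on the Pauli mean values (the _indirect_ route above), reconstructs a physical estimate, and returns the Cholesky pair used for training. The eigenvalue-clipping projection is only a simple stand-in for the paper's LI estimator of Appendix A, and the single-qubit setting and noise parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

# Single-qubit Pauli basis (identity included); its Gram matrix is 2 * identity
sigma = np.array([np.eye(2),
                  [[0, 1], [1, 0]],
                  [[0, -1j], [1j, 0]],
                  [[1, 0], [0, -1]]], dtype=complex)

def haar_mixed_state(d=2):
    """Random density matrix from the Ginibre ensemble."""
    G = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    rho = G @ G.conj().T
    return rho / np.trace(rho).real

def to_physical(M, eps=1e-6):
    """Project a Hermitian matrix onto strictly positive states; stand-in for the LI estimator."""
    w, V = np.linalg.eigh((M + M.conj().T) / 2)
    w = np.clip(w, eps, None)
    M = V @ np.diag(w) @ V.conj().T
    return M / np.trace(M).real

def training_pair(tau, N_shots):
    """Indirect noise model: Gaussian error of width ~1/(2 sqrt(N)) on the Pauli mean values."""
    p = np.array([np.trace(tau @ s).real for s in sigma])
    f = p + rng.normal(0.0, 1.0 / (2.0 * np.sqrt(N_shots)), size=p.shape)
    rho = to_physical(np.einsum('i,iab->ab', f, sigma) / 2.0)   # linear inversion for this basis
    return np.linalg.cholesky(rho), np.linalg.cholesky(tau)     # (noisy input, target)

tau = haar_mixed_state()
C_rho, C_tau = training_pair(tau, N_shots=1000)
```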
### Neural network training
The considered neural network, working as a denoising filter, is a non-linear function \(h_{\mathbf{\theta}}\) preparing a matrix-to-matrix mapping, in its vectorized form, \(h_{\mathbf{\theta}}:\vec{C}_{\rho}\to\vec{C}_{\mathbf{\theta}}\), where \(\mathbf{\theta}\) incorporates all the variational parameters such as weights and biases to be optimized. The neural network training process relies on minimizing the cost function defined as a mean-squared error (MSE) of the network output with respect to the (vectorization of) target density matrix \(\hat{\tau}\), i.e.
\[\mathcal{L}^{\text{MSE}}(\mathbf{\theta})=\|\vec{C}_{\tau}-\vec{C}_{\mathbf{\theta}} \|^{2}, \tag{5}\]
via the presentation of \(K\) training samples \(\{\hat{\rho}_{l}\}_{l=1}^{K}\). The equivalence between MSE and Hilbert-Schmidt (HS) distance is discussed in detail in Appendix C, where we also demonstrate that the mean-squared error used in the cost function, Eq. (5), is a natural upper bound of the quantum fidelity. Hence, the choice of the cost function, Eq. (5), not only is the standard cost function for the neural network but also approximates the target state in a proper quantum metric. To make the model's optimization more efficient and avoid overfitting, we add a regularizing term resulting in the total cost function \(\mathcal{L}^{\text{MSE}}(\mathbf{\theta})+\text{Tr}[C_{\mathbf{\theta}}C_{\mathbf{\theta} }^{\dagger}]\) (chapter 7 of Ref. [54]).
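A small sketch of this cost function in PyTorch is shown below, assuming the complex Cholesky entries have been flattened into real vectors with real and imaginary parts concatenated; the batch averaging and the unit regularization weight are assumptions rather than the paper's exact settings.

```python
import torch

def qst_loss(c_pred, c_target, reg_weight=1.0):
    """Eq. (5) plus the Tr[C C^dagger] regularizer.

    c_pred, c_target: real tensors of shape (batch, L) holding the vectorized
    Cholesky factors (real and imaginary parts concatenated).
    """
    mse = ((c_pred - c_target) ** 2).sum(dim=1).mean()   # squared norm of Eq. (5), batch-averaged
    reg = (c_pred ** 2).sum(dim=1).mean()                # Tr[C C^dagger] = squared Frobenius norm of C
    return mse + reg_weight * reg
```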
The training process results in an optimal set of parameters of the neural network, \(\mathbf{\bar{\theta}}=\text{arg min}_{\mathbf{\theta}}\mathcal{L}\), and a trained neural network \(h_{\mathbf{\theta}}\), which allows for the reconstruction of the target density matrix \(\hat{\tau}\) via the Cholesky matrix \(C_{\bar{\rho}}\) [128], i.e.
\[\hat{\bar{\rho}}=\frac{C_{\bar{\rho}}C_{\bar{\rho}}^{\dagger}}{\text{Tr} \Big{[}C_{\bar{\rho}}C_{\bar{\rho}}^{\dagger}\Big{]}}\simeq\hat{\tau}, \tag{6}\]
where \(C_{\bar{\rho}}\) is reshaped from \(\vec{C}_{\bar{\rho}}=h_{\mathbf{\theta}}(\vec{C}_{\rho})\).
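For reference, mapping a vectorized Cholesky output back to a physical state, Eq. (6), takes only a few lines; the row-major, lower-triangular reshaping convention below is our assumption.

```python
import numpy as np

def state_from_cholesky_vector(c_vec, d):
    """Reshape a flattened factor into lower-triangular form and apply Eq. (6)."""
    C = np.tril(c_vec.reshape(d, d))        # assumed convention: row-major, keep lower triangle
    rho = C @ C.conj().T                    # positive semi-definite by construction
    return rho / np.trace(rho).real         # unit trace
```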
### Neural network architecture
Our proposed architecture draws inspiration from other recent models [55; 56], combining convolutional layers with a transformer layer implementing a self-attention mechanism [57; 58]. A convolutional neural network extracts important features from the input data, while a transformer block distils correlations between features via the self-attention mechanism. The self-attention mechanism utilizes the representation of the input data as nodes inside a graph [59] and aggregates relations between the nodes.
The architecture of the considered neural network \(h_{\mathbf{\theta}}\) contains two convolutional layers \(h_{\text{cnn}}\) with a transformer layer \(h_{\text{tr}}\) in between, i.e.:
\[h_{\mathbf{\theta}}(\vec{C}_{\rho})=\tanh[h_{\text{cnn}}\circ h_{\text{tr}}\circ \gamma(h_{\text{cnn}})](\vec{C}_{\rho}), \tag{7}\]
where \(\gamma(y)=\frac{1}{2}y\left(1+\text{Erf}(y/\sqrt{2})\right)\), \(y\in\mathbb{R}\), is the Gaussian Error Linear Unit (GELU) activation function [60], broadly used in modern transformer architectures, and \(\tanh(y)\) is the hyperbolic tangent, acting element-wise on the NN nodes.
The first layer \(h_{\text{cnn}}\) applies a set of \(K\) fixed-size trainable one-dimensional convolutional kernels to \(\vec{C}_{\rho}\), followed by the non-linear activation function, i.e. \(\gamma(h_{\text{cnn}}(\vec{C}_{\rho}))\to\{\mathbf{F}_{\text{cnn}}^{1},\ldots,\mathbf{F }_{\text{cnn}}^{K}\}\). During the training process, the convolutional kernels learn distinct features of the dataset, which are then fed forward to the transformer block \(h_{\text{tr}}\). The transformer block \(h_{\text{tr}}\) distils the correlations between the features extracted by the kernels via the self-attention mechanism, providing a new set of vectors, i.e. \(h_{\text{tr}}(\mathbf{F}_{\text{cnn}}^{1},\ldots,\mathbf{F}_{\text{cnn}}^{K})\to\{\mathbf{F}_ {\text{tr}}^{1},\ldots,\mathbf{F}_{\text{tr}}^{K}\}\). The last convolutional layer \(h_{\text{cnn}}\) provides the output \(\vec{C}_{\mathbf{\theta}}\), \(\tanh(h_{\text{cnn}}(\mathbf{F}_{\text{tr}}^{1},\ldots,\mathbf{F}_{\text{tr}}^{K}))\to \vec{C}_{\mathbf{\theta}}\), where all filter outputs from the last layer are added. Finally, we reshape the output into lower-triangular form and reconstruct the density matrix via Eq. (6).
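A compact PyTorch sketch of the composition in Eq. (7) is given below. Kernel count, kernel size, and head number are illustrative placeholders, and the attention here acts over positions of the flattened Cholesky vector; the paper describes attention between the \(K\) extracted feature maps, so this is only one of several reasonable readings, meant to show the Conv-Transformer-Conv structure rather than reproduce the exact model.

```python
import torch
import torch.nn as nn

class CholeskyDenoiser(nn.Module):
    """Conv1d -> Transformer encoder -> Conv1d, acting on the flattened Cholesky vector."""

    def __init__(self, n_kernels=16, kernel_size=9, n_heads=4):
        super().__init__()
        pad = kernel_size // 2
        self.conv_in = nn.Conv1d(1, n_kernels, kernel_size, padding=pad)
        encoder_layer = nn.TransformerEncoderLayer(
            d_model=n_kernels, nhead=n_heads, activation="gelu", batch_first=True
        )
        self.transformer = nn.TransformerEncoder(encoder_layer, num_layers=1)
        self.conv_out = nn.Conv1d(n_kernels, 1, kernel_size, padding=pad)
        self.gelu = nn.GELU()

    def forward(self, c_vec):                     # c_vec: (batch, m)
        x = c_vec.unsqueeze(1)                    # (batch, 1, m)
        x = self.gelu(self.conv_in(x))            # K feature maps
        x = self.transformer(x.transpose(1, 2))   # self-attention
        x = self.conv_out(x.transpose(1, 2))      # collapse the K maps into one output
        return torch.tanh(x.squeeze(1))           # (batch, m)
```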
The training data and the considered architecture allow the interpretation of the trained NN as a conditional debiaser (for details see Appendix D). The proposed protocol cannot improve the predictions of unbiased estimators; however, any estimator that outputs valid quantum states (e.g., LI, MLE) must be biased due to boundary effects. In the given framework, the task of the NN is to learn such skewness and drift the distribution towards the true mean.
## IV Results and discussion
Having introduced the features of our NN-based QST enhancer in the previous section, here, we demonstrate its advantages in scenarios of physical interest. To this aim, we consider two examples.
As the first example, we consider an idealized random quantum emitter (see e.g. Refs. [61; 62] for recent experimental proposals) that samples high-dimensional mixed states from the Hilbert-Schmidt distribution. After probing the system via a single-setting square-root POVM, we are able to show the usefulness of our NN in improving LI- and MLE-preprocessed states. We evaluate the generic performance of our solution and compare it with recent proposals of NN-based QST, Ref. [46].
In the second example, we focus on a specific class of multi-qubit pure states of special physical relevance, i.e. with metrological potential as quantified by the quantum Fisher information (QFI). Such states are generated via the famous one-axis twisting dynamics [63; 64]. Here, the system is measured using operators \(\mathbf{\hat{\pi}}\) given by local symmetric informationally complete positive operator-valued measures (SIC-POVMs) or by the experimentally relevant Pauli operators.
In the following, we discuss the two abovementioned scenarios.
### Reconstructing high-dimensional random quantum states
**Scenario**.- As a first illustration, we consider a set of \(N_{\rm test}\) random target states \(\{\hat{\tau}_{j}\}\), \(j=1,\ldots,N_{\rm test}\), with Hilbert space dimension \(d=9\), sampled from the HS distribution (see Appendix E). The first task consists of assessing the average reconstruction quality over such an ensemble to benchmark the generic performance of our NN.
We perform measurements on each target state \(\hat{\tau}_{j}\) via informationally complete (IC) square-root POVMs as defined in Eq. (14), and obtain the state reconstruction \(\hat{\rho}_{j}\) via two standard QST protocols, i.e. the bare LI and MLE algorithms, as well as via our neural-network-enhanced protocols, denoted as LI-NN and MLE-NN, see Fig. 1. Finally, we evaluate the quality of the reconstruction as the average of the square of the Hilbert-Schmidt distance
\[D_{\rm HS}^{2}(\hat{\rho}_{j},\hat{\tau}_{j})={\rm Tr}[(\hat{\rho}_{j}-\hat{ \tau}_{j})^{2}]. \tag{8}\]
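For reference, Eq. (8) amounts to a one-line NumPy function:

```python
import numpy as np

def hs_distance_sq(rho, tau):
    """Squared Hilbert-Schmidt distance Tr[(rho - tau)^2], Eq. (8)."""
    delta = rho - tau
    return np.trace(delta @ delta).real
```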
**Benchmarking**.- Fig. 2(a) presents the averaged squared HS distance \(\overline{D^{2}}_{\rm HS}\) as a function of the number of measurement trials \(N_{\rm trial}\), obtained with the bare LI and MLE algorithms, the neural-network-enhanced LI (LI-NN), and the neural-network-enhanced MLE (MLE-NN). The training dataset contains \(N_{\rm train}=1250\) HS-distributed mixed states. Our neural-network-enhanced protocol improves over LI and MLE, i.e. \(\overline{D^{2}}_{\rm HS}\) is lower for LI-NN and MLE-NN than for the LI and MLE algorithms at the same \(N_{\rm trial}\). For fewer trials, \(N_{\rm trial}<10^{3}\), the post-processing improves only marginally over the states reconstructed from MLE alone. For a larger number of trials, the lowest \(\overline{D^{2}}_{\rm HS}\) is obtained for the MLE results, which perform better than the other considered tomographic methods. To enhance MLE in the regime of few samples, we propose an alternative method by incorporating the statistical noise as a depolarization channel (see Appendix F).
Next, Fig. 2(b) presents a comparison between our protocol and the state-of-the-art density matrix reconstruction via neural networks of Ref. [46], where the authors report better performance than the MLE and LI algorithms with \(N_{\rm train}=8\cdot 10^{5}\) training samples, for states belonging to a Hilbert space of dimension \(d\geq 3\). Fig. 2(b) presents \(\overline{D^{2}}_{\rm HS}\) as a function of \(N_{\rm trial}/N_{\rm train}\), where \(N_{\rm train}=1250\) for MLE-NN, and \(N_{\rm train}=5000\) for LI-NN. The advantage of our protocol lies in the fact that the amount of necessary training data is significantly smaller than the training data size in Ref. [46], in order to obtain a similar reconstruction quality.
Figure 2: Evaluation of the QST reconstruction measured by the mean value of the squared Hilbert-Schmidt distance, \(\overline{D^{2}}_{\rm HS}\), for different QST protocols, averaged over \(N_{\rm test}=1000\) target states. Panel (a) shows the mean \(D_{\rm HS}^{2}\) for four QST protocols: LI (green dots), MLE (red squares), NN-enhanced LI (blue diamonds), and NN-enhanced MLE (orange crosses). Panel (b) shows \(\overline{D^{2}}_{\rm HS}\) as a function of the ratio \(N_{\rm trial}/N_{\rm train}\) for our LI-NN model (blue squares), MLE-NN model (orange crosses), and for the network model proposed in Ref. [46] (violet triangles). A lower number \(N_{\rm train}\) shifts lines to the left, indicating resource efficiency. Our proposed protocol achieves competitive averaged HS reconstruction with a training-data size an order of magnitude smaller than the method proposed in Ref. [46] (note that the \(x\)-axis values increase to the left). During the models’ training, we set \(N_{\rm train}=1250\) for the MLE-NN protocol, and \(N_{\rm train}=5000\) for the LI-NN. Lines are to guide the eye; shaded areas represent one standard deviation.
This is visible as a shift of the lines towards higher values of \(N_{\rm trial}/N_{\rm train}\).
### Certifying metrologically useful entanglement depth in many-body states
**Scenario.**- In the last experiment, we reconstruct a class of physically relevant multi-qubit pure states. Specifically, we consider a chain of \(L=4\) spins-\(1/2\) (Hilbert space of dimension \(d=16\)). The target quantum states are dynamically generated during the one-axis twisting (OAT) protocol [63; 64]
\[\ket{\Psi(t)}=e^{-itJ_{z}^{2}}\ket{+}^{\otimes L}\, \tag{9}\]
where \(\hat{J}_{z}\) is the collective spin operator along the \(z\)-axis and \(\ket{+}^{\otimes L}=[(|\uparrow\rangle+|\downarrow\rangle)/\sqrt{2}]^{\otimes L}\) is the initial state, prepared in a coherent spin state along the \(x\)-axis (orthogonal to \(z\)). The OAT protocol generates spin-squeezed states useful for high-precision metrology, allowing one to overcome the shot-noise limit [65; 66; 67; 68], as well as many-body entangled and many-body Bell-correlated states [69; 70; 71; 72; 73; 74; 75; 76; 77; 78; 79; 80; 81; 82]. OAT states have been extensively studied theoretically [83; 84; 85; 86; 87; 88; 89; 90; 91; 92], and can be realized with a variety of ultra-cold systems, utilizing atom-atom collisions [93; 94; 95; 96] and atom-light interactions [97; 98]. Recent theoretical proposals realize OAT with ultra-cold atoms in optical lattices by effectively simulating Hubbard and Heisenberg models [99; 100; 101; 102; 103; 104; 105; 106; 107].
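The target states of Eq. (9) are simple to generate numerically; the sketch below uses the fact that \(\hat{J}_z\) is diagonal in the computational basis (operator ordering and normalization conventions are ours).

```python
import numpy as np
from functools import reduce

def collective_jz(L):
    """J_z = (1/2) * sum_k sigma_z^(k) for a chain of L qubits."""
    sz, eye = np.diag([0.5, -0.5]), np.eye(2)
    Jz = np.zeros((2**L, 2**L))
    for k in range(L):
        ops = [eye] * L
        ops[k] = sz
        Jz += reduce(np.kron, ops)
    return Jz

def oat_state(t, L):
    """|Psi(t)> = exp(-i t J_z^2) |+>^{otimes L}, Eq. (9)."""
    psi0 = reduce(np.kron, [np.ones(2) / np.sqrt(2)] * L)   # coherent state along x
    m = np.diag(collective_jz(L))                            # J_z is diagonal here
    return np.exp(-1j * t * m**2) * psi0

psi = oat_state(np.pi / 2, 4)            # the 4-qubit cat state discussed below
tau = np.outer(psi, psi.conj())          # target density matrix
```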
**Preliminary results.**- For the first task, we generate our data for testing and training with SIC-POVM operators. For the test set, we select 100 OAT states at evenly spaced times \(t\in(0,\pi)\) and assess the average reconstruction achieved by our NN trained on a dataset of \(N_{\rm train}=10,500\) Haar-random pure states. We compare these values with the average score obtained for generic Haar-distributed states, which is the set the model was trained on (see Appendix E).
The reconstruction qualities are shown in Table 1. First, we verify that the NN is able to substantially improve the reconstruction of the OAT states, even though no examples of this class of states were given in the training phase, which relied only on Haar-random states. Moreover, the OAT-averaged reconstruction values exceed the Haar-averaged ones. We conjecture that this stems from the bosonic symmetry exhibited by the OAT states. This symmetry introduces redundancies in the density matrix which might help the NN to detect errors produced by the statistical noise. Finally, let us highlight that the network also displays good robustness to noise. Indeed, when we feed the same network with states prepared with \(N_{\rm trial}=10^{3}\) trials, the reconstruction fidelity increases from 67% to 87%.
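A straightforward way to evaluate the state fidelity quoted in Table 1, assuming the standard Uhlmann definition (some authors report its square root instead), is:

```python
import numpy as np
from scipy.linalg import sqrtm

def fidelity(rho, tau):
    """Uhlmann fidelity F(rho, tau) = (Tr sqrt(sqrt(rho) tau sqrt(rho)))^2."""
    s = sqrtm(rho)
    return np.real(np.trace(sqrtm(s @ tau @ s))) ** 2
```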
**Inferring the quantum Fisher information (QFI).**- Finally, we evaluate the metrological usefulness of the reconstructed states as measured by the quantum Fisher information, \(F_{Q}[\hat{\rho},\hat{G}]\). The QFI is a nonlinear function of the state and quantifies the sensitivity to rotations generated by \(\hat{G}\). For more details, we refer the reader to Appendix G.
Here, we use the collective spin component \(\hat{J}_{\bf v}\) as the generator, \(\hat{G}=\hat{J}_{\bf v}\), where the orientation \(\mathbf{v}\in\mathbb{S}^{2}\) is chosen so that it delivers the maximal sensitivity. The QFI with respect to collective rotations can also be used to certify quantum entanglement [68], in particular the entanglement depth \(k\), which is the minimal number of genuinely entangled particles that is necessary to describe the many-body state. If \(F_{Q}[\hat{\rho},\hat{J}_{\bf v}]>kL\), then the quantum state \(\hat{\rho}\) has entanglement depth of at least \(k+1\) [108; 109]. In particular, for states with depth \(k=1\) (i.e., separable states), owing to the lack of entanglement, the metrological power is at most the shot-noise limit [110]. This bound is saturated by coherent spin states, like our initial (\(t=0\)) state for the OAT evolution, \(\ket{+}^{\otimes L}\).
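As a reference implementation, the QFI of a (generally mixed) reconstructed state with respect to a fixed generator can be evaluated from the spectral decomposition; the optimization over the direction \(\mathbf{v}\) is not shown, and the eigenvalue cutoff is an arbitrary numerical choice.

```python
import numpy as np

def qfi(rho, G, tol=1e-12):
    """F_Q[rho, G] = 2 * sum_{k,l} (p_k - p_l)^2 / (p_k + p_l) * |<k|G|l>|^2."""
    p, V = np.linalg.eigh(rho)
    Gkl = V.conj().T @ G @ V
    F = 0.0
    for k in range(len(p)):
        for l in range(len(p)):
            s = p[k] + p[l]
            if s > tol:
                F += 2.0 * (p[k] - p[l]) ** 2 / s * abs(Gkl[k, l]) ** 2
    return F

# Entanglement-depth witness: if qfi(rho, Jv) > k * L, the state has depth at least k + 1.
```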
In Fig. 3, we present the evolution of the QFI (normalized by the coherent limit, \(L=4\)) for the OAT target states (top solid blue lines). The top row corresponds to SIC-POVM measurement operators \(\mathbf{\hat{\pi}}\), while the bottom row corresponds to projective measurements with Pauli operators. In all experiments, we use the same neural network previously trained on random Haar states with frequencies \(\{\mathbf{f}_{m}\}\) obtained from \(N_{\rm trial}=10^{4}\) measurement trials. The LI algorithm by itself shows entanglement (QFI \(>L\)) only for a few times \(t\). By enhancing the LI algorithm via our NN protocol, we surpass the three-body bound (QFI/\(L=3\)), thus revealing genuine 4-body entanglement, which is the highest depth possible in this system (as it is of size \(L=4\)). For instance, let us note that at time \(t=\pi/2\), the OAT dynamics generates the cat state, \(\ket{\Psi(t=\pi/2)}=(e^{-i\pi/4}|+\rangle^{\otimes 4}+e^{+i\pi/4}|-\rangle^{\otimes 4})/\sqrt{2}\), which is genuinely \(L\)-body entangled, and so it is certified by the NN-enhanced reconstruction.
| Method | \(10^{5}\) trials | \(10^{4}\) trials | \(10^{3}\) trials |
| --- | --- | --- | --- |
| LI-OAT | \(94.3\pm 0.4\%\) | \(86.5\pm 0.1\%\) | \(67.0\pm 2.0\%\) |
| NN-OAT | \(98.6\pm 0.2\%\) | \(97.8\pm 0.5\%\) | \(87.6\pm 4.5\%\) |
| NN-Haar | \(96.9\pm 3.0\%\) | \(94.2\pm 3.3\%\) | \(81.1\pm 4.1\%\) |

Table 1: Comparison of the average fidelity and its standard deviation between the reconstructed and the target states of size \(d=16\) for various QST methods (rows), with varying numbers of measurement trials \(N_{\rm trial}=10^{5},10^{4},10^{3}\), as indicated by the consecutive columns. The first row presents the average fidelity of the reconstruction for linear-inversion QST, averaged over OAT states evenly sampled from \(t=0\) to \(t=\pi\). Employing our neural network yields an enhancement over the bare LI, as shown in the second row for the same target set. Finally, the third row also shows data for NN-enhanced LI, but averaged over general Haar-random states. All the initial Born values are calculated with noiseless SIC-POVMs.
### Discussion
Currently, existing algorithms for QST tend to suffer from either of two main problems: unphysical results or bias. We tackle the problem of increasing the quality of state reconstruction by enhancing any biased QST estimator with an attention-based neural network working as a denoising filter, see Figure 1. Such an approach corrects both unphysicality and bias, at least partially. Our NN architecture is based on convolutional layers extracting crucial features from the input data and attention-based layers distilling correlations between the extracted features. This choice of architecture is motivated by its improved generalization ability, which enables a significant reduction of the necessary amount of training data compared to NN-based approaches that do not utilize an attention mechanism. From the examples provided in this section, we infer that our NN-enhanced protocol outperforms other QST methods in the domain of a small number of measurement samples, see Figure 2. This is especially important for applications in realistic scenarios, where the trade-off between accuracy and resource cost is crucial.
The results presented in this contribution show that, for a small number of samples, the comparison of the neural-network enhancement to the pure linear-inversion protocol favors our implementation, as can be deduced from the reconstruction fidelities in Table 1. Although the NN was trained on Haar-random pure states, it achieves even better performance on a measure-zero subset of them, namely the one-axis-twisted states. We conjecture that this is due to their underlying symmetries, which allow the network to efficiently learn and correct the noise pattern.
Furthermore, the metrological usefulness of our method is visible through its certification of the quantum Fisher information and the entanglement depth, see Fig. 3. The bare QST setup, without our NN post-processing, is not able to show entanglement (QFI \(>L\)) at any finite time, nor does it ever certify the full genuine 4-body entanglement. Both of these problems are resolved by the NN enhancement.
## V Concrete experimental implementation for quantum state verification
To recapitulate this contribution, as a complement to Fig. 1 and our repository provided in Ref. [111], we summarize the practical implementation of the protocol introduced in this work.
1. _Scenario_: We consider a finite-dimensional quantum system prepared in a target state \(\hat{\tau}\). Here, we aim to verify the preparation of a quantum state \(\hat{\tau}\) via QST. To this end, we set a particular measurement basis \(\hat{\mathbf{\pi}}\) to probe the system.
2. _Experiment_: After a finite number of experimental runs, from the counts, we construct the frequency vector \(\mathbf{f}\).
3. _Preprocessed quantum state tomography_: From the frequency vector \(\mathbf{f}\) and the basis \(\hat{\mathbf{\pi}}\), we infer the first approximation of the state \(\hat{\rho}\) via the desired QST protocol (e.g., one of those introduced in Appendix A).
4. _Assessing pre-reconstruction_: We evaluate the quality of the reconstruction by e.g., computing \(D_{\mathrm{HS}}^{2}(\hat{\tau},\hat{\rho})\), quantum fidelity or any other meaningful quantum metric. To improve such a score, we resort to our NN solution to complete a denoising task. As with any deep-learning method, training is required.
Figure 3: Time evolution of the normalized QFI during the OAT protocol for \(L=4\) qubits system. Solid blue lines represent QFI calculated for target quantum states. The mean values of QFI calculated from tomographically reconstructed density matrices are denoted by green-dashed (reconstruction via LI), and red-dotted lines (reconstruction via neural network post-processed LI outputs). Shaded areas mark one standard deviation after averaging over 10 reconstructions. Panels (a) and (b) correspond to LI protocol with SIC-POVM data, whereas (c) and (d) denote LI reconstruction inferred from Pauli measurements. In the upper row, the left (right) column corresponds to \(N_{\mathrm{trial}}=10^{3}\) (\(10^{4}\)) trials; in the lower row, the left (right) column reproduces an LI initial fidelity reconstruction of \(\sim 74\%(\sim 86\%)\). The red lines represent the whole setup with NN post-processing of data from corresponding green lines, indicating improvement over the bare LI method. The NN advantage over the bare LI method can be characterized by entanglement depth certification, as shown by the horizontal lines denoting the entanglement depth bounds ranging from the separable limit (bottom line, bold), to the genuine \(L\)-body limit (top line). In particular, the presence of entanglement is witnessed by QFI \(>L\), as shown by the violation of the separable bound (bold horizontal line).
5. _Training neural network_: Different training strategies can be implemented:
    1. Train over uniform ensembles (e.g., Haar, HS, Bures, etc.) if \(\hat{\tau}\) is a typical state or we do not have information about it. If we know certain properties of the target state, we can take advantage of them (see the next items).
    2. Train over a subspace of states of interest. For example, if we reconstruct OAT states (Section IVb), we may train only on the permutation-invariant sector.
    3. Train with experimental data. For example, if we have a quantum random source to characterize (Section IVa), experimental data can be used in the training set [44]. In such a case, our demonstrated reduction of the training set size translates also into a reduction of the experimental effort.
6. _Feeding the neural network_: We feed the preprocessed state \(\hat{\rho}\) forward through our trained matrix-to-matrix NN to recover the enhanced quantum state \(\hat{\bar{\rho}}\).
7. _Assessing the neural network_: We compute the updated reconstruction metric on the post-processed state, \(D_{\text{HS}}^{2}(\hat{\tau},\hat{\bar{\rho}})\). Finally, we assess the usefulness of the NN by checking how much smaller this value is compared to the pre-processed score \(D_{\text{HS}}^{2}(\hat{\tau},\hat{\rho})\).
The strength of our proposed protocol lies in its broad applicability, as the choice of the basis \(\hat{\mathbf{\pi}}\) and QST pre-processing method is arbitrary.
## VI Conclusions
We proposed a novel deep-learning protocol improving standard finite-statistics quantum state tomography methods, such as Linear Inversion and Maximum Likelihood Estimation. Our network, based on the attention mechanism and convolutional layers, greatly reduces the number of required measurements, serving as a denoising filter for the standard tomography output. The versatility of our approach comes from the fact that the measurement basis and the reconstruction method have only an implicit impact, as our central algorithm works directly with the density matrix. The proposed method reduces the number of necessary measurements on the target density matrix by at least an order of magnitude compared to existing DL-QST protocols and standard finite-statistics methods.
We verified that our proposed method is able to improve over LI- and MLE-preprocessed states. Moreover, the inference stage was performed on out-of-distribution data, i.e., we tested our model on density matrices forming an infinitesimally small fraction of the training dataset, indicating the robustness of the proposed method. In particular, we tested our model on 4-qubit spin-squeezed and many-body Bell-correlated states, generated during the one-axis-twisting protocol, with an average fidelity of \(\sim 98\%\). We demonstrated that our NN improves the reconstruction of a class of physically relevant multi-qubit states, paving the way to using such novel methods in current quantum computers and quantum simulators based on spin arrays.
Our protocol can greatly advance other QST methods, for both arbitrary states as well as for special classes that scale reasonably with the number of particles, such as symmetric states [112, 113].
**Data and code availability**.- Data and code are available at Ref. [111].
###### Acknowledgements.
We thank Leo Zambrano, Federico Bianchi, and Emilia Witkowska for the fruitful discussions. We acknowledge support from: ERC AdG NOQIA; Ministerio de Ciencia y Innovation Agencia Estatal de Investigaciones (PGC2018-097027-B-I00/10.13039/501100011033, CEX2019-000910-S/10.13039/501100011033, Plan National FIDEUA PID2019-106901GB-I00, FPI, QUANTERA MAQS PC2019-111828-2, QUANTERA DYNAMITE PCI2022-132919, Proyectos de I+D+1 "Retos Colaboracion" QUSPIN RTC2019-007196-7); MICIIN with funding from European Union NextGenerationEU(PRTR-C17.I1) and by Generalitat de Catalunya; Fundacio Cellex; Fundacio Mir-Puig; Generalitat de Catalunya (European Social Fund FEDER and CERCA program, AGAUR Grant No. 2021 SGR 01452, QuantumCAT U16-011424, co-funded by ERDF Operational Program of Catalonia 2014-2020); Barcelona Supercomputing Center MareNostrum (FI-2022-1-0042); EU (PASQuanS2.1, 101113690); EU Horizon 2020 FET-OPEN OPTologic (Grant No 899794); EU Horizon Europe Program (Grant Agreement 101080086 -- NeQST), National Science Centre, Poland (Symmonia Grant No. 2016/20/W/ST4/00314); ICFO Internal "QuantumGaudi" project; European Union's Horizon 2020 research and innovation program under the Marie-Sklodowska-Curie grant agreement No 101029393 (STREDCH). AKS acknowledges support from the European Union's Horizon 2020 Research and Innovation Programme under the Marie Sklodowska-Curie Grant Agreement No. 847517. M.P. acknowledges the support of the Polish National Agency for Academic Exchange, the Bekker programme no: PPN/BEK/2020/1/00317. Views and opinions expressed are, however, those of the author(s) only and do not necessarily reflect those of the European Union, European Commission, European Climate, Infrastructure and Environment Executive Agency (CINEA), or any other granting authority. Neither the European Union nor any granting authority can be held responsible for them. |
2310.12162 | AI Potentiality and Awareness: A Position Paper from the Perspective of
Human-AI Teaming in Cybersecurity | This position paper explores the broad landscape of AI potentiality in the
context of cybersecurity, with a particular emphasis on its possible risk
factors with awareness, which can be managed by incorporating human experts in
the loop, i.e., "Human-AI" teaming. As artificial intelligence (AI)
technologies advance, they will provide unparalleled opportunities for attack
identification, incident response, and recovery. However, the successful
deployment of AI into cybersecurity measures necessitates an in-depth
understanding of its capabilities, challenges, and ethical and legal
implications to handle associated risk factors in real-world application areas.
Towards this, we emphasize the importance of a balanced approach that
incorporates AI's computational power with human expertise. AI systems may
proactively discover vulnerabilities and detect anomalies through pattern
recognition, and predictive modeling, significantly enhancing speed and
accuracy. Human experts can explain AI-generated decisions to stakeholders,
regulators, and end-users in critical situations, ensuring responsibility and
accountability, which helps establish trust in AI-driven security solutions.
Therefore, in this position paper, we argue that human-AI teaming is worthwhile
in cybersecurity, in which human expertise such as intuition, critical
thinking, or contextual understanding is combined with AI's computational power
to improve overall cyber defenses. | Iqbal H. Sarker, Helge Janicke, Nazeeruddin Mohammad, Paul Watters, Surya Nepal | 2023-09-28T01:20:44Z | http://arxiv.org/abs/2310.12162v1 | AI Potentiality and Awareness: A Position Paper from the Perspective of Human-AI Teaming in Cybersecurity
###### Abstract
This position paper explores the broad landscape of AI potentiality in the context of cybersecurity, with a particular emphasis on its possible risk factors with awareness, which can be managed by incorporating human experts in the loop, i.e., "Human-AI" teaming. As artificial intelligence (AI) technologies advance, they will provide unparalleled opportunities for attack identification, incident response, and recovery. However, the successful deployment of AI into cybersecurity measures necessitates an in-depth understanding of its capabilities, challenges, and ethical and legal implications to handle associated risk factors in real-world application areas. Towards this, we emphasize the importance of a balanced approach that incorporates AI's computational power with human expertise. AI systems may proactively discover vulnerabilities and detect anomalies through pattern recognition, and predictive modeling, significantly enhancing speed and accuracy. Human experts can explain AI-generated decisions to stakeholders, regulators, and end-users in critical situations, ensuring responsibility and accountability, which helps establish trust in AI-driven security solutions. Therefore, in this position paper, we argue that human-AI teaming is worthwhile in cybersecurity, in which human expertise such as intuition, critical thinking, or contextual understanding is combined with AI's computational power to improve overall cyber defenses.
Keywords:Cybersecurity, Data Analytics, Machine Learning, AI Potentiality, AI Risk Factors, Human-AI Teaming, Intelligent Systems.
## 1 Introduction
In today's rapidly evolving digital technology ecosystem, cybersecurity has taken on unprecedented significance. With the growth of interconnected systems, protecting sensitive information and critical infrastructure has become a top priority
of a nation [1]. As threats to digital assets grow in complexity and sophistication, there is an urgent need for innovative approaches to effectively counter these concerns. In this position paper, we focus on the potential of artificial intelligence (AI) as well as its awareness, i.e., the handling of its possible risk factors, strengthened by human expertise and observation in real-world application areas.
The combination of AI with human expertise has emerged as a potential solution to addressing the ever-changing landscape of cybersecurity. AI systems have shown promise in automating repetitive tasks, analyzing large datasets, and discovering patterns that may be beyond human observation [2][3]. These competencies are particularly beneficial in the field of cybersecurity, where the rapid identification and mitigation of threats and attacks are crucial. Human experts, on the other hand, have a distinct set of abilities in their understanding of context, intuition, and ethical judgment. This position paper intends to dive into the intricate dynamics of this Human-AI collaboration, exploring the synergies between AI's computational capability and human experts' domain knowledge, contextual reasoning, and critical thinking in their roles.
The cooperation between AI and human expertise in cybersecurity is not only an issue of technological collaboration; it also extends to the ethical sphere. The successful deployment of AI in cybersecurity operations necessitates resolving issues of bias, accountability, transparency, and the appropriate division of decision-making authority between machines and humans. Human analysts can make sophisticated decisions depending on the broader context of the organization's goals and risk tolerance. Thus, human observations at the application level are still required despite AI's huge potential in computing [4]. In short, we define "AI potentiality" as the _technical aspects_ of AI, emphasizing its computational capabilities in terms of speed, accuracy, scalability, and automation in cybersecurity tasks, and "AI awareness" as its _possible risk factors_, highlighting the importance of incorporating human experts in the loop, e.g., human domain knowledge, intuition, and situational awareness, to make informed decisions in cybersecurity, which eventually helps to achieve organizational goals with proper responsibility and accountability. Thus, "Human-AI teaming", in essence, maximizes the assets of both AI and human intelligence, resulting in a more robust, effective, and explainable security system for organizations. To better comprehend the core topic of this position paper and the overall contributions, we formulate three key questions below:
* _AI Potentiality:_ Is AI capable of effectively and efficiently processing, analyzing, and extracting insights or useful knowledge from massive amounts of cyber data, which is challenging for a human analyst?
* _AI Awareness:_ Are there risk factors associated with real-world AI applications in the context of cybersecurity, and in particular, how can we ensure responsibility and accountability if the AI system fails in certain situations?
* _Human-AI Teaming:_ Is it worthwhile to establish human-AI teaming for cybersecurity solutions and rethink the current cyberspace making informed decisions with accountability in real-world application areas?
Overall, this position paper emphasizes a deeper understanding of Human-AI teaming, particularly how the combination of AI and human expertise might strengthen our cyber defense measures in various real-world application areas. In the following sections, we provide an explicit understanding of the potential of AI, as well as the importance of human experts in the loop in the context of next-generation cybersecurity solutions. Therefore, this paper aims to provide significant insights for cybersecurity academics, practitioners, policymakers, and stakeholders, advancing discussions that lead the way for successful and responsible cybersecurity practices in our digitally connected society.
## 2 Why Human-AI Teaming in Cybersecurity?
In cybersecurity, the term "Human-AI teaming" refers to cooperation between AI technologies and human expertise to boost the overall effectiveness of cybersecurity solutions. In the following, we discuss how this teaming can contribute:
* _Complementary Strengths:_ AI is particularly effective at swiftly analyzing and processing large volumes of data, finding patterns, discovering anomalies, and generating policies from complex specifications. However, there should be clear accountability for decisions made by AI systems in real-world cybersecurity applications. Human experts, on the other hand, have awareness of the environment, intuition, and the skills to make complex decisions using a combination of technological knowledge and hands-on expertise. Thus, organizations can achieve better cyber attack or threat analysis and response strategies with proper accountability through this integration.
* _Scale and Speed:_ Cyber threats are constantly evolving and can emerge at a rapid pace. Attackers often employ automated tools and methods to exploit vulnerabilities. AI can assist cybersecurity teams in analyzing threats and taking action at a speed and scale that would be impossible for humans to tackle by themselves. This speeding up is essential for minimizing potential harm and promptly addressing emerging risk factors.
* _Decision Support:_ Advanced threats often consist of numerous stages and sophisticated attack pathways. AI can give human analysts pertinent data and insights to support their decision-making. This can include information about the threat's nature, its potential effects, and recommended solutions. While humans can handle the more complicated and hidden aspects of investigations, attribution, and response, AI can discover insights from data as well as automate the necessary modules of threat detection and analysis.
* _User Behavior and Predictive Analytics:_ AI can monitor user behavior patterns to find anomalies that could point to unauthorized access or compromised accounts. Analyzing historical data AI can identify trends and predict potential future threats, which eventually enables human analysts to take proactive measures before an incident happens.
* _Continuous Learning and Improvement:_ To effectively respond to new threats, human analysts can regularly update and improve AI models. Although AI
models are capable of learning from previous data, they may require human supervision to understand the importance of newly discovered patterns and anomalies. This teaming enables a feedback loop where human experts can instruct and enhance AI models based on their real-world experiences, resulting in progressively more precise and efficient threat analysis.
* _Incident Response and Recovery:_ When a cyber incident occurs, human expertise is essential for managing the situation, determining the level of the damage, coordinating the response, and communicating with stakeholders. AI can assist in quickly analyzing incident data, offering pertinent insights to guide human decision-making as well as necessary automation. This helps human experts for better strategic decisions, determine how an incident may affect other stakeholder groups, and efficiently direct the recovery process.
* _Regulatory Compliance, Trust and Accountability:_ In many industries, security operations are subject to regulations that require human oversight and accountability. In particular, it must be clear who will be responsible if AI systems fail. Stakeholders often want to understand the rationale behind security decisions. Cybersecurity experts can provide explanations for AI outcomes that build trust, transparency, and accountability.
Although AI has huge potential, it comes with risk factors, particularly in terms of situational awareness, decision-making, and accountability, as highlighted above. Thus, to increase overall cybersecurity resilience, organizations need to blend human intuition, critical thinking, and decision-making skills with AI's speed, scalability, data-processing, and learning capabilities.
## 3 AI-based Modeling and Potentiality
In this section, we first highlight what types of AI-based modeling can be built, and then we explore multi-aspects AI methods that can contribute in the context of cybersecurity modeling.
### Major Types of AI Modeling
Generative AI and discriminative AI are two common approaches in the area, with hybrid AI being a combination of the two. Generative AI is typically focused on generating new data according to needs, while discriminative AI is focused on classifying data. Hybrid AI combines these two approaches to take advantage of their strengths to solve a particular problem. In the following, we explore these with examples in the context of cybersecurity.
* _Generative AI:_ Generative models are trained to learn the underlying structure of the data and generate new data that has similar characteristics to the original data, e.g., Generative Adversarial Network (GAN) [5], as shown in Fig 1. In cybersecurity, generative models can be used for various tasks such as attack simulation to test and enhance defense mechanisms, synthetic data generation for training machine learning models, anomaly detection, etc.
* _Discriminative AI:_ Discriminative AI models aim to learn the decision boundary between different classes of data, e.g., Random Forest (RF) with multiple decision trees [6], as shown in Fig 2. In cybersecurity, discriminative models can be used for various tasks such as intrusion detection, predicting threats, fraud detection, malware analysis, etc.
* _Hybrid AI:_ Hybrid models combine the strengths of both generative and discriminative models. For example, a hybrid model could use a generative model to generate synthetic data that can be used to train a discriminative model for a particular task [3]. In cybersecurity, hybrid AI can be used to create more robust intrusion detection systems, detect advanced persistent threats, simulate complex attack scenarios, etc.
Overall, each type of AI modeling mentioned has its own strengths and weaknesses, and the best approach for a given task depends on the specific use case and availability of the cyber data and relevant resources. A combination of generative, discriminative, and hybrid AI approaches can be effective in providing a comprehensive and effective cybersecurity solution by taking into account their individual strengths. Thus, research should be focused in this area for future-generation cybersecurity modeling.
### AI Methods and Algorithms
Although AI is a broader term in terms of techniques and application areas, we explore the key AI methods that can be used in cybersecurity according to the problem nature and available data or resources. These are:
* _Machine Learning:_ Machine learning has emerged as a powerful tool in cybersecurity. Machine learning algorithms such as Decision Trees, RF, SVM, Logistic regression, Clustering, PCA, etc. [1] can analyze large volumes of data, such as network traffic and system logs, to identify patterns and anomalies that could indicate a potential cyber attack. By analyzing attack patterns, machine learning algorithms can identify the root cause of security breaches, enabling security teams to take corrective action more quickly and effectively.
* _Deep Learning:_ Deep learning is a subset of machine learning that involves training neural networks with multiple layers to learn from complex data sets [3]. Deep learning models such as DNN, CNN, RNN, LSTM, Autoencoder, GAN, etc. [1] have the ability to extract representational information and decision rules simultaneously from data [7]. Deep learning algorithms can be used in cybersecurity, particularly in the areas of threat detection, anomaly detection, malware classification, and so on [1][8].
* _Semantic Knowledge Representation and Reasoning:_ Semantic knowledge involves a deeper understanding of the evolving threat landscape and the context of data, which eventually helps security professionals make informed decisions in real time. Several popular methods such as ontologies or knowledge graphs, i.e., graph-structured data model, can be used to represent knowledge about potential threats, vulnerabilities, and assets in cybersecurity [9].
* _Knowledge or Rule Discovery:_ Knowledge discovery in cybersecurity, e.g., of rules, typically involves extracting useful insights and patterns from large datasets, which are then used in intrusion detection, access control, and other security systems. Rule-based AI models can leverage machine learning and data science processes to enhance security solutions by generating a set of optimized rules. For instance, clustering algorithms can be used to group similar behavioral patterns of network traffic and derive rules from them to detect anomalies in real time (a minimal sketch is given after this list). Association rule mining techniques can assist security analysts in uncovering relationships between the various events and entities involved in an incident [3].
* _Language Model and Multimodality:_ Large language models (LLM) are typically at the forefront of natural language processing (NLP) research and applications due to their effectiveness in understanding and generating human language. These models can analyze and understand vast amounts of unstructured text data, such as threat reports to extract relevant threat intelligence. For example, to summarize long threat reports and classify them into different threat categories these models can be used, which eventually assist security analysts in prioritizing their responses. In addition, multimodal intelligence by taking into account the fusion of textual, visual, or sensor data [10] can lead to more robust and comprehensive solutions in cybersecurity depending on relevant data availability.
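To make the clustering-based rule idea from the list above concrete, the toy sketch below learns cluster centres from benign network-flow features and turns the 99th percentile of training distances into a simple anomaly rule; the feature names, numbers, and threshold are purely illustrative assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans

# Hypothetical flow features: [duration_s, bytes_sent, bytes_received, n_connections]
rng = np.random.default_rng(1)
benign_flows = rng.normal(loc=[1.0, 5e4, 8e4, 3.0],
                          scale=[0.5, 1e4, 2e4, 1.0], size=(5000, 4))
# (In practice, features should be standardized before clustering.)

kmeans = KMeans(n_clusters=5, n_init=10, random_state=0).fit(benign_flows)

# "Rule": flag a flow if it lies farther from every cluster centre than
# 99% of the benign traffic observed during training.
train_dist = kmeans.transform(benign_flows).min(axis=1)
threshold = np.quantile(train_dist, 0.99)

new_flows = np.array([
    [1.2, 4.8e4, 7.5e4, 3.0],   # ordinary-looking traffic
    [30.0, 9e6, 1e3, 250.0],    # exfiltration-like burst (hypothetical)
])
is_anomalous = kmeans.transform(new_flows).min(axis=1) > threshold
print(is_anomalous)             # the second flow is flagged for human review
```

In a human-AI teaming setting, analysts would review the flagged flows, tune the threshold, and decide whether such a rule should be promoted into the production detection system.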
Overall, these multi-aspects AI methods and algorithms discussed as well as their hybridization can provide powerful tools for detecting and responding to security threats. However, it is important to carefully consider the strengths and weaknesses of each method and to use them appropriately based on the specific needs and characteristics of the systems being protected.
## 4 Real-World Cybersecurity Application Areas
In the real-world application areas, both IT (Information Technology) and Industrial Control Systems (ICS) or Operational Technology (OT) systems are
important to the operation of modern organizations and business [4]. Human-AI teaming is crucial for ensuring the security of IT and OT systems, networks, and data from cyber threats. Hence, we summarize the potential usage of human-AI teaming, which can be applicable for both IT and ICS/OT security systems:
* _Anomaly and Threat Detection with Real-time Monitoring:_ AI can continuously monitor network traffic, system logs or sensor data to identify anomalies or suspicious activity that might indicate a cyberattack. Human experts can then evaluate and investigate these findings and take necessary action.
* _Incident Response and Recovery_: AI can classify and prioritize security alerts depending on severity, enabling human responders to focus on the most critical incidents. AI may automate predefined incident response playbooks, reducing risks immediately while notifying human teams for further investigation, and strategic decision-making.
* _Risk Assessment and Vulnerability Management:_ AI can analyze vulnerabilities and risks in both IT and ICS/OT environments by continuously investigating configurations, system states, and emerging threats. Security analysts can assess and prioritize risks and vulnerabilities for remediation depending on the possible impact on critical infrastructure.
* _Predictive Analytics and Proactive Maintenance:_ AI can analyze historical data and current trends to anticipate potential future threats or vulnerabilities. Human experts use this predictive insight to resolve vulnerabilities and security weaknesses in advance. In many cases, these predictive AI insights can assist in designing maintenance plans and making informed decisions about equipment replacements or upgrades.
* _Incident Investigation and Forensics:_ AI assists in capturing and analyzing forensic data to determine the underlying causes of security problems. Human investigators employ AI-generated insights to build together attack narratives, identify criminals, and attribute breaches.
* _Adaptive Security:_ AI-powered adaptive security mechanisms adjust dynamically based on real-time threat intelligence and system conditions. Human analysts supervise and fine-tune AI algorithms to ensure they align with business goals and regulatory constraints.
* _Visualization and Situational Awareness:_ AI analyses and visualizes data, allowing human operators to instantly understand the security status. In critical scenarios, human analysts investigate AI-generated visualizations to make well-informed decisions.
* _Threat Intelligence and Contextualization:_ AI is capable of analyzing massive amounts of threat data and providing actionable intelligence. Human analysts contextualize threat intelligence, making it applicable to specific contexts.
* _Threat Hunting:_ Human analysts can employ AI-powered technologies to proactively hunt for evidence of compromise that may not trigger automatic alarms. Human analysts' intuition and expertise are essential in uncovering sophisticated attacks. Overall, humans can develop the hypothesis and AI can help to execute it.
* _Behavioral Analysis and User Authentication:_ AI can develop behavioral profiles for individuals and groups inside an organization. Any variations from these characteristics can generate alerts to explore potential insider threats. Human experts can evaluate suspicious behaviors to identify whether they are insider threats or false positives. Thus this can assist human experts in strengthening authentication systems of an organization.
Overall, Human-AI teaming in both IT and ICS/OT environment has the potential to greatly improve attack identification, incident response, and overall cybersecurity posture. AI can aid in automation, quick analysis, and processing of massive volumes of data, while human knowledge adds context, critical thinking, decision-making, and flexibility to a dynamic and evolving threat environment. However, the synergistic abilities of humans and AI need to be successfully exploited to handle domain-specific issues.
## 5 Challenges and Research Direction
While AI-based cybersecurity solutions have the potential to enhance security as well as assist human experts in making decisions, several challenges need to be addressed to fully realize these benefits. Hence, we outline the key challenges and directions within the scope of our study:
* _Data Quality and Diversity:_ Getting tagged data with high quality and diversity to cover a variety of attack scenarios is a challenge. Thus research should focus on investigating techniques that can identify anomalies and potential risks by using both labelled and unlabeled data as well as unsupervised learning techniques, rather than relying solely on labeled data. In addition, methods to augment and synthesize a variety of representative datasets can help to achieve generalizability and robustness of AI models.
* _Adversarial Attacks:_ Adversarial attacks against AI models in cybersecurity can lead to the evasion of detection systems by attackers. It is essential to conduct research to develop AI models that can resist adversarial attacks and maintain their effectiveness. Developing techniques such as adversarial learning, input sanitization, defensive distillation, and ensemble modeling could enhance model security.
* _Interpretable AI:_ To understand why certain decisions are made, it is crucial for cybersecurity analysts to comprehend how AI models make decisions. Thus research should focus on creating interpretable AI and machine learning models enabling cybersecurity professionals to validate and trust the model's outputs. The interaction between AI systems and human cybersecurity experts can be facilitated by developing innovative methods for explaining how AI-based cyber models make decisions.
* _Innovative and Adaptive Modeling:_ Traditional approaches have difficulty identifying zero-day vulnerabilities. A crucial research direction is the creation of AI models that can identify and mitigate previously unknown vulnerabilities. Examples of these models include anomaly detection, behavior
analysis, and vulnerability prediction. Research is required to enhance models' ability to generalize novel and previously unknown attack patterns in different cybersecurity areas. Continuous learning is a key research direction for creating AI models that can adapt to dynamic attack tactics and changing threat environments. Effective threat detection can be improved exploring several modalities and comprehend temporal patterns, such as network traffic, system logs, and user behavior.
* _Generative AI Modeling:_ Generative AI has the potential to play a significant role in cybersecurity solutions by enhancing various aspects of threat analysis and response. For instance, generative AI can help in creating adversarial scenarios to evaluate the reliability of cybersecurity systems, assisting in the identification of vulnerabilities and the development of more robust defense mechanisms against such attacks. More research is needed to investigate techniques that enable AI models to generalize to unseen threats and adapt to evolving attack techniques.
* _Privacy Concerns:_ Handling private and sensitive data could be a part of using AI in cybersecurity. It is challenging to achieve a balance between the advantages of AI and privacy issues while maintaining compliance with data protection laws. To resolve privacy issues, federated learning enables models to be trained across distributed devices without exchanging raw data. Investigating how distributed learning can be used effectively in cybersecurity to enable collaborative model training across various organizations, could be a potential area of research.
* _Regulatory and Ethical Frameworks:_ To ensure secure and responsible deployment of AI-based model into cybersecurity, establishing guidelines, regulations, and ethical considerations are important. Investigating how AI may offer insights and recommendations while allowing people to make informed decisions is necessary. Research is needed to effectively integrate AI models into the decision-making procedures of cybersecurity analysts, which can enable synergistic interaction between human expertise and AI capabilities.
Although research has been significantly progressing in the area of AI, the challenges for effective security modeling still remain unaddressed. To advance the field of AI-based cybersecurity and enhance the general security posture of today's digital systems and interconnected networks, these identified issues and potential directions might help for next-generation cybersecurity solutions.
## 6 Conclusion
This position paper emphasized the importance of a balanced strategy that capitalizes on the strengths of both AI and human expertise, creating collaboration and trust between these two entities in the context of cybersecurity. As stated in this position paper, the synergy between human expertise and AI capabilities holds enormous promise for addressing the ever-changing landscape of cyber threats. We explored how AI technology might improve human capabilities by
automating regular operations, analyzing and detecting anomalies at scale, as well as providing actionable insights for speedy decision-making. Furthermore, in terms of context understanding, intuition, accountability, and creativity in designing unique security solutions, the human element remains crucial. Overall, we believe that human-AI teaming can significantly improve the way we protect against cyber-attacks by providing a more resilient, efficient, and adaptive cybersecurity ecosystem.
## Acknowledgement
The work has been supported by the Cyber Security Research Centre Limited whose activities are partially funded by the Australian Government's Cooperative Research Centres Program.
|
2309.11508 | Towards LLM-based Autograding for Short Textual Answers | Grading exams is an important, labor-intensive, subjective, repetitive, and
frequently challenging task. The feasibility of autograding textual responses
has greatly increased thanks to the availability of large language models
(LLMs) such as ChatGPT and the substantial influx of data brought about by
digitalization. However, entrusting AI models with decision-making roles raises
ethical considerations, mainly stemming from potential biases and issues
related to generating false information. Thus, in this manuscript, we provide
an evaluation of a large language model for the purpose of autograding, while
also highlighting how LLMs can support educators in validating their grading
procedures. Our evaluation is targeted towards automatic short textual answers
grading (ASAG), spanning various languages and examinations from two distinct
courses. Our findings suggest that while "out-of-the-box" LLMs provide a
valuable tool to provide a complementary perspective, their readiness for
independent automated grading remains a work in progress, necessitating human
oversight. | Johannes Schneider, Bernd Schenk, Christina Niklaus | 2023-09-09T22:25:56Z | http://arxiv.org/abs/2309.11508v2 | # Towards LLM-based Autograding for Short Textual Answers
###### Abstract
Grading of exams is an important, labor intensive, subjective, repetitive and frequently challenging task. The feasibility of autograding textual responses has greatly increased thanks to the availability of large language models (LLMs) such as ChatGPT and because of the substantial influx of data brought about by digitalization. However, entrusting AI models with decision-making roles raises ethical considerations, mainly stemming from potential biases and issues related to generating false information. Thus, in this manuscript we provide an evaluation of a large language model for the purpose of autograding, while also highlighting how LLMs can support educators in validating their grading procedures. Our evaluation is targeted towards automatic short textual answers grading (ASAG), spanning various languages and examinations from two distinct courses. Our findings suggest that while "out-of-the-box" LLMs provide a valuable tool to provide a complementary perspective, their readiness for independent automated grading remains a work in progress, necessitating human oversight.
Keywords: grading support, autograding, large language models, trust.
## Introduction
Large language models like ChatGPT have already had a widespread impact across both industry and academia. Various sectors in the industry are actively exploring all kinds of application areas and recognizing substantial potential. Certain experts also perceive substantial risks associated with LLMs and have advocated for a development moratorium on such technologies [12]. In academia, LLMs are used as a tool by researchers and students to such an extent that researchers themselves have called on journals to clarify the allowable extent of AI-generated content in scholarly papers [13], leading to the publication of guidelines for incorporating AI in the paper-writing process [1]. Ethical concerns have also been raised for education [23]. LLMs like ChatGPT have been commonly compared against students in various disciplines, especially with respect to their capability to pass exams. While some reports have indicated performance inferior to that of a master's graduate in mathematics [12], other instances showcase a successful completion of an introductory physics course [15], as well as the passing of numerous law school exams [1] and a standardized test of economics (principles) [16]. However, it is important to acknowledge the existence of limitations in the LLMs. These models can exhibit biases, discrimination, and factual inaccuracies [1]. Consequently, there arises doubt regarding their suitability. In particular, the necessity for human verification has been emphasized as a pressing research priority [24] and the topic of human agency is also debated on a regulatory level (EU, 2020). In particular, high-stakes decisions require careful analysis before AI can be utilized. Grading of exams is a high-stakes situation, as errors in grading can cause students to fail an entire class, possibly causing a year-long delay in their education, separation from peers, etc. This, in turn, can lead to both financial and psychological strain.
As such it seems natural and even necessary to assess the suitability of large language models for supporting exam grading and to reflect upon adequate ways to include them in the grading process while mitigating their risks. To this end, we seek to contribute to two intertwined research questions:
(a) How can LLMs support educators in the grading process of exams?
(b) What are issues and concerns when using LLMs to support grading?
Our focus is on Automatic Short Answer Grading (ASAG), i.e., student replies are (short) textual answers (i.e., one or a few paragraphs). We use an LLM, i.e., ChatGPT to assess the instructor's answer, a student's answer in general as well as a student's answer with respect to the instructor's answer as illustrated in Figure 1. In our experimental evaluation, we use multiple exams from multiple educators.
While we implicitly assess the possibility of automatic grading, our target is (i) to improve the grading process rather than simply automating it and (ii) to uncover shortcomings (and possible mitigations) of LLMs for this purpose. We seek to employ LLMs as a second opinion that might pinpoint obvious flaws in the grading, i.e., due to sloppiness, in the grading as well as provide a more general view on possible answers in order to avoid bias like accepting only correct answers that have been discussed in the course.
## 2 Methodology
We use an LLM, i.e., ChatGPT (GPT 3.5, June and July versions), to assess (i) answers by the educator, (ii) answers of students to exam questions in general, and (iii) answers of students compared to the instructor's answer (see Figure 1). That is, to assess an instructor's answer, an instructor must be able to define an answer for each exam question that constitutes the optimal response from her/his perspective. We shall elaborate on lifting this requirement and allow for multiple possible answers per question in the discussion section. Our assessment is both qualitative and quantitative. That is, we highlight a few prompts that surfaced surprising issues (such as lack of robustness, i.e., sensitivity to minor variations in prompts), but we also quantify how much the LLM deviates from the educator across all graded answers. To this end, we ask the LLM to categorize its assessment, i.e., each LLM response should contain a category such as "Good", "Ok.", or "Bad" and an explanation of the chosen category. In turn, we also categorize the educator's responses. This allows us to compare the categorization of the LLM and the educator, e.g., to assess if both rate a student answer as "good", which we elaborate in the last subsection. We experimented with a few different prompts but we report only the ones used in the end.
### Assessing the educator's answer by the LLM
An educator should have a good sense on different answers to exam questions. However, it is often difficult to anticipate all arguments and answers. For example, an educator might be primed to answers that are based on the material taught in class but students might include elements with different but nonetheless valid viewpoints. We utilize the LLM to provide a more general view on the question without explicit knowledge on the lecture material. We also ask the LLM for feedback on the developed educator's solution. More concretely, the prompt has the following structure for a question Q and the educator's answer A.
_Here is a question: Q. Here is an answer: A. How good is the answer to the question? Start the reply with one of the following: Extremely good., Very good., Good., Ok., Bad., Very bad., Extremely bad. Explain the choice. Explain also what is missing._
Example Prompt (1) to LLM:
_Here is a question: What is the difference between single linkage and average linkage (hierarchical) clustering?. Here is an answer: The two differ in distance metric used to cluster. Single linkage: Merge two clusters based on minimum distance between any two points; Tendency to form long chains; Average linkage: merge two clusters based on average distance between any two points; tendency to "ball' like clusters.; How good is the answer to the question? Start the reply with one of the following: Extremely good., Very good., Good., Ok., Bad., Very bad., Extremely bad. Explain the choice._
Response of LLM:
_Good. The answer provides a clear and concise explanation of the difference between single linkage and average linkage clustering. It accurately states the different distance metrics used and describes the tendencies of each clustering method. One potential improvement is to provide a more detailed explanation of the advantages and disadvantages of each method. Additionally, it would be helpful to include an example or illustration to further clarify the concepts._
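For concreteness, the prompt assembly and the extraction of the leading category can be scripted as in the following sketch. The `build_educator_prompt`, `parse_category` and `call_llm` names are our own illustrative choices, and `call_llm` is a placeholder for whichever chat-completion client is used; none of this is the study's published code.

```python
# Sketch of the prompt construction of Section 2.1 and a simple category parser.
CATEGORIES = ["Extremely good.", "Very good.", "Good.", "Ok.",
              "Bad.", "Very bad.", "Extremely bad."]

def build_educator_prompt(question: str, answer: str) -> str:
    """Assemble the assessment prompt of Section 2.1 for a question/answer pair."""
    return (f"Here is a question: {question}. Here is an answer: {answer}. "
            "How good is the answer to the question? Start the reply with one of "
            "the following: " + ", ".join(CATEGORIES) +
            " Explain the choice. Explain also what is missing.")

def parse_category(reply: str) -> str:
    """Extract the leading category; fall back when the model ignores the format."""
    for cat in CATEGORIES:
        if reply.strip().startswith(cat.rstrip(".")):
            return cat
    return "unparsed"

def call_llm(prompt: str) -> str:
    """Placeholder for whichever chat-completion client is used (assumption)."""
    raise NotImplementedError("plug in an LLM client here")
```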
### Assessing the student's answer in general by the LLM
We let the LLM assess the student's answer to a question. The LLM should provide a second opinion that contrasts that of the educator, possibly, adding novel arguments in the assessment of the student's answer or, weighing arguments in the reply different from the educator. More concretely, the prompt has the following structure for a question Q and a student answer A.
_Here is a question: Q. Here is an answer: A. How good is the answer to the question? Start the reply with one of the following: Extremely good., Very good., Good., Ok., Bad., Very bad., Extremely bad. Explain the choice._
Example Prompt to LLM:
_Here is a question: What is the difference between single linkage and average linkage (hierarchical) clustering?. Here is an answer: In single linkage, we compare the two closest datapoints (the ones with minimal distance) from two separate clusters. In average linkage, we compare all the datapoints from a cluster with all the datapoints from another cluster and get an average distance..How good is the answer to the question? Start the reply with one of the following: Extremely good., Very good., Good., Ok., Bad., Very bad., Extremely bad. Explain the choice._
Figure 1: The three questions an LLM answers to support the grading (left) and the information used by the LLM (right) to answer them
Response of LLM:
_Good. The answer provides a clear and concise explanation of the difference between single linkage and average linkage clustering. It accurately describes how single linkage compares the closest data points from two separate clusters, while average linkage compares all data points from a cluster with all data points from another cluster and calculates the average distance. The answer effectively addresses the question and provides a solid understanding of the topic._
### Comparing student answer and educator's answer
We compare the student answer with the educator's answer. We use the LLM to assess the similarity between the student answer and the educator's answer, i.e., the optimal answer from the perspective of the educator (and student). The LLM only obtains the educator's and the student's answer as input but not the question (see also Figure 1). We found that when the question is added, the assessment of ChatGPT sometimes tended to neglect the optimal answer (although the prompt clearly stated that the task was to compare the student reply and the educator's answer).
Thus, the LLM provides a second opinion like another grader that had also knowledge on the course in the form of what is the best answer given the taught material, i.e., the educator's answer. Thus, in contrast to the prior assessments that were seeking for a more general perspective, here, we ultimately seek to assess, if an educator's grading matches that of the LLM given knowledge of the best response. That is, if both should stick to a pre-defined "ground truth" in the form of the educator's answer. We shall also discuss how to relate the grading of the LLM and the educator quantitatively in the next section. As a prerequisite to derive quantitative statements, we ask for a category but also for an explanation to assess qualitatively. More concretely, the prompt has the following structure for a student answer A and the (assumed to be optimal) educator's answer E.
_Here is an answer: A. Here is the optimal answer: E. How close is the answer to the optimal answer? Start the reply with one of the following: Very close., Close., Somewhat close., Somewhat distant., Distant., Very distant.. Explain the choice._
Example Prompt to LLM:
_Here is an answer: In single linkage, we compare the two closest datapoints (the ones with minimal distance) from two separate clusters. In average linkage, we compare all the datapoints from a cluster with all the datapoints from another cluster and get an average distance.. Here is the optimal answer: The two differ in distance metric used to cluster. Single linkage: Merge two clusters based on minimum distance between any two points; Tendency to form long chains; Average linkage: merge two clusters based on average distance between any two points; tendency to "ball" like clusters.;How good is the answer to the question? Start the reply with one of the following: Extremely good., Very good., Good., Ok., Bad., Very bad., Extremely bad. Explain the choice. Explain also what is missing. What is the difference between single linkage and average linkage (hierarchical) clustering? How close is the answer to the optimal answer? Start the reply with one of the following: Very close., Close., Somewhat close., Somewhat distant., Distant., Very distant.. Explain the choice._
Response of LLM:
_Very close. The given answer effectively highlights the key differences between single linkage and average linkage in hierarchical clustering. It mentions that single linkage does not allow linkage between different groups, while average linkage splits the dataset into average-sized groups. Overall, the given answer accurately addresses the main distinction, making it very close to the optimal answer._
### Highlighting deviations of LLM and educator
The prior assessments are helpful, but require a lot of effort for the educator, i.e., the educator must manually assess all responses of the LLM. It might be desirable to focus more on large differences between human grading and the reply of the LLM. Furthermore, quantifying differences allows for a quantitative assessment, which makes it more explicit how well aligned the educator's grading and the LLM's grading are.
We assume that an educator has provided a score in some range [0,_maxPrts_], where _maxPrts_ gives the maximum points attainable by a student. For easier interpretability, we scale the scores to be between 0 and 1. We denote the scaled points given by the educator as \(p_{h}\). We also compute a score given by the LLM denoted as \(p_{L}\) based on the LLM's responses. We compute correlations to understand the alignment between human and LLM. We sort the outcomes based on how different the LLM's ratings are from the educator's grading, which allows an educator to focus on the largest differences.
The deviations are computed as follows. We assume that if the LLM replies for a given student answer and the educator's answer with "very close" then that student should have gotten maximal points, whereas if the response is "very distant" the student should have gotten no points. Similarly, if the LLM rates a student answer in general for a given exam question as "extremely good" then the student should have gotten maximal points. If it is rated "extremely bad" then the student should have gotten no points. In between, we scale accordingly, i.e., we linearly interpolate to obtain the points an LLM would assign for other categories. We also scale by _maxPrts_ to have a normalized value between 0 and 1 for the LLM. We denote the scaled points by \(p_{L}\). We then sort outcomes based on the gap between the human's and LLM's scores \(|p_{h}-p_{L}|\). This allows us to focus on responses where the educator and LLM disagree the most, i.e., we sort the LLM responses from largest to lowest differences. For easier comparison, in our implementation, we show both the educator's (or student's) response and the LLM's response. This enables us to read the answer of the human, i.e., student
or educator, and then the reply of the LLM. If the two responses disagree significantly, and indeed the LLM had a valid point, then an educator should alter his/her grading.
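The following sketch illustrates the mapping from the LLM's similarity categories to a normalized score \(p_{L}\) and the ranking by the gap \(|p_{h}-p_{L}|\). The evenly spaced interpolation values and the record layout are assumptions made for illustration, since the text only states that categories are interpolated linearly.

```python
# Sketch of the category-to-score mapping and gap ranking described above.
SIMILARITY = ["Very close.", "Close.", "Somewhat close.",
              "Somewhat distant.", "Distant.", "Very distant."]

def llm_score(category: str, scale=SIMILARITY) -> float:
    """Map a category to a normalized score in [0, 1]; best category -> 1."""
    i = scale.index(category)
    return 1.0 - i / (len(scale) - 1)

def rank_by_gap(records):
    """records: dicts with keys 'points', 'max_points', 'llm_category'."""
    for r in records:
        r["p_h"] = r["points"] / r["max_points"]
        r["p_l"] = llm_score(r["llm_category"])
        r["gap"] = abs(r["p_h"] - r["p_l"])
    return sorted(records, key=lambda r: r["gap"], reverse=True)

example = [{"points": 10, "max_points": 10, "llm_category": "Somewhat close."},
           {"points": 2,  "max_points": 10, "llm_category": "Very close."}]
for r in rank_by_gap(example):
    print(f"gap={r['gap']:.2f}  p_l={r['p_l']:.2f}  p_h={r['p_h']:.2f}")
```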
An example output of the LLM is:
_Gap: 0.9 LLM Pts p_l: 0.1 Human Pts p_h: 1.0_
_Answer Human: When using single linkage in hierarchical clustering, the clusters are made with the shortest distance between the data point the closest cluster. Average linkage on the other hand takes the maximum distance and the minimum distance to each cluster, evaluates the average and then builds the clusters upon these average distances._
_Answer LLM: Somewhat close. The answer provides a basic understanding of the differences between single linkage and average linkage in hierarchical clustering. It correctly states that single linkage considers the shortest distance between a data point and the closest cluster, whereas average linkage calculates the average distance to each cluster. However, it does not fully explain how average linkage clusters are formed based on these average distances._
## Assessment Setup
We assess two exams administered by two different educators. One exam was in English at the master level in data science with 21 participants. It consisted of 16 questions and had a total of 90 points. It lasted 90 minutes. The other was in German, at the bachelor level, in an introductory Information Systems course with 34 participants. There was a multiple choice part that was not considered, i.e., we were left with 3 questions, each worth 10 points and corresponding to about 10 minutes of processing time per question; the replies were therefore expected to be longer than for the questions of the first exam, i.e., the data science exam. The types of questions covered all categories according to the revised Bloom's taxonomy of educational objectives [1]. The taxonomy ranges from simple, concrete to complex, abstract questions. Our exams contained some simple concrete questions related to understanding and remembering, such as providing definitions and examples. More complex, abstract questions consisted, for example, of evaluating different approaches for a practical problem.
If a question was not answered by the student, we did not send the "empty" response to ChatGPT for an assessment. We read through all of ChatGPT's responses.
## Findings
We first discuss overarching findings before elaborating on each of the three questions in Figure 1.
_ChatGPT replies generically._ ChatGPT tends to assess in a mechanistic generic manner rather than looking at content. It might respond like "There is not sufficient detail" rather than pointing to specific details that are missing.
_ChatGPT and human assessments differ strongly._ The correlation between human and LLM's judgments is small. Generally, the LLM's judgments have a strong tendency to the middle, e.g., most are "ok" or "good" despite strong variation in the quality of student replies.
_ChatGPT can help to make sense of hard-to-understand answers._ The LLM provided a more open and less negative view on responses suffering from poor language. Thus, the assessment was particularly positive for students with poor (English) language skills, as ChatGPT tended to rate them comparatively better than an educator did. That is, a human might rate them poorly because the answer is difficult to understand or remains unclear due to grammatical ambiguities or poor wording. We also found that it was sometimes easier to make sense of a student reply after reading ChatGPT's assessment. Furthermore, commonly, specific concepts tied to a set of keywords are accepted or looked for. If students do not provide any of these but rather a lengthy and verbose reply, there is a higher risk that possibly correct though convoluted arguments are overlooked. We found that ChatGPT's assessment can be helpful, since it can transform hard-to-grasp answers into a more concise phrasing, and its responses follow an expected structure, which is fast to process for an educator.
_ChatGPT can drastically change its assessment due to minor changes in answers._ Additional content that is strikingly wrong though not related to the question (or answer) can lead to dramatic changes in judgements by the LLM. For illustration, we appended to the answer of the student used in the example prompt (1) either of the following three options:
(i) _3*5=7,_
(ii) _the cat sits on the mattress,_
(iii) _3*5=7, the cat sits on the mattress;_
ChatGPT judged (i) and (ii) equivalently to the original prompt (1), i.e., as good. For (ii) and (iii) it would mention that the answers contain irrelevant information, but (ii) was still judged as good by the LLM, while the LLM judged response (iii) as "very bad".
_ChatGPT favors vague content and fails to recognize contradictions._ Generally, replies with vaguely related content, which might be deemed irrelevant or even incorrect by a human grader, are rated more favorably by ChatGPT than by human graders. We also found that ChatGPT can fail to distinguish contradicting statements. We appended to prompt (1) either of the following:
(i) _Complete linkage uses the minimum distance between any two points in clusters. Density based clustering relies on computing the number of points, possibly organizing them in a search tree or a list._
(ii) _Complete linkage uses the maximum distance between any two points in clusters. Density based clustering relies on computing point densities, e.g. points for a fixed volume, for the volume for a fixed set of points._
Note, the words minimum and maximum are switched in (i) and (ii). The LLM judged (i) and (ii) equally, although they obviously contain contradicting statements and information not being asked for.
_ChatGPT misunderstands questions_. ChatGPT can suggest to provide information that can obviously be ruled out as being asked for. For the question "What are advantages of a decision tree?" (and a student's answer) the LLM's reply included "However, what is missing from the answer is a mention of some potential disadvantages or limitations of decision trees."
_ChatGPT's grading criteria are language sensitive_. We applied the same prompt patterns for both exams, i.e., we utilized the English prompt pattern for the German exam. While at first, this did not seem to pose a problem, we found that ChatGPT would occasionally provide a lower rating giving as reason that (German) texts contain grammar and spelling issues, but this would not happen for answers in English.
### Findings on assessing the educator's answer by the LLM
We read through all of ChatGPT's responses. None of them led to any changes of the human crafted responses. Most suggested improvements were generic, e.g., related to giving more details, an example and sometimes visualization or limitations. ChatGPT's responses were quite sensitive to the phrasing (and potentially other factors). For example, omitting the term "Explain also what is missing." changed the LLM's response for one reply from "very bad" (see Figure 2) to "good", while still giving mostly the same reasoning. Overall, Figure 2 suggests that the educator provided mostly "very good" answers and no answer was below "good" (at least when slightly changing the prompt as mentioned before).
### Findings on assessing the student's answer in general and relative to the educator's answer
Here, the LLM had more impact on the grading. That is, we made minor adjustments after reading through the LLM's assessment. The adjustments were made due to two types of replies: First, the LLM would rate student answers (more) positively that were not part of the lecture material and also not directly being asked for. For example, we asked "Which two of the factors 'data, compute and algorithms' are most important for the rise of GPT-3 (around 2020) and ChatGPT in 2022 and other generative AI models?" Some students responded that one factor was media coverage and accessibility due to its public release. ChatGPT rated the responses of these students positively, although (i) the question explicitly restricts the factors to be discussed and (ii) the importance of media coverage is debatable - at least for GPT-3. That is, in the lecture, it was mentioned for ChatGPT that its public release led to widespread adoption and a surge in media coverage, but not so much for GPT-3. GPT-3 was covered less in the media, and it was less accessible, i.e., only (some) scientists got access. Still, in the end we decided to provide some recognition for mentioning "media coverage".
Second, the LLM would more positively rate replies with poor English. That is, the LLM's interpretation of the answer made the student answer more understandable. For an educator, the quality of the answer needs to exceed a minimum level of proficiency to be understood. Comprehensibility is generally a factor influencing grading. Educators are not supposed to make too many assumptions about what a student wants to say (i.e. interpret) but they have to stick with the incomprehensible answer and grade accordingly.
Overall, we found that any judgement of the grading after consulting the LLM was preceded by considerable reflection and debates, and it was not evident whether the differences of the LLM should really be considered.
Interestingly, using an answer set of an exam conducted in German, the LLM incorporated errors in spelling and grammar in the feedback and downgraded answers of poor language quality.
Figure 1: Ratings of educator’s answers by LLM
Figure 2: Ratings of students’ answers by LLM
The LLM tended to rate most student responses as "good" or "very good" (Figure 2), i.e., there was little differentiation. This is in stark contrast to the rating of the educator (Figure 3). The educator scored many answers with maximum or minimum points but he/she also assigned commonly points in-between the two extremes. The extremes were mostly common for short and easy answers with few points.
When it comes to assessing similarity between the educator's and the students' answers, the LLM gave somewhat more diverse replies. However, overall alignment was poor. The Pearson correlation between the LLM's similarity assessment \(p_{L}\) and the educator's grading \(p_{h}\) was close to 0.
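As a reference for this alignment figure, the correlation can be computed directly from the two normalized score vectors; the arrays below are illustrative placeholders, not the study's data.

```python
# Computing the alignment between LLM scores and educator scores (illustrative data).
import numpy as np

p_l = np.array([0.6, 1.0, 0.8, 0.4, 0.8])   # normalized LLM scores
p_h = np.array([1.0, 0.2, 0.5, 0.9, 0.5])   # normalized educator scores
r = np.corrcoef(p_l, p_h)[0, 1]             # Pearson correlation coefficient
print(f"Pearson r = {r:.2f}")
```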
## Discussion
We set out to assess LLMs for autograding, primarily as a second opinion. Specifics of the course have not been provided to the LLM, e.g., the course material for which the exam was made for. That is, the LLM lacked any context that is lecture specific but relied more on world knowledge incorporated in its training data. Thus, discrepancies between the judgments of the LLM and lecturer are expected, e.g., as many terms are defined differently in different contexts and fields. Such contextualization of terms seems to be an essential part of teaching and learning and allows students to establish different perspectives on issues. However, for grading, we believe that the lack of context by the LLM can be highly valuable, as it provides a strongly complementary view that aims to avoid strong biases of a lecturer towards the lecture material. Still, this also hints that grading (or possibly even exam questions) derived by an LLM from the course material could create a more neutral and additional testing environment as the exam content would be grounded in the lecture material only.
We also faced a number of practical issues when using LLMs. For example, LLM's replies would not always follow the given structure, i.e., ChatGPT would reply with any of the asked for words "Very good", "Good" etc. but started the reply with some other sentence. This problem can often be mitigated by providing a few examples of inputs and desired outputs (few-shot learning). However, doing so means additional work for the educator, increases response time of the LLM and also costs, i.e., longer input prompts imply higher costs for commercial LLMs such as ChatGPT.
We also experimented with prompting and often found that there are trade-offs. For example, when comparing the student's answer and the educator's answer, we tested prompts that included the question (as well as the answers of the student and educator) and prompts that did not. We found that without the question, ChatGPT's assessment sometimes tended to include aspects that were rather unrelated to the question. If the question was added, the assessment of ChatGPT sometimes did not consider the educator's answer. While through experimenting, this problem could be reduced to some extent, it was still prevalent.
It is also tempting to use LLMs for fully automatic grading. However, from our experience this should not be undertaken at the current point in time since there is very strong disagreement between gradings of educators and LLMs. That is, they perform significantly worse in judging questions than in providing responses. This might be improved using techniques such as few-shot learning, i.e., providing examples on how answers should be graded. However, first experimentation did not yield the hoped-for performance boost, and the number of examples that can be added through prompting is limited as prompt sizes are restricted. In general, finding the best prompts for grading is non-trivial and responses could be sensitive to the slightest changes in phrasing. Grading should be robust, fair, and consistent. Accordingly, the achievement of competency levels of students should be assessed as independently as possible of individual course delivery, of lecturers and examiners, and of the performance of other students in an exam. ChatGPT did not (yet) meet these requirements in our evaluation.
We employed the idea to focus on answers where the LLM and the human showed the largest discrepancies. However, unfortunately, ChatGPT's rating was not too well-aligned with that of the educator. Furthermore, if not all answers are checked (but only those with large differences), biases in the LLM might further impact the grading by leading to a bias in which answers are looked at (again) by a human grader. Furthermore, biases also appear as "misleading clues". That is, if, for example, the LLM judges a particular argument A identically to the educator but another argument B is considered irrelevant by the LLM, students using B might be judged worse by the LLM than they should be.

Figure 4: Distribution of frequency (y-axis) of normalized points by educator (x-axis)

Figure 3: Comparison of students and educator's answers by LLM
One assessment within our work assumed that an educator provides a single answer to a question. In principle, a question might permit fairly different answers. However, it is not hard to allow for multiple responses, i.e., an educator could define various answers that might even be contradictory. We could then compare a student's answer with all of the educator's responses and focus on the educator's response that is deemed closest by the LLM. However, specifying answers becomes more difficult the more open-ended a question is, i.e., the more knowledge should be applied and transferred, as opposed to simply replicating knowledge.
From an ethical point of view, one might also debate whether changes due to large language models should only improve grades. That is, LLMs cannot fail any student but only help them get better, as punishing innocent people can be seen as worse than rewarding people who have not deserved it. Furthermore, using an LLM as "a second opinion" might also provide a false sense of security.
## 5 Future Work
We might add a more explicit grading scheme that aims to identify specific aspects in the answer, i.e., "Is this concept in the answer?" (If not deduct x points).
Furthermore, an LLM fine-tuned towards grading might lead to better outcomes than relying on prompting. To this end, a large number of graded exams would be needed. While graded exams already exist, sharing them is non-trivial as responses might have to be anonymized to comply with privacy regulations.
LLMs might also be useful for exam development, i.e., assessing questions prior to posting an exam. One might also provide access to the lecture material to the LLM to assess gradings. This might uncover more minor issues in the grading scheme, but might not help so much in uncovering general issues. In this study, we used LLM only in grading answers on questions that have been formulated by lecturers. It would be interesting to test the end-to-end support by LLMs in designing a lecture, including the selection of topic areas, creating the lecture material, and preparing and assessing the exam.
## 6 Related Work
The manual grading process involves a labor-intensive evaluation of students' responses, requiring expertise and careful judgment to assign appropriate scores. Thus, to assist educators in reducing the time and effort spent on grading, there is a growing interest in leveraging AI-driven correction aids (Basu et al., 2013; Condor et al., 2021). When comparing the conventional teacher's judgement ("human scoring") to the capabilities of automatic feedback and assessment tools ("machine scoring"), we encounter distinct strengths along various quality criteria (Seufert et al., 2022), i.e., AI can support objectivity, reliability, validity and comparative values and standards.
The evolution of assessment methodologies is currently exploring hybrid solutions that harness the strengths of both mechanisms. These developments, such as AI-based assistants for assessment and learner feedback, hold promise for the future education, offering more efficient and objective evaluation processes while maintaining the depth of understanding provided by human judgement (Saha et al., 2018; Schneider et al., 2023). A few works have also assessed the value of feedback through autograding, e.g., (Vittorini et al., 2020) also assesses the value of feedback provided by the autograder for students.
We concentrate on the field of Automatic Short Answer Grading (ASAG) (Burrows et al., 2015). It deals with grading student answers, typically ranging from a phrase to a paragraph. ASAG also covers the grading of open-ended answers (Baral et al., 2021). The primary focus in ASAG is on content quality rather than the writing style and structure emphasized as in automatic essay scoring (AES) (Dikli, 2010).
For ASAG, prior work has mostly relied on BERT as a large language model (Baral et al., 2021; Schneider et al., 2023; Sung et al., 2019). Schneider et al. (2023) investigated how LLMs such as BERT suffer from trust issues that might be partially mitigated by only automatically grading answers, if the LLM is certain about its grading.
While LLMs can provide justifications for their decisions without any additional work, dedicated methods for enhancing explainability have been evaluated in the realm of automatic grading systems (Kumar and Boulanger, 2020). Efforts have also been made to tackle the limitations of AI in terms of fairness in the context of automatic grading (Madhani et al., 2017) and, more generally, ethical issues related to LLMs in education (Yan et al., 2023).
## 7 Conclusions
The integration of LLMs into academic settings has become an undeniable reality. These models possess remarkable linguistic capabilities, coupled with unexpected reasoning abilities. Yet, using LLMs such as ChatGPT "out-of-the-box" to support grading requires great care due to a multitude of issues such as sensitivity to minor changes in answers and lack of concise reasoning, which is also reflected in poor alignment with human graders. Despite these limitations, LLMs currently offer a valuable resource that provides a supplementary viewpoint with minimal effort.
|
2309.03859 | Classification of Killing Magnetic Curves In H^3 | In this paper, we study classification of magnetic curves corresponding to
Killing vector fields of H^3 (hyperbolic 3-space). First, we solve the geodesic
equation analytically. Then we calculate the trajectories generated by all the
six Killing vector fields, which are considered as magnetic field vectors, by
using perturbation method up to first order with respect to the strength of the
magnetic field. We present a comparison of our solution with the numerical
solution for one case. We also prove that 3-dimensional ({\alpha})-Kenmotsu
manifolds cannot have any magnetic vector field in the direction of their Reeb
vector fields. | Özgür Kelekçi, Furkan Semih Dündar, Gülhan Ayar | 2023-09-07T17:23:54Z | http://arxiv.org/abs/2309.03859v1 | # Classification of Killing Magnetic Curves In \(\mathbb{H}^{3}\)
###### Abstract
In this paper, we study classification of magnetic curves corresponding to Killing vector fields of \(\mathbb{H}^{3}\). First, we solve the geodesic equation analytically. Then we calculate the trajectories generated by all the six Killing vector fields, which are considered as magnetic field vectors, by using perturbation method up to first order with respect to the strength of the magnetic field. We present a comparison of our solution with the numerical solution for one case. We also prove that 3-dimensional \((\alpha)\)-Kenmotsu manifolds cannot have any magnetic vector field in the direction of their Reeb vector fields.
_Keywords_:Almost contact manifolds; Killing magnetic curves; Hyperbolic spaces.
## 1 Introduction
One of the key areas of research in differential geometry and physics is the study of magnetic fields and the magnetic curves that correspond to them on various manifolds. Charged particles travelling along a magnetic field produce magnetic
curves on Riemannian manifolds. The study of magnetic fields has also been extended to various ambient spaces [7, 27, 1, 5, 13, 12]. Killing magnetic trajectories have been derived on some \(3-\)dimensional warped product spaces [21].
A closed \(2-\)form defines a magnetic field on a Riemannian manifold. This definition is motivated by the fact that a closed \(2-\)form on a Riemannian manifold can be thought of as a generalization of a static magnetic field on Euclidean \(3-\)space, see e.g. [7, 27]. A magnetic curve is the path taken by a charged particle on which a magnetic field exerts a force. It is the result of solving the Lorentz equation, a second order differential equation related to the magnetic field. The Lorentz equation generalizes the equation of geodesics under arclength parameterization.
A geometrical method for studying magnetic fields in three-dimensional Riemannian manifolds has been developed where the relation between the vector fields and \(2\)-forms was utilized [6]. Divergence-free vector fields define magnetic fields on three-dimensional Riemannian manifolds.
There has been a growing interest in research on magnetic curves on various geometric structures in the last two decades. Here we highlight the most relevant works for our study; this is not an exhaustive list of all magnetic curve studies. Druta and Munteanu have investigated Killing magnetic curves in Minkowski \(3\)-space [13]. Cabrerizo et al. have studied the contact magnetic flow in \(3\)D Sasakian manifolds [5]. Inoguchi and Munteanu have investigated contact magnetic curves in the real special linear group of degree \(2\) (\(\mathrm{SL}_{2}\mathbb{R}\)) [20]. Jiang and Sun have studied the local differential geometrical properties of the lightlike Killing magnetic curves in de Sitter \(3\)-space [22]. Munteanu and Nistor have provided the classification of Killing magnetic curves in \(\mathsf{S}^{2}\times\mathbb{R}\) [24]. Aydin has classified the magnetic curves with constant curvature in a Galilean \(3\)-space [3]. Magnetic curves with respect to the canonical contact structure of the space \(\mathrm{Sol}_{3}\) have been investigated by Erjavec [15], and Altunbas has obtained some explicit formulas for Killing magnetic curves in non-flat Lorentzian-Heisenberg spaces [2]. Inoguchi studied some special curves in \(3\)-dimensional hyperbolic geometry and solvgeometry [18]. Magnetic curves and linking numbers in the \(3\)-sphere (\(\mathsf{S}^{3}\)) and hyperbolic \(3\)-space (\(\mathbb{H}^{3}\)) have been studied by De Turck and Gluck [9, 8]. However, their studies mostly deal with topological aspects of the subject and they do not provide explicit solutions for Killing magnetic curves.
The organization of the paper is as follows. In Section 2 we give fundamental definitions that we will use in the subsequent parts of the paper. In Section 3 we highlight basic properties of the \(\mathbb{H}^{3}\) manifold. In Section 4 we derive and solve the geodesic equation. In Section 5 we list six Killing vectors of the \(\mathbb{H}^{3}\) manifold, which we multiply by constants \(B_{i}\) where \(i=1,2,3,4,5,6\) in order to control the strength of the magnetic field. We give analytical solutions for each case up to first order in \(B_{i}\). Finally, in Section 6 we conclude the paper. We use units where the mass (\(m\)) of the particle and its electrical charge (\(q\)) are related by \(q/m=-1\).
## 2 Preliminaries
Let \((M,\phi,\xi,\eta,g)\) be an \(n\)-dimensional differentiable manifold \((n=2m+1)\). \(M\) is called an almost contact Riemannian manifold, where \(\phi\) is a \((1,1)-\)tensor field, \(\xi\) is the Reeb vector field, \(\eta\) is a \(1-\)form and \(g\) is the Riemannian metric. The structural group \(\mathrm{GL}_{2m+1}\mathbb{R}\) of the linear frame bundle of an almost contact manifold \(M\) is reducible to \(U(m)\times\{1\}\). Moreover, the \((\phi,\xi,\eta,g)\)-structure satisfies the following conditions [4],
\[\phi^{\mu}{}_{\alpha}\phi^{\kappa}{}_{\nu}=-\delta^{\mu}_{\nu}+\xi^{\mu}\eta_{ \nu}, \tag{2.1}\]
and
\[\eta_{\mu}\xi^{\mu}=1,\quad\phi^{\mu}{}_{\nu}\xi^{\nu}=0,\quad\eta_{\mu}\phi^{ \mu}{}_{\nu}=0. \tag{2.2}\]
Additionally, because \(U(m)\subset SO(2m+1)\), \(M\) admits a Riemannian metric \(g\) satisfying
\[g_{\alpha\beta}\phi^{\alpha}{}_{\mu}\phi^{\beta}{}_{\nu}=g_{\mu\nu}-\eta_{\mu} \eta_{\nu}. \tag{2.3}\]
A metric of this type is known as an associated metric of the almost contact manifold \((M,\phi,\xi,\eta,g)\). The \((1,1)-\)tensor field \(\phi\) is anti-symmetric and \(\eta\) is metric dual of \(\xi\) so we have
\[\phi^{\mu}{}_{\nu}=-\phi_{\nu}{}^{\mu},\quad\text{and, }g_{\mu\nu}\xi^{\nu}=\eta_{\mu}. \tag{2.4}\]
Assume \(M\) is an oriented \(n\)-dimensional Riemannian manifold \((n\geq 2)\). A charged particle travelling across a manifold while being affected by a magnetic field is represented as a magnetic curve. A closed \(2-\)form \(F\) is a magnetic field in \((M,g)\). The \((1,1)-\)tensor field \(\phi\) that corresponds to the Lorentz force of a magnetic field \(F\) on \((M,g)\) is defined by [5]:
\[g_{\mu\alpha}\phi^{\alpha}{}_{\nu}=F_{\mu\nu}. \tag{2.5}\]
**Definition 1**.: _An \(\alpha\)-Kenmotsu manifold is an almost contact manifold satisfying the following conditions :_
1. \(d\eta=0\)_._
2. \(dF=2\alpha\ \eta\wedge F\ (\alpha\in\mathbb{R}-\{0\})\)_._
3. _The Nijenhuis tensor_ \(N_{\phi}(X,Y)\) _given by the following relation vanishes for any_ \(X,Y\in\Gamma(TM)\)_._ \[N_{\phi}(X,Y)=[\phi X,\phi Y]-\phi[\phi X,Y]-\phi[X,\phi Y]+\phi^{2}[X,Y]\]
Moreover, the following relation holds for an \(\alpha\)-Kenmotsu manifold for any vector field \(X\)[10]
\[\nabla_{X}\xi=\alpha(X+h(X)-\eta(X)\xi)=\alpha(h(X)-\phi^{2}(X))\quad\text{or}\quad\nabla_{\mu}\xi^{\nu}=\alpha\,(h_{\mu}{}^{\nu}-\phi_{\mu}{}^{\sigma}\phi_{\sigma}{}^{\nu}) \tag{2.6}\]
where \(h\) is a trace-free (1,1)-tensor field defined as \(h:=\frac{1}{2\alpha}\phi(\mathcal{L}_{\xi}\phi)\), \(\mathcal{L}\) denoting the Lie derivative operator1.
Footnote 1: Note that all conditions stated above apply to \((-\alpha)\)-Kenmotsu manifolds by changing \(\alpha\) to \(-\alpha\).
A curve \(\gamma(t)\) on \(M\) is called a magnetic curve if it satisfies the Lorentz equation:
\[\nabla_{\gamma^{\prime}}\gamma^{\prime}=V\times\gamma^{\prime}=\phi(\gamma^{ \prime}) \tag{2.7}\]
or in index notation
\[\frac{\mathrm{d}x^{\nu}}{\mathrm{d}t}\nabla_{\nu}\frac{\mathrm{d}x^{\mu}}{ \mathrm{d}t}=\phi^{\mu}{}_{\alpha}\frac{\mathrm{d}x^{\alpha}}{\mathrm{d}t}\]
where \(V\) is a vector field on \(M\) associated with \(F\) such that \(F_{V}=i_{V}dv_{g}\) (\(i\) is denoting the interior product on \(M\) and \(dv_{g}\) is the volume form of \(M\)), and \(\nabla\) is the Levi-Civita connection on \(M\)[6]. A normal magnetic curve is a magnetic curve whose arclength parameterization satisfies \(|\gamma^{\prime}(t)|=1\). If a magnetic curve \(\gamma(t)\) fulfills the equation \(\nabla_{\gamma^{\prime}}\gamma^{\prime}=0\), it is referred to as a geodesic curve.
A vector field \(K=K^{\mu}\partial_{\mu}\) on \(M\) is said to be Killing vector field if it satisfies the following Killing equation which is written in covariant form.
\[\nabla_{\mu}K_{\nu}+\nabla_{\nu}K_{\mu}=0, \tag{2.8}\]
Magnetic fields \(F\) obtained from Killing vector fields are called Killing magnetic fields and the trajectories corresponding to the Killing magnetic fields are called the Killing magnetic curves. Hence, Killing magnetic curves can be viewed as a sub-class of general magnetic curves which require only the existence of a divergence-free vector field2. A Killing magnetic field is defined by the closed 2-form \(F_{K}=i_{K}dv_{g}\) for a Killing vector field \(K\) on \(M\). Here the closeness of \(F_{K}\) is guaranteed by Killing vectors being divergence-free. Then the Lorentz force \(\phi_{K}\) corresponding to the Killing magnetic field \(F_{K}\), and the Lorentz equation become the following
Footnote 2: For instance, \(B=c_{1}\ \cos(\alpha x)\sin(\alpha y)\partial_{x}-c_{1}\ \sin(\alpha x)\cos( \alpha y)\partial_{y}+c_{2}\ e^{2\alpha z}\partial_{z}\) is a divergence-free vector field and naturally defines a magnetic field on \(\mathrm{H}^{3}\) but it does not satisfy Killing equation (2.8), thus it is not a Killing vector field.
\[\phi_{K}(X)=K\times X\,\quad\nabla_{\gamma^{\prime}}\gamma^{\prime}=K\times \gamma^{\prime}. \tag{2.9}\]
Note that \(\phi_{K}\) in (2.9) is not the same \(\phi\) of the original contact structure which has to be compatible with Reeb vector field \(\xi\) and dual 1-form \(\eta\)[14]. Here the index \(K\) has been used to emphasize the fact that the chosen Killing vector field defines \(\phi_{K}\).
## 3 Geometric Structure of \(\mathbb{H}^{3}\)
We recall some relevant geometric properties for the hyperbolic 3-space \(\mathbb{H}^{3}\) in this section. \(\mathbb{H}^{3}(-\alpha^{2})\) is isometric to a solvable Lie group \(G_{\alpha}\)[23]:
\[G_{\alpha}=\left\{\begin{pmatrix}1&0&0&z\\ 0&e^{\alpha z}&0&x\\ 0&0&e^{\alpha z}&y\\ 0&0&0&1\end{pmatrix}\Big{|}(x,y,z)\in\mathbb{R}^{3}\ \right\}\subset\mathrm{GL}_{4} \mathbb{R}\]
equipped with left-invariant metric
\[g=e^{-2\alpha z}\mathrm{d}x^{2}\ +e^{-2\alpha z}\mathrm{d}y^{2}\ +\mathrm{d}z^{2} \tag{3.1}\]
It has also been shown that \(\mathbb{H}^{3}\) can be represented by the quotient group \(\mathrm{SL}_{2}\mathbb{C}/\mathrm{SU}_{2}\) as a Riemannian symmetric space in [11]. In this study we are using differential geometric approach rather than group theoretic approach. We adopt the following global orthonormal frame and the corresponding dual co-frame on \(\mathbb{H}^{3}(-\alpha^{2})\)
\[\mathbf{e_{1}}=e^{\alpha z}\frac{\partial}{\partial x},\quad \mathbf{e_{2}}=e^{\alpha z}\frac{\partial}{\partial y},\quad\mathbf{e_{3}}= \frac{\partial}{\partial z} \tag{3.2}\] \[\mathbf{e^{1}}=e^{-\alpha z}\mathrm{d}x,\quad\mathbf{e^{2}}=e^{- \alpha z}\mathrm{d}y,\quad\mathbf{e^{3}}=\mathrm{d}z\]
The Christoffel coefficients in the coordinate basis were obtained by direct calculation (lower two indices are symmetric, and we list only the nonzero terms):
\[\Gamma^{x}_{xz}=-\alpha,\quad\Gamma^{y}_{yz}=-\alpha,\quad\Gamma^{z}_{xx}= \alpha e^{-2\alpha z},\quad\Gamma^{z}_{yy}=\alpha e^{-2\alpha z}. \tag{3.3}\]
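As a sanity check, the Christoffel symbols in (3.3) can be recomputed symbolically from the metric (3.1); the following sketch (using sympy, with coordinate order \((x,y,z)\)) is ours and not part of the original derivation.

```python
# Symbolic verification of the Christoffel symbols (3.3) for the metric (3.1).
import sympy as sp

x, y, z, a = sp.symbols('x y z alpha', real=True)
coords = (x, y, z)
g = sp.diag(sp.exp(-2*a*z), sp.exp(-2*a*z), 1)   # metric (3.1), coordinates (x, y, z)
ginv = g.inv()

def christoffel(l, m, n):
    """Gamma^l_{mn} computed from the metric."""
    return sp.simplify(sum(ginv[l, k]*(sp.diff(g[k, m], coords[n])
                                       + sp.diff(g[k, n], coords[m])
                                       - sp.diff(g[m, n], coords[k]))
                           for k in range(3))/2)

print(christoffel(0, 0, 2))   # Gamma^x_{xz} = -alpha
print(christoffel(1, 1, 2))   # Gamma^y_{yz} = -alpha
print(christoffel(2, 0, 0))   # Gamma^z_{xx} = alpha*exp(-2*alpha*z)
print(christoffel(2, 1, 1))   # Gamma^z_{yy} = alpha*exp(-2*alpha*z)
```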
We use this information to calculate the following table of covariant derivatives (\(i\) denotes the rows, and \(j\) denotes the columns)
\[\nabla_{\mathbf{e_{i}}}\mathbf{e_{j}}=\begin{pmatrix}\alpha\mathbf{e_{3}}&0& -\alpha\mathbf{e_{1}}\\ 0&\alpha\mathbf{e_{3}}&-\alpha\mathbf{e_{2}}\\ 0&0&0\end{pmatrix},\quad i,j\ \in\{1,2,3\} \tag{3.4}\]
and Lie brackets
\[\left[\mathbf{e_{i}},\mathbf{e_{j}}\right]=\begin{pmatrix}0&0&-\alpha\mathbf{e_{1}}\\ 0&0&-\alpha\mathbf{e_{2}}\\ \alpha\mathbf{e_{1}}&\alpha\mathbf{e_{2}}&0\end{pmatrix},\quad i,j\in\{1,2,3\} \tag{3.5}\]
Killing vector fields of \(\mathbb{H}^{3}\) is obtained by solving (2.8) (see Appendix B for details)
\[\mathbf{K_{1}}=\partial_{x},\quad\mathbf{K_{2}}=\partial_{y}, \quad\mathbf{K_{3}}=y\partial_{x}-x\partial_{y},\quad\mathbf{K_{4}}=\alpha x \partial_{x}+\alpha y\partial_{y}+\partial_{z},\] \[\mathbf{K_{5}}=\left(\frac{\alpha}{2}\left(x^{2}-y^{2}\right)- \frac{e^{2\alpha z}}{2\alpha}\right)\partial_{x}+\alpha xy\partial_{y}+x \partial_{z},\] \[\mathbf{K_{6}}=\alpha xy\partial_{x}+\left(\frac{\alpha}{2} \left(y^{2}-x^{2}\right)-\frac{e^{2\alpha z}}{2\alpha}\right)\partial_{y}+y \partial_{z} \tag{3.6}\]
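One can verify symbolically that the listed fields satisfy the Killing equation (2.8); the sketch below checks \(\mathbf{K_{5}}\) via the equivalent condition \(\mathcal{L}_{K}g=0\) and is an illustrative verification, not part of the paper.

```python
# Symbolic check that K_5 of (3.6) is a Killing vector field of the metric (3.1).
import sympy as sp

x, y, z, a = sp.symbols('x y z alpha', real=True)
coords = (x, y, z)
g = sp.diag(sp.exp(-2*a*z), sp.exp(-2*a*z), 1)                 # metric (3.1)
K = [a*(x**2 - y**2)/2 - sp.exp(2*a*z)/(2*a), a*x*y, x]        # components of K_5

# (Lie_K g)_{mn} = K^s d_s g_{mn} + g_{sn} d_m K^s + g_{ms} d_n K^s
lie_g = sp.Matrix(3, 3, lambda m, n: sp.simplify(
    sum(K[s]*sp.diff(g[m, n], coords[s])
        + g[s, n]*sp.diff(K[s], coords[m])
        + g[m, s]*sp.diff(K[s], coords[n]) for s in range(3))))
print(lie_g)   # expected: the 3x3 zero matrix, i.e. K_5 is Killing
```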
These vector fields form a basis of a Lie algebra of Killing vector fields whose Lie brackets are obtained as
\[\left[\mathbf{K_{1}},\mathbf{K_{2}}\right]=0,\ \ \left[\mathbf{K_{1}},\mathbf{K_{3}}\right]=-\mathbf{K_{2}},\ \ \left[\mathbf{K_{1}},\mathbf{K_{4}}\right]=\alpha\mathbf{K_{1}},\ \ \left[\mathbf{K_{1}},\mathbf{K_{5}}\right]=\mathbf{K_{4}},\ \ \left[\mathbf{K_{1}},\mathbf{K_{6}}\right]=\alpha\mathbf{K_{3}}\] \[\left[\mathbf{K_{2}},\mathbf{K_{3}}\right]=\mathbf{K_{1}},\quad\left[\mathbf{K_{2}},\mathbf{K_{4}}\right]=\alpha\mathbf{K_{2}},\quad\left[\mathbf{K_{2}},\mathbf{K_{5}}\right]=-\alpha\mathbf{K_{3}},\quad\left[\mathbf{K_{2}},\mathbf{K_{6}}\right]=\mathbf{K_{4}}\] \[\left[\mathbf{K_{3}},\mathbf{K_{4}}\right]=0,\quad\left[\mathbf{K_{3}},\mathbf{K_{5}}\right]=\mathbf{K_{6}},\quad\left[\mathbf{K_{3}},\mathbf{K_{6}}\right]=-\mathbf{K_{5}}\] \[\left[\mathbf{K_{4}},\mathbf{K_{5}}\right]=\alpha\mathbf{K_{5}},\quad\left[\mathbf{K_{4}},\mathbf{K_{6}}\right]=\alpha\mathbf{K_{6}},\quad\left[\mathbf{K_{5}},\mathbf{K_{6}}\right]=0 \tag{3.7}\]
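A quick symbolic check of a representative bracket, e.g. \([\mathbf{K_{1}},\mathbf{K_{5}}]=\mathbf{K_{4}}\), can be done with the coordinate formula for the Lie bracket of vector fields; the snippet below is an illustrative verification only.

```python
# Check of one bracket in (3.7): [K_1, K_5] = K_4.
import sympy as sp

x, y, z, a = sp.symbols('x y z alpha', real=True)
coords = (x, y, z)
K1 = [sp.Integer(1), sp.Integer(0), sp.Integer(0)]
K4 = [a*x, a*y, sp.Integer(1)]
K5 = [a*(x**2 - y**2)/2 - sp.exp(2*a*z)/(2*a), a*x*y, x]

# [V, W]^i = V^j d_j W^i - W^j d_j V^i
bracket = [sp.simplify(sum(K1[j]*sp.diff(K5[i], coords[j])
                           - K5[j]*sp.diff(K1[i], coords[j]) for j in range(3)))
           for i in range(3)]
print(bracket)                                              # [alpha*x, alpha*y, 1]
print([sp.simplify(bracket[i] - K4[i]) for i in range(3)])  # [0, 0, 0]
```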
It is known that \(\mathbb{H}^{3}(-\alpha^{2})\) is a \((-\alpha)\)-Kenmotsu manifold [19]. Here we summarize its almost contact structure. (1,1)-tensor \(\phi\) in Section 2 can be chosen such that it satisfies the following
\[\phi(\mathbf{e_{1}})=\mathbf{e_{2}}\,\quad\phi(\mathbf{e_{2}})=- \mathbf{e_{1}}\,\quad\phi(\mathbf{e_{3}})=0\quad\text{or}\] \[\phi(\partial_{x})=\partial_{y}\,\quad\phi(\partial_{y})=- \partial_{x}\,\quad\phi(\partial_{z})=0 \tag{3.8}\]
Equations (2.1)-(2.4) are satisfied for \(\xi=\partial_{z}\) and \(\eta=\mathrm{d}z\). Moreover, the coefficients of the 2-form \(F_{\mu\nu}\) can be computed as a matrix by using (2.5)
\[F_{\mu\nu}=\begin{pmatrix}0&e^{-2\alpha z}&0\\ -e^{-2\alpha z}&0&0\\ 0&0&0\end{pmatrix} \tag{3.9}\]
Consequently, 2-form \(F\) is obtained from (3.9) as
\[F=\frac{1}{2}F_{\mu\nu}dx^{\mu}\wedge dx^{\nu}=e^{-2\alpha z}dx\wedge dy \tag{3.10}\]
It is easy to check that the 2-form \(F\) and \(\eta\) satisfy the following relation, which is one of the conditions of \((-\alpha)\)-Kenmotsu manifolds.
\[dF=-2\alpha\ \eta\wedge F=-2\alpha e^{-2\alpha z}dx\wedge dy\wedge dz \tag{3.11}\]
Finally, one needs to check the third condition of Definition-1 for \(\mathbb{H}^{3}\). We show one example calculation for \(N_{\phi}(e_{1},e_{2})\), other combinations of basis vectors can be computed similarly. Thus, vanishing of Nijenhuis tensor ensures that \(\mathbb{H}^{3}\) is a \((-\alpha)\)-Kenmotsu manifold.
\[N_{\phi}(e_{1},e_{2}) =[\phi(e_{1}),\phi(e_{2})]-\phi[\phi(e_{1}),e_{2}]-\phi[e_{1},\phi (e_{2})]+\phi[\phi[e_{1},e_{2}]]\] \[=[e_{2},-e_{1}]-\phi[e_{2},e_{2}]-\phi[e_{1},-e_{1}]+\phi[\phi(0)]=0 \tag{3.12}\]
A remark should be made here about special case of magnetic curves on \(\mathbb{H}^{3}\). Unlike other almost contact manifolds, fundamental 2-form \(F\) coming from the contact structure is not closed on \(\mathbb{H}^{3}\) hence it does not correspond to a magnetic field. In addition, Reeb vector field \(\xi\) of \(\mathbb{H}^{3}\) is not Killing. Similar situation appears in \(Sol_{3}\) space, its Reeb vector field is not Killing but divergence-free (\(\nabla\cdot\xi=0\)) and there exists a closed 2-form corresponding to a magnetic field for the Reeb vector of \(Sol_{3}\) space [16]. We construct and utilize the following lemma for \(\alpha\)-Kenmotsu manifolds.
**Lemma 1**.: _Let \((M,\phi,\xi,\eta,g)\) be a 3-dimensional \(\alpha\)-Kenmotsu manifold. Then there does not exist any magnetic curve \(\gamma\) associated with the Reeb vector field \(\xi\) of the \(\alpha\)-Kenmotsu manifold._
Proof.: For any \(\alpha\)-Kenmotsu manifold we have the following relations.
\[\phi_{\beta}{}^{\mu}\,\phi_{\mu}{}^{\sigma}=-\delta_{\beta}^{\sigma}+\xi^{\sigma}\eta_{\beta},\qquad\nabla_{\mu}\xi^{\nu}=\alpha\,(h_{\mu}{}^{\nu}-\phi_{\mu}{}^{\sigma}\phi_{\sigma}{}^{\nu})\]
\[\text{Tr}(\nabla_{\mu}\xi^{\nu})=\alpha\,\text{Tr}(h_{\mu}{}^{\nu}-\phi_{\mu}{}^{\sigma}\phi_{\sigma}{}^{\nu})\ \Rightarrow\ \nabla_{\mu}\xi^{\mu}=-\alpha\,\phi_{\mu}{}^{\sigma}\phi_{\sigma}{}^{\mu}\]
\[\text{Tr}(\phi_{\mu}{}^{\sigma}\phi_{\beta}{}^{\mu})=\text{Tr}(-\delta_{\beta}^{\sigma}+\xi^{\sigma}\eta_{\beta})\ \Rightarrow\ \phi_{\mu}{}^{\sigma}\phi_{\sigma}{}^{\mu}=-\delta_{\sigma}^{\sigma}+\xi^{\sigma}\eta_{\sigma}=-3+1=-2\]
Thus,
\[\nabla_{\mu}\xi^{\mu}=-\alpha\ \phi_{\mu}{}^{\sigma}\phi_{\sigma}{}^{\mu}=2\ \alpha\ \Rightarrow\ \nabla\cdot\xi\neq 0\]
By definition, a magnetic curve requires the existence of a divergence free vector field in 3-dimensions. Since Reeb vector field of \(\alpha\)-Kenmotsu manifold \(\xi\) will always have a non-vanishing divergence, a magnetic curve can not exist associated with \(\xi\).
**Corollary 1**.: _Lemma 1 also applies to \((-\alpha)\)-Kenmotsu manifolds with \(\alpha\to-\alpha\) which gives \(\nabla\cdot\xi=-2\alpha\). There does not exist any magnetic curve \(\gamma\) associated with the Reeb vector field \(\xi\) of \(\mathbb{H}^{3}\) since it is a 3-dimensional \((-\alpha)\)-Kenmotsu manifold._
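The divergence computation behind Corollary 1 can be reproduced directly from the volume density of the metric (3.1); the short sketch below is an illustrative check, not part of the paper.

```python
# Divergence of the Reeb field xi = d/dz on H^3 from the metric (3.1).
import sympy as sp

z, a = sp.symbols('z alpha', real=True)
sqrt_det_g = sp.exp(-2*a*z)        # sqrt(det g) for the metric (3.1)
# div xi = (1/sqrt(det g)) * d_mu( sqrt(det g) * xi^mu ); only the z-term survives
div_xi = sp.simplify(sp.diff(sqrt_det_g*1, z)/sqrt_det_g)
print(div_xi)                      # -2*alpha: xi is not divergence-free (Corollary 1)
```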
We also note that Killing vector fields of \(\mathbb{H}^{3}\) are not compatible with the contact structure of \((-\alpha)\)-Kenmotsu manifolds. Their metric duals are always non-closed (\(\mathrm{d}\eta_{K}\neq 0\)), which violates one of the conditions of \((-\alpha)\)-Kenmotsu manifolds. However, this situation poses no obstruction to obtaining Killing magnetic curves, which only depend on the manifold's intrinsic geometric structure and the solution of the Lorentz equation.
## 4 The Geodesic Equation and its Solution
Here we define the geodesic equation in Subsection 4.1 then solve it in Subsection 4.2.
### Derivation of the Geodesic Equation
Let \(\gamma(t)\) be a curve in \(\mathbb{H}^{3}\) with tangent vector \(\gamma^{\prime}(t)\). If it satisfies the equation \(\nabla_{\gamma^{\prime}}\gamma^{\prime}=0\), it is said to be a _geodesic curve_. In the orthonormal basis \(\{\mathbf{e_{1}},\mathbf{e_{2}},\mathbf{e_{3}}\}\) we can write \(\gamma^{\prime}=\dot{\gamma}^{1}\mathbf{e_{1}}+\dot{\gamma}^{2}\mathbf{e_{2}}+\dot{\gamma}^{3}\mathbf{e_{3}}\) (a 'dot' means derivative with respect to time) and the geodesic equation becomes:
\[\nabla_{\gamma^{\prime}}\gamma^{\prime} =(\gamma^{\prime}\cdot\nabla)\gamma^{\prime}, \tag{4.1}\] \[=\dot{\gamma}^{i}\nabla_{\mathbf{e_{i}}}(\dot{\gamma}^{j}\mathbf{ e_{j}}),\] (4.2) \[=\dot{\gamma}^{i}(\nabla_{\mathbf{e_{i}}}\dot{\gamma}^{j}) \mathbf{e_{j}}+\dot{\gamma}^{i}\dot{\gamma}^{j}\nabla_{\mathbf{e_{i}}} \mathbf{e_{j}},\] (4.3) \[=\partial_{t}\dot{\gamma}^{j}\mathbf{e_{j}}+\dot{\gamma}^{i}\dot {\gamma}^{j}\nabla_{\mathbf{e_{i}}}\mathbf{e_{j}},\] (4.4) \[=0. \tag{4.5}\]
When the covariant derivatives (\(\nabla_{\mathbf{e_{i}}}\mathbf{e_{j}}\)) are taken into account, we find the following three equations:
\[\ddot{\gamma}^{1}-\alpha\dot{\gamma}^{1}\dot{\gamma}^{3} =0, \tag{4.6}\] \[\ddot{\gamma}^{2}-\alpha\dot{\gamma}^{2}\dot{\gamma}^{3} =0,\] (4.7) \[\ddot{\gamma}^{3}+\alpha\left(\dot{\gamma}^{1}\right)^{2}+\alpha \left(\dot{\gamma}^{2}\right)^{2} =0. \tag{4.8}\]
### Solution of the Geodesic Equation
The orthonormal basis has been useful for obtaining the geodesic equation. The relation between the components of \(\gamma^{\prime}\) in the orthonormal basis (\(\dot{\gamma}^{i}\)) and the coordinate basis (\(\dot{x}^{i}\)) is simply: \(\dot{\gamma}^{1}=\dot{x}e^{-\alpha z},\dot{\gamma}^{2}=\dot{y}e^{-\alpha z}, \dot{\gamma}^{3}=\dot{z}\). Let us re-write the geodesic equations of motion, Equations (4.6,4.7,4.8), using the coordinate basis (where \(\gamma^{\prime}=\dot{x}\partial_{x}+\dot{y}\partial_{y}+\dot{z}\partial_{z}\)) and write these equations as:
\[\ddot{x}-2\alpha\dot{x}\dot{z}=0, \tag{4.9}\] \[\ddot{y}-2\alpha\dot{y}\dot{z}=0, \tag{4.10}\] \[\ddot{z}+\alpha(\dot{x}^{2}+\dot{y}^{2})e^{-2\alpha z}=0. \tag{4.11}\]
The first two equations are of first order in \(\dot{x}\) and \(\dot{y}\) and can be easily integrated:
\[\dot{x} =c_{1}e^{2\alpha z}, \tag{4.12}\] \[\dot{y} =c_{2}e^{2\alpha z}, \tag{4.13}\]
where \(c_{1},c_{2}\) are constants of integration. Physically, these constants are proportional to initial velocity in \(x\) and \(y\) directions. For example, if the initial time is \(t_{0}\), \(c_{1}=v_{0x}e^{-2\alpha z(t_{0})}\) and \(c_{2}=v_{0y}e^{-2\alpha z(t_{0})}\). \(c_{1},c_{2}\) are linearly related to velocities in \(x,y\) directions on the \(z=z(t_{0})\) plane. Actually, since the differential equations are autonomous in \(t\), instead of \(t_{0}\) one may choose any other \(t_{1}\) in order to specify the velocities. As we shall see later, this is the reason why we kept the constant \(c_{4}\) (which appears in the form of \(t-c_{4}\)) in our solutions. When we put these results in Equation (4.11) we obtain:
\[\ddot{z}+\alpha(c_{1}^{2}+c_{2}^{2})e^{2\alpha z}=0. \tag{4.14}\]
We multiply this equation with \(2\dot{z}\) and obtain:
\[\partial_{t}(\dot{z}^{2})+(c_{1}^{2}+c_{2}^{2})\partial_{t}e^{2\alpha z}=0, \tag{4.15}\]
we integrate with respect to time and obtain:
\[\dot{z}^{2}+(c_{1}^{2}+c_{2}^{2})e^{2\alpha z}=c_{3}^{2}, \tag{4.16}\]
where \(c_{3}>0\) or equals to zero if the particle stands still, that is \(\dot{x}=\dot{y}=\dot{z}=0\). The right hand side of (4.16) must be non-negative because the expression on the left hand side is non-negative. This is why we chose to denote the right hand side as \(c_{3}^{2}\). As a matter of convention we chose \(c_{3}\geq 0\). The solutions of the differential equations are invariant under \(c_{3}\leftrightarrow-c_{3}\) as can be seen in Equations (4.20-4.22). Moreover, \(c_{3}\) is related to the initial velocities through \(c_{3}^{2}=v_{0z}^{2}+(v_{0x}^{2}+v_{0y}^{2})e^{-2\alpha z(t_{0})}\). In essence \(c_{3}\) is the speed (in \(\mathbb{H}^{3}\)) of the geodesic curve which can be obtained through \(\mathrm{d}s^{2}/\mathrm{d}t^{2}\) where \(\mathrm{d}s^{2}\) is given by the \(\mathbb{H}^{3}\) metric. It is seen that \(|\gamma^{\prime}|^{2}=c_{3}^{2}\) and \(c_{3}\) should be constant in order \(\gamma^{\prime}\) to define a geodesic curve. In order to see that apply \(\nabla\) on both sides and obtain \(2\nabla_{\gamma^{\prime}}\gamma^{\prime}=\nabla c_{3}^{2}\). By scaling the time parameter it is possible to map \(c_{3}\to 1\), however we favored the
option to display each constant of integration. This is the reason why we kept the \(c_{3}\) dependence explicit. Since we are interested in _motion_, from now on we suppose \(c_{3}>0\). We can easily integrate Equation (4.16) and obtain (\(c_{4}\) is a constant of integration):
\[t-c_{4} =\int\mathrm{d}z\left(c_{3}^{2}-(c_{1}^{2}+c_{2}^{2})e^{2\alpha z }\right)^{-1/2}\] \[=\frac{1}{c_{3}}\int\mathrm{d}z\left(1-Ae^{2\alpha z}\right)^{-1 /2}. \tag{4.17}\]
where \(A=(c_{1}^{2}+c_{2}^{2})/c_{3}^{2}\). Let \(u=Ae^{2\alpha z}\), and obtain:
\[=\frac{1}{2\alpha c_{3}}\int\frac{\mathrm{d}u}{u\sqrt{1-u}}. \tag{4.18}\]
Evaluating the integral yields [28]:
\[=-\frac{1}{\alpha c_{3}}\tanh^{-1}(\sqrt{1-u}). \tag{4.19}\]
By inverting this equation and using the definition of \(u\) in terms of \(z\), we obtain:
\[e^{2\alpha z} =\frac{c_{3}^{2}}{c_{1}^{2}+c_{2}^{2}}\left[1-\tanh^{2}(\alpha c _{3}(t-c_{4}))\right]=\frac{c_{3}^{2}}{c_{1}^{2}+c_{2}^{2}}\operatorname{ sech}^{2}(\alpha c_{3}(t-c_{4}))\] \[z(t) =\frac{1}{2\alpha}\mathrm{Log}\left(\frac{{c_{3}}^{2}}{({c_{1}}^ {2}+{c_{2}}^{2})}\operatorname{sech}^{2}(\alpha c_{3}(t-c_{4}))\right) \tag{4.20}\]
Since we know \(z\) in terms of time, we can use this information to integrate \(\dot{x}\) and \(\dot{y}\) in Equation (4.12) and in Equation (4.13):
\[x(t) =\frac{1}{\alpha}\frac{c_{1}c_{3}}{c_{1}^{2}+c_{2}^{2}}\tanh( \alpha c_{3}(t-c_{4}))+c_{5}, \tag{4.21}\] \[y(t) =\frac{1}{\alpha}\frac{c_{2}c_{3}}{c_{1}^{2}+c_{2}^{2}}\tanh( \alpha c_{3}(t-c_{4}))+c_{6}, \tag{4.22}\]
where \(c_{5},c_{6}\) are constants of integration. In essence, Equations (4.20,4.21,4.22) give the equations that define a geodesic curve. As it should be, there are six constants of integration.
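As a quick sanity check, the closed-form geodesic (4.20)-(4.22) can be verified numerically against the first integrals (4.12), (4.13) and (4.16). The following is a minimal sketch (assuming NumPy; the constants \(\alpha,c_{1},\ldots,c_{6}\) are chosen arbitrarily):

```python
import numpy as np

alpha, c1, c2, c3, c4, c5, c6 = 0.7, 0.3, 0.4, 1.2, 0.1, 0.0, 0.0
t = np.linspace(-3.0, 3.0, 2001)
tau = alpha * c3 * (t - c4)

# Closed-form geodesic, Eqs. (4.20)-(4.22)
e2az = c3**2 / (c1**2 + c2**2) / np.cosh(tau)**2            # e^{2 alpha z(t)}
x = c1 * c3 / (alpha * (c1**2 + c2**2)) * np.tanh(tau) + c5
y = c2 * c3 / (alpha * (c1**2 + c2**2)) * np.tanh(tau) + c6
z = np.log(e2az) / (2 * alpha)

# Numerical time derivatives (interior points only, to avoid one-sided stencils)
xd, yd, zd = (np.gradient(f, t) for f in (x, y, z))
sl = slice(5, -5)

# Residuals of (4.12), (4.13) and (4.16); all stay at finite-difference level
print(np.max(np.abs(xd - c1 * e2az)[sl]))
print(np.max(np.abs(yd - c2 * e2az)[sl]))
print(np.max(np.abs(zd**2 + (c1**2 + c2**2) * e2az - c3**2)[sl]))
```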
## 5 Killing magnetic curves in \(\mathbb{H}^{3}\)
As we have given in Section 3, there are six Killing vectors in \(\mathbb{H}^{3}\):
\[K_{(1)} = B_{1}\partial_{x}, \tag{5.1}\] \[K_{(2)} = B_{2}\partial_{y},\] (5.2) \[K_{(3)} = B_{3}(y\partial_{x}-x\partial_{y}),\] (5.3) \[K_{(4)} = B_{4}(\alpha x\partial_{x}+\alpha y\partial_{y}+\partial_{z}),\] (5.4) \[K_{(5)} = B_{5}\left(\left(\frac{\alpha}{2}\left(x^{2}-y^{2}\right)-\frac {e^{2\alpha z}}{2\alpha}\right)\partial_{x}+\alpha xy\partial_{y}+x\partial_{ z}\right),\] (5.5) \[K_{(6)} = B_{6}\left(\alpha xy\partial_{x}+\left(\frac{\alpha}{2}\left(y^{ 2}-x^{2}\right)-\frac{e^{2\alpha z}}{2\alpha}\right)\partial_{y}+y\partial_{ z}\right). \tag{5.6}\]
where we put the constants \(B_{i}\) to control the _strength_ of the magnetic field (note that the _units_ of each \(B_{i}\) may vary). We do this because we could not find analytic solutions for arbitrary \(B_{i}\), and will give solutions only up to first order in \(B_{i}\). However, in all cases we will first give the full equations for arbitrary \(B_{i}\) and then do the perturbation analysis.
The Killing magnetic curve generated by the \(i^{\text{th}}\) Killing vector field is given by Lorentz equation (2.9):
\[\nabla_{\gamma^{\prime}}\gamma^{\prime}=K_{(i)}\times\gamma^{\prime}, \tag{5.7}\]
where the vector product is calculated using the orthonormal basis in a tangent space of the underlying manifold. If we define \(F_{(i)}\equiv K_{(i)}\times\gamma^{\prime}\), we obtain the following expressions (remember \(\gamma^{\prime}=\dot{\gamma}^{i}\mathbf{e_{i}}\)):
\[\frac{F_{(1)}}{B_{1}} =e^{-\alpha z}(\dot{\gamma}^{2}\mathbf{e_{3}}-\dot{\gamma}^{3} \mathbf{e_{2}}), \tag{5.8}\] \[\frac{F_{(2)}}{B_{2}} =e^{-\alpha z}(\dot{\gamma}^{3}\mathbf{e_{1}}-\dot{\gamma}^{1} \mathbf{e_{3}}),\] (5.9) \[\frac{F_{(3)}}{B_{3}} =e^{-\alpha z}(-x\dot{\gamma}^{3}\mathbf{e_{1}}-y\dot{\gamma}^{3} \mathbf{e_{2}}+(x\dot{\gamma}^{1}+y\dot{\gamma}^{2})\mathbf{e_{3}}),\] (5.10) \[\frac{F_{(4)}}{B_{4}} =(\alpha y\dot{\gamma}^{3}e^{-\alpha z}-\dot{\gamma}^{2})\mathbf{ e_{1}}-(\alpha x\dot{\gamma}^{3}e^{-\alpha z}-\dot{\gamma}^{1})\mathbf{e_{2}}+ \alpha e^{-\alpha z}(x\dot{\gamma}^{2}-y\dot{\gamma}^{1})\mathbf{e_{3}}. \tag{5.11}\]
\[\frac{F_{(5)}}{B_{5}} = x(\alpha y\dot{\gamma}^{3}e^{-\alpha z}-\dot{\gamma}^{2})\mathbf{e_ {1}}+\left(x\dot{\gamma}^{1}+\frac{e^{\alpha z}}{2\alpha}\dot{\gamma}^{3}+\frac{ e^{-\alpha z}\alpha}{2}(y^{2}-x^{2})\dot{\gamma}^{3}\right)\mathbf{e_{2}} \tag{5.12}\] \[+\left(\frac{e^{-\alpha z}\alpha}{2}((x^{2}-y^{2})\dot{\gamma}^{2 }-2xy\dot{\gamma}^{1})-\frac{e^{\alpha z}}{2\alpha}\dot{\gamma}^{2}\right) \mathbf{e_{3}},\] \[\frac{F_{(6)}}{B_{6}} = \left(-y\dot{\gamma}^{2}-\frac{e^{\alpha z}}{2\alpha}\dot{\gamma} ^{3}+\frac{e^{-\alpha z}\alpha}{2}(y^{2}-x^{2})\dot{\gamma}^{3}\right) \mathbf{e_{1}}+y(\dot{\gamma}^{1}-\alpha x\dot{\gamma}^{3}e^{-\alpha z}) \mathbf{e_{2}}\] (5.13) \[+\left(\frac{e^{-\alpha z}\alpha}{2}((x^{2}-y^{2})\dot{\gamma}^{ 1}+2xy\dot{\gamma}^{2})+\frac{e^{\alpha z}}{2\alpha}\dot{\gamma}^{1}\right) \mathbf{e_{3}}\]
### Magnetic Trajectory by the First Killing Vector Field
Using the expression found for \(F_{(1)}\) in Equation (5.8) and Equation (5.7), then turning them into the coordinate basis we obtain the following set of equations:
\[\ddot{x}-2\alpha\dot{x}\dot{z} =0, \tag{5.14}\] \[\ddot{y}-2\alpha\dot{y}\dot{z} =-B_{1}\dot{z},\] (5.15) \[\ddot{z}+\alpha(\dot{x}^{2}+\dot{y}^{2})e^{-2\alpha z} =B_{1}\dot{y}e^{-2\alpha z}. \tag{5.16}\]
The first and second equations are easily integrated (the results are exact):
\[\dot{x} =c_{1}e^{2\alpha z}, \tag{5.17}\] \[\dot{y} =c_{2}e^{2\alpha z}+\frac{B_{1}}{2\alpha}. \tag{5.18}\]
Using these two expressions in Equation (5.16) we obtain:
\[\ddot{z}+\alpha(c_{1}^{2}+c_{2}^{2})e^{2\alpha z}-\frac{B_{1}^{2}}{4\alpha}e^{-2\alpha z}=0. \tag{5.19}\]
Multiply with \(2\dot{z}\) then integrate to obtain (the result is exact):
\[\dot{z}^{2}+(c_{1}^{2}+c_{2}^{2})e^{2\alpha z}+\frac{B_{1}^{2}}{4\alpha^{2}}e ^{-2\alpha z}=c_{3}^{2}, \tag{5.20}\]
where \(c_{3}\geq 0\). In order to determine the Killing magnetic trajectories generated by \(B_{1}K_{(1)}\) one needs to solve these equations that are first order in \(\dot{x},\dot{y},\dot{z}\). Since we could not come up with an analytical solution, we will do a perturbation analysis up to first order in \(B_{1}\). For that purpose we define \(x=x_{0}+B_{1}x_{1},y=y_{0}+B_{1}y_{1},z=z_{0}+B_{1}z_{1}\). Evidently, the zeroth order functions \(x_{0},y_{0},z_{0}\) are the ones that satisfy the geodesic equation of motion which we provided in Section 4.
After we expand the Equations (5.17,5.18,5.20) up to first order in \(B_{1}\) we obtain the following differential equations:
\[\dot{x}_{1} =2\alpha c_{1}e^{2\alpha z_{0}}z_{1}, \tag{5.21}\] \[\dot{y}_{1} =2\alpha c_{2}e^{2\alpha z_{0}}z_{1}+\frac{1}{2\alpha},\] (5.22) \[\frac{\dot{z}_{1}}{z_{1}} =-\alpha(c_{1}^{2}+c_{2}^{2})\frac{e^{2\alpha z_{0}}}{\dot{z}_{0}}. \tag{5.23}\]
By integrating the last equation, we obtain:
\[z_{1}=c_{7}\tanh(\alpha c_{3}(t-c_{4})), \tag{5.24}\]
where \(c_{7}\) is a new constant of integration. By using this expression in above Equations for \(\dot{x}_{1}\) and \(\dot{y}_{1}\) we obtain (via Mathematica [28]):
\[x_{1} =-\frac{c_{1}c_{3}c_{7}}{c_{1}^{2}+c_{2}^{2}}\operatorname{sech} ^{2}(\alpha c_{3}(t-c_{4}))+c_{8} \tag{5.25}\] \[y_{1} =-\frac{c_{2}c_{3}c_{7}}{c_{1}^{2}+c_{2}^{2}}\operatorname{sech} ^{2}(\alpha c_{3}(t-c_{4}))+\frac{t-c_{4}}{2\alpha}+c_{9}, \tag{5.26}\]
where \(c_{8},c_{9}\) are new constants of integration. We have found the zeroth order functions in Section 4.2, so the first order solution to magnetic trajectory produced by the first Killing vector is:
\[x =x_{0}-B_{1}\frac{c_{1}c_{3}c_{7}}{c_{1}^{2}+c_{2}^{2}} \operatorname{sech}^{2}(\alpha c_{3}(t-c_{4}))+\mathcal{O}(B_{1}^{2}) \tag{5.27}\] \[y =y_{0}-B_{1}\frac{c_{2}c_{3}c_{7}}{c_{1}^{2}+c_{2}^{2}} \operatorname{sech}^{2}(\alpha c_{3}(t-c_{4}))+\frac{B_{1}(t-c_{4})}{2\alpha} +\mathcal{O}(B_{1}^{2})\] (5.28) \[e^{2\alpha z} =e^{2\alpha z_{0}}(1+2\alpha B_{1}c_{7}\tanh(\alpha c_{3}(t-c_{4 })))+\mathcal{O}(B_{1})^{2}. \tag{5.29}\]
where we have absorbed \(c_{8},c_{9}\) into the constants \(c_{5},c_{6}\) that are found in \(x_{0},y_{0}\) respectively.
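Independently of the perturbative expansion, the exact system (5.14)-(5.16) can be integrated numerically and checked against its first integrals (5.17), (5.18) and (5.20), which must stay constant along any trajectory. A minimal sketch of this check (assuming NumPy/SciPy and arbitrary initial data):

```python
import numpy as np
from scipy.integrate import solve_ivp

alpha, B1 = 0.7, 0.3

def rhs(t, s):
    x, y, z, xd, yd, zd = s
    e = np.exp(-2 * alpha * z)
    return [xd, yd, zd,
            2 * alpha * xd * zd,                          # Eq. (5.14)
            2 * alpha * yd * zd - B1 * zd,                # Eq. (5.15)
            -alpha * (xd**2 + yd**2) * e + B1 * yd * e]   # Eq. (5.16)

s0 = [0.0, 0.0, 0.1, 0.4, 0.2, -0.3]
sol = solve_ivp(rhs, (0.0, 20.0), s0, rtol=1e-10, atol=1e-12, dense_output=True)
t = np.linspace(0.0, 20.0, 400)
x, y, z, xd, yd, zd = sol.sol(t)

c1 = xd * np.exp(-2 * alpha * z)                          # Eq. (5.17)
c2 = (yd - B1 / (2 * alpha)) * np.exp(-2 * alpha * z)     # Eq. (5.18)
E = zd**2 + (c1[0]**2 + c2[0]**2) * np.exp(2 * alpha * z) \
    + B1**2 / (4 * alpha**2) * np.exp(-2 * alpha * z)     # Eq. (5.20)

print(np.ptp(c1), np.ptp(c2), np.ptp(E))  # all ~0 if the first integrals hold
```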
### Magnetic Trajectory by the Second Killing Vector Field
Using the expression found for \(F_{(2)}\) in Equation (5.9) and Equation (5.7), then turning them into the coordinate basis we obtain the following set of equations:
\[\ddot{x}-2\alpha\dot{x}\dot{z} =B_{2}\dot{z}, \tag{5.30}\] \[\ddot{y}-2\alpha\dot{y}\dot{z} =0,\] (5.31) \[\ddot{z}+\alpha(\dot{x}^{2}+\dot{y}^{2})e^{-2\alpha z} =-B_{2}\dot{x}e^{-2\alpha z}. \tag{5.32}\]
We will not explicitly deal with this case: by applying the mapping \(x\to y,y\to x,B_{1}\to-B_{2}\) to the case of the previous section, we can obtain the solutions for this case, due to a symmetry between the equations of motion of the two cases. Hence, we leave it to the reader to obtain the full (for arbitrary \(B_{2}\)) equations of motion and we only write down the first order solutions:
\[x =x_{0}+B_{2}\frac{c_{1}c_{3}c_{7}}{c_{1}^{2}+c_{2}^{2}}\operatorname {sech}^{2}(\alpha c_{3}(t-c_{4}))-\frac{B_{2}(t-c_{4})}{2\alpha}+\mathcal{O}(B _{2}^{2}) \tag{5.33}\] \[y =y_{0}+B_{2}\frac{c_{2}c_{3}c_{7}}{c_{1}^{2}+c_{2}^{2}}\operatorname {sech}^{2}(\alpha c_{3}(t-c_{4}))+\mathcal{O}(B_{2}^{2})\] (5.34) \[e^{2\alpha z} =e^{2\alpha z_{0}}(1-2\alpha B_{2}c_{7}\tanh(\alpha c_{3}(t-c_{4} )))+\mathcal{O}(B_{2})^{2}. \tag{5.35}\]
### Magnetic Trajectory by the Third Killing Vector Field
Using the expression found for \(F_{(3)}\) in Equation (5.10) and Equation (5.7), then turning them into the coordinate basis we obtain the following set of equations:
\[\ddot{x}-2\alpha\dot{x}\dot{z} =-B_{3}x\dot{z}, \tag{5.36}\] \[\ddot{y}-2\alpha\dot{y}\dot{z} =-B_{3}y\dot{z},\] (5.37) \[\ddot{z}+\alpha(\dot{x}^{2}+\dot{y}^{2})e^{-2\alpha z} =B_{3}(x\dot{x}+y\dot{y})e^{-2\alpha z}. \tag{5.38}\]
We will solve these equations of motion using a perturbation analysis. We write down \(x=x_{0}+B_{3}x_{1},y=y_{0}+B_{3}y_{1},z=z_{0}+B_{3}z_{1}\) up to first order in \(B_{3}\) and note that the functions with zero indices denote solutions of the geodesic equation of motion that we have found in Section 4. The differential equations for \(x_{1},y_{1},z_{1}\) are then:
\[\ddot{x}_{1}-2\alpha\dot{z}_{0}\dot{x}_{1}-2\alpha\dot{x}_{0}\dot{z}_{1} =-x_{0}\dot{z}_{0}, \tag{5.39}\] \[\ddot{y}_{1}-2\alpha\dot{z}_{0}\dot{y}_{1}-2\alpha\dot{y}_{0}\dot{z}_{1} =-y_{0}\dot{z}_{0},\] (5.40) \[e^{2\alpha z_{0}}\ddot{z}_{1}-2\alpha^{2}(\dot{x}_{0}^{2}+\dot{y}_{0}^{2})z_{1}+2\alpha(\dot{x}_{0}\dot{x}_{1}+\dot{y}_{0}\dot{y}_{1}) =x_{0}\dot{x}_{0}+y_{0}\dot{y}_{0}. \tag{5.41}\]
Let us integrate the first two Equations. For that purpose we multiply Equation (5.39) by \(e^{-2\alpha z_{0}}\) and obtain:
\[e^{-2\alpha z_{0}}\ddot{x}_{1}-2\alpha\dot{z}_{0}e^{-2\alpha z_{0}}\dot{x}_{1} -2\alpha c_{1}\dot{z}_{1}=-x_{0}\dot{z}_{0}e^{-2\alpha z_{0}}, \tag{5.42}\]
and we can write this equation as:
\[\partial_{t}(e^{-2\alpha z_{0}}\dot{x}_{1}-2\alpha c_{1}z_{1})=\frac{1}{2 \alpha}x_{0}\partial_{t}e^{-2\alpha z_{0}}. \tag{5.43}\]
We easily integrate the left hand side of the equation, and use integration by parts on the right hand side, to obtain:
\[\dot{x}_{1}=2\alpha c_{1}e^{2\alpha z_{0}}z_{1}+\frac{x_{0}}{2\alpha}-\frac{c_{1 }(t-c_{4})}{2\alpha}e^{2\alpha z_{0}}+c_{7}e^{2\alpha z_{0}}. \tag{5.44}\]
The case for the \(\dot{y}_{1}\) is found by using the symmetry \(x\leftrightarrow y\), and we can immediately write down:
\[\dot{y}_{1}=2\alpha c_{2}e^{2\alpha z_{0}}z_{1}+\frac{y_{0}}{2\alpha}-\frac{c_ {2}(t-c_{4})}{2\alpha}e^{2\alpha z_{0}}+c_{8}e^{2\alpha z_{0}}, \tag{5.45}\]
where \(c_{7},c_{8}\) are constants of integration. We put these functions into the Equation (5.41) and obtain the following expression:
\[e^{-2\alpha z_{0}}\ddot{z}_{1}+2\alpha^{2}(c_{1}^{2}+c_{2}^{2})z_{1}=(c_{1}^{2 }+c_{2}^{2})(t-c_{4})-2\alpha(c_{1}c_{7}+c_{2}c_{8}). \tag{5.46}\]
Let us define \(\tau=\alpha c_{3}(t-c_{4})\) and rewrite the above equation as, after some algebraic manipulations:
\[\cosh^{2}(\tau)\partial_{\tau}^{2}z_{1}+2z_{1}=\frac{\tau}{\alpha^{3}c_{3}}- \frac{2(c_{1}c_{7}+c_{2}c_{8})}{\alpha(c_{1}^{2}+c_{2}^{2})}, \tag{5.47}\]
hence we can write:
\[z_{1}=z_{\rm 1h}+\frac{\tau}{2\alpha^{3}c_{3}}-\frac{c_{1}c_{7}+c_{2}c_{8}}{ \alpha(c_{1}^{2}+c_{2}^{2})}, \tag{5.48}\]
where \(z_{\rm 1h}\) satisfies the homogeneous equation. We find the homogeneous solution, with the help of Mathematica [28] to be:
\[z_{\rm 1h}=c_{9}\tanh(\tau)+c_{10}(1-\tau\tanh(\tau)), \tag{5.49}\]
So we can write:
\[z_{1}=\frac{\tau}{2\alpha^{3}c_{3}}-\frac{c_{1}c_{7}+c_{2}c_{8}}{\alpha(c_{1} ^{2}+c_{2}^{2})}+c_{9}\tanh(\tau)+c_{10}(1-\tau\tanh(\tau)). \tag{5.50}\]
We use this information in Equation (5.44) to find \(x_{1}\) by performing integration by Mathematica [28], and after a few algebraic manipulations we obtain:
\[x_{1}=-\frac{c_{1}\tau^{2}}{4\alpha^{3}c_{3}^{2}}-\frac{c_{1}\log( \cosh(\tau))}{2\alpha^{3}\left(c_{1}^{2}+c_{2}^{2}\right)}\\ +\tau\left(\frac{c_{5}}{2\alpha^{2}c_{3}}+\frac{c_{1}^{3}+c_{2}^{ 2}c_{1}}{\alpha^{3}\left(c_{1}^{2}+c_{2}^{2}\right){}^{2}}\tanh(\tau)+\frac{c_ {1}c_{3}c_{10}\text{sech}^{2}(\tau)}{c_{1}^{2}+c_{2}^{2}}\right)\\ +\frac{1}{\alpha(c_{1}^{2}+c_{2}^{2})}\left(\alpha c_{1}c_{3}c_{1 0}(c_{1}^{2}+c_{2}^{2})+\alpha c_{3}c_{7}-2c_{1}c_{3}(c_{1}c_{7}+c_{2}c_{8}) \right)\tanh(\tau)\\ -\frac{c_{1}c_{3}c_{9}\text{sech}^{2}(\tau)}{c_{1}^{2}+c_{2}^{2} }+c_{11}, \tag{5.51}\]
where \(c_{11}\) is a new integration constant. By using the symmetry between \(x_{1},y_{1}\) we can immediately write down the result for \(y_{1}\) as (here \(c_{12}\) is a new constant of integration):
\[y_{1}=-\frac{c_{2}\tau^{2}}{4\alpha^{3}c_{3}^{2}}-\frac{c_{2} \log(\cosh(\tau))}{2\alpha^{3}\left(c_{1}^{2}+c_{2}^{2}\right)}\\ +\tau\left(\frac{c_{6}}{2\alpha^{2}c_{3}}+\frac{c_{2}^{3}+c_{1}^{ 2}c_{2}}{\alpha^{3}\left(c_{1}^{2}+c_{2}^{2}\right){}^{2}}\tanh(\tau)+\frac{c_ {2}c_{3}c_{10}\text{sech}^{2}(\tau)}{c_{1}^{2}+c_{2}^{2}}\right)\\ +\frac{1}{\alpha(c_{1}^{2}+c_{2}^{2})}\left(\alpha c_{2}c_{3}c_{1 0}(c_{1}^{2}+c_{2}^{2})+\alpha c_{3}c_{8}-2c_{2}c_{3}(c_{1}c_{7}+c_{2}c_{8}) \right)\tanh(\tau)\\ -\frac{c_{1}c_{2}c_{9}\text{sech}^{2}(\tau)}{c_{1}^{2}+c_{2}^{2} }+c_{12}. \tag{5.52}\]
To summarize, the solution for the Killing magnetic curve, up to first order in \(B_{3}\) is as follows:
\[x=x_{0}+B_{3}x_{1}+\mathcal{O}(B_{3}^{2}), \tag{5.53}\] \[y=y_{0}+B_{3}y_{1}+\mathcal{O}(B_{3}^{2}),\] (5.54) \[z=z_{0}+B_{3}z_{1}+\mathcal{O}(B_{3}^{2}). \tag{5.55}\]
where \(x_{0},y_{0},z_{0}\) are solutions for the geodesic equation and \(x_{1},y_{1},z_{1}\) are given in Equations (5.51,5.52,5.50) respectively. Last but not least, while making use of \(x_{1},y_{1},z_{1}\), remember that \(\tau=\alpha c_{3}(t-c_{4})\).
### Magnetic Trajectory by the Fourth Killing Vector Field
Using the expression found for \(F_{(4)}\) in Equation (5.11) and Equation (5.7), then turning them into the coordinate basis we obtain the following set of equations:
\[\ddot{x}-2\alpha\dot{x}\dot{z} =\alpha B_{4}y\dot{z}-B_{4}\dot{y}, \tag{5.56}\] \[\ddot{y}-2\alpha\dot{y}\dot{z} =-\alpha B_{4}x\dot{z}+B_{4}\dot{x},\] (5.57) \[\ddot{z}+\alpha(\dot{x}^{2}+\dot{y}^{2})e^{-2\alpha z} =\alpha B_{4}(x\dot{y}-\dot{x}y)e^{-2\alpha z}. \tag{5.58}\]
We will solve these equations of motion using a perturbation analysis. We write down \(x=x_{0}+B_{4}x_{1},y=y_{0}+B_{4}y_{1},z=z_{0}+B_{4}z_{1}\) up to first order in \(B_{4}\) and note that the functions with zero indices are solutions of the geodesic equation of motion that we have found in Section 4. For \(x_{1}\) we can write the following (by keeping the first order terms in \(B_{4}\) in the first differential equation above):
\[\ddot{x}_{1}-2\alpha\dot{z}_{0}\dot{x}_{1}-2\alpha\dot{x}_{0}\dot{z}_{1}= \alpha y_{0}\dot{z}_{0}-\dot{y}_{0}. \tag{5.59}\]
By multiplying with \(e^{-2\alpha z_{0}}\), we obtain:
\[\partial_{t}(\dot{x}_{1}e^{-2\alpha z_{0}}) =2\alpha c_{1}\dot{z}_{1}-c_{2}+\alpha\dot{z}_{0}y_{0}e^{-2\alpha z_{0}},\] \[=2\alpha c_{1}\dot{z}_{1}-c_{2}-\frac{1}{2}y_{0}\partial_{t}e^{-2\alpha z_{0}}, \tag{5.60}\]
We integrate both sides and use integration by parts on the right hand side and obtain:
\[\dot{x}_{1}e^{-2\alpha z_{0}}=2\alpha c_{1}z_{1}-c_{2}t-\frac{1}{2}y_{0}e^{-2 \alpha z_{0}}+\frac{c_{2}t}{2}+c_{7}, \tag{5.61}\]
where \(c_{7}\) is a constant of integration. After a few algebraic manipulations, we obtain:
\[\dot{x}_{1}=2\alpha c_{1}z_{1}e^{2\alpha z_{0}}-\frac{1}{2}c_{2}(t-c_{4})e^{2 \alpha z_{0}}-\frac{y_{0}}{2}+c_{7}e^{2\alpha z_{0}}. \tag{5.62}\]
Here we absorbed a multiple of \(c_{4}\) in \(c_{7}\) in order to make the change of variables easier later on. The case for the calculation of \(\dot{y}_{1}\) follows along similar lines, and the result reads:
\[\dot{y}_{1}=2\alpha c_{2}z_{1}e^{2\alpha z_{0}}+\frac{1}{2}c_{1}(t-c_{4})e^{2 \alpha z_{0}}+\frac{x_{0}}{2}+c_{8}e^{2\alpha z_{0}}, \tag{5.63}\]
where \(c_{8}\) is a constant of integration. Using the information we have obtained through Equations (5.62,5.63) in Equation (5.58) and by regarding the first order terms in \(B_{4}\) only, we obtain:
\[e^{2\alpha z_{0}}\ddot{z}_{1}-2\alpha^{2}(\dot{x}_{0}^{2}+\dot{y}_{0}^{2})z_{1}=-2\alpha(\dot{x}_{0}\dot{x}_{1}+\dot{y}_{0}\dot{y}_{1})+\alpha(x_{0}\dot{y}_{0}-\dot{x}_{0}y_{0}). \tag{5.64}\]
When we put the values of \(x_{0},\dot{x}_{1},y_{0},\dot{y}_{1}\) we obtain:
\[e^{-2\alpha z_{0}}\ddot{z}_{1}+2\alpha^{2}(c_{1}^{2}+c_{2}^{2})z_{1}=-2\alpha(c_{1}c_{7}+c_{2}c_{8}). \tag{5.65}\]
The particular solution is easy to find:
\[z_{1\text{p}}=-\frac{c_{1}c_{7}+c_{2}c_{8}}{\alpha(c_{1}^{2}+c_{2}^{2})}. \tag{5.66}\]
On the other hand, the homogeneous solution \(z_{1\text{h}}\) satisfies:
\[e^{-2\alpha z_{0}}\ddot{z}_{1\text{h}}+2\alpha^{2}(c_{1}^{2}+c_{2}^{2})z_{1\text{h}}=0, \tag{5.67}\]
and when we put the value of \(e^{-2\alpha z_{0}}\) we obtain:
\[\cosh^{2}(\alpha c_{3}(t-c_{4}))\ddot{z}_{1\text{h}}+2\alpha^{2}c_{3}^{2}z_{1\text{h}}=0. \tag{5.68}\]
With the help of Mathematica [28] we find the homogeneous solution as:
\[z_{1\text{h}}=-c_{10}+(c_{9}+\alpha c_{3}c_{10}(t-c_{4}))\tanh(\alpha c_{3}(t- c_{4})), \tag{5.69}\]
where \(c_{9},c_{10}\) are new constants of integration. All in all, the solution we have found for \(z_{1}\) reads as:
\[z_{1}=-c_{10}+(c_{9}+\alpha c_{3}c_{10}(t-c_{4}))\tanh(\alpha c_{3}(t-c_{4})) -\frac{c_{1}c_{7}+c_{2}c_{8}}{\alpha(c_{1}^{2}+c_{2}^{2})}. \tag{5.70}\]
We use the obtained expression for \(z_{1}\) in Equations (5.62,5.63). After the integrations take place, we obtain:
\[x_{1}=-\frac{\text{sech}^{2}\left(\alpha c_{3}\left(t-c_{4}\right) \right)}{8\alpha c_{2}^{2}}\Bigg{[}c_{3}c_{2}\left(t-c_{4}\right)\sinh\left(2 \alpha c_{3}\left(t-c_{4}\right)\right)\] \[+2c_{3}\left(\alpha c_{1}\left(c_{10}\left(2\alpha c_{3}\left(t -c_{4}\right)+\sinh\left(2\alpha c_{3}\left(t-c_{4}\right)\right)\right)+2c_{ 9}\right)-c_{7}\sinh\left(2\alpha c_{3}\left(t-c_{4}\right)\right)\right)\] \[+4\alpha c_{6}c_{2}^{2}t\cosh^{2}\left(\alpha c_{3}\left(t-c_{4} \right)\right)\Bigg{]}+c_{11}. \tag{5.71}\]
and
\[y_{1}=\frac{1}{8\alpha c_{2}^{2}}\Bigg{[}4\alpha c_{2}^{2}c_{5}\left(t-c_{4}\right)+2c_{3}\left(c_{1}\left(t-c_{4}\right)+2c_{8}\right)\tanh\left(\alpha c_{3}\left(t-c_{4}\right)\right)\\ -2\alpha c_{2}c_{3}\left(c_{10}\left(2\alpha c_{3}\left(t-c_{4}\right)+\sinh\left(2\alpha c_{3}\left(t-c_{4}\right)\right)\right)+2c_{9}\right)\text{sech}^{2}\left(\alpha c_{3}\left(t-c_{4}\right)\right)\Bigg{]}+c_{12} \tag{5.72}\]
where \(c_{11},c_{12}\) are new constants of integration. To summarize this section, the solution for the Killing magnetic curve, up to first order in \(B_{4}\) is as follows:
\[x=x_{0}+B_{4}x_{1}+\mathcal{O}(B_{4}^{2}), \tag{5.73}\] \[y=y_{0}+B_{4}y_{1}+\mathcal{O}(B_{4}^{2}),\] (5.74) \[z=z_{0}+B_{4}z_{1}+\mathcal{O}(B_{4}^{2}). \tag{5.75}\]
where \(x_{0},y_{0},z_{0}\) are solutions for the geodesic equation and \(x_{1},y_{1},z_{1}\) are given in Equations (5.71,5.72,5.70) respectively.
### Magnetic Trajectory by the Fifth Killing Vector Field
Using the expression found for \(F_{(5)}\) in Equation (5.12) and Equation (5.7), then turning them into the coordinate basis we obtain the following set of equations:
\[\ddot{x}-2\alpha\dot{x}\dot{z} =B_{5}x(-\dot{y}+\alpha y\dot{z}) \tag{5.76}\] \[\ddot{y}-2\alpha\dot{y}\dot{z} =\frac{B_{5}}{2\alpha}(2x\alpha\dot{x}+e^{2\alpha z}\dot{z}+ \alpha^{2}(y^{2}-x^{2})\dot{z}),\] (5.77) \[\ddot{z}+\alpha(\dot{x}^{2}+\dot{y}^{2})e^{-2\alpha z} =-\frac{B_{5}}{2\alpha}\left(\alpha^{2}e^{-2\alpha z}((y^{2}-x^{2 })\dot{y}+2xy\dot{x})+\dot{y}\right) \tag{5.78}\]
We will solve these equations of motion using a perturbation analysis. We write down \(x=x_{0}+B_{5}x_{1},y=y_{0}+B_{5}y_{1},z=z_{0}+B_{5}z_{1}\) upto first order in \(B_{5}\) and note that the function with zero indices are solutions of the geodesic equation of motion that we have found in Section 4. The differential equation for \(x_{1}\) is found as follows:
\[\ddot{x}_{1}-2\alpha\dot{z}_{0}\dot{x}_{1}=2\alpha\dot{x}_{0}\dot{z}_{1}-x_{0} \dot{y}_{0}+\alpha x_{0}y_{0}\dot{z}_{0}, \tag{5.79}\]
we multiply this equation by \(e^{-2\alpha z_{0}}\) and then integrate to find,
\[\dot{x}_{1}=2\alpha c_{1}z_{1}e^{2\alpha z_{0}}-c_{2}e^{2\alpha z_{0}}\int x_{0 }\text{d}t-\frac{1}{2}e^{2\alpha z_{0}}\int x_{0}y_{0}\partial_{t}e^{-2\alpha z _{0}}\text{d}t+c_{7}e^{2\alpha z_{0}} \tag{5.80}\]
where \(c_{7}\) is a constant of integration. Now, let us turn our focus to \(y_{1}\) whose differential equation reads as:
\[\ddot{y}_{1}-2\alpha\dot{z}_{0}\dot{y}_{1}=2\alpha\dot{y}_{0}\dot{z}_{1}+x_{0} \dot{x}_{0}+\frac{1}{2\alpha}\dot{z}_{0}e^{2\alpha z_{0}}+\frac{\alpha}{2}(y_{0} ^{2}-x_{0}^{2})\dot{z}_{0}, \tag{5.81}\]
we multiply with \(e^{-2\alpha z_{0}}\) and integrate to find,
\[\dot{y}_{1}=2\alpha c_{2}z_{1}e^{2\alpha z_{0}}+c_{1}e^{2\alpha z _{0}}\int x_{0}\mathrm{d}t\\ +\frac{1}{2\alpha}z_{0}e^{2\alpha z_{0}}-\frac{e^{2\alpha z_{0}}} {4}\int(y_{0}^{2}-x_{0}^{2})\partial_{t}e^{-2\alpha z_{0}}\mathrm{d}t+c_{8}e ^{2\alpha z_{0}}, \tag{5.82}\]
where \(c_{8}\) is a constant of integration. Lastly we write down the differential equation for \(z_{1}\):
\[\ddot{z}_{1}-2\alpha^{2}e^{2\alpha z_{0}}(c_{1}^{2}+c_{2}^{2})z_{1}+2\alpha(c _{1}\dot{x}_{1}+c_{2}\dot{y}_{1})=-\frac{\dot{y}_{0}}{2\alpha}-\frac{\alpha}{ 2}\Big{[}(y_{0}^{2}-x_{0}^{2})c_{2}+2c_{1}x_{0}y_{0}\Big{]}. \tag{5.83}\]
When we use the forms of \(\dot{x}_{1},\dot{y}_{1}\) in Equations (5.80,5.82) we obtain:
\[e^{-2\alpha z_{0}}\ddot{z}_{1}+2\alpha^{2}(c_{1}^{2}+c_{2}^{2})z _{1}=-\frac{c_{2}}{2\alpha}-\frac{\alpha}{2}e^{-2\alpha z_{0}}[(y_{0}^{2}-x_ {0}^{2})c_{2}+2c_{1}x_{0}y_{0}]\\ +\alpha c_{1}\int x_{0}y_{0}\partial_{t}e^{-2\alpha z_{0}}\mathrm{ d}t-2\alpha(c_{1}c_{7}+c_{2}c_{8})\\ -c_{2}z_{0}+\frac{\alpha}{2}c_{2}\int(y_{0}^{2}-x_{0}^{2}) \partial_{t}e^{-2\alpha z_{0}}. \tag{5.84}\]
We already calculated the homogeneous solution (see Equation (5.69)):
\[z_{\mathrm{1h}}=-c_{10}+(c_{9}+\alpha c_{3}c_{10}(t-c_{4}))\tanh(\alpha c_{3} (t-c_{4})), \tag{5.85}\]
where \(c_{9},c_{10}\) are constants of integration. We can calculate part of the particular solution; it is given as follows:
\[z_{\mathrm{1p}}=-\frac{c_{2}}{4\alpha^{3}(c_{1}^{2}+c_{2}^{2})}-\frac{1}{ \alpha}\frac{c_{1}c_{7}+c_{2}c_{8}}{c_{1}^{2}+c_{2}^{2}}+f(t), \tag{5.86}\]
where \(f(t)\) satisfies:
\[e^{-2\alpha z_{0}}\ddot{f}+2\alpha^{2}(c_{1}^{2}+c_{2}^{2})f=-\frac{ \alpha}{2}e^{-2\alpha z_{0}}[(y_{0}^{2}-x_{0}^{2})c_{2}+2c_{1}x_{0}y_{0}]\\ -c_{2}z_{0}+\alpha c_{1}\int x_{0}y_{0}\partial_{t}e^{-2\alpha z_ {0}}\mathrm{d}t+\frac{\alpha}{2}c_{2}\int(y_{0}^{2}-x_{0}^{2})\partial_{t}e^{- 2\alpha z_{0}}\mathrm{d}t. \tag{5.87}\]
We simplify the equation and find:
\[e^{-2\alpha z_{0}}\ddot{f}+2\alpha^{2}(c_{1}^{2}+c_{2}^{2})f=-c_{2}z_{0}+\frac {\alpha}{2}c_{1}c_{2}\int x_{0}\mathrm{d}t-\alpha\left(c_{1}^{2}+\frac{c_{2}^ {2}}{2}\right)\int y_{0}\mathrm{d}t. \tag{5.88}\]
When we do the integrals and put the values of functions \(x_{0},y_{0},z_{0}\) of the geodesic equation, we find the solutions as:
\[f =\frac{A_{5}}{4}[2\log\cosh(\tau)-\tau\tanh(\tau)]\] \[\quad+\frac{1}{2\alpha^{2}(c_{1}^{2}+c_{2}^{2})}\left[\left(\frac {c_{1}c_{2}c_{5}}{2c_{3}}+\frac{c_{6}(2c_{1}^{2}+c_{2}^{2})}{2}\right)\tau- \frac{c_{2}}{2\alpha}\log\left(\frac{c_{3}^{2}}{c_{1}^{2}+c_{2}^{2}}\right)\right] \tag{5.89}\]
where
\[A_{5}=\frac{1}{\alpha^{3}(c_{1}^{2}+c_{2}^{2})}\left(c_{2}+\frac{c_{1}^{2}c_{ 2}}{2(c_{1}^{2}+c_{2}^{2})}+\frac{2c_{2}c_{3}(2c_{1}^{2}+c_{2}^{2})}{2(c_{1}^ {2}+c_{2}^{2})}\right), \tag{5.90}\]
and remember that \(\tau=\alpha c_{3}(t-c_{4})\). In the end, we find \(z_{1}\) as \(z_{1}=z_{1\mathrm{h}}+z_{1\mathrm{p}}\) where
\[z_{1\mathrm{h}} =-c_{10}+(c_{9}+c_{10}\tau)\tanh(\tau), \tag{5.91}\] \[z_{1\mathrm{p}} =-\frac{c_{2}}{4\alpha^{3}(c_{1}^{2}+c_{2}^{2})}-\frac{1}{\alpha }\frac{c_{1}c_{7}+c_{2}c_{8}}{c_{1}^{2}+c_{2}^{2}}\] \[\quad+\frac{A_{5}}{4}[2\log\cosh(\tau)-\tau\tanh(\tau)]\] \[\quad+\frac{1}{2\alpha^{2}(c_{1}^{2}+c_{2}^{2})}\left[\left(\frac {c_{1}c_{2}c_{5}}{2c_{3}}+\frac{c_{6}(2c_{1}^{2}+c_{2}^{2})}{2}\right)\tau- \frac{c_{2}}{2\alpha}\log\left(\frac{c_{3}^{2}}{c_{1}^{2}+c_{2}^{2}}\right) \right]. \tag{5.92}\]
We would like to write \(z_{1}\) as:
\[z_{1} =-c_{10}+(c_{9}+c_{10}\tau)\tanh(\tau)+A_{1}+A_{2}\tau+\frac{A_{5}}{4}[2 \log\cosh(\tau)-\tau\tanh(\tau)], \tag{5.93}\]
where
\[A_{1}=-\frac{c_{2}}{4\alpha^{3}(c_{1}^{2}+c_{2}^{2})}-\frac{1}{ \alpha}\frac{c_{1}c_{7}+c_{2}c_{8}}{c_{1}^{2}+c_{2}^{2}}-\frac{1}{2\alpha^{2}(c_ {1}^{2}+c_{2}^{2})}\frac{c_{2}}{2\alpha}\log\left(\frac{c_{3}^{2}}{c_{1}^{2}+c _{2}^{2}}\right), \tag{5.94}\] \[A_{2}=\frac{1}{2\alpha^{2}(c_{1}^{2}+c_{2}^{2})}\left(\frac{c_{1 }c_{2}c_{5}}{2c_{3}}+\frac{c_{6}(2c_{1}^{2}+c_{2}^{2})}{2}\right). \tag{5.95}\]
All we need to do now to finish this section is to put the value of \(z_{1}\) we have just provided into Equations (5.80,5.82) (for \(\dot{x}_{1}\) and \(\dot{y}_{1}\)) and integrate them. After a few algebraic manipulations we obtain \(\dot{x}_{1},\dot{y}_{1}\) from Equations (5.80,5.82) as follows:
\[\dot{x}_{1}=2\alpha c_{1}z_{1}e^{2\alpha z_{0}}-\frac{1}{2}x_{0}y_{0}+\frac{1 }{2}e^{2\alpha z_{0}}(c_{1}c_{6}-c_{2}c_{5})(t-c_{4})+c_{7}e^{2\alpha z_{0}}, \tag{5.96}\]
and
\[\dot{y}_{1}=2\alpha c_{2}z_{1}e^{2\alpha z_{0}}+\frac{1}{2}\alpha z _{0}e^{2\alpha z_{0}}\\ +\frac{3c_{1}}{4}e^{2\alpha z_{0}}\int x_{0}\mathrm{d}t+\frac{c_{ 2}}{4}e^{2\alpha z_{0}}\int y_{0}\mathrm{d}t-\frac{y_{0}^{2}-x_{0}^{2}}{4}. \tag{5.97}\]
where we absorbed a constant in \(c_{7}\). When we perform the integrations via Mathematica [28] we obtain:
\[x_{1} =\frac{1}{4\left(c_{1}^{2}+c_{2}^{2}\right)^{2}}\Bigg{[}-4\alpha A_{ 5}c_{1}\left(c_{1}^{2}+c_{2}^{2}\right)c_{3}^{2}\left(t-c_{4}\right)\] \[\quad+c_{1}\left(c_{1}^{2}+c_{2}^{2}\right)c_{3}\left(8\alpha A_{ 2}c_{3}\left(t-c_{4}\right)+8A_{1}+3A_{5}-4c_{10}\right)\tanh\left(\alpha c_{3 }\left(t-c_{4}\right)\right)\] \[\quad+c_{1}\left(c_{1}^{2}+c_{2}^{2}\right)c_{3}\left(\alpha c_{ 3}\left(A_{5}-4c_{10}\right)\left(t-c_{4}\right)-4c_{9}\right)\text{sech}^{2} \left(\alpha c_{3}\left(t-c_{4}\right)\right)\] \[\quad\qquad\qquad\qquad-8A_{2}c_{1}\left(c_{1}^{2}+c_{2}^{2} \right)c_{3}\log\left(\cosh\left(\alpha c_{3}\left(t-c_{4}\right)\right)\right)\] \[\quad+4A_{5}c_{1}\left(c_{1}^{2}+c_{2}^{2}\right)c_{3}\tanh \left(\alpha c_{3}\left(t-c_{4}\right)\right)\log\left(\cosh\left(\alpha c_{3 }\left(t-c_{4}\right)\right)\right)\] \[\quad\qquad\qquad\qquad\qquad\qquad+\frac{2c_{1}c_{2}c_{3}\tanh \left(\alpha c_{3}\left(t-c_{4}\right)\right)}{\alpha^{3}}\] \[\quad-\frac{\left(\alpha\left(c_{1}^{2}+c_{2}^{2}\right)c_{5}-c_ {1}c_{3}\right)\left(\alpha\left(c_{1}^{2}+c_{2}^{2}\right)c_{6}-c_{2}c_{3} \right)\log\left(\tanh\left(\alpha c_{3}\left(t-c_{4}\right)\right)+1\right)} {\alpha^{3}c_{3}}\] \[\quad+\frac{\left(\alpha\left(c_{1}^{2}+c_{2}^{2}\right)c_{5}+c_ {1}c_{3}\right)\left(\alpha\left(c_{1}^{2}+c_{2}^{2}\right)c_{6}+c_{2}c_{3} \right)\log\left(1-\tanh\left(\alpha c_{3}\left(t-c_{4}\right)\right)\right)} {\alpha^{3}c_{3}}-\] \[\qquad\qquad\qquad\qquad\frac{2\left(c_{1}^{2}+c_{2}^{2}\right) \left(c_{1}c_{6}-c_{2}c_{5}\right)\log\left(\cosh\left(\alpha c_{3}\left(t-c_ {4}\right)\right)\right)}{\alpha^{2}}\] \[\quad+\frac{2\left(c_{1}^{2}+c_{2}^{2}\right)c_{3}\left(c_{1}c_{6 }-c_{2}c_{5}\right)\left(t-c_{4}\right)\tanh\left(\alpha c_{3}\left(t-c_{4} \right)\right)}{\alpha}\] \[\qquad\qquad\qquad\qquad\qquad\qquad\qquad+\frac{4\left(c_{1}^{2} +c_{2}^{2}\right)c_{3}c_{7}\tanh\left(\alpha c_{3}\left(t-c_{4}\right)\right)} {\alpha}\Bigg{]}+c_{11}, \tag{5.98}\]
and
\[y_{1} =\frac{1}{8\alpha^{3}\left(c_{1}^{2}+c_{2}^{2}\right){}^{2}c_{3}} \Bigg{[}-8\alpha^{4}A_{5}c_{2}\left(c_{1}^{2}+c_{2}^{2}\right)c_{3}^{3}\left(t- c_{4}\right)\] \[\quad+2\alpha^{3}c_{2}\left(c_{1}^{2}+c_{2}^{2}\right)c_{3}^{2} \left(\alpha c_{3}\left(A_{5}-4c_{10}\right)\left(t-c_{4}\right)-4c_{9}\right) \text{sech}^{2}\left(\alpha c_{3}\left(t-c_{4}\right)\right)\] \[\quad\qquad\qquad-16\alpha^{3}A_{2}c_{2}\left(c_{1}^{2}+c_{2}^{2 }\right)c_{3}^{2}\log\left(\cosh\left(\alpha c_{3}\left(t-c_{4}\right)\right)\right)\] \[+2c_{3}^{2}\tanh\left(\alpha c_{3}\left(t-c_{4}\right)\right) \left(\alpha^{3}c_{2}\left(c_{1}^{2}+c_{2}^{2}\right)\left(8\alpha A_{2}c_{3} \left(t-c_{4}\right)+8A_{1}+3A_{5}-4c_{10}\right)\] \[\qquad\qquad\qquad\qquad+4\alpha^{3}A_{5}c_{2}\left(c_{1}^{2}+c_ {2}^{2}\right)\log\left(\cosh\left(\alpha c_{3}\left(t-c_{4}\right)\right)\right)\] \[\quad-2\alpha^{2}\left(c_{1}^{2}+c_{2}^{2}\right)+3c_{1}\left( \alpha^{2}\left(c_{1}^{2}+c_{2}^{2}\right)c_{5}t+c_{1}\right)+c_{2}\left( \alpha^{2}\left(c_{1}^{2}+c_{2}^{2}\right)c_{6}t+c_{2}\right)\] \[+\alpha^{2}\left(c_{1}^{2}+c_{2}^{2}\right)\log\left(\frac{c_{3}^ {2}\text{sech}^{2}\left(\alpha c_{3}\left(t-c_{4}\right)\right)}{c_{1}^{2}+c_ {2}^{2}}\right)+3c_{1}^{2}\log\left(\cosh\left(\alpha c_{3}\left(t-c_{4}\right) \right)\right)\] \[\qquad\qquad\qquad+c_{2}^{2}\log\left(\cosh\left(\alpha c_{3} \left(t-c_{4}\right)\right)\right)-c_{1}^{2}+c_{2}^{2}\Big{)}\] \[+4\alpha^{3}\left(c_{1}^{2}+c_{2}^{2}\right)c_{3}^{3}\left(t-c_{ 4}\right)+6\alpha c_{1}^{2}c_{3}^{3}\left(c_{4}-t\right)+2\alpha c_{2}^{2}c_{3 }^{3}\left(c_{4}-t\right)\] \[\quad+\left(c_{1}c_{3}-\alpha\left(c_{1}^{2}+c_{2}^{2}\right)c_{5} \right){}^{2}\log\left(\tanh\left(\alpha c_{3}\left(t-c_{4}\right)\right)+1\right)\] \[\quad-\left(\alpha\left(c_{1}^{2}+c_{2}^{2}\right)c_{5}+c_{1}c_{3 }\right){}^{2}\log\left(1-\tanh\left(\alpha c_{3}\left(t-c_{4}\right)\right)\right)\] \[\quad-\left(c_{2}c_{3}-\alpha\left(c_{1}^{2}+c_{2}^{2}\right)c_{6 }\right){}^{2}\log\left(\tanh\left(\alpha c_{3}\left(t-c_{4}\right)\right)+1\right)\] \[\quad+\left(\alpha\left(c_{1}^{2}+c_{2}^{2}\right)c_{6}+c_{2}c_{3 }\right){}^{2}\log\left(1-\tanh\left(\alpha c_{3}\left(t-c_{4}\right)\right)\right)\] \[\qquad\qquad-6\alpha c_{1}\left(c_{1}^{2}+c_{2}^{2}\right)c_{3}c_ {5}\log\left(\cosh\left(\alpha c_{3}\left(t-c_{4}\right)\right)\right)\] \[\qquad\qquad-2\alpha c_{2}\left(c_{1}^{2}+c_{2}^{2}\right)c_{3}c_ {6}\log\left(\cosh\left(\alpha c_{3}\left(t-c_{4}\right)\right)\right)\Bigg{]}+c _{12}. \tag{5.99}\]
where \(c_{11},c_{12}\) are new constants of integration. To summarize, the solution for the Killing magnetic curve, up to first order in \(B_{5}\), is as follows:
\[x =x_{0}+B_{5}x_{1}+\mathcal{O}(B_{5}^{2}), \tag{5.100}\] \[y =y_{0}+B_{5}y_{1}+\mathcal{O}(B_{5}^{2}),\] (5.101) \[z =z_{0}+B_{5}z_{1}+\mathcal{O}(B_{5}^{2}). \tag{5.102}\]
where \(x_{0},y_{0},z_{0}\) are solutions for the geodesic equation and \(x_{1},y_{1},z_{1}\) are given in Equations (5.98,5.99,5.93) respectively. Last but not least, while making use of \(x_{1},y_{1},z_{1}\), remember that \(\tau=\alpha c_{3}(t-c_{4})\).
### Magnetic Trajectory by the Sixth Killing Vector Field
Using the expression found for \(F_{(6)}\) in Equation (5.13) and Equation (5.7), then turning them into the coordinate basis we obtain the following set of equations:
\[\ddot{x}-2\alpha\dot{x}\dot{z} =-\frac{B_{6}}{2\alpha}(2y\alpha\dot{y}+e^{2\alpha z}\dot{z}+\alpha ^{2}(x^{2}-y^{2})\dot{z}), \tag{5.103}\] \[\ddot{y}-2\alpha\dot{y}\dot{z} =B_{6}y(\dot{x}-\alpha x\dot{z}),\] (5.104) \[\ddot{z}+\alpha(\dot{x}^{2}+\dot{y}^{2})e^{-2\alpha z} =\frac{B_{6}}{2\alpha}\left(\alpha^{2}e^{-2\alpha z}((x^{2}-y^{2} )\dot{x}+2xy\dot{y})+\dot{x}\right) \tag{5.105}\]
We will solve these equations of motion using a perturbation analysis. We write down \(x=x_{0}+B_{6}x_{1},y=y_{0}+B_{6}y_{1},z=z_{0}+B_{6}z_{1}\) up to first order in \(B_{6}\) and note that the functions with zero indices are solutions of the geodesic equation of motion that we have found in Section 4. Because of the symmetry between \(K_{(5)}\) and \(K_{(6)}\) we do not need to solve the equations for this case explicitly: we just map \(x\leftrightarrow y\) and \(B_{5}\leftrightarrow-B_{6}\). So the perturbative solution found in the previous section is valid for this case once the symmetry transformations are done.
## 6 Conclusion
In this study, we investigated the \(\mathbb{H}^{3}\) manifold, which is a \((-\alpha)\)-Kenmotsu manifold admitting six Killing vector fields. We have solved the geodesic equation of motion analytically. We calculated the motion of a charged particle under the magnetic field \(B_{i}K_{(i)}\), up to first order in \(B_{i}\), analytically using perturbation theory for all Killing vector fields \(K_{(i)}\). We have put a scaling factor \(B_{i}\) in front of the Killing vectors to manage the strength of the magnetic field and make it amenable to perturbative analysis. We used units where \(q/m=-1\); however, notice that the unit of each \(B_{i}\) is not necessarily the unit of a magnetic field. We also proved that 3-dimensional \((\alpha)\)-Kenmotsu manifolds cannot have any magnetic vector field in the direction of their Reeb vector fields. This result is interesting because most of the studies related to magnetic curves of almost contact manifolds in the literature deal with Reeb magnetic curves.
In Appendix A we have plotted the geodesic solution, the 1st order perturbative result, and the numerical solution for the second Killing magnetic field (see Section 5.2), where the \(y\)-coordinate is suppressed, i.e. \(y=0\). In Appendix B we have shown the calculation steps explicitly for determining the Killing vector fields.
## Acknowledgements
We would like to thank the anonymous referee for providing constructive criticisms and clarifications.
## Appendix A Representation of a Solution in the Poincare Disk
In this section, we make a connection between the hyperbolic manifold we used (\(\mathbb{H}^{3}\)) and the Poincare Disk. The metric we have is:
\[\mathrm{d}s^{2}=e^{-2\alpha z}(\mathrm{d}x^{2}+\mathrm{d}y^{2})+\mathrm{d}z^{2}.\] (A.1)
By multiplying the metric with \(\alpha^{2}\) we obtain:
\[\mathrm{d}\sigma^{2}=e^{-2z}(\mathrm{d}x^{2}+\mathrm{d}y^{2})+\mathrm{d}z^{2},\] (A.2)
where we map \(\alpha x\to x,\alpha y\to y,\alpha z\to z\) (This is equivalent to setting \(\alpha=1\)). If we suppress the \(y\) coordinate, we obtain:
\[\mathrm{d}\Sigma^{2}=e^{-2z}\mathrm{d}x^{2}+\mathrm{d}z^{2}.\] (A.3)
Let us define \(y=e^{z}\) (this is _not_ the original '\(y\)' coordinate.) Then the metric becomes:
\[\mathrm{d}\Sigma^{2}=\frac{\mathrm{d}x^{2}+\mathrm{d}y^{2}}{y^{2}},\] (A.4)
which is the metric on the Poincare half plane [26]. Considering this plane, defined by \(\{(x,y)\mid x\in\mathbb{R},y\in\mathbb{R}^{+}\}\), as the upper half complex plane with \(z=x+iy\), and using the following Mobius transformation (this particular form is known as the Cayley transformation [17]), we map the upper half complex plane onto the unit disk:
\[\omega(z)=\frac{z-i}{z+i}.\] (A.5)
The Cartesian coordinates (\(\omega_{x}\) is the real part, \(\omega_{y}\) is the imaginary part) that correspond to \(\omega(x+iy)\) is then calculated as:
\[\omega_{x}=\frac{x^{2}+y^{2}-1}{x^{2}+(y+1)^{2}},\quad\omega_{y}=-\frac{2x}{x ^{2}+(y+1)^{2}}.\] (A.6)
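For completeness, a minimal sketch of this mapping (assuming NumPy; the input is a trajectory in the half-plane coordinates \(x,y\) with \(y>0\)):

```python
import numpy as np

def to_poincare_disk(x, y):
    """Cayley transform w = (z - i)/(z + i), z = x + i y; Eqs. (A.5)-(A.6)."""
    denom = x**2 + (y + 1.0)**2
    return (x**2 + y**2 - 1.0) / denom, -2.0 * x / denom

# Example: the boundary y -> 0+ of the half plane is mapped onto the unit circle
wx, wy = to_poincare_disk(np.linspace(-5, 5, 11), 1e-9)
print(np.allclose(wx**2 + wy**2, 1.0))
```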
Representation of solutions in the Poincare Disk is useful, because it allows us to visualize the asymptotic behavior of solutions as \(t\rightarrow\pm\infty\). In a way, it may be seen as similar to Penrose diagrams [25] in general relativity. See Figure 1 for a comparison of our 1st order result with the numerical solution.
## Appendix B Killing Vector Fields of \(\mathbb{H}^{3}\)
We show explicit calculation steps for the Killing vector fields by using a general ansatz such as \(K=K^{x}(x,y,z)\partial_{x}+K^{y}(x,y,z)\partial_{y}+K^{z}(x,y,z)\partial_{z}\). The following partial differential equations were obtained from the Killing equation 3 (2.8)
Footnote 3: Note that general vector field can be written in index form as \(K=K^{\mu}\partial_{\mu}\) and we lower the indices by using the metric so that \(K_{\alpha}=g_{\mu\alpha}K^{\mu}\) can be used in (2.8). \(K^{\mu}\) coefficients correspond to the functions in our ansatz, \(K^{x}(x,y,z)\) etc.
\[\begin{array}{c}-\alpha K^{z}(x,y,z)+\partial_{x}K^{x}(x,y,z)=0\\ -\alpha K^{z}(x,y,z)+\partial_{y}K^{y}(x,y,z)=0\\ e^{-2\alpha z}\partial_{z}K^{y}(x,y,z)+\partial_{y}K^{z}(x,y,z)=0\\ \partial_{y}K^{x}(x,y,z)+\partial_{x}K^{y}(x,y,z)=0\\ e^{-2\alpha z}\partial_{z}K^{x}(x,y,z)+\partial_{x}K^{z}(x,y,z)=0\\ \partial_{z}K^{z}(x,y,z)=0\end{array}\] (B.1)
Figure 1: Here we see three plots. 1) The geodesic solution, 2) The 1st order solution, 3) The numerical solution. We see that the 1st order solution properly approximates the numerical solution when \(|t|\) is less than some upper-bound, but later the numerical solution takes over and displays complex behavior. In the plot, \(t\in(-30,30)\). The parameters are as follows: \(c_{1}=1,c_{2}=0,c_{3}=1,c_{4}=0,c_{5}=1,c_{6}=0,c_{7}=1,B_{2}=0.1\). and for the compatible numerical solution \(x(0)=1.1,x^{\prime}(0)=0.95,z(0)=0,z^{\prime}(0)=-0.1\).
From the last equation we have \(K^{z}\left(x,y,z\right)=c+K^{z}\left(x,y\right)\). Inserting this in equations (B.1) and writing with a more compact notation (i.e. the \(K^{i}\)'s are understood to depend on \(x,y,z\)) we get,
\[-\alpha\left(c+K^{z}\right)+\partial_{x}K^{x}=0\] (B.2)
\[-\alpha\left(c+K^{z}\right)+\partial_{y}K^{y}=0\] (B.3)
\[\partial_{y}K^{z}+e^{-2\alpha z}\partial_{z}K^{y}=0\] (B.4)
\[\partial_{y}K^{x}+\partial_{x}K^{y}=0\] (B.5)
\[\partial_{x}K^{z}+e^{-2\alpha z}\partial_{z}K^{x}=0\] (B.6)
From (B.2) and (B.3)
\[\partial_{x}K^{x}=\partial_{y}K^{y}\] (B.7)
Taking \(\partial_{z}\) of (B.2) and (B.3)
\[\partial_{z}\partial_{x}K^{x}=0\,\quad\partial_{z}\partial_{y}K^{y}=0\] (B.8)
Taking \(\partial_{x}\) of (B.4) and \(\partial_{y}\) of (B.6)
\[\partial_{x}\partial_{y}K^{z}=-e^{-2\alpha z}\partial_{z}\partial_{x}K^{y}=- e^{-2\alpha z}\partial_{z}\partial_{y}K^{x}\] (B.9)
From (B.9) and (B.5)
\[\partial_{z}\partial_{x}K^{y}=\partial_{z}\partial_{y}K^{x}\,\quad \partial_{y}K^{x}=-\partial_{x}K^{y}\implies\partial_{z}\partial_{y}K^{x}=- \partial_{z}\partial_{x}K^{y}\] \[\implies\partial_{z}\partial_{x}K^{y}=-\partial_{z}\partial_{x}K ^{y}\implies\partial_{z}\partial_{x}K^{y}=0\,\quad\partial_{z}\partial_{y}K^{x}=0\] (B.10)
Equations (B.8) and (B.10) dictates separation of \(z\) variable for \(K^{x}\left(x,y,z\right)\) and \(K^{y}\left(x,y,z\right)\). Hence, the general solution of \(K^{x}\) and \(K^{y}\) should be in the following form
\[K^{x}\left(x,y,z\right)=h_{1}(z)+f_{1}(x,y)\quad,\quad K^{y}\left(x,y,z\right) =h_{2}(z)+f_{2}(x,y)\] (B.11)
Inserting these forms back into Killing equations (B.1) we get
\[-\alpha\left(c+K^{z}\right)+\partial_{x}f_{1}=0\] (B.12)
\[-\alpha\left(c+K^{z}\right)+\partial_{y}f_{2}=0\] (B.13)
\[e^{-2\alpha z}h_{2}^{\prime}(z)+\partial_{y}K^{z}=0\] (B.14)
\[\partial_{y}f_{1}+\partial_{x}f_{2}=0\] (B.15)
\[e^{-2\alpha z}h_{1}^{\prime}(z)+\partial_{x}K^{z}=0\] (B.16)
From (B.12) and (B.13) we have
\[\partial_{x}f_{1}=\partial_{y}f_{2}\] (B.17)
Using (B.17) and (B.15) we get the following,
\[\partial_{x}^{2}f_{1}=\partial_{x}\partial_{y}f_{2}\quad,\quad \partial_{y}^{2}f_{1}=-\partial_{x}\partial_{y}f_{2}\] \[\partial_{y}^{2}f_{2}=\partial_{x}\partial_{y}f_{1}\quad,\quad \partial_{x}^{2}f_{2}=-\partial_{x}\partial_{y}f_{1}\] \[\implies\partial_{x}^{2}f_{1}+\partial_{y}^{2}f_{1}=0\quad,\quad \partial_{x}^{2}f_{2}+\partial_{y}^{2}f_{2}=0\] (B.18)
The last two equations of (B.18) are Laplace equations in 2-dimensions. We can write a general solution for those equations as 4
Footnote 4: A general solution to the Laplace equation is given as \(f(x,y)=(A\cosh(\lambda x)+B\sinh(\lambda x))(C\cos(\lambda y)+D\sin(\lambda y))\). But this solution leads to inconsistencies in Killing equations. Therefore, we use a more simplified general solution which can potentially solve the Killing equations.
\[f_{1}\left(x,y\right)=c_{1}(x^{2}-y^{2})+c_{2}xy+c_{3}x+c_{4}y+c_{5}\] (B.19)
\[f_{2}\left(x,y\right)=c_{6}(x^{2}-y^{2})+c_{7}xy+c_{8}x+c_{9}y+c_{10}\] (B.20)
Inserting (B.19) and (B.20) in equations (B.12), (B.13) and (B.15)
\[2xc_{1}+yc_{2}+c_{3}-\alpha c-\alpha K^{2}=0\] (B.21)
\[-2yc_{6}+xc_{7}+c_{9}-\alpha c-\alpha K^{2}=0\] (B.22)
\[\left(c_{2}+2c_{6}\right)x+\left(c_{7}-2c_{1}\right)y+c_{4}+c_{8}=0\] (B.23)
From these equations we get
\[c_{3}=c_{9}=\alpha c\,\quad c_{2}=-2c_{6}\,\quad c_{7}=2c_{1}\,\quad c_{4}=-c_{8}\] (B.24)
Equations (B.21) and (B.22) give information about the general form of \(K^{z}\left(x,y\right)\), which should be linear with respect to \(x\) and \(y\). Thus,
\[K^{z}\left(x,y\right)=c_{12}x+c_{13}y\] (B.25)
Using (B.25) for \(K^{z}\left(x,y\right)\) in Killing equations we get,
\[2xc_{1}+yc_{2}+c_{3}-\alpha c-x\alpha c_{12}-y\alpha c_{13}=0\] (B.26)
\[2yc_{6}-xc_{7}-c_{9}+\alpha c+x\alpha c_{12}+y\alpha c_{13}=0\] (B.27)
\[c_{13}+e^{-2\alpha z}h_{2}^{\prime}(z)=0\] (B.28)
\[c_{4}+x\left(c_{2}+2c_{6}\right)+y\left(-2c_{1}+c_{7}\right)+c_{8}=0\] (B.29)
\[c_{12}+e^{-2\alpha z}h_{1}^{\prime}(z)=0\] (B.30)
Solving these all together we get
\[c_{12}=\frac{c_{7}}{\alpha}=\frac{2c_{1}}{\alpha}\,\quad c_{13}=\frac{c_{2}} {\alpha}=\frac{-2c_{6}}{\alpha}\,\quad c_{3}=c_{9}=\alpha c\,\quad c_{8}=-c_{4}\] (B.31)
From (B.28) and (B.30)
\[h_{1}\left(z\right)=-\frac{c_{12}}{2\alpha}e^{2\alpha z}\,\quad h_{2}\left(z \right)=-\frac{c_{13}}{2\alpha}e^{2\alpha z}\] (B.32)
After relabeling the constants and using the relations obtained in (B.31) and (B.32) we reach the final general solution for \(K^{x},K^{y},K^{z}\)
\[K^{x} =\left(\frac{\alpha}{2}\left(x^{2}-y^{2}\right)-\frac{e^{2\alpha z}}{2\alpha}\right)c_{1}+\alpha xyc_{2}+\alpha xc_{3}+yc_{4}+c_{5}\] \[K^{y} =\left(\frac{\alpha}{2}\left(y^{2}-x^{2}\right)-\frac{e^{2\alpha z}}{2\alpha}\right)c_{2}+\alpha xyc_{1}+\alpha yc_{3}-xc_{4}+c_{6}\] (B.33) \[K^{z} =c_{1}x+c_{2}y+c_{3}\]
Recall that we have taken the general ansatz \(K=K^{x}\partial_{x}+K^{y}\partial_{y}+K^{z}\partial_{z}\) for Killing vector fields. Using the functions given in (B.33) and adjusting the constants accordingly (\(c_{i}=1\) and \(c_{j}=0\) for all \(j\neq i\)) we obtain 6 independent Killing vectors.
\[\mathbf{K_{1}} =\partial_{x},\quad\mathbf{K_{2}}=\partial_{y},\quad\mathbf{K_{3} }=y\partial_{x}-x\partial_{y},\quad\mathbf{K_{4}}=\alpha x\partial_{x}+\alpha y \partial_{y}+\partial_{z},\] \[\mathbf{K_{5}} =\left(\frac{\alpha}{2}\left(x^{2}-y^{2}\right)-\frac{e^{2\alpha z }}{2\alpha}\right)\partial_{x}+\alpha xy\partial_{y}+x\partial_{z},\] \[\mathbf{K_{6}} =\alpha xy\partial_{x}+\left(\frac{\alpha}{2}\left(y^{2}-x^{2} \right)-\frac{e^{2\alpha z}}{2\alpha}\right)\partial_{y}+y\partial_{z}\] (B.34)
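These six vector fields can be checked directly: the Lie derivative of the metric (A.1) along each of them must vanish. A minimal sketch of this verification (assuming SymPy):

```python
import sympy as sp

x, y, z, a = sp.symbols('x y z alpha', real=True)
X = [x, y, z]
g = sp.diag(sp.exp(-2*a*z), sp.exp(-2*a*z), 1)   # H^3 metric, Eq. (A.1)

K_list = [
    [1, 0, 0],                                                    # K_1
    [0, 1, 0],                                                    # K_2
    [y, -x, 0],                                                   # K_3
    [a*x, a*y, 1],                                                # K_4
    [a*(x**2 - y**2)/2 - sp.exp(2*a*z)/(2*a), a*x*y, x],          # K_5
    [a*x*y, a*(y**2 - x**2)/2 - sp.exp(2*a*z)/(2*a), y],          # K_6
]

def lie_derivative_metric(K):
    # (L_K g)_{mu nu} = K^l d_l g_{mu nu} + g_{l nu} d_mu K^l + g_{mu l} d_nu K^l
    L = sp.zeros(3, 3)
    for m in range(3):
        for n in range(3):
            L[m, n] = sum(K[l]*sp.diff(g[m, n], X[l])
                          + g[l, n]*sp.diff(K[l], X[m])
                          + g[m, l]*sp.diff(K[l], X[n]) for l in range(3))
    return L.applyfunc(sp.simplify)

for i, K in enumerate(K_list, 1):
    print(i, lie_derivative_metric(K) == sp.zeros(3, 3))  # True for every K_i
```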
Note that maximally symmetric spaces have \(n\left(n+1\right)/2\) independent Killing vectors, where \(n\) is the dimension of the space. Therefore, having \(6\) independent Killing vectors, \(\mathbb{H}^{3}\) is a maximally symmetric space.
|
2309.13492 | Portrait Stylization: Artistic Style Transfer with Auxiliary Networks
for Human Face Stylization | Today's image style transfer methods have difficulty retaining humans face
individual features after the whole stylizing process. This occurs because the
features like face geometry and people's expressions are not captured by the
general-purpose image classifiers like the VGG-19 pre-trained models. This
paper proposes the use of embeddings from an auxiliary pre-trained face
recognition model to encourage the algorithm to propagate human face features
from the content image to the final stylized result. | Thiago Ambiel | 2023-09-23T23:02:32Z | http://arxiv.org/abs/2309.13492v1 | # Portrait Stylization: Artistic Style Transfer with Auxiliary Networks for Human Face Stylization
###### Abstract
Today's image style transfer methods have difficulty retaining the individual features of human faces after the whole stylizing process. This occurs because features like face geometry and people's expressions are not captured by general-purpose image classifiers like the pre-trained VGG-19 models. This paper proposes the use of embeddings from an auxiliary pre-trained face recognition model to encourage the algorithm to propagate human face features from the content image to the final stylized result.
## 1 Introduction
The style transfer technique is currently one of the most studied areas in deep learning applied to digital art. It consists of reconstructing a given _content image_ - the image that will seed the overall structure of the resulting image, e.g. an animal, a landscape or a human portrait - with the style of a given _style image_ - the image that will seed the texture of the resulting image, e.g. an oil painting or an abstract artwork - producing the final stylized _result image_ output.
This technique made possible new forms of digital art, like reproducing classical paintings with modern-day content or creating new effects that weren't previously possible. Despite that, current state-of-the-art algorithms show limitations when applied to images with human faces. These methods yield face deformations in the final result image, which makes it difficult to recreate portrait paintings such as _The Mona Lisa_ by Leonardo da Vinci or the _Self Portrait_ by Vincent van Gogh.
The main cause of this is that the human face geometry is not passed as an important content criterion, as the VGG-Network [Simonyan and Zisserman 2015] cannot capture many relevant features of the human face. This problem can be solved by simply using a face recognition model like FaceNet [Schroff et al. 2015] as an auxiliary model for content feature extraction. In that way, the human face geometry and other relevant facial features can be propagated to the final _result image_.
## 2 Background
This problem can be formulated as finding the optimal changes to a _content image_ that minimize the difference between its texture and the texture of a _style image_ without losing the high-level information contained in it. These content and style differences can be calculated through the internal representations of a pre-trained deep convolutional neural network like the VGG-Network, given that, following the paper _A Neural Algorithm of Artistic Style_ [Gatys et al. 2015], the higher layers of these networks can capture high-level information - like textures and color palettes - from their input images, at the same
time that their lower layers can capture low-level information like the object's geometry and its colors.
There are many ways of finding these optimal changes, but here the focus goes to the traditional optimization technique. This technique was first introduced in Gatys' 2015 paper and uses the quasi-Newton _Limited-memory BFGS_ optimizer [10] to solve the style transfer problem by optimizing the pixels of an _initial image_, which can be noise or the _content image_ itself, in the same way that the weights of a deep learning model are optimized, through the minimization of a criterion function (Eq. 1).
This criterion function is the weighted sum between a _content loss_ function (Eq. 3) and a _style loss_ function (Eq. 6). This means that the content and style weights can be controlled by the \(\alpha\) and \(\beta\) factors respectively, giving the user more control over the _result image_. The function is defined as follows:
\[\mathcal{L}_{total}(\vec{c},\vec{s},\vec{x})=\alpha\mathcal{L}_{content}(\vec {c},\vec{x})+\beta\mathcal{L}_{style}(\vec{s},\vec{x}) \tag{1}\]
Where \(\vec{c}\) is the _content image_, \(\vec{s}\) the _style image_ and \(\vec{x}\) the _result image_.
### Content Loss
The content of a given image can be represented by the responses from convolutional filters of lower layers in a pre-trained VGG-Network that will be called here _feature representations_. The responses from a given layer \(l\) can be stored in a matrix \(F^{l}\in\mathbb{R}^{N_{l}\times M_{l}}\) where \(N_{l}\) represents the number of filters contained in layer \(l\), \(M_{l}\) represents the number of pixels - product between width and height - of the filters contained in layer \(l\), and \(F^{l}_{ij}\) are the output activations from the \(i^{th}\) filter at position \(j\) from layer \(l\).
In that way, the _content loss_ function for a given layer \(l\) can be defined as the mean squared error between the _content image feature representations_\(C^{l}\) and the _result image feature representations_\(X^{l}\):
\[\mathcal{L}_{content}(\vec{c},\vec{x},l)=\frac{1}{N}\sum_{i,j}^{N}(C^{l}_{ij} -X^{l}_{ij})^{2} \tag{2}\]
Then, with \(L\) the total number of content layers and \(W^{c}_{l}\) the relative weight of content layer \(l\), the final _content loss_ function can be calculated by the weighted sum of all the content layer losses:
\[\mathcal{L}_{content}(\vec{c},\vec{x})=\sum_{l=0}^{L}W^{c}_{l}\times\mathcal{ L}_{content}(\vec{c},\vec{x},l) \tag{3}\]
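For illustration, a minimal PyTorch sketch of this content term follows; it assumes the feature maps of the chosen layers have already been extracted into dictionaries, and the layer keys and weights passed in are placeholders:

```python
import torch

def content_loss(content_feats, result_feats, layer_weights):
    """Eqs. (2)-(3): weighted sum of per-layer MSEs between feature maps."""
    loss = 0.0
    for layer, w in layer_weights.items():
        loss = loss + w * torch.mean((content_feats[layer] - result_feats[layer]) ** 2)
    return loss
```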
### Style Loss
Like the content _feature representations_, the style of a given image is also represented by the responses from convolutional filters of a pre-trained VGG-Network. Nevertheless, it is actually represented by higher layers of the network and not directly, but through
the correlations between the filter responses. These correlations are given by the Gram Matrix \(G^{l}\in\mathbb{R}^{N_{l}\times N_{l}}\), where:
\[G^{l}_{ij}=\sum_{k}F^{l}_{ik}\times F^{l}_{jk} \tag{4}\]
From that way, the _style loss_ function from a given layer \(l\) can be defined as the mean squared error between the Gram Matrix of layer \(l\) from the _style image_\(S^{l}\) and the Gram Matrix of layer \(l\) from the _result image_\(A^{l}\):
\[\mathcal{L}_{style}(\vec{s},\vec{x},l)=\frac{1}{4N_{l}^{2}M_{l}^{2}}\sum_{i,j}( S^{l}_{ij}-A^{l}_{ij})^{2} \tag{5}\]
Finally, with \(L\) the total number of style layers and \(W^{s}_{l}\) the relative weight of style layer \(l\), the final _style loss_ function can be calculated by the weighted sum of all the style layer losses:
\[\mathcal{L}_{style}(\vec{s},\vec{x})=\sum_{l=0}^{L}W^{s}_{l}\times\mathcal{L} _{style}(\vec{s},\vec{x},l) \tag{6}\]
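Likewise, a sketch of the style term, with the Gram matrix of Eq. (4) built from feature maps assumed to be flattened to shape \((N_{l},M_{l})\):

```python
import torch

def gram_matrix(feats):
    """Eq. (4): correlations between the N_l filter responses (feats: N_l x M_l)."""
    return feats @ feats.t()

def style_loss(style_feats, result_feats, layer_weights):
    """Eqs. (5)-(6): weighted sum of per-layer Gram-matrix squared errors."""
    loss = 0.0
    for layer, w in layer_weights.items():
        n, m = style_feats[layer].shape
        g_s = gram_matrix(style_feats[layer])
        g_x = gram_matrix(result_feats[layer])
        loss = loss + w * torch.sum((g_s - g_x) ** 2) / (4 * n**2 * m**2)
    return loss
```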
### The Problem of High Resolution
Beautiful synthetic images can be generated by applying Gatys' method to a given _content image_ and _style image_, like the ones in Figure 1.
Nevertheless, this method has limitations when applied to high resolution images (e.g. 1080px or 1440px), resulting in only some color changes and almost no structural change (Figure 2). This phenomenon occurs because the receptive fields of convolutional neural networks have fixed sizes, and in higher-resolution images they become relatively
Figure 1: An image from _The Golden Gates Bridge_ stylized with different _style images_ through the Gatys’ 2015 method. All samples are generated at a _500px_ resolution, and each style used is minimized at the right-bottom edge of each sample.
small, which makes the VGG-Network pay attention only to small structures of the image during the stylization process, ignoring the overall image structure.
Following the paper _Controlling Perceptual Factors in Neural Style Transfer_[1], the optimal resolution size of input images when using the VGG-Network for style transfer is around 500px, where bigger structural changes occur while the image content is well preserved, as in Figure 1(c).
## 3 Improving the Quality of the Generations
The Gatys' 2016 paper also proposes that the quality of generations at high resolution can be improved through the _coarse-to-fine_ technique. It consists of dividing the stylization process into two stages. Given high-resolution _content_ and _style_ images \(\vec{c}\) and \(\vec{s}\) with \(N\) pixels each, it is defined by the following processes:
In the first stage, the images are downsampled by a \(K\) factor, such that \(N/K\) corresponds to the resolution where the stylization will be performed, e.g. \(500^{2}\)px for VGG-Network. After this process, the stylization is performed with the downsampled images, generating a low-resolution _result image_\(\vec{z}\).
Now, at the second stage, the generated image \(\vec{z}\) is upsampled to \(N\) pixels and then used as _initial image_ for the stylization process of the original input images \(\vec{c}\) and \(\vec{s}\), generating the final _result image_\(\vec{x}\). This process causes the algorithm to fill in the high-resolution information without losing the overall style structure of the \(\vec{z}\) image (Figure 3(b)).
The method proposed in this paper is based on Crowson's Neural Style Transfer implementation [12]. Various relevant changes were made to the stylizing
Figure 3: An example of the _coarse-to-fine_ process applied to _The Golden Gates Bridge_ stylized image from Figure 1(c).
Figure 2: An image from _The Golden Gates Bridge_ stylized with the respective _style images_ from Figure 1 in high resolution (1080px). These samples show the limitations of the method when applied to high-resolution images.
process to improve the quality of the generated images, but here only some of those will be discussed.
First, the _coarse-to-fine_ technique is divided into \(s\) stages, rather than one stage for the low-resolution stylization and another for the high-resolution one. The stylization process is started at an _initial resolution_ \(r_{i}\) and is then applied at progressively larger scales, each greater by a factor \(k=\sqrt{2}\), until it reaches a _final resolution_ \(r_{f}\). In this way, the resolution \(R(s)\) at a given stage \(s\) is defined as:
\[R(s)=\min\{r_{f},r_{i}\times k^{s-1}\} \tag{7}\]
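A sketch of this schedule (Eq. 7); the initial and final resolutions used here, 128 px and 1080 px, are placeholders, and rounding to whole pixels is an added assumption:

```python
import math

def resolution(stage, r_initial=128, r_final=1080, k=math.sqrt(2)):
    """Eq. (7): resolution used at a given coarse-to-fine stage (stage >= 1)."""
    return min(r_final, round(r_initial * k ** (stage - 1)))

print([resolution(s) for s in range(1, 9)])
# [128, 181, 256, 362, 512, 724, 1024, 1080]
```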
To improve the use of available memory, the Adam optimizer [Kingma and Ba 2015] is used instead of the L-BFGS optimizer, which allows the processing of higher resolution images while still producing similar results.
The algorithm's style perception can be improved to yield more expressive results by setting a different weight for each style layer. The layers used for stylization are the same as in Gatys' 2015 method: relu1_1, relu2_1, relu3_1, relu4_1 and relu5_1, and the weights assigned to each layer are, respectively, \(256,64,16,4\) and \(1\), which are then normalized through the _softmax_ function.
To approximate the effects of gradient normalization and produce better visual effects, a variation of the _mean squared error_ function is used to compute the content and style losses. Here, the traditional _squared error_ is divided by the sum of the absolute difference of the inputs, in a way that the gradient L1-norm of the resulting function will be \(\approx 1\). For a given input \(y\), a target \(\hat{y}\) and an \(\epsilon=1\mathrm{e}{-8}\) value to avoid zero divisions, the _normalized squared error_ function is defined as:
\[NSE(y,\hat{y})=\frac{\sum_{i}(y_{i}-\hat{y}_{i})^{2}}{\sum_{i}|y_{i}-\hat{y}_{ i}|+\epsilon} \tag{8}\]
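A direct PyTorch sketch of Eq. (8):

```python
import torch

def normalized_squared_error(y, y_hat, eps=1e-8):
    """Eq. (8): squared error scaled so its gradient L1-norm is approximately 1."""
    diff = y - y_hat
    return torch.sum(diff ** 2) / (torch.sum(torch.abs(diff)) + eps)
```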
The Gram Matrix for style representation was also changed. Now it is normalized by the number of filters contained in each style layer:
\[G_{ij}^{l}=\frac{\sum_{k}F_{ik}^{l}\times F_{jk}^{l}}{N_{l}} \tag{9}\]
Finally, following the paper _Understanding Deep Image Representations by Inverting Them_[Mahendran and Vedaldi 2014], spatial smoothness in the resulting image \(\vec{x}\) can be encouraged by the _L2 total variation_ loss, defined as:
\[TV_{loss}(\vec{x})=\frac{1}{N}\sum_{i,j}^{N}\Bigl{(}(\vec{x}_{i,j+1}-\vec{x}_ {ij})^{2}+(\vec{x}_{i+1,j}-\vec{x}_{ij})^{2}\Bigr{)} \tag{10}\]
So, it is summed with the _content_ and _style_ losses with a weight control value \(\gamma\), defining the final _total loss_ as:
\[\mathcal{L}_{total}(\vec{c},\vec{s},\vec{x})=\alpha\mathcal{L}_{content}(\vec {c},\vec{x})+\beta\mathcal{L}_{style}(\vec{s},\vec{x})+\gamma TV_{loss}(\vec{ x}) \tag{11}\]
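Putting the pieces together, a sketch of the full objective of Eq. (11); it reuses the content_loss and style_loss functions sketched above, and the default values of \(\alpha\), \(\beta\) and \(\gamma\) shown are placeholders:

```python
import torch

def tv_loss(img):
    """Eq. (10): L2 total variation of an image tensor with shape (C, H, W);
    N is taken here as the number of tensor elements."""
    dh = (img[:, 1:, :] - img[:, :-1, :]) ** 2
    dw = (img[:, :, 1:] - img[:, :, :-1]) ** 2
    return (dh.sum() + dw.sum()) / img.numel()

def total_loss(result, c_feats, s_feats, r_feats, w_c, w_s,
               alpha=0.05, beta=1.0, gamma=1.0):
    """Eq. (11): weighted sum of content, style and total-variation terms."""
    return (alpha * content_loss(c_feats, r_feats, w_c)
            + beta * style_loss(s_feats, r_feats, w_s)
            + gamma * tv_loss(result))
```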
With all these changes and a few others that can be found in Crowson's repository, the results of the style transfer process are further improved, resulting in smoother images, with well-preserved content and more expressive strokes (Figure 4(b)).
### Limitations
Despite those improvements, the method still produces facial deformations when applied to content images with human faces (Figure 5(c)), as the content layer relu4_2 can't output meaningful representations of human facial features.
These distortions in the generated results (Figure 5(c)) occur because the VGG-Network layers used for content extraction (relu4_2 in Gatys' and Crowson's methods) can't output meaningful activations for human faces, as this network was trained for the general-purpose image classification task and not for domain-specific tasks like face recognition.
## 4 Proposed Method
This paper proposes a new method, which will be called _Portrait Stylization_, to solve the face distortion problem in Crowson's algorithm. It solves the problem by adding to the _total loss_ function a new domain-specific content loss, called here the _FaceID loss_, which uses the responses from the convolutional filters of a pre-trained face recognition model, like the state-of-the-art FaceNet [14], to compute the difference between the facial features of the _content image_ \(\vec{c}\) and the _result image_ \(\vec{x}\).
Figure 4: Comparison between the Gatys’ 2016 style transfer method before (a) and after (b) Crowson’s changes. After these improvements, the image content becomes well-preserved while the stylized regions are more expressive.
Figure 5: A failure case of the Crowson’s method, where the facial features of the _content image_ are not propagated to the final _result image_, making the face contained in the result unrecognizable.
These responses are extracted from an _Inception-Resnet-V1_ FaceNet model, pre-trained on the _VGGFace2_[1] dataset, and will be called here _facial features_. The layers selected for extracting these _facial features_ are: _conv_1a_, _conv_2a_, _maxpool_3a_, _conv_4a_ and _conv_4b_, with the same weight assigned to each one. They were selected empirically, following the idea that higher layers in a feed-forward convolutional neural network architecture are better at extracting general features.
The _FaceID loss_ function can be defined as the weighted sum of the _normalized squared error_ (Eq. 8) between the _content image facial features_ \(C^{l}\) and the _result image facial features_ \(X^{l}\), where \(W_{l}^{f}\) is the weight for a given FaceNet layer \(l\):
\[\mathcal{L}_{facial}(\vec{c},\vec{x})=\delta\times\sum_{l=0}^{L}W_{l}^{f} \times NSE(C^{l},X^{l}) \tag{12}\]
where \(\delta\) is the _face weight_, which gives the user control over how similar the faces in the _result image_ will be to the faces in the _content image_.
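A minimal sketch of Eq. 12 is shown below, reusing the `nse` function from Eq. 8. Here `facenet_features` stands for a routine returning the activations of the listed FaceNet layers; the uniform per-layer weights follow the description above, but the exact interface is an assumption.

```python
import torch

def faceid_loss(content_img, result_img, facenet_features, delta, layer_weights=None):
    c_feats = facenet_features(content_img)   # list of tensors, one per FaceNet layer
    x_feats = facenet_features(result_img)
    if layer_weights is None:                 # equal weights, as described in the text
        layer_weights = [1.0 / len(c_feats)] * len(c_feats)
    loss = torch.zeros((), device=result_img.device)
    for w, c, x in zip(layer_weights, c_feats, x_feats):
        loss = loss + w * nse(c, x)           # nse as defined for Eq. 8
    return delta * loss
```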
This auxiliary criterion helps the algorithm retain the facial features of the human faces in the _content image_ after the stylizing process, which avoids drastic facial deformations while still producing expressive stylization results (Figure 6(b)).
Nevertheless, even with these improvements, using only the _facial features_ as a criterion for face reconstruction does not give the user much control over the style of the _result image_, as the strokes become less expressive when the \(\delta\) value increases (Figure 7).
Figure 6: Comparison between the Crowson’s style transfer method before (a) and after (b) the addition of the _FaceID Loss_.
Figure 7: Changes caused by the _FaceID loss_, from \(\delta=0.05\) until \(\delta=0.45\), with a step size of \(\Delta=0.1\). All these experiments use \(\alpha=0.05\) and \(\beta=1.0\), with the same _content_ and _style_ images from Figure 5.
Thus, to improve control over the face geometry in the _result image_ while preserving expressive stylization, the differentiable output of the FaceMesh [11] algorithm can be used to compute the difference between the surface geometry of the faces contained in the _content_ and _result_ images.
The FaceMesh algorithm is a feed-forward convolutional neural network trained to approximate a 3D mesh representation of human faces from only RGB image inputs (Figure 8). It outputs a relatively dense _mesh model_ of 468 vertices that was designed for face-based _augmented reality_ effects.
The new _FaceID loss_ can then be defined by adding to the previous one the _normalized squared error_ (Eq. 8) between the _content mesh model_ \(C_{M}\) and the _result mesh model_ \(X_{M}\):
\[\mathcal{L}_{facial}(\vec{c},\vec{x})=\eta NSE(C_{M},X_{M})+\delta\sum_{l=0} ^{L}W_{l}^{f}\times NSE(C^{l},X^{l}) \tag{13}\]
where \(\eta\) is the _mesh weight_, which gives the user control over how similar the surface geometry of the faces in the _result image_ will be to that of the faces in the _content image_.
This encourages the algorithm to reproduce the face geometries of the _content image_ in the _result image_, while still allowing expressive strokes, since other facial attributes such as skin texture or micro-expressions are not represented by the _mesh model_.
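Extending the previous sketch to Eq. 13 only requires the extra mesh term; `facemesh` is a placeholder for the differentiable FaceMesh forward pass, and its call signature is an assumption.

```python
def faceid_mesh_loss(content_img, result_img, facenet_features, facemesh, delta, eta):
    # Mesh term of Eq. 13: NSE between the 468-vertex mesh models of both faces.
    mesh_term = nse(facemesh(content_img), facemesh(result_img))
    # Feature term of Eq. 12, with the delta weight applied outside.
    feature_term = faceid_loss(content_img, result_img, facenet_features, delta=1.0)
    return eta * mesh_term + delta * feature_term
```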
Figure 8: Examples of _mesh models_ generated by the FaceMesh algorithm applied to _content images_ with real human faces. To produce high-precision meshes, the faces on the input image need to be cropped and resized to 192x192 resolution.
Figure 9: Changes caused by the new _FaceMesh term_ in different algorithm settings. All these experiments use \(\alpha=0.05\) and \(\beta=1.0\), with the same _content_ and _style_ images from Figure 5.
This new term improves the quality of face reconstruction in the _result image_ even when used alone. When used jointly with the _facial features_ (with \(\delta>0.00\)), it produces even better results, helping the user adjust fine geometric details such as the nose or mouth shape and yielding facial expressions closer to those in the _content image_ (Figure 9(d)).
## 5 Image Preprocessing
To improve the stylization results of the algorithms used here, making them focus only on the faces contained in the _content image_ and avoiding possible confusion caused by background texture, the background is removed and replaced by a flat color using the MODNet Trimap-Free Portrait Matting [14] algorithm (Figure 10).
This step is not required by the algorithms, but it improves their overall performance on portrait images. The _Portrait Stylization_ method can generate an image with only minor facial distortions even when the background is kept, but the other algorithms yield larger distortions in the same setting (Figure 11).
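A small illustrative sketch of the background-replacement step is given below: pixels where the matte is zero become black and foreground pixels are kept. Treating the matte as a single-channel array in \([0,1]\) aligned with the image is our assumption, not a MODNet API detail.

```python
import numpy as np

def replace_background(image: np.ndarray, matte: np.ndarray) -> np.ndarray:
    # image: (H, W, 3) float array, matte: (H, W) float array in [0, 1].
    # A black background corresponds to all-zero pixels outside the matte.
    return image * matte[..., None]
```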
Figure 11: The results of stylizing a portrait image without removing its background. To generate these samples, the same _style image_ and hyperparameters from Figure 9(d) were used.
Figure 10: An example of a _content image_ before (a) and after (c) the background removal process. All the black pixels in the mask (b) are replaced by a black color in the resulting image, and the white pixels are maintained with the same color values.
## 6 Style Transfer with Multiple Face Portraits
To support stylizing images that contain more than one human face, a few changes are made to the algorithm. A face detection algorithm such as the state-of-the-art MTCNN [22] is used to extract the coordinates of the faces contained in the _content image_. With these coordinates, the facial regions in the _content_ and _result_ images are cropped, generating sub-images of the ground-truth and generated faces (Figure 12).
The _facial features_ of all these sub-images are extracted with the FaceNet and FaceMesh models. The features of the _content image_ faces are concatenated layer by layer, generating a tuple of vectors \(G\) that represents the ground-truth facial geometries of all faces. Likewise, the features extracted from the _result image_ faces are also concatenated layer by layer, generating a tuple of vectors \(H\) that represents the current geometries of the faces in the generated image.
After these steps, the _FaceID loss_ is calculated using these concatenated vectors as facial representations, rather than the _facial features_ extracted from the entire image.
\[\mathcal{L}_{facial}(\vec{c},\vec{x})=\eta NSE(G_{M},H_{M})+\delta\sum_{l=0} ^{L}W_{l}^{f}\times NSE(G_{f}^{l},H_{f}^{l}) \tag{14}\]
where \(G_{M}\) and \(G_{f}^{l}\) are the concatenated _mesh models_ and _facial features_ of the faces in the _content image_, and \(H_{M}\) and \(H_{f}^{l}\) are the concatenated _mesh models_ and _facial features_ of the faces in the _result image_.
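A hedged sketch of this multi-face variant (Eq. 14) follows: faces are cropped with the MTCNN boxes detected on the content image, the features of all faces are concatenated layer by layer, and the loss is computed on the concatenations. The box format, the omitted crop resizing, and the uniform layer weights are illustrative assumptions.

```python
import torch

def multi_face_loss(content_img, result_img, boxes, facenet_features, facemesh, delta, eta):
    # boxes: iterable of (x0, y0, x1, y1) MTCNN detections on the content image.
    c_crops = [content_img[:, y0:y1, x0:x1] for (x0, y0, x1, y1) in boxes]
    r_crops = [result_img[:, y0:y1, x0:x1] for (x0, y0, x1, y1) in boxes]

    def concat_layerwise(crops):
        per_crop = [facenet_features(c) for c in crops]          # list (crops) of lists (layers)
        return [torch.cat(layer, dim=0) for layer in zip(*per_crop)]

    g_feats, h_feats = concat_layerwise(c_crops), concat_layerwise(r_crops)
    g_mesh = torch.cat([facemesh(c) for c in c_crops], dim=0)
    h_mesh = torch.cat([facemesh(r) for r in r_crops], dim=0)

    feat_term = sum(nse(g, h) for g, h in zip(g_feats, h_feats)) / len(g_feats)
    return eta * nse(g_mesh, h_mesh) + delta * feat_term
```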
Figure 12: An example of how the MTCNN algorithm extracts the face coordinates from the _content image_ and uses them to crop the faces in the _result image_. This process ensures that the faces in the generated image will always be detected, no matter what changes the styling process does.
## 7 Performance Comparisons
The _Portrait Stylization_ method produces significant visual improvements over the current state-of-the-art methods when applied to images that contain human faces (Table 1). It can still be applied to any other type of image, since the method becomes exactly the same as Crowson's method when \(\delta=0.00\) and \(\eta=0.00\).
## 8 Conclusion
This paper proposes improvements to Crowson's algorithm that increase the quality of the results on images containing one or more human faces. These improvements show that the algorithm can be optimized for specific image groups (e.g., portraits, cars, or animals) through changes in the _total loss_ function.
The addition of auxiliary models pre-trained on domain-specific tasks can help the stylization process by giving the user more control over how the style and content are merged in the generated image. In the same way that face detectors help the portrait stylization process, other domain-specific models, such as a pre-trained pose-estimation algorithm, could help in the stylization of full-body images.
This is a promising direction for future research, together with the possibility of fine-tuning the VGG-Network on a combined dataset of open-domain and domain-specific images. This could enable a single content-extraction model, optimizing the memory usage and the speed of the whole stylization process.
\begin{table}
\begin{tabular}{c c c c c} \hline \hline \# & Input & Gatys et al. & K. Crowson & P.S. Method \\ \hline \hline \end{tabular}
\end{table}
Table 1: Some experiments were made to compare the performance between the Gatys’ 2016 method, the Crowson’s method and the _Portrait Stylization_ method when applied to portrait images. The _content_ and _style_ image of each sample is shown in the Input column. |
2309.11285 | Overview of AuTexTification at IberLEF 2023: Detection and Attribution
of Machine-Generated Text in Multiple Domains | This paper presents the overview of the AuTexTification shared task as part
of the IberLEF 2023 Workshop in Iberian Languages Evaluation Forum, within the
framework of the SEPLN 2023 conference. AuTexTification consists of two
subtasks: for Subtask 1, participants had to determine whether a text is
human-authored or has been generated by a large language model. For Subtask 2,
participants had to attribute a machine-generated text to one of six different
text generation models. Our AuTexTification 2023 dataset contains more than
160.000 texts across two languages (English and Spanish) and five domains
(tweets, reviews, news, legal, and how-to articles). A total of 114 teams
signed up to participate, of which 36 sent 175 runs, and 20 of them sent their
working notes. In this overview, we present the AuTexTification dataset and
task, the submitted participating systems, and the results. | Areg Mikael Sarvazyan, José Ángel González, Marc Franco-Salvador, Francisco Rangel, Berta Chulvi, Paolo Rosso | 2023-09-20T13:10:06Z | http://arxiv.org/abs/2309.11285v1 | Overview of AuTexTification at IberLEF 2023: Detection and Attribution of Machine-Generated Text in Multiple Domains
###### Abstract
This paper presents the overview of the AuTexTification shared task as part of the IberLEF 2023 Workshop in Iberian Languages Evaluation Forum, within the framework of the SEPLN 2023 conference. AuTexTification consists of two subtasks: for Subtask 1, participants had to determine whether a text is human-authored or has been generated by a large language model. For Subtask 2, participants had to attribute a machine-generated text to one of six different text generation models. Our AuTexTification 2023 dataset contains more than 160.000 texts across two languages (English and Spanish) and five domains (tweets, reviews, news, legal, and how-to articles). A total of 114 teams signed up to participate, of which 36 sent 175 runs, and 20 of them sent their working notes. In this overview, we present the AuTexTification dataset and task, the submitted participating systems, and the results.
Machine-Generated Text, Large Language Models, Generalization, AuTexTification.
**Resumen:** This article presents an overview of the AuTexTification task, part of the IberLEF 2023 workshop at the Iberian Languages Evaluation Forum, within the framework of the SEPLN 2023 conference. AuTexTification consists of two subtasks: in Subtask 1, participants had to determine whether a text was written by a human or generated by a large language model. In Subtask 2, participants had to attribute an automatically generated text to one of six different text generation models. The AuTexTification dataset contains more than 160,000 texts in two languages (English and Spanish) and five domains (tweets, reviews, news, legal documents, and how-to articles). A total of 114 teams signed up to participate, of which 36 submitted 175 runs and 20 of them submitted their papers. In this article, we present the AuTexTification dataset and task, the systems submitted by the participants, and their results.
**Palabras clave:** Machine-Generated Text, Large Language Models, Generalization, AuTexTification.
## 1 Introduction
Current developments in Large Language Models (LLMs) have strongly improved the quality of Machine-Generated Text (MGT). Their latest surge in popularity through services such as ChatGPT,1 and large-scale democratization efforts to broaden the public's access to large models (Scao et al., 2022; Touvron et al., 2023; Wolf et al., 2020; Seger et al., 2023), have made it easier for non-technical people to interact with and use these models for various interesting applications (Eloundou et al., 2023; Liu et al., 2023).
Footnote 1: [https://tinyurl.com/reuters-chatgpt](https://tinyurl.com/reuters-chatgpt)
However, these advances have also lowered
the barrier of entry for users to generate high-quality, multi-style and multi-domain text at a massive scale. This means that motivated malicious users could easily generate massive quantities of text without the need for large computational resources, technical knowledge, or human intervention (see Table 1). Supporting this concern, recent research suggests that disinformation generated with state-of-the-art LLMs is more credible than that generated by humans (Spitale, Biller-Andorno, and Germani, 2023), thus showing how difficult it is for humans to distinguish between MGT and human-authored text.
As expected, the aforementioned advancements have also promoted discussions in ethical AI (Widder et al., 2022) as well as model, data and training regulations,2 and new licenses (Benjamin et al., 2019; Contractor et al., 2022). Content moderation due to AI democratization, and the need for regulations, are strong motivators for researchers to ensure a responsible use of LLMs and their generations. A promising research line to carry this out involves identifying MGT, while also attributing it to specific text generation models to learn about the specific actors behind an MGT from a forensics viewpoint.
Footnote 2: European Commission, Proposal for a Regulation of the European Parliament [https://tinyurl.com/EURAIAct](https://tinyurl.com/EURAIAct)
There have been many efforts to detect MGT, including zero-shot approaches (Mitchell et al., 2023; Zellers et al., 2019), supervised systems (Ippolito et al., 2020; Uchendu et al., 2020; Maronikolakis, Schutze, and Stevenson, 2021), and evaluation campaigns (Kashnitsky et al., 2022; Shamardina et al., 2022). While it has been found that in-domain MGT detection with supervised approaches is easy (Bakhtin et al., 2019), most prior work overlooked that MGT detection systems would be applied to a broad variety of domains, writing styles, and generation models. Therefore, there is a need to evaluate the generalization of MGT detectors through a more realistic lens. In this regard, some works have studied generalization across model families and scales (Sarvazyan et al., 2023); however, generalization to new domains is still under-explored.
In this context, we present the AuTexTification (**A**utomated **T**ext Iden**T**ification) task. This shared task is proposed to study: (i) the automatic detection of MGT, (ii) the generalization capabilities of MGT detectors to new domains, and (iii) the feasibility of fine-grained MGT attribution to one of many generation models. Furthermore, we automatically collect a multi-domain annotated dataset of human-authored text and MGT generated by various LLMs, which is a valuable resource for exploratory linguistic analysis of machine-generated and human-authored texts. To our knowledge, AuTexTification is the first shared task to study both MGT detection and attribution in a multi-domain setting for English and Spanish, while also focusing on generalization of MGT detectors to new domains.
## 2 Task Description
The AuTexTification 2023 Shared Task includes two subtasks in English and Spanish in five different domains.
Subtask 1: MGT Detection. This subtask consists of distinguishing between human and generated text. It is framed as a binary classification task over human text (Hum) and MGT (Gen), where text from three domains is included in the training set and submissions are evaluated on two unseen ones. In this way, we aim to study the MGT detectors' cross-domain generalization capabilities.
Subtask 2: MGT Attribution. In this subtask, participants must attribute MGT to the model that generated it, out of six models. Thus, Subtask 2 is framed as a six-class classification task, with which we study the feasibility of fine-grained attribution. Unlike Subtask 1, the training and test splits include all five domains.
\begin{table}
\begin{tabular}{c|c|c} \multicolumn{2}{c}{**Model adaptation**} \\ \hline
**Human** & \multicolumn{1}{c|}{**Pre-trained**} & **Fine-tuned** \\ \cline{2-3}
**Mod.** & \multicolumn{1}{c|}{Full accessibility} & \multicolumn{1}{c}{Technical accessibility} \\ \cline{2-3}
**No** & \begin{tabular}{c} Few comp. resources \\ Massive scale \\ Medium quality \\ \end{tabular} & \begin{tabular}{c} Large comp. resources \\ Massive scale \\ High quality \\ \end{tabular} \\ \hline \multirow{3}{*}{Yes} & \multirow{3}{*}{\begin{tabular}{c} Few comp. \& human resources \\ Small scale \\ High quality \\ \end{tabular} } &
\begin{tabular}{c} Technical accessibility \\ Large comp. \& human resources \\ Small scale \\ High quality \\ \end{tabular} \\ \end{tabular}
\end{table}
Table 1: Types of MGT. The AuTexTification 2023 Shared Task focuses on generations from pre-trained models with no human modification. We cover the most accessible approach, involving little computational and human resources and can be used massively.
## 3 Dataset
The AuTexTification dataset consists of texts written by humans and LLMs in five domains: tweets, reviews, how-to articles, news and legal documents. These domains were chosen to encompass a range of writing styles, from more structured and formal to less structured and more informal. We collected human texts from publicly available datasets, namely: _MultiEURLEX_(Chalkidis, Fergadiotis, and Androutsopoulos, 2021), _XSUM_(Narayan, Cohen, and Lapata, 2018), _XLSUM_(Hasan et al., 2021), _MLSUM_(Scialom et al., 2020), _Amazon Reviews_(McAuley and Leskovec, 2013), _WikiLingua_(Ladhak et al., 2020), _COAR_\(\&\)_COAH_(Gonzalez et al., 2014), _XLM-Tweets_(Barbieri, Espinosa Anke, and Camacho-Collados, 2022), _TSATC_(Naji, 2012), and _TSD_(Leis et al., 2019). Table 2 groups these datasets per domain and language.
The MGT was generated from the human texts by using three _BLOOM_ models (Scao et al., 2022), _BLOOM-1B7_,3_BLOOM-3B_,4 and _BLOOM-7B1_;5 as well as three _GPT-3_ models (Brown et al., 2020; Ouyang et al., 2022): _babbage_, _curie_, and _text-davinci-003_, with 1b, 6.7b and 175b parameter scales respectively. Our motivation behind using these models were fourfold: (i) both _BLOOM_ and _GPT-3_ show great capabilities in multiple languages, (ii) _BLOOM_ models' usage is not as restricted via licensing (as opposed to other popular models such as _LLaMA_(Touvron et al., 2023) or _OPT_(Zhang et al., 2022)), (iii) _GPT-3_ has been one of the most popular and best performing language models until recently,6 and (iv) we aimed to cover a broad spectra of model families and scales. While we were hoping to include _BLOOM-175B_ generations too, this was not possible due to the lack of public APIs.
Footnote 3: [https://tinyurl.com/bloom-1b7](https://tinyurl.com/bloom-1b7)
Footnote 4: [https://tinyurl.com/bloom-3b](https://tinyurl.com/bloom-3b)
Footnote 5: [https://tinyurl.com/bloom7b](https://tinyurl.com/bloom7b)
Footnote 6: _GPT-3.5-turbo_ and _GPT-4_ were not released at time of compiling our dataset.
We manually tuned the decoding parameters to obtain MGT that appears realistic, based on subjective evaluations carried out by two of the authors. We found that with nucleus sampling (Holtzman et al., 2020), using a top-p of 0.9 and a temperature of 0.7, the models generated texts of higher quality. The maximum number of completion tokens was manually selected for each domain to be similar to the median token-length of the human texts: 20 tokens for tweets, 70 for reviews, and 100 for news, legal, and how-to articles.
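As an illustration, the following hedged sketch shows how one of the publicly released BLOOM checkpoints could be sampled with the reported decoding parameters using the HuggingFace `transformers` library; the prompt and the choice of checkpoint are examples, not the exact generation script used for the dataset.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bigscience/bloom-1b7")
model = AutoModelForCausalLM.from_pretrained("bigscience/bloom-1b7")

inputs = tokenizer("Today it's 20 degrees.", return_tensors="pt")
outputs = model.generate(
    **inputs,
    do_sample=True,          # nucleus sampling
    top_p=0.9,
    temperature=0.7,
    max_new_tokens=20,       # tweets; 70 for reviews, 100 for news/legal/how-to
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```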
### Gathering process
We aim to build a dataset of human and generated texts that share the same prefix. For instance, given a human text "_Today it's 20 degrees. It is sunny in Valencia._", we could use "_It is sunny in Valencia._" as human text, and generate a continuation by prompting an LLM with "_Today it's 20 degrees._". In this manner, both generated and human texts are plausible continuations of the same prefix and they can be compared fairly in terms of topics and domains. To build the dataset in this way, we opted for a data gathering process consisting of the steps depicted in Figure 1, namely (i) gathering human data, (ii) preparing the inputs for LLMs, (iii) generating MGTs, and (iv) cleaning and filtering the resulting texts.
We first gather a set of human-authored texts \(\mathcal{H}\) from the source datasets for each domain and language. We manually analyze and define extraction schemes for splitting \(\mathcal{H}\) into prefixes \(\mathcal{H}_{p}\) and continuations \(\mathcal{H}_{c}\) such that \(\mathcal{H}=\mathcal{H}_{p}\oplus\mathcal{H}_{c}\). In some domains and source datasets, we also define prompts \(\mathcal{P}\) to prevent the generation models from generating topic-inconsistent texts, e.g., guiding
\begin{table}
\begin{tabular}{l|c c} & **English** & **Spanish** \\ \hline
**Legal** & _MultiEURLEX_ & _MultiEURLEX_ \\
**News** & _XSUM_ & _MLSUM_\(\&\)_XLSUM_ \\
**Reviews** & _Amazon Reviews_ & _COAR_\(\&\)_COAH_ \\
**Tweets** & _TSATC_ & _XLM-Tweets_\(\&\)_TSD_ \\
**How-to** & _WikiLingua_ & _WikiLingua_ \\ \end{tabular}
\end{table}
Table 2: Human-authored source datasets for the AuTexTification 2023 dataset.
Figure 1: Data gathering process.
models to generate hotel reviews instead of car reviews when using a prefix from the _COAH_ dataset, made up of hotel reviews. Afterwards, the prompts and prefixes \(\mathcal{P}\oplus\mathcal{H}_{p}\) are fed into each LLM to obtain one resulting generation per prompt and prefix. We refer to the set of generations as \(\mathcal{G}\). Texts from both \(\mathcal{H}_{c}\) and \(\mathcal{G}\) are fed into a text cleaning pipeline that removes duplicated spaces, multiple line breaks, and special symbols. Additionally, we ensure that the human continuation and generation obtained from the same prefix have roughly the same token-lengths by truncating to the minimum length of the two texts, thus removing token-length bias. Then, we apply a set of language identification filters: _langdetect_,7_SpaCy FastLang_,8 and _fastText_(Joulin et al., 2017). If one of these filters finds a text to be not in Spanish or English, the text is removed from our dataset.
Footnote 7: [https://tinyurl.com/langdetect](https://tinyurl.com/langdetect)
Footnote 8: [https://tinyurl.com/fastlang](https://tinyurl.com/fastlang)
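A simplified sketch of the cleaning and filtering stage is shown below: whitespace normalization, truncation of each human-continuation/generation pair to the same token length, and a language-identification filter. Whitespace tokenization and the use of `langdetect` alone are simplifying assumptions; the actual pipeline also uses SpaCy FastLang and fastText.

```python
import re
from langdetect import detect

def clean(text: str) -> str:
    # Collapse duplicated spaces and multiple line breaks.
    return re.sub(r"\s+", " ", text).strip()

def truncate_pair(human_cont: str, generation: str):
    h, g = clean(human_cont).split(), clean(generation).split()
    n = min(len(h), len(g))                  # remove token-length bias
    return " ".join(h[:n]), " ".join(g[:n])

def keep(text: str, languages=("en", "es")) -> bool:
    try:
        return detect(text) in languages
    except Exception:                        # langdetect fails on empty/degenerate text
        return False
```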
To obtain the dataset for Subtask 1, we sample a subset of \(\mathcal{H}_{c}\) labeled as Hum and a subset of \(\mathcal{G}\) labeled with Gen. The dataset was then split into training and test sets for a cross-domain scenario: tweets, how-to articles and legal documents were included in the training set, while reviews and news data comprised the test set. To compile the dataset for Subtask 2, we only sample texts from \(\mathcal{G}\), labeling each text with the LLM's name that generated it. The dataset is randomly split into training and test sets following 80%-20% proportions. All the five domains are included in both training and test splits. The released version of the dataset for Subtask 2 includes anonymized model labels to remove bias toward particular models or model families in participating submissions.
The statistics of each subtask's contents per domain, class, and language are presented in Table 3. In both subtasks, both languages contain similar amounts of texts, and the domains and classes are balanced in both splits. In this way, we guarantee that our analysis is fair by ensuring that every dimension is balanced. Besides, we checked that the generated texts follow Zipf's and Heaps' empirical laws, thus ensuring the high quality of the dataset.9
Footnote 9: See [https://tinyurl.com/overview-datasets](https://tinyurl.com/overview-datasets)
### Human Assessment
We performed a small-scale study to assess the difficulty of Subtask 1 for human annotators. The study consisted of asking human annotators to classify texts as human or generated.10 Five annotators were involved: four Spanish native speakers (SP) and one Italian native speaker (IT). All of them were men between the ages of 25 and 35, with a C1-C2 proficiency level in English. Of these annotators, SP1 and SP4 are familiar with generated text (they created the dataset and analysed hundreds of examples), while the others were exposed to the task for the first time.
Footnote 10: The annotation interface and instructions are available at [https://tinyurl.com/colab-annotation](https://tinyurl.com/colab-annotation)
We provided the same 40 texts to each annotator, drawn from the test set of the Subtask 1 both for English and Spanish. The texts were balanced in terms of classes and domains: 20 texts were generated by LLMs and 20 were written by a human, half of them were news and the other half were reviews. The generated texts were only obtained from _BLOOM_ models: 6 texts from _BLOOM-1b7_, 6 texts from _BLOOM-3b1_, and 8 texts from _BLOOM-7b1_. Figure 2 shows the Macro-F\({}_{1}\) score of each annotator in each domain.
For both languages, the average annotator performance is very similar: most annotators are close to the random baseline. Regarding the domains, it seems more difficult for humans to distinguish between human-authored and machine-generated news than reviews. Most of the annotators per
Figure 2: Human performance in English (top) and Spanish (bottom). The grey dotted line is the random baseline.
form worse than the random baseline distinguishing texts from the news domain. On the contrary, humans are typically better than the random baseline in the reviews domain, especially in English.
Language proficiency seems to play a role: IT1 shows better performance in English than in Spanish, where he is not proficient. Although SP1 and SP4 are familiar with generated texts, there seems to be no significant difference between them and the other annotators.
The human annotators did not follow any systematic pattern to detect MGT. For reviews, some mentioned that the generated reviews seemed generic, describing many general aspects with short sentences. In contrast, human reviews focused on fewer and more concrete aspects.
## 4 Systems and Results
In this section, we briefly introduce the participants' systems, describe the baselines and evaluation metrics, and study the results of the shared task.
### Submitted Approaches
The AuTexTification shared task received submissions from 36 teams, belonging to 30 different institutions and 18 different countries. All teams participated in the English track of Subtask 1, with 23 teams also taking part in the Spanish track. For Subtask 2, 19 teams participated in the English track and 14 in the Spanish track. Teams were allowed to submit a maximum of 3 runs per subtask and language. Overall, AuTexTification received a total of 175 runs, comprising 71 for the English track of Subtask 1, 47 for the Spanish track, 33 for the English track of Subtask 2, and 24 for the Spanish track. Outside of the competition scope, the AuTexTification datasets have been used in NLP courses within academic institutions. We are aware of at least 3 institutions,11 with 17 participating teams and 58 runs.
Footnote 11: Universitat Politècnica de València, AixMarseille Université, and IMT Atlantique.
Following the trend in the Natural Language Processing (NLP) field, most teams relied on pre-trained Transformer Vaswani et al. (2017) models. The most used ones were BERT-based models Devlin et al. (2019) like _RoBERTa_Liu et al. (2019) and _DeBERTa_He et al. (2021). Also, domain-specific and multilingual variants of _BERT_ were frequent, including _XLM-RoBERTa_Conneau et al. (2020), _RemBERT_Chung et al. (2020), and _Twhin-BERT_Zhang et al. (2022). A smaller set of participants included generative models in their systems such as _GPT-2_Radford et al. (2019), _Grover_Zellers et al. (2019), and _OPT_Zhang et al. (2022).
Most of the best performing approaches used ensembles of pre-trained models, as well as combinations of lexical, stylometric or statistical features. In some cases, participants fine-tuned their models using auto-train procedures and performed hyper-parameter tuning. Some teams also included _Convolutional Neural Networks_LeCun et al. (1989) or _Long Short Term Memory (LSTM) Networks_Hochreiter and Schmidhuber (1997)
\begin{table}
\begin{tabular}{l|c c c||c c c c c c c} & \multicolumn{3}{c||}{**Subtask 1**} & \multicolumn{7}{c}{**Subtask 2**} \\ & & & & \multicolumn{3}{c}{**BLOOM**} & \multicolumn{3}{c}{**GPT**} & \\ \hline & Gen & Hum & \(\Sigma\) & 1b7 & 3b & 7b1 & 1b & 6b7 & 175b & \(\Sigma\) \\ \hline \hline
**Legal** & 4,846 & 4,358 & 9,204 & 640 & 665 & 712 & 919 & 942 & 919 & 4,797 \\
**News** & 5,514 & 5,223 & 10,737 & 839 & 860 & 881 & 972 & 978 & 987 & 5,517 \\
**Reviews** & 5,695 & 3,697 & 9,392 & 952 & 962 & 935 & 945 & 941 & 947 & 5,682 \\
**Tweets** & 5,739 & 5,634 & 11,373 & 967 & 965 & 965 & 928 & 930 & 964 & 5,719 \\
**How-to** & 5,690 & 5,795 & 11,485 & 894 & 929 & 960 & 970 & 983 & 966 & 5,702 \\ \hline
**Total** & 27,484 & 24,707 & 52,191 & 4,292 & 4,381 & 4,453 & 4,734 & 4,774 & 4,783 & 27,417 \\ \hline
\end{tabular}
\end{table}
Table 3: AuTexTification dataset statistics per domain and class for each subtask.
as part of their systems. Traditional machine learning models like _Logistic Regression_ and _Support Vector Machines (SVM)_(Cortes and Vapnik, 1995) were also frequent among the participants. However, these approaches generally performed worse than Transformer-based approaches.
There was also a great diversity in terms of features. Probabilistic token-level features from generative language models seem to play an important role in the best performing approaches. Most participants used contextual representations from pre-trained models, either as features, or through end-to-end fine-tuning. Linguistic features including lexical, structural, and discourse features were also frequent. Among the most common linguistic features, we observed bag of word/char n-grams, counts of personal pronouns, stop-words, punctuations, and POS tags. Some participants also incorporated linguistic and factual knowledge directly in their models. Among these, we found the inclusion of syntactic dependencies in pre-trained models through contrastive learning, Wikipedia fact-checking, and native language identification.
The best ranked systems for each subtask ranged from complex ensembles of many different models and features to single generative models fine-tuned for the task. In Subtask 1, both for English and Spanish, the best system was proposed by _TALN-UPF_ (Przybyla, Duran-Silva, and Egea-Gomez, 2023). This system relied on a bidirectional _LSTM_ (Schuster and Paliwal, 1997) model trained with a combination of probabilistic token-level features from different _GPT-2_ versions, linguistic token-level features such as word frequencies or grammar errors, and text representations from pre-trained encoders. Besides, _TALN-UPF_ was the only team that considered a cross-domain evaluation in the validation step, by performing cross-validation over topically-split data after inferring the topics using _Latent Dirichlet Allocation_ (Blei, Ng, and Jordan, 2003). In the Spanish track, the _TALN-UPF_ system performed similarly to the _Linguistic_UCM_ system (Alonso et al., 2023), consisting of an _SVM_ trained with a set of morphological, lexical, and discourse features selected according to linguistic expertise and human analysis.
In Subtask 2, both for English and Spanish, the three runs of the _Drocks_ team (Abburi et al., 2023) were the highest ranked ones. These systems were ensembles of five different Transformer-based classifiers fine-tuned on the task. The best ensembles differed for each language. For English, the best ensemble was an _Error-Correcting Output Codes_(Dietterich and Bakiri, 1994) model trained using the concatenation of the classification probabilities as features. For Spanish, the best ensemble was implemented with an _SVM_ using the average of the classification probabilities as features.
### Baselines
We consider several baselines for each subtask and language. Namely, we include a random baseline (_Random_), zero-shot (_SB-ZS_) and few-shot (_SB-FS_) approaches based on text and label embedding similarities, a bag-of-words encoding with logistic regression (_BOW+LR_), Low Dimensional Semantic Embeddings (_LDSE_), and fine-tuned language-specific transformers (Transformer): _DeBERTaV3_ (He, Gao, and Chen, 2021)12 for English and _RoBERTa-BNE_ (Fandino et al., 2022)13 for Spanish. These baselines consist of the following:
Footnote 12: [https://tinyurl.com/debertav3](https://tinyurl.com/debertav3)
Footnote 13: [https://tinyurl.com/robertabne](https://tinyurl.com/robertabne)
Footnote 14: [https://www.symanto.com/nlp-tools/symanto-brain/](https://www.symanto.com/nlp-tools/symanto-brain/)
Footnote 15: Hum: “_This text has been written by a human._” Gen: “_This text has been automatically generated by a bot._”
Random. The random baseline assuming class balance, defined as \(\frac{1}{C}\), where \(C\) is the number of classes.
SB-ZS and SB-FS. Zero-shot and few-shot Symanto Brain API,14 a Symanto solution optimized for highly efficient and scalable state-of-the-art zero-shot and few-shot classification (Mueller, Perez-Torro, and Franco-Salvador, 2022). We verbalize labels for Subtask 1,15 but not for Subtask 2, given the anonymity of the classes. For _SB-FS_ we use 1024 shots.
BOW+LR. We encode the texts with bags of n-grams, using the top 5K word \(n\)-grams, \(n\in\{1,2\}\), and character \(n\)-grams, \(n\in\{2,\ldots,6\}\), following (Pizarro, 2019). We train a _Logistic Regression_ model offered by scikit-learn (Pedregosa et al., 2011) with default parameters on z-score normalized and concatenated features.
LDSE. We represent texts on the basis of the probability distribution of occurrence of their tokens in the different classes with _LDSE_ (Rangel, Franco-Salvador, and Rosso, 2018). We train an _SVM_ classifier provided by scikit-learn (Pedregosa et al., 2011) with default parameters.
Transformer. We use the HuggingFace ecosystem (Wolf et al., 2020) to fine-tune a pre-trained Transformer with a randomly initialized classification head for 5 epochs with default hyperparameters. We use a batch size of 32 texts for _DeBERTaV3_ and a batch size of 64 for _RoBERTa-BNE_.
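For reference, a hedged scikit-learn sketch of the _BOW+LR_ baseline could look as follows; interpreting the 5K cap as applying to each vectorizer and the sparse-friendly scaling (no mean subtraction) are our assumptions.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline, make_union
from sklearn.preprocessing import StandardScaler

word_ngrams = CountVectorizer(analyzer="word", ngram_range=(1, 2), max_features=5000)
char_ngrams = CountVectorizer(analyzer="char", ngram_range=(2, 6), max_features=5000)

bow_lr = make_pipeline(
    make_union(word_ngrams, char_ngrams),   # concatenated word and character features
    StandardScaler(with_mean=False),        # z-score scaling compatible with sparse input
    LogisticRegression(),                   # default scikit-learn parameters
)
# bow_lr.fit(train_texts, train_labels); predictions = bow_lr.predict(test_texts)
```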
### Evaluation
The submissions for both subtasks are evaluated with the Macro-F\({}_{1}\) score. Statistical significance is computed through bootstrapping with replacement at a confidence level of \(\alpha=0.95\) with 1,000 resamples.
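A minimal sketch of this bootstrap procedure is given below: 1,000 resamples with replacement of the test predictions and a 95% interval on the Macro-F\({}_{1}\) score. Function and variable names are illustrative.

```python
import numpy as np
from sklearn.metrics import f1_score

def bootstrap_macro_f1(y_true, y_pred, n_resamples=1000, alpha=0.95, seed=0):
    rng = np.random.default_rng(seed)
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    scores = []
    for _ in range(n_resamples):
        idx = rng.integers(0, len(y_true), size=len(y_true))   # resample with replacement
        scores.append(f1_score(y_true[idx], y_pred[idx], average="macro"))
    lo, hi = np.percentile(scores, [(1 - alpha) / 2 * 100, (1 + alpha) / 2 * 100])
    return float(np.mean(scores)), (float(lo), float(hi))
```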
### Subtask 1: MGT Detection
For the MGT detection subtask, we received 71 submissions from 36 different teams in English, and 47 submissions from 23 teams in Spanish. Tables 4 and 5 show the top-3 performing teams, the weakest team, as well as the first team that beats each baseline, both for English and Spanish.
The best system was proposed by the _TALN-UPF_ team, with 80.91 and 70.77 Macro-F\({}_{1}\) scores in English and Spanish. In English, the best team is significantly better than the second-best ranked team. However, in Spanish there are no significant differences between the two best teams and the best baseline. In Figure 3, we illustrate the rank-ordered Macro-F\({}_{1}\) scores for all the teams in both languages.
Many teams surpassed the best baseline in English by large margins, whereas for Spanish only two teams were able to outperform it with small differences in Macro-F\({}_{1}\). Moreover, the performance of the top-11 ranked teams in English is higher than the performance of the best team in Spanish. This could suggest that detecting MGT and generalizing to new domains is easier in English than in Spanish, either due to language idiosyncrasies or because of the larger availability and quality of English NLP models. For both languages, we observe a linear relationship between the rank-ordered Macro-F\({}_{1}\) scores, with a small set of outliers in both tails. This hints that, even though the resulting Macro-F\({}_{1}\) scores in each language are in different ranges, there is similar variability and difficulty in both languages. The teams' systems cover almost the entire Macro-F\({}_{1}\) range in both languages, and,
\begin{table}
\begin{tabular}{l c c c} \hline
**Rank** & **Team** & **Run** & **Macro-F\({}_{1}\)** \\ \hline
1 & TALN-UPF & HB\_plus & **70.77** \\
2 & Ling-UCM & run1 & 70.60 \\
3 & Transformer & baseline & 68.52 \\
20 & GLPSI & run3 & 63.90 \\
21 & LDSE & baseline & 63.58 \\
25 & turing\_testers & run1 & 62.77 \\
26 & BOW+LR & baseline & 62.40 \\
39 & bucharest & run2 & 56.49 \\
40 & SB-FS & baseline & 56.05 \\
46 & ANLP & run1 & 51.38 \\
47 & Random & baseline & 50.00 \\
50 & UAEMex & run3 & 35.17 \\
51 & SB-ZS & baseline & 34.58 \\
53 & LKE\_BUAP & run3 & 31.60 \\ \hline \end{tabular}
\end{table}
Table 5: Ranking of Subtask 1 (Spanish).
Figure 3: Rank-ordered Macro-F\({}_{1}\) with error bars for Subtask 1 in English (top) and Spanish (bottom). Colored lines are baselines.
\begin{table}
\begin{tabular}{l c c c} \hline
**Rank** & **Team** & **Run** & **Macro-F\({}_{1}\)** \\ \hline
1 & TALN-UPF & HB\_plus & **80.91** \\
2 & TALN-UPF & HB & 74.16 \\
3 & CIC-IPN-CsCog & run2 & 74.13 \\
22 & turquoise\_titans & run1 & 65.79 \\
23 & BOW+LR & baseline & 65.78 \\
33 & turing\_testers & run3 & 60.64 \\
34 & LDSE & baseline & 60.35 \\
37 & OD-21 & run3 & 59.49 \\
38 & SB-FS & baseline & 59.44 \\
51 & swissnlp\_team & run2 & 57.20 \\
52 & Transformer & baseline & 57.10 \\
69 & UMZ & run1 & 50.18 \\
70 & Random & baseline & 50.00 \\
74 & SB-ZS & baseline & 43.47 \\
77 & UAEMex & run1 & 33.87 \\ \hline \end{tabular}
\end{table}
Table 4: Ranking of Subtask 1 (English).
in many cases, they are very similar (same Transformer-based models, similar linguistic features, etc.). Therefore, one has to be careful when developing an MGT detector: small changes can lead to large improvements or declines.
We also include fine-grained results per domain and per class in Figure 4. When observing the domain-wise Macro-F\({}_{1}\) scores in Figure 4(a), we find that the systems generalized better to reviews than to news, with a mean Macro-F\({}_{1}\) below the random baseline for the latter. Furthermore, both domains show long-tailed distributions, revealing the variability in generalization capabilities of the systems. Concerning class-wise F\({}_{1}\) scores in Figure 4(b), we find that the systems are better at classifying generated text, and there is lower dispersion among the systems' F\({}_{1}\) scores for this class than for the human class. From the precision-recall distributions depicted in Figure 4(c), we observe that systems are more biased towards predicting text to be generated (high recall), often doing so incorrectly (low precision). We observe the opposite for human texts: few predictions (low recall) that are mostly correct (high precision). All the conclusions above hold for both languages.
For the sake of completeness, we refer the reader to the AuTexTification repository,16 which includes additional plots, the most difficult and easiest examples for the systems, complete rankings including submissions outside the competition, etc.
Footnote 16: [https://tinyurl.com/overview-results](https://tinyurl.com/overview-results)
### Subtask 2: MGT Attribution
For the MGT Attribution subtask we received 33 submissions from 19 different teams in English, and 24 submissions from 14 teams in Spanish. Tables 6 and 7 show the top-3 performing teams, the weakest team, as well as the first team that beats each baseline, both for English and Spanish.
The best system was submitted by team _Drocks_, obtaining 62.5 and 65.37 Macro-F\({}_{1}\) scores for English and Spanish, respectively. This is in contrast to the best scores of Subtask 1 nearing 80 and 70 Macro-F\({}_{1}\), showing that in-domain MGT attribution is more difficult than out-of-domain MGT detection. In this subtask, teams did not deviate significantly from the baselines, and for both languages the relative ranking of baselines remained the same, as opposed to Subtask 1. Rank ordered Macro-F\({}_{1}\) scores for both languages are presented in Figure 5. Few teams were able to surpass the best baselines, with most submissions performing between the top-2 baseline scores. Similarly to Subtask 1, we observe a linear relationship between rank and Macro-F\({}_{1}\) with outliers in the right tail, meaning that there is variability and difficulty in attributing MGT irrespective of language. However, teams cover
Figure 4: Fine-grained plots for Subtask 1 in English (top) and Spanish (bottom).
a smaller range of Macro-F\({}_{1}\) scores than in Subtask 1, suggesting there is less variability when attributing MGT than when detecting it. In contrast to Subtask 1, teams generally obtained better Macro-F\({}_{1}\) scores in Spanish than in English, but the differences were marginal, which could be because of the randomness of the learning procedures or due to the smaller number of participants in Subtask 2. Generally, MGT attribution appears promising but limited, suggesting the need for further research into new approaches or framings of the problem. Fine-grained per-domain and per-class results for Subtask 2 are presented in Figure 6. Per-domain results (Figure 6(a)) show that attribution of generated tweets is much more difficult than in the remaining domains. For tweets, systems are unable to reach 50% Macro-F\({}_{1}\), while for the other domains they surpass it by a large margin. We additionally find many outliers toward lower scores, indicating the difficulty of the task. Finally, most domains have similar distributions centered around different medians, meaning that the variability of participating systems is maintained through all five domains. We also present per-class results in Figure 6(b), where we find that it is easier to attribute generated text to _BLOOM-1B7_ and _text-davinci-003_. Moreover, we observe large variability for _curie_, while the other classes have narrower distributions.
Additionally, we computed overall confusion matrices by taking the median at each position of the confusion matrix across all the participants' systems. Figure 6(c) shows the results for English and Spanish. In both languages, the largest confusions are between models within the same family, suggesting that it is easier to distinguish generation models from different families. Besides, _text-davinci-003_ is the model with the fewest confusions, being different enough to be easily distinguished from the other models.
Once again, we refer to the AuTexTification repository[16] for additional plots, results and analyses.
## 5 Conclusions and Future Work
This paper describes the AuTexTification shared task at IberLEF 2023, which aimed to study the automatic detection of MGT in cross-domain scenarios and MGT attribution to specific generation models, across five domains and two languages. The AuTexTification dataset was comprised of around 160,000 texts collected through an automatic data gathering process which can be easily extended to new domains and languages. The task received a significant amount of participation: 175 runs from 36 teams, belonging to 30 different institutions and 18 different
\begin{table}
\begin{tabular}{l r r r} \hline
**Rank** & **Team** & **Run** & **Macro-F\({}_{1}\)** \\ \hline
1 & Drocks & run3 & **62.50** \\
2 & Drocks & run1 & 61.29 \\
3 & Drocks & run2 & 61.27 \\
4 & ViDa & run1 & 60.99 \\
5 & Transformer & baseline & 60.42 \\
31 & LKE\_BUAP & run1 & 45.62 \\
32 & LDSE & baseline & 44.56 \\
33 & turquoise\_titans & run2 & 43.37 \\
34 & BOW+LR & baseline & 39.98 \\
35 & UAEMex & run2 & 33.19 \\
36 & SB-FS & baseline & 28.94 \\
37 & Random & baseline & 16.66 \\
38 & SB-ZS & baseline & 15.70 \\
39 & ANLP & run1 & 14.61 \\ \hline \end{tabular}
\end{table}
Table 6: Ranking Subtask 2 (English).
\begin{table}
\begin{tabular}{l r r r} \hline
**Rank** & **Team** & **Run** & **Macro-F\({}_{1}\)** \\ \hline
1 & Drocks & run2 & **65.37** \\
2 & Drocks & run3 & 64.72 \\
3 & Drocks & run1 & 64.17 \\
7 & TALN-UPF & Hybrid\_plus & 61.45 \\
8 & Transformer & baseline & 61.34 \\
20 & immsLPN & run1 & 51.43 \\
21 & LDSE & baseline & 45.46 \\
22 & BOW+LR & baseline & 45.31 \\
25 & UAEMex & run2 & 33.78 \\
26 & SB-FS & baseline & 31.38 \\
28 & ANLP & run1 & 17.93 \\
29 & Random & baseline & 16.66 \\
30 & SB-ZS & baseline & 16.23 \\ \hline \end{tabular}
\end{table}
Table 7: Ranking Subtask 2 (Spanish).
Figure 5: Rank-ordered Macro-F\({}_{1}\) for Subtask 2 in English (top) and Spanish (bottom). Colored lines are baselines.
countries, thus showing the overall interest of the community in addressing MGT detection and attribution. Moreover, another 17 teams submitted 58 runs, although after the deadline, for a total of 233 runs by 53 teams.
The participating systems relied on a wide variety of approaches, with a strong trend towards the use of pre-trained Transformer models. Ensembles of pre-trained models and combinations of probabilistic, lexical, and stylometric features led to the best performing systems in both subtasks. The results suggest that cross-domain MGT detection is easier in English than in Spanish, and that MGT attribution is generally more challenging than MGT detection. While MGT attribution appears promising, the small gap between the participant's systems and the baselines encourage further research. Overall, the results suggest that MGT detection and attribution remain challenging tasks and there is potential for further progress.
As future work, we hope to expand the AuTexTification dataset to include more languages, domains, generation models and decoding strategies, to encourage the development of more robust and generalizable systems. Furthermore, it would be valuable to explore alternative formulations of MGT attribution, as fine-grained attribution remains a challenging task.
## Acknowledgements
We would like to thank Guillermo Perez-Torro, Ian Borrego Obrador, and Angelo Basile for their precious help participating in the human assessment, and Mara Chinea Rios for developing a custom implementation of the LDSE baseline.
The work from Symanto has been partially funded by the Pro\({}^{2}\)Haters - Proactive Profiling of Hate Speech Spreaders (CDTi IDI-20210776), the XAI-DisInfodemics: eXplainable AI for disinformation and conspiracy detection during infodemics (MICIN PLEC2021-007681), the OBULEX - _OBservatorio del Uso de Lenguage sEXista en la red_ (IVACE IMINOD/2022/106), and the ANDHI - ANomalous Diffusion of Harmful Information (CPP2021-008994) R&D grants. The work of Areg Mikael Sarvazyan has been partially developed with the support of valgrAI - Valencian Graduate School and Research Network of Artificial Intelligence and the Generalitat Valenciana, and co-founded by the European Union. The research at the Universitat Politecnica de Valencia was framed under the FairTransNLP research project, Grant PID2021-124361OB-C31 funded by MCIN/AEI/10.13039/501100011033 and by ERDF, EU A way of making Europe.
Figure 6: Fine-grained plots for Subtask 2 in English (top) and Spanish (bottom). B- prefix denotes _BLOOM_ models and G- prefix denotes _GPT_ models. |
2307.00142 | BuildingsBench: A Large-Scale Dataset of 900K Buildings and Benchmark
for Short-Term Load Forecasting | Short-term forecasting of residential and commercial building energy
consumption is widely used in power systems and continues to grow in
importance. Data-driven short-term load forecasting (STLF), although promising,
has suffered from a lack of open, large-scale datasets with high building
diversity. This has hindered exploring the pretrain-then-fine-tune paradigm for
STLF. To help address this, we present BuildingsBench, which consists of: 1)
Buildings-900K, a large-scale dataset of 900K simulated buildings representing
the U.S. building stock; and 2) an evaluation platform with over 1,900 real
residential and commercial buildings from 7 open datasets. BuildingsBench
benchmarks two under-explored tasks: zero-shot STLF, where a pretrained model
is evaluated on unseen buildings without fine-tuning, and transfer learning,
where a pretrained model is fine-tuned on a target building. The main finding
of our benchmark analysis is that synthetically pretrained models generalize
surprisingly well to real commercial buildings. An exploration of the effect of
increasing dataset size and diversity on zero-shot commercial building
performance reveals a power-law with diminishing returns. We also show that
fine-tuning pretrained models on real commercial and residential buildings
improves performance for a majority of target buildings. We hope that
BuildingsBench encourages and facilitates future research on generalizable
STLF. All datasets and code can be accessed from
https://github.com/NREL/BuildingsBench. | Patrick Emami, Abhijeet Sahu, Peter Graf | 2023-06-30T21:26:24Z | http://arxiv.org/abs/2307.00142v3 | BuildingsBench: A Large-Scale Dataset of 900K Buildings and Benchmark for Short-Term Load Forecasting
###### Abstract
Short-term forecasting of residential and commercial building energy consumption is widely used in power systems and continues to grow in importance. Data-driven short-term load forecasting (STLF), although promising, has suffered from a lack of open, large-scale datasets with high building diversity. This has hindered exploring the pretrain-then-finetune paradigm for STLF. To help address this, we present BuildingsBench, which consists of 1) Buildings-900K, a large-scale dataset of 900K simulated buildings representing the U.S. building stock, and 2) an evaluation platform with over 1,900 real residential and commercial buildings from 7 open datasets. BuildingsBench benchmarks two under-explored tasks: zero-shot STLF, where a pretrained model is evaluated on unseen buildings without fine-tuning, and transfer learning, where a pretrained model is fine-tuned on a target building. The main finding of our benchmark analysis is that synthetically pretrained models generalize surprisingly well to real commercial buildings. An exploration of the effect of increasing dataset size and diversity on zero-shot commercial building performance reveals a power-law with diminishing returns. We also show that fine-tuning pretrained models on real commercial and residential buildings improves performance for a majority of target buildings. We hope that BuildingsBench encourages and facilitates future research on generalizable STLF. All datasets and code can be accessed from [https://github.com/NREL/BuildingsBench](https://github.com/NREL/BuildingsBench).
## 1 Introduction
Residential and commercial buildings in the United States are responsible for about 40% of energy consumption and 35% of greenhouse gas emissions [10]. Globally, these estimates are respectively 30% and 27% [21]. Building energy demand forecasting plays a part in reducing global emissions by helping to decarbonize the building sector.
Short-term load forecasting (STLF), which typically ranges from hour-ahead to a few days ahead, plays a multitude of critical roles. STLF can help match shifting energy supply with customer demand as well as aid energy markets with accurately setting prices based on forecasted supply/demand [18]. Accurate forecasts can be directly used by reinforcement learning [40] and model predictive control [1; 9] for optimal building energy management.
However, STLF remains a challenging problem as energy demand can fluctuate heavily due to a variety of unobserved and exogenous factors. As such, data-driven methods have risen in popularity to address STLF [2; 46]. Interest in these techniques has also been spurred by a rise in deployments of advanced sensors (i.e., smart meters) that record building energy consumption. However, the number of instrumented buildings with publicly released data remains low. Moreover, it is typical for multiple years of historical data to be reserved for training and validation, with trained models tested
on a single held-out year _for the same building_[26]. This incurs a lengthy data collection period per building that is not scalable. There is thus a lack of sufficiently large publicly available datasets (Table 1) which has hindered the investigation of large-scale pretraining (i.e., one model trained on every building [7; 17]). Recent successes of large-scale pretraining outside of language and vision include pretraining on one million hours of audio for speech recognition [47], suggesting its promise for STLF.
In this work, we introduce BuildingsBench, an open source platform for large-scale pretraining and for benchmarking zero-shot STLF [7; 17] and transfer learning for STLF [43; 14]. In zero-shot STLF, models forecast loads for unseen target buildings _without any fine-tuning_. This drastically reduces time-to-deployment on newly instrumented buildings. In transfer learning, a pretrained model is fine-tuned on a target building assuming a limited yet realistic amount of data (e.g., 6 months).
As part of BuildingsBench, we introduce Buildings-900K, a dataset of nearly one million _simulated_ time series, which approaches the scale of datasets in natural language processing and computer vision. This data is sourced from the NREL End-Use Load Profiles (EULP) database [41]. The EULP is a multi-year multi-institution effort to create a statistically representative database of the entire U.S. building stock's energy consumption via careful calibration and validation of physics-based building simulations. We also provide an evaluation suite that combines 7 open datasets totalling over 1,900 _real_ buildings (Table 2). Example time series from the simulated and real datasets are shown in Figure 1. Uniquely, BuildingsBench has simulated and real building energy consumption time series spanning a wide range of geographic locations, years, and types of both residential _and_ commercial buildings.
Our platform automates the evaluation of a variety of simple and advanced baselines on the two tasks. Our results on zero-shot STLF reveal that synthetically pretrained models achieve strong performance
Figure 1: **BuildingsBench gallery**. Top row: commercial buildings (farthest left is simulated commercial Buildings-900K data). Second and third rows are residential buildings (farthest left in the second row is simulated residential Buildings-900K data).
on real commercial buildings. We observe a smaller distribution shift between simulated and real commercial buildings than residential buildings. We also show that pretrained models can further improve performance by fine-tuning on real commercial and residential building data. Buildings-900K also enables studying large-scale pretraining of transformers on geographical time series. The utility of transformers for forecasting has recently been questioned [44], but a lack of sufficiently large public time series datasets has made investigating this challenging. Our main finding is a power-law scaling with diminishing returns between dataset scale (size and diversity) and generalization (for commercial buildings). We expect that BuildingsBench will facilitate research on new modeling techniques and large-scale datasets for generalizable STLF.
To summarize, we contribute 1) Buildings-900K, a simulated dataset for large-scale pretraining, 2) a platform for benchmarking zero-shot STLF and transfer learning, and 3) valuable insights on model pretraining for STLF.
## 2 Short-term Load Forecasting
The BuildingsBench benchmark considers the following univariate forecasting problem. Given \(H\) past load values \(x_{t-H:t}\) and \(H+T\) covariates \(z_{t-H:t+T}\) (defined in Section 3.1), we aim to predict the conditional distribution for \(T\) unobserved future load values \(y_{t+1:t+T}\):
\[p(y_{t+1},\ldots,y_{t+T}\mid x_{t-H},\ldots,x_{t},z_{t-H},\ldots,z_{t+T}). \tag{1}\]
We consider a day-ahead STLF scenario with \(H=168\) hours (one week) and \(T=24\) hours. A primary goal of BuildingsBench is to study and evaluate STLF models which learn a single set of parameters \(\theta\) shared by all buildings for the distribution in Eq. 1. We use a probabilistic formulation for our benchmark since applications of STLF increasingly require uncertainty estimates, such as planning and scheduling of renewable energy sources for buildings [18].
## 3 The Buildings-900K Dataset
In this section, we introduce our dataset for pretraining STLF models.
**Simulated data source:** Our dataset is sourced from the NREL EULP [41] database. The EULP provides 15-minute resolution appliance-level consumption simulated with EnergyPlus [8; 27] for a base set of 900K building models (550K residential and 350K commercial) spread across all climate regions in the U.S. It aims to provide a statistically representative picture of the entire U.S. building stock at various aggregation levels (county, state, etc.) and under various electrification scenarios. The building simulations were extensively calibrated over a three year period with advanced metering data (\(\sim\) 2.3 million meters), data from 11 utilities, as well as other public/private datasets related to energy usage. Socio-economic building characteristics, including location based on Public Use Microdata Area (PUMA, \(\sim\)2400 PUMAs in the U.S.--see inset), are sampled from distributions generated from U.S. Census survey responses. Please see Wilson et al. [41] for the complete description.
**Processing and storage:** To create Buildings-900K, we extract 900K total energy consumption time series (in kilowatt-hours (kWh)) from each of the non-upgraded buildings in the 2021 version of the EULP. To promote accessibility of our dataset, we also aggregate the 15-minute resolution to hourly to reduce the size. This data requires about 110 GB to store, significantly less than the entire EULP (70+ TB). We store all buildings within each PUMA in a single Parquet file for each of the two available years (2018 and an aggregated "typical meteorological year" [41]) and building types (residential/commercial), which amounts to 9,600 Parquet files. Each file has a column for the timestamp and a column for each building's energy consumption (8,760 rows per file). Processing the EULP to extract this data took \(\sim\)3 days with Apache PySpark on a 96-core AWS cloud instance.
**Pretraining splits and loading:** A validation set is created by withholding the final two weeks of 2018. The test set consists of buildings in four PUMAs that are withheld from both training and validation. All splits use a 24-hour sliding-window to extract 192-hour load sub-sequences.
Since shuffling large datasets of sub-sequences is computationally demanding, our platform provides a custom PyTorch [30] Dataset that creates an index file to map a shuffled list of integers to a sub-sequence. Each line of the index file is accessible in \(O(1)\) time with Python's seek function. Apache PyArrow is used to efficiently load the indexed building's time series into memory.
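A minimal sketch of this index-file pattern is shown below. Class, argument, and helper names are illustrative rather than the platform's actual API, and `read_building_column` is a hypothetical helper standing in for the PyArrow-based Parquet read.

```python
import torch
from torch.utils.data import Dataset


class IndexedSubsequenceDataset(Dataset):
    """Maps a (pre-shuffled) integer id to one 192-hour sub-sequence through a
    fixed-width index file, so each lookup needs only a single O(1) seek."""

    def __init__(self, index_file, line_width, context_len=168, horizon=24):
        self.index_file = index_file
        self.line_width = line_width              # every index line is padded to this byte width
        self.seq_len = context_len + horizon      # 192 hourly loads per sample
        with open(index_file, "rb") as f:
            f.seek(0, 2)                          # jump to end of file to count entries
            self.num_samples = f.tell() // line_width

    def __len__(self):
        return self.num_samples

    def __getitem__(self, idx):
        with open(self.index_file, "rb") as f:
            f.seek(idx * self.line_width)         # O(1) jump to the idx-th index entry
            parquet_path, building_col, start = f.read(self.line_width).split()
        # read_building_column is a hypothetical helper that uses pyarrow to pull
        # one building's hourly series out of the PUMA-level Parquet file.
        series = read_building_column(parquet_path.decode(), building_col.decode())
        start = int(start)
        window = series[start:start + self.seq_len]
        return torch.as_tensor(window, dtype=torch.float32)
```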
**Hosting and licensing:** Buildings-900K is hosted by the Open Energy Data Initiative and is available for download with a permissive CC-4.0 license (link available in App. A).
### Feature extraction
Beyond the load time series, we extract covariates on-the-fly to condition forecasts on:
* Calendar features--day of the week, day of the year, and hour of the day--are generated from the timestamp, which we then cyclically encode with sine and cosine.
* The latitude and longitude coordinate of the centroid of the building's PUMA (using metadata provided by the EULP) for encoding geospatial information.
* A binary feature for building type (residential (0) or commercial (1)).
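Putting the three covariate groups above together, the on-the-fly extraction could look roughly like the following sketch; function and argument names are placeholders, not the released code.

```python
import numpy as np
import pandas as pd


def extract_covariates(timestamps: pd.DatetimeIndex, puma_lat: float,
                       puma_lon: float, is_commercial: bool) -> np.ndarray:
    """Per-hour covariates: cyclically encoded calendar features, the PUMA
    centroid coordinates, and a binary building-type flag."""
    def cyclic(values, period):
        angle = 2.0 * np.pi * np.asarray(values) / period
        return np.sin(angle), np.cos(angle)

    dow_sin, dow_cos = cyclic(timestamps.dayofweek, 7)       # day of week
    doy_sin, doy_cos = cyclic(timestamps.dayofyear, 366)     # day of year
    hod_sin, hod_cos = cyclic(timestamps.hour, 24)           # hour of day
    n = len(timestamps)
    return np.column_stack([
        dow_sin, dow_cos, doy_sin, doy_cos, hod_sin, hod_cos,
        np.full(n, puma_lat), np.full(n, puma_lon),
        np.full(n, float(is_commercial)),
    ]).astype(np.float32)
```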
## 4 BuildingsBench Evaluation Platform
BuildingsBench provides an open source software platform for evaluating models on zero-shot and transfer learning for STLF using a collection of _real building datasets_. To avoid confusion, in our analysis (Sec. 5) we will use the name BuildingsBench to refer _only to the real building evaluation data_ and explicitly state whether the results are for the Buildings-900K test set (_simulated_) or for BuildingsBench (_real_) (Table 2). We now describe each task in more detail.
**Zero-shot STLF:** For this task, pretrained models are asked to provide day-ahead forecasts for unseen simulated and real buildings. Our use of "zero-shot" here refers to a setting where only one week of historical data is presumed available, and is thus insufficient for training or fine-tuning a deep neural network. All available years are used for each test building.
**Transfer learning**: Our transfer learning task assumes 6 months of metered data has been collected for each target building, about \(\sim\)180 training samples. Fine-tuned models are then tasked with day-ahead forecasting for the next 6 months using a 24-hour sliding window.
### Real building datasets
Here, we briefly describe the real building data used for evaluating the two tasks and defer additional details to App. C. All together, our real building benchmark has over 1,900 buildings and 1.2M days of energy usage.
**Electricity**[37]: 370 commercial buildings in Portugal with data spanning 2011-2014.
| Dataset | # buildings | Open access | Residential | Commercial | Total hours | # Sites |
| --- | --- | --- | --- | --- | --- | --- |
| Pecan Street [36] | 1,000 | ✗ | ✓ | ✗ | ? | ? |
| Electricity [37] | 370 | ✓ | ✗ | ✓ | \(\sim\)12.9M | 1 |
| Building Data Genome Project 2 [25] | 1,636 | ✓ | ✗ | ✓ | \(\sim\)53.6M | 19 |
| Low Carbon London [28] | 5,567 | ✓ | ✓ | ✗ | \(\sim\)97.5M | 1 |
| Buildings-900K (_simulated_) | **900,000** | ✓ | ✓ | ✓ | \(\sim\)**15B** | **2400** |

Table 1: Comparing popular building energy consumption datasets to our STLF dataset.

| Dataset | Residential | Commercial | Years spanned | # Sites | Total days |
| --- | --- | --- | --- | --- | --- |
| Buildings-900K test (_simulated_) | 915 | 565 | TMY, 2018 | 4 | 1.1M |
| BuildingsBench (_real_) | 953 | 970 | 2007–2018 | 10 | 1.2M |

Table 2: BuildingsBench day-ahead STLF evaluation data.
**Building Data Genome (BDG) Project 2**[25]: 1,636 commercial buildings across 19 sites in 2016 and 2017. We include buildings from 4 U.S. sites (Panther, Fox, Bear, Rat).
**Low Carbon London**[28]: Energy consumption meter readings from 5,567 London, UK households between 2011-2014. To keep the overall number of residential and commercial buildings roughly the same, we keep a random sample of 713 buildings.
**SMART**[5]: Meter readings for 7 homes in western Massachusetts, U.S., between 2014-2016.
**IDEAL**[31]: Electricity meter data from 255 homes in Edinburgh, UK, between 2017-2018.
**Individual household electric power consumption**[15]: Energy consumption from a single household in Sceaux, near Paris, between 2007-2010.
**Borealis**[4]: 6-second load measurement recorded for 30 homes in Waterloo, ON in 2011-2012.
**Processing and storage:** We resample each time series to hourly if the data is sub-hourly. Smart meter time series typically have missing values, sometimes extending over weeks or months. Buildings with more than 10% of the data missing were not included. For included buildings, we linearly interpolate missing values, and if values are missing for a span greater than 1 week, we fill with zeros. We also provide an option to exclude buildings with a max hourly consumption \(>\) 1.3 MW (17 from BDG-2 and 61 from Electricity) to keep the range of consumption values similar between Buildings-900K and BuildingsBench test data, as extrapolation is not our focus. Each annual consumption time series per building is stored as a CSV file.
**Hosting and licensing:** We host the processed versions of each dataset along with the original permissive licenses alongside Buildings-900K under the same CC-4.0 license. Our code is open sourced under a BSD-3 license.
### Evaluation metrics
We primarily evaluate task performance with two metrics, the normalized root mean square error (NRMSE) and the ranked probability score (RPS). The NRMSE is widely used as it captures the ability to predict the correct load shape. For a target building with \(M\) days of load time series,
\[NRMSE:=100\times\frac{1}{\bar{y}}\sqrt{\frac{1}{24M}\sum_{j=1}^{M}\sum_{i=1}^{24}(y_{i,j}-\hat{y}_{i,j})^{2}}, \tag{2}\]
where \(\hat{y}\) is the predicted load, \(y\) is the actual load, and \(\bar{y}\) is the average actual load over all \(M\) days. In the appendix, we also report the normalized mean absolute error and normalized mean bias error (see descriptions in App. D).
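A direct NumPy translation of Eq. 2, shown only as a sketch rather than the benchmark's reference implementation:

```python
import numpy as np


def nrmse_percent(y_true: np.ndarray, y_pred: np.ndarray) -> float:
    """Eq. 2: RMSE over all hourly errors of a building, normalized by the
    building's mean actual load, expressed in percent."""
    y_true = np.asarray(y_true, dtype=float).ravel()
    y_pred = np.asarray(y_pred, dtype=float).ravel()
    rmse = np.sqrt(np.mean((y_true - y_pred) ** 2))
    return 100.0 * rmse / y_true.mean()
```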
The RPS is a well-known metric for uncertainty quantification in probabilistic forecasting [13]. It compares the squared error between two cumulative distribution functions (CDFs), the predicted CDF and the observation represented as a CDF. Define the indicator function \(\mathbf{1}_{y_{i}\leq y}\) as 1 if the actual load \(y_{i}\) is \(\leq y\) and 0 otherwise. The continuous RPS for a predicted CDF \(\hat{F}\) for the load at hour \(i\) is
\[RPS:=\int_{0}^{\infty}(\hat{F}_{i}(y)-\mathbf{1}_{y_{i}\leq y})^{2}dy. \tag{3}\]
Our platform implements the closed form RPS for Gaussian CDFs as well as a discrete RPS for categorical distributions (used by baselines that discretize the positive real line for token-based load forecasting, see Sec. 4.3). These are formally defined in App. D.
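As an illustration, the closed-form RPS of a Gaussian predictive distribution can be computed as in the following sketch; this is the standard closed-form expression, not necessarily the platform's exact helper.

```python
import numpy as np
from scipy.stats import norm


def gaussian_crps(y, mu, sigma):
    """Closed-form continuous ranked probability score of N(mu, sigma^2)
    against observed loads y (elementwise); averaging over hours is left
    to the caller."""
    z = (np.asarray(y, dtype=float) - mu) / sigma
    return sigma * (z * (2.0 * norm.cdf(z) - 1.0)
                    + 2.0 * norm.pdf(z)
                    - 1.0 / np.sqrt(np.pi))
```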
### Baselines
Various baselines are implemented and benchmarked in the BuildingsBench platform. For zero-shot STLF, we pretrain a representative time series transformer [42] on Buildings-900K. Transformers have recently gained significant interest for STLF [17; 20; 33; 45]. The model is described in brief here and with more detail in App. E.
**Transformer (Gaussian)**: This model is the original encoder-decoder transformer [39] re-purposed for autoregressive time series forecasting as proposed in Wu et al. [42]. It predicts a Gaussian distribution at time \(t+i\) conditioned on the past week and the previous \(i-1\) predictions. We train
it to minimize the Gaussian negative log-likelihood loss. Since our demand time series are highly non-Gaussian (periods of low consumption interspersed with bursts), the loss diverged during training when we used standard scaling to normalize the data. Using a Box-Cox power transformation [6] removes this instability. To compute the RPS, we approximate a Gaussian in the unscaled space by backprojecting the scaled standard deviation (see App. D for approximation details).
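One possible way to apply such a transform with off-the-shelf tools is sketched below; the small positive offset is an assumption to satisfy Box-Cox's positivity requirement, and the helpers are illustrative rather than the released training code.

```python
import numpy as np
from sklearn.preprocessing import PowerTransformer


def fit_boxcox_scaler(train_loads_kwh):
    """Box-Cox requires strictly positive inputs, so a tiny offset guards
    against zero meter readings before fitting the transform."""
    scaler = PowerTransformer(method="box-cox", standardize=True)
    scaler.fit(np.asarray(train_loads_kwh, dtype=float).reshape(-1, 1) + 1e-3)
    return scaler


def to_model_space(scaler, loads_kwh):
    """Forward transform applied to loads before they enter the transformer."""
    return scaler.transform(np.asarray(loads_kwh, dtype=float).reshape(-1, 1) + 1e-3)


def to_kwh(scaler, scaled):
    """Back-projection used when computing metrics in the unscaled space."""
    return scaler.inverse_transform(scaled) - 1e-3
```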
**Transformer (Tokens)**: We also implement a transformer variant that uses _tokenization_ to predict discrete load tokens. This baseline allows us to test whether quantizing loads into tokens is beneficial for large-scale pretraining. We also found that a comparison between tokenization and Gaussian time series transformers was missing from the literature. The model is trained to predict a categorical distribution over a vocabulary of load tokens by minimizing a multi-class cross-entropy loss. To quantize loads, we use a simple strategy--faiss-gpu [22] KMeans clustering with \(K\) = 8,192 fit to the Buildings-900K training set. We merge clusters \(<\)10 Watts apart to obtain a compact vocabulary size of 2,274 tokens. See App. E for analysis on our tokenizer design.
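A rough sketch of this tokenizer design follows; the paper fits faiss-gpu KMeans on the full training set, so the sklearn call and the 0.01 kWh merge threshold (10 W over one hour) are stand-in assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans


def build_load_vocab(train_loads_kwh, k=8192, merge_gap_kwh=0.01):
    """Fit KMeans on scalar hourly loads, then merge near-duplicate centroids
    that sit closer together than the merge threshold."""
    km = KMeans(n_clusters=k, n_init=1).fit(np.asarray(train_loads_kwh).reshape(-1, 1))
    centroids = np.sort(km.cluster_centers_.ravel())
    vocab = [centroids[0]]
    for c in centroids[1:]:
        if c - vocab[-1] >= merge_gap_kwh:        # drop centroids closer than ~10 W
            vocab.append(c)
    return np.asarray(vocab)


def tokenize(loads_kwh, vocab):
    """Index of the nearest vocabulary entry for each load value."""
    return np.abs(np.asarray(loads_kwh)[:, None] - vocab[None, :]).argmin(axis=1)
```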
Each transformer is pretrained jointly on residential and commercial buildings at three different sizes: **Transformer-S** (3M params), **Transformer-M** (17M params), and **Transformer-L** (160M params). Models are trained on 1 billion load hours (162,760 gradient steps with a batch size of 256) before early stopping. Full discussion on hyperparameters and compute are in App. E and G.
We also benchmark simple **persistence** on both the zero-shot STLF and transfer learning tasks:
**Previous Day:** For scenarios where the load conditions change relatively slowly, the previous day's load is a strong baseline for day-ahead STLF [38].
**Previous Week:** The 24-hour load profile on the same day from the previous week is used [17].
**Ensemble:** This persistence baseline computes a Gaussian distribution for each predicted hour \(t+i\) whose mean is the average load at hour \(t+i\) over the past 7 days:
\[\hat{\mu}=\frac{1}{7}\sum_{j=1}^{7}x_{(t+i-24j)};\quad p(y_{t+i}|x_{t-H:t}):= \mathcal{N}\Bigg{(}\hat{\mu},\sqrt{\frac{1}{7}\sum_{j=1}^{7}(x_{(t+i-24j)}- \hat{\mu})^{2}}\Bigg{)}. \tag{4}\]
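A small sketch of Eq. 4 for a single building, assuming `history` is a 1-D array of at least 168 hourly loads ending at hour \(t\):

```python
import numpy as np
from scipy.stats import norm


def persistence_ensemble(history, horizon=24):
    """Eq. 4: for each future hour t+i, a Gaussian whose mean and std are taken
    over the loads at the same hour of each of the previous 7 days."""
    history = np.asarray(history, dtype=float)
    forecasts = []
    for i in range(1, horizon + 1):
        # history[-1] is hour t, so hour t+i-24j sits at offset i - 24*j - 1 from the end.
        same_hour = np.array([history[i - 24 * j - 1] for j in range(1, 8)])
        forecasts.append(norm(loc=same_hour.mean(), scale=same_hour.std()))
    return forecasts
```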
In STLF, outperforming persistence is an indicator that a model is producing meaningful forecasts. Standard forecasting baselines are benchmarked on the transfer learning task by training directly on target building data:
**LightGBM:** The light gradient-boosting machine (LightGBM) [23] is a popular decision-tree-based algorithm suitable for STLF [29]. We use the multi-step forecasting implementation from skforecast [3] with 100 estimators and no max depth.
**Linear regression, DLinear:** Inspired by Zeng et al. [44], we benchmark a linear _direct_ multi-step forecaster that regresses all 24 future values as a weighted sum of the past 168 values. We also implement DLinear, which decomposes the time series with a moving average kernel, applies a linear layer to each component, and then sums the two to get the final prediction.
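For concreteness, a compact sketch of the DLinear idea is given below; boundary handling is approximated here with pooling padding, whereas reference implementations typically replicate the series end points, and the kernel size is an assumption.

```python
import torch
import torch.nn as nn


class DLinear(nn.Module):
    """Decomposition-Linear forecaster: a moving-average trend plus a seasonal
    remainder, each mapped to the forecast horizon by its own linear layer."""

    def __init__(self, context_len=168, horizon=24, kernel_size=25):
        super().__init__()
        self.moving_avg = nn.AvgPool1d(kernel_size, stride=1,
                                       padding=kernel_size // 2,
                                       count_include_pad=False)
        self.trend_head = nn.Linear(context_len, horizon)
        self.seasonal_head = nn.Linear(context_len, horizon)

    def forward(self, x):                          # x: (batch, context_len)
        trend = self.moving_avg(x.unsqueeze(1)).squeeze(1)
        seasonal = x - trend
        return self.trend_head(trend) + self.seasonal_head(seasonal)
```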
## 5 Benchmark results
Here, we analyze our baselines on the two benchmark tasks guided by the following questions:
1. Can models pretrained on Buildings-900K generalize to real buildings?
2. Does fine-tuning a pretrained model on limited data from a target building lead to improved performance?
3. How does the number of pretraining buildings affect zero-shot generalization?
4. How does the size of the pretrained model affect zero-shot generalization?
### Zero-shot STLF
For zero-shot STLF, models are evaluated on day-ahead forecasting for all unseen simulated and real buildings _without fine-tuning_. Results for Transformer-L and other baselines are shown in Table 3
aggregated by the median over all buildings. Due to space constraints, performance profiles over all buildings and per-dataset metrics with 95% stratified bootstrap CIs are reported in App. H and I.
On unseen simulated buildings, the pretrained transformers outperform the persistence baselines in accuracy and uncertainty quantification. The average change in accuracy **from simulated to real** test buildings for the transformers was _better_ (-1% NRMSE) for commercial and _worse_ (+48% NRMSE) for residential. The best transformer outperforms the best persistence method (Persistence Ensemble) in zero-shot accuracy on real commercial buildings. The Persistence Ensemble has better accuracy than the pretrained models on real residential buildings. The RPS is sensitive to the load magnitude making it difficult to evaluate the sim-to-real gap, but the relative RPS across baselines follows a similar trend as accuracy (see Fig. 2 for visualizations). These results suggest synthetic pretraining is viable for commercial zero-shot STLF--see Sec. 6.2 for more discussion on residential buildings.
### Transfer Learning
This task evaluates fine-tuning on a single target building for which 6 months of data has been collected. Our fine-tuning protocol is to train for a max of 25 epochs using the first 5 months and to use the last month for early stopping with a patience of 2 epochs. To ease the computational burden of fine-tuning a model on each building in the benchmark, we randomly sample 100 residential and 100 commercial buildings and fine-tune separately on each. Table 4 displays the results aggregated by the median over all buildings. We fine-tune both pretrained Transformer-L and randomly initialized transformers. We also report performance for the pretrained model without fine-tuning.
| | Commercial NRMSE (%) | Commercial RPS | Commercial \(P(X<Y)\) | Residential NRMSE (%) | Residential RPS | Residential \(P(X<Y)\) |
| --- | --- | --- | --- | --- | --- | --- |
| **Not pretrained + Not fine-tuned** | | | | | | |
| Persistence Ensemble | 17.41 | 5.16 | - | **80.13** | **0.058** | - |
| Previous Day Persistence | 16.98 | - | - | 101.78 | - | - |
| Previous Week Persistence | 18.93 | - | - | 104.38 | - | - |
| **Not pretrained + Fine-tuned** | | | | | | |
| Linear regression | 47.43 | - | - | 102.17 | - | - |
| DLinear | 39.22 | - | - | 98.56 | - | - |
| Transformer (Tokens) | 44.62 | 27.18 | - | 108.61 | 5.76 | - |
| Transformer (Gaussian) | 39.26 | 13.24 | - | 94.25 | 0.080 | - |
| LightGBM | 16.63 | - | - | 83.97 | - | - |
| **Pretrained + Not fine-tuned** | | | | | | |
| Transformer (Tokens) | 14.46 | 3.98 | - | 98.34 | 0.062 | - |
| Transformer (Gaussian) | 13.65 | 4.07 | - | 85.83 | 0.077 | - |
| **Pretrained + Fine-tuned** | | | | | | |
| Transformer (Tokens) | 13.86 (-0.60) | 3.73 (-0.254) | 77.5 | 95.55 (-2.79) | 0.060 (-0.002) | 77 |
| Transformer (Gaussian) | **13.36 (-0.30)** | **3.50 (-0.570)** | 84 | 82.17 (-0.60) | 0.061 (-0.016) | 83 |

Table 4: **BuildingsBench transfer learning results.** We show median accuracy (NRMSE) and ranked probability score (RPS). For statistical robustness, we compute the average probability of improving NRMSE due to fine-tuning: \(P(X<Y):=\frac{100}{N}\sum_{i=1}^{N}\mathbf{1}_{\{X_{i}<Y_{i}\}}\), where \(X_{i}\) is the pretrained + fine-tuned NRMSE for building \(i\) and \(Y_{i}\) is the pretrained NRMSE. Values in parentheses give the change due to fine-tuning.

| | Sim. Com. NRMSE (%) | Sim. Com. RPS | Sim. Res. NRMSE (%) | Sim. Res. RPS | Real Com. NRMSE (%) | Real Com. RPS | Real Res. NRMSE (%) | Real Res. RPS |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Persistence Ensemble | 32.73 | 1.13 | 54.33 | 0.179 | 17.17 | 5.39 | **80.11** | **0.067** |
| Previous Day Persistence | 34.53 | - | 58.84 | - | 17.41 | - | 102.44 | - |
| Previous Week Persistence | 31.76 | - | 73.39 | - | 19.96 | - | 103.51 | - |
| Transformer (Tokens) | **14.08** | **0.482** | **43.94** | **0.138** | 14.82 | **4.81** | 101.7 | 0.072 |
| Transformer (Gaussian) | 16.62 | 0.613 | 44.76 | 0.150 | **13.86** | 5.15 | 83.87 | 0.082 |

Table 3: **Zero-shot STLF results.** Median accuracy (NRMSE) and ranked probability score (RPS) on Buildings-900K-Test (_simulated_, "Sim.") and BuildingsBench (_real_, "Real") commercial ("Com.") and residential ("Res.") buildings. Lower is better. No fine-tuning is performed on any test building. Residential NRMSEs larger than 100% occur when the normalization factor—the average consumption per hour—is small (e.g., \(\sim\)0.5 kWh) and thus is also smaller than the RMSE.
Our results indicate that fine-tuning the pretrained transformers on limited data is likely to improve the performance on the target building. We found that fine-tuning _all_ layers of both pretrained models was necessary--only fine-tuning the model's last layer led to slightly decreased performance (see App. K). Encouragingly, the best overall fine-tuned model (Transformer (Gaussian)) beats LightGBM, a strong baseline in the low-data regime, on both commercial and residential buildings.
### Empirical Scaling Laws
**Pretraining buildings:** To characterize how the size and diversity of the dataset impact generalization, we trained the Transformer-M (17M) models on 1K, 10K, and 100K simulated buildings. We adjust the training hyperparameters so all models take similar numbers of gradient steps. The Transformer (Gaussian) model accuracy on commercial buildings roughly follows a power-law scaling, while RPS slightly decreases between \(10^{5}\) and \(10^{6}\) buildings (Fig. 3(a) and Fig. 3(b)). The Transformer (Tokens) model's NRMSE and RPS both exhibit power-law scaling with diminishing returns. This suggests naively increasing the dataset size may not result in large improvements. Performance remains constant for residential buildings (see Fig. 6 in Appendix J), which we attribute to a stronger sim-to-real distribution shift compared to commercial buildings.
**Model size:** We also compare transformers of sizes S, M, and L. The NRMSE and RPS improve from S to M, but performance plateaus or decreases from the M to L models (Fig. 3(c) and Fig. 3(d)). We suspect that auto-correlation in the time series causes the largest models to overfit despite using sliding windows to extract training samples and aggressive early stopping. The good performance of the smallest Transformer (Tokens) model is likely due to training on quantized load values. Quantization helps generative models efficiently learn important structure in the data and ignore negligible information [34] (our tokenizer achieves a \(\sim\)63% dataset compression rate). However, this comes at the cost of lower accuracy (Fig. 3(c)). We observe model performance on residential buildings is roughly constant across sizes (Fig. 6 in Appendix J).
Figure 3: **Empirical scaling laws for commercial buildings**. Intervals are 95% stratified bootstrap CIs. Residential results are in Appendix J. a-b) Dataset scale vs. zero-shot performance. The trends appear to be power-laws with diminishing returns. c-d) Model size vs. zero-shot performance.
Figure 2: **Forecast uncertainty**. Ground truth time series are truncated to previous 24 hours for visibility. Light blue lines are 10 samples from the predicted distribution. a-b) Successful commercial building forecasts. c-d) Failed residential building forecasts.
## 6 Discussion
### Findings
**Pretraining on Buildings-900K leads to good zero-shot STLF performance on real commercial buildings**. More investigation is needed to achieve similar results for residential buildings (Sec. 6.2).
**Buildings-900K pretraining + finetuning on real buildings improves STLF for both commercial and residential buildings**. Our best pretrained + finetuned baseline outperforms LightGBM in the BuildingsBench transfer learning task. Finetuning all model layers appears necessary to mitigate negative transfer caused by the distribution shift between simulated pretraining and real target data.
**We observe an approximate power-law relationship with diminishing returns between dataset scale and zero-shot STLF for commercial buildings**. Performance plateaus as model size increases, possibly due to overfitting and distribution shifts.
**Pretraining on tokenized building loads obtains better zero-shot uncertainty quantification but lower accuracy**. Tokenization also enables smaller pretrained transformers to achieve better zero-shot STLF task performance and stable training.
**Pretraining with geospatial coordinates slightly improves generalization:** Due to space constraints, the results of an ablation where the model ignores the building's latitude and longitude are provided in App. K. Briefly, improvements in accuracy were modest (0.1 - 1% NRMSE).
### Residential STLF Challenges
Residential loads are inherently more uncertain and variable than commercial loads because they are more sensitive to occupant behaviour and changes in weather (e.g., temperature and humidity) [35]. This is evidenced by the relatively large persistence NRMSEs (80.11%-103.51%) on BuildingsBench residential loads compared to commercial loads (17.17%-19.96%). However, rather than decreasing the value of BuildingsBench, our work enables exploring directions such as the inclusion of weather covariates, multi-variate formulations of STLF, and benchmarking of advanced approaches [19].
### Limitations
While we expect BuildingsBench to stimulate research on generalizable STLF, our framework has limitations. First, pretraining on imperfect simulated data is fundamentally limited if deploying on real buildings is the goal. Mixing synthetic and real data is an interesting direction to explore. Second, the training and evaluation data is mainly representative of building energy consumption in the northwestern hemisphere. Expanding our framework to include data from other global regions is needed to comprehensively evaluate generalization. Third, due to limited time, we only pretrained transformer baselines on Buildings-900K. We will maintain a leaderboard for BuildingsBench in our code repository for new results.
## 7 Conclusions & Future Work
In this work, we introduce BuildingsBench, which consists of a large-scale simulated dataset and a collection of real building datasets for benchmarking zero-shot STLF and transfer learning. Upon comparing the performance of pretrained transformers against persistence and traditional ML-based forecasting baselines, we observe promising results for commercial buildings and identify areas of improvement for residential buildings.
We plan to maintain and extend BuildingsBench in the following ways. As relevant EULP data becomes newly available we will release updated versions of Buildings-900K. To ground improvements on benchmark tasks in a concrete downstream application, we will look into adding a reinforcement learning task to the benchmark for building HVAC control using STLF. Overall, we hope that BuildingsBench will facilitate research for communities applying ML to the built environment as well as those conducting foundational studies on large-scale pretraining and finetuning for time series.
## Acknowledgments and Disclosure of Funding
This work was authored by the National Renewable Energy Laboratory (NREL), operated by Alliance for Sustainable Energy, LLC, for the U.S. Department of Energy (DOE) under Contract No. DE-AC36-08GO28308. This work was supported by the Laboratory Directed Research and Development (LDRD) Program at NREL. The views expressed in the article do not necessarily represent the views of the DOE or the U.S. Government. The U.S. Government retains and the publisher, by accepting the article for publication, acknowledges that the U.S. Government retains a nonexclusive, paid-up, irrevocable, worldwide license to publish or reproduce the published form of this work, or allow others to do so, for U.S. Government purposes. The research was performed using computational resources sponsored by the Department of Energy's Office of Energy Efficiency and Renewable Energy and located at the National Renewable Energy Laboratory.
The authors would like to thank Cong Feng and Alex Rybchuk for helping revise a draft version of this manuscript, as well as Dave Biagioni for providing valuable insights.
|
2310.20649 | Dynamic Batch Norm Statistics Update for Natural Robustness | DNNs trained on natural clean samples have been shown to perform poorly on
corrupted samples, such as noisy or blurry images. Various data augmentation
methods have been recently proposed to improve DNN's robustness against common
corruptions. Despite their success, they require computationally expensive
training and cannot be applied to off-the-shelf trained models. Recently, it
has been shown that updating BatchNorm (BN) statistics of an off-the-shelf
model on a single corruption improves its accuracy on that corruption
significantly. However, adopting the idea at inference time when the type of
corruption is unknown and changing decreases the effectiveness of this method.
In this paper, we harness the Fourier domain to detect the corruption type, a
challenging task in the image domain. We propose a unified framework consisting
of a corruption-detection model and BN statistics update that improves the
corruption accuracy of any off-the-shelf trained model. We benchmark our
framework on different models and datasets. Our results demonstrate about 8%
and 4% accuracy improvement on CIFAR10-C and ImageNet-C, respectively.
Furthermore, our framework can further improve the accuracy of state-of-the-art
robust models, such as AugMix and DeepAug. | Shahbaz Rezaei, Mohammad Sadegh Norouzzadeh | 2023-10-31T17:20:30Z | http://arxiv.org/abs/2310.20649v1 | # Dynamic Batch Norm Statistics Update
###### Abstract
DNNs trained on natural clean samples have been shown to perform poorly on corrupted samples, such as noisy or blurry images. Various data augmentation methods have been recently proposed to improve DNN's robustness against common corruptions. Despite their success, they require computationally expensive training and cannot be applied to off-the-shelf trained models. Recently, it has been shown that updating BatchNorm (BN) statistics of an off-the-shelf model on a single corruption improves its accuracy on that corruption significantly. However, adopting the idea at inference time when the type of corruption is unknown and changing decreases the effectiveness of this method. In this paper, we harness the Fourier domain to detect the corruption type, a challenging task in the image domain. We propose a unified framework consisting of a corruption-detection model and BN statistics update that improves the corruption accuracy of any off-the-shelf trained model. We benchmark our framework on different models and datasets. Our results demonstrate about 8% and 4% accuracy improvement on CIFAR10-C and ImageNet-C, respectively. Furthermore, our framework can further improve the accuracy of state-of-the-art robust models, such as AugMix and DeepAug.
## 1 Introduction
Deep neural networks (DNNs) have been successfully applied to solve various vision tasks in recent years. At inference time, DNNs generally perform well on data points sampled from the same distribution as the training data. However, they often perform poorly on data points from a different distribution, including corrupted data, such as noisy or blurred images. These corruptions often appear naturally at inference time in many real-world applications, such as cameras in autonomous cars, x-ray images, etc. Not only does DNNs' accuracy drop across shifts in the data distribution, but the well-known overconfidence problem of DNNs also impedes the detection of domain shift.
One straightforward approach to improve the robustness against various corruptions is to augment the training data to cover various corruptions. Recently, many more advanced data augmentation schemes have also been proposed and shown to improve the model robustness on corrupted data, such as SIN [6], ANT [16], AugMix [11], and DeepAug [9]. Despite their effectiveness, these approaches require computationally expensive training or re-training process.
Two recent works [18, 1] proposed a simple batch normalization (BN) statistics update to improve the robustness of a pre-trained model against various corruptions with minimal computational overhead. The idea is to only update the BN statistics of a pre-trained model on a target corruption. If the corruption type is unknown beforehand, the model can keep BNs updating at inference time to adapt to the ongoing corruption. Despite its effectiveness, this approach is only suitable when a constant flow of inputs with the same type of corruption is fed to the model so that it can adjust the BN stats accordingly.
In this work, we first investigate how complex the corruption type detection task itself would be. Although corruption type detection is challenging in the image domain, visualizing the Fourier spectrum reveals that each corruption category has a relatively distinctive frequency profile. However, training a model on the raw Fourier spectrum causes numerical instability because the values are practically unbounded. Specifically, the values of the high-frequency components of the Fourier spectrum change over a range of several orders of magnitude from one corruption to another. Furthermore, the prevalent min-max normalization washes out small variations within the low- and high-frequency components and, consequently, leads to poor performance. We show that a carefully designed normalization approach and a very simple DNN can modestly detect corruption types.
Given the ability to detect corruption types in the Fourier domain, we adopt the BN statistic update method such that it can change the BN values dynamically based on the detected corruption type. The overall architecture of our approach is depicted in Fig. 1. First, we calculate the Fourier transform of the input image, and after applying a specifically designed normalization, it is fed to the corruption type detection DNN. Based on the detected corruption, we fetch the corresponding BN statistics from the BN stat lookup table, and the pre-trained network BNs are updated accordingly. Finally, the dynamically updated pre-trained network processes the original input image.
In summary, our contributions are as follows:
* We harness the frequency spectrum of an image to identify the corruption type. On ImageNet-C, a shallow 3-layer fully connected neural network can identify 16 different corruption types with \(65.88\%\) accuracy. The majority of the misclassifications occur between similar corruptions, such as different types of noise, for which the BN stat updates are similar nevertheless.
* Our framework can be used on any off-the-shelf pre-trained model, even robustly trained models, such as AugMix [11] and DeepAug [9], and further improves the robustness.
* We demonstrate that updating BN statistics at inference time as suggested in [1, 18] does not achieve good performance when the corruption type does not continue to be the same for a long time. On the other hand, our framework is insensitive to the rate of corruption changes and outperforms these methods when dealing with dynamic corruption changes.
## 2 Method
### Overall Framework
The overview of our framework is depicted in Fig. 1. It consists of three main modules: A) a pre-trained model on the original task, such as object detection, B) a DNN trained to detect corruption type, and C) a lookup table storing BN statistics corresponding to each type of corruption. This paper mainly focuses on improving the natural robustness of trained DNNs. However, the framework can be easily extended to domain generalization and circumstances where the lookup table may update the entire model weights or even the model architecture itself.
### Adaptation to New Corruptions
In [1, 18], a simple BN statistic update has significantly improved the natural robustness of trained DNNs. Fig. 2 shows the effectiveness of their approach on various corruption types. The drawback of their approach is that the BN statistics obtained for one type of corruption often significantly degrade the accuracy for other types of corruption, except for similar corruptions, such as different types of noise. The authors claim that in many applications, such as autonomous vehicles, the corruption type will remain the same for a considerable amount of time. Consequently, the BN statistics can be updated at inference time. However, neither of those papers has shown the performance of the BN statistic update when the corruption type changes. We conduct an experiment in Section 3.4 to show that detecting corruption types and utilizing appropriate BN stats provides better results when the corruption type is not fixed.
Figure 1: Overall Framework
Figure 2: ResNet18 (ImageNet-C): The y-axis shows the corruption with which the model BN stats are updated. The x-axis shows the corruption on which the model performance is evaluated. The numbers in the cells are accuracy gain compared to the original model, the model with BN stats obtained from the natural dataset.
### Corruption Detection
The average Fourier spectrum of different corruptions has been shown to have different visual appearances [22]. However, conducting a corruption classification on the Fourier spectrum of individual images is not a trivial task. Feeding a DNN with the raw Fourier spectrum leads to poor results and unstable training. Here, we first visually investigate the Fourier spectrum of various corruption types. Then, we propose a tailored normalization technique and a shallow DNN to detect corruption types.
We denote an image of size \((d_{1},d_{2})\) by \(x\in R^{d_{1}\times d_{2}}\). We omit the channel dimension here because the Fourier spectrum of all channels turns out to be similar, when the average is taken over all samples. We only show the results of the first channel here. We denote natural and corrupted data distribution by \(D_{n}\) and \(D_{c}\), respectively. We denote 2D discrete Fourier transform operation by \(F\). In this paper, we only consider the amplitude component of \(F\) since the phase component does not help much with corruption detection. Moreover, we shift the low-frequency component to the center for better visualization.
Fig. 3 shows the normalized Fourier spectrum of different corruption types in CIFAR10-C. The results on ImageNet-C are presented in Fig. 4. We explain the normalization process in the next paragraph. For visualization purposes, we clamp the values above one. However, we do not clamp pixel values of the input when fed to the corruption detection model. As shown in Fig. 3, most corruption types have a distinguishable average Fourier spectrum. The almost identical ones, i.e., different types of noise, do not need to be distinguished accurately because the BN stat updates for one of them can improve the accuracy for the others nevertheless, as shown in Fig. 2.
To normalize the data, we first obtain the average Fourier spectrum of the natural samples, denoted by \(\epsilon_{n}=\mathbb{E}_{x\sim D_{n}}[|F(x)|]\). Then we compute the normalized Fourier spectrum as \(log(\frac{\mathbb{E}_{x\sim D_{c}}[|F(x)|]}{\epsilon_{n}}+1)\) for each corruption type, separately. For corruption detection purposes, we substitute the expected value over the entire corruption-type dataset with an individual image, i.e., \(log(\frac{|F(x)|}{\epsilon_{n}}+1)\). We empirically find this specific normalization to outperform others significantly. The intuition behind this normalization is twofold: First, natural images have a higher concentration in low frequencies [22]. Although corrupted images also have large values on low-frequency components, they may also have a large concentration on high-frequency components, depending on the corruption. Hence, we divide the values by \(\epsilon_{n}\) to ensure that the model does not exclusively focus on low-frequency components during training. Second, the range of values from one pixel to another may vary by multiple orders of magnitude, which causes instability during training. Typical normalization techniques on unbounded data, such as tanh or sigmoid transforms, lead to poor accuracy because values larger than a certain point converge to 1 and become indistinguishable. The _log_ operation in our normalization scheme allows the values to go beyond 1 if that frequency component is extremely large. Hence, it does not lose the information.
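This normalization can be written compactly as in the following sketch; array and function names are illustrative.

```python
import numpy as np


def natural_reference(clean_channels):
    """eps_n: the mean amplitude spectrum over clean (natural) training images,
    computed per channel with the DC component shifted to the center."""
    return np.mean([np.abs(np.fft.fftshift(np.fft.fft2(c))) for c in clean_channels],
                   axis=0)


def normalized_spectrum(channel, eps_nat):
    """log(|F(x)| / eps_n + 1) for one image channel."""
    amp = np.abs(np.fft.fftshift(np.fft.fft2(channel)))
    return np.log1p(amp / eps_nat)
```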
We employ a three-layer fully connected (FC) neural network for corruption-type detection. Despite having an image-like structure, we avoid using convolutional neural networks (CNNs) here because of the apparent absence of shift-invariance in the Fourier spectrum. Due to the symmetry in the Fourier spectrum, we only feed half of the Fourier spectrum to the model. For CIFAR10, we flatten the 2D data and feed it to a three-layer FC model with 1024, 512, and 16 neurons. Note that this paper deals with 15 corruption types and natural data, as previously studied in [18, 1]. For ImageNet-C, we first use 2D average pooling with kernel size and stride of 2 to reduce the input size. Then, we flatten the output and feed it to a model with three FC layers of size 2058, 512, and 16. Additionally, we use the ReLU function as non-linearity after the first and second layers.
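A minimal PyTorch sketch of the CIFAR10 variant of this detector (the ImageNet-C variant adds the average pooling step and uses different layer widths):

```python
import torch.nn as nn


class CorruptionDetector(nn.Module):
    """Shallow fully connected classifier over the flattened (half) normalized
    amplitude spectrum; layer sizes follow the CIFAR10 variant described above."""

    def __init__(self, in_dim, num_classes=16):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, 1024), nn.ReLU(),
            nn.Linear(1024, 512), nn.ReLU(),
            nn.Linear(512, num_classes),
        )

    def forward(self, spectrum):                  # spectrum: (batch, in_dim)
        return self.net(spectrum)
```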
We train the model with stochastic gradient descent (SGD) for 50 epochs. We decrease the learning rate by a factor of 10 at epochs 20 and 35.
Figure 4: Normalized Fourier spectrum of ImageNet-C dataset
Figure 3: Normalized Fourier spectrum of CIFAR10-C dataset
We only use a small number of samples for training, i.e., 100 samples per corruption/intensity, and we keep the rest for validation. Using the Fourier spectrum and the proposed normalization method, we achieve validation accuracy of 49.21% and 65.88% on CIFAR10-C and ImageNet-C, respectively. The same architecture and model capacity only yield 7.64% and 6.32% accuracy in the image domain. We also could not achieve good accuracy with CNNs in the image domain. The confusion matrix of the corruption detection is presented in Fig. 5.
## 3 Experimental Setup
**Datasets & Metrics.** CIFAR10 dataset [13] contains \(32\times 32\) color images of 10 classes, with 50,000 training samples and 10,000 test samples. ImageNet dataset [3] contains around 1.2 million images of 1000 classes. For ImageNet, we resize images to \(256\times 256\) and take the center \(224\times 224\) as input. CIFAR10-C and ImageNet-C datasets [10] contain corrupted test samples of the original CIFAR10 and ImageNet. There are 15 test corruptions and 4 hold-out corruptions. For a fair comparison with previous work, we only use the 15 test corruptions as in [1, 18]. Each corruption type, \(c\), contains 5 different intensities or severity levels, denoted by \(s\). Similar to [11], we use unnormalized corruption error \(uCE=\sum_{s=1}^{5}E_{c,s}\) on CIFAR10, and normalized corruption error \(CE=\sum_{s=1}^{5}E_{c,s}/\sum_{s=1}^{5}E_{c,s}^{AlexNet}\) for ImageNet-C. Corruption error averaged over all 15 corruptions is denoted by \(mCE\).
**Models.** Our framework consists of two DNNs, namely the corruption type detector and a pre-trained model on the original task. The details of the corruption type detector model are explained in Section 2.3. For CIFAR10, we consider ResNet-20, ResNet-110 [8], VGG-19 [20], WideResNet-28-10 [23], and DenseNet (L=100, k=12) [12]. All CIFAR10 models are adopted from a public _github_ repository1. For ImageNet, we consider ResNet-18, ResNet-50 [8], VGG-19 [20], WideResNet-50 [23], and DenseNet-161 [12]. All ImageNet models are adopted from _torchvision_ library [15]. We also adopted trained ResNet-50 models from state-of-the-art robustness literature, i.e., Stylized ImageNet training (SIN) [6], adversarial noise training (ANT) [16], AugMix [11], and DeepAug [9].
Footnote 1: [https://github.com/bearpaw/pytorch-classification](https://github.com/bearpaw/pytorch-classification)
**BN Statistics.** In this paper, we specifically adopted BN stat update from [18] with parameters \(N=1\) and \(n=1\). For a corruption \(c\), this choice of parameters indicates that we take an average of a natural BN stats and the BN stats of the corruption \(c\). We compute BN stats from the same samples we use to train the corruption-type detection model. Due to the small sample size for BN stat adoption, we find that taking an average with natural BN stats leads to better results than only using the target corruption BN stats.
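A sketch of how one lookup-table entry could be computed for a given corruption, following the N = n = 1 averaging described above; this mirrors the idea from [18] rather than reproducing their released code, and the loader is assumed to yield (image, label) batches.

```python
import copy
import torch
import torch.nn as nn


@torch.no_grad()
def corruption_bn_stats(model, corrupted_loader):
    """Re-estimate BatchNorm running statistics on a few corrupted samples and
    average them 1:1 with the natural-data statistics (N = n = 1)."""
    adapted = copy.deepcopy(model)
    natural = {k: v.clone() for k, v in adapted.state_dict().items()
               if "running_mean" in k or "running_var" in k}

    # Reset BN buffers and re-collect them in train mode; the weights stay
    # unchanged because no optimizer step is taken.
    for m in adapted.modules():
        if isinstance(m, (nn.BatchNorm1d, nn.BatchNorm2d)):
            m.reset_running_stats()
            m.momentum = None                     # cumulative moving average
    adapted.train()
    for x, _ in corrupted_loader:
        adapted(x)

    corrupted = adapted.state_dict()
    # The lookup-table entry is later loaded with load_state_dict(..., strict=False).
    return {k: 0.5 * (v + corrupted[k]) for k, v in natural.items()}
```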
### Evaluation on CIFAR10-C
Table 1 presents the results of CIFAR10-C over several models. Our approach improves the accuracy over all corruptions by around \(8\%\). However, the accuracy over natural samples drops by less than \(1\%\). Because the base model is trained on natural samples, any misclassification of natural samples in the corruption detection model negatively affects the model performance, while any correct classification of corruptions positively affects the accuracy. As shown in Table 2, our approach significantly improves the accuracy over all the corruption types, except for brightness and JPEG corruption, for which the accuracy barely changes. Note that these two corruptions have the least improvement when the BN stat update is applied, as shown in Fig. 2.
### Evaluation on ImageNet-C
Evaluation results on ImageNet-C are shown in Tables 3 and 4. We observe a similar pattern as with CIFAR10, with a slightly smaller improvement. Here, the accuracy improvement is around \(4\%\). Similarly, improvement occurs over all corruptions except for brightness and JPEG.
### Evaluation on robust models
In this section, we investigate if our approach can further improve the accuracy of state-of-the-art models on ImageNet-C. Table 5 presents the evaluation of five state-of-the-art models. Our approach consistently improves the performance of robust approaches even further. Note that, for fair comparison, here we exclude the data we use to train the corruption type detection model from the validation set. That explains the small discrepancy between the base accuracy reported in the paper and those in previous work.
### Inference Time Adaptation
Two recent papers [1, 18] that investigated BN statistics update suggested that the idea can be used at inference time, and the model will adapt to a new corruption eventually. However, they have never empirically evaluated their performance for inference time adaptation. Here, we start with the original model trained on clean samples. Then, during evaluation, after a certain number of batches, we randomly pick another corruption and then continue evaluating the model. The samples within one batch come from only a single corruption, and there are 16 samples in each batch. We let the model BN stats be updated from the last ten batches at the beginning of each batch. Because our approach does not update the BN stat lookup table, it is insensitive to how the inference time evaluation is conducted, and consequently, the performance is similar.
The results of the experiment are shown in Fig. 6. In CIFAR10, only with VGG-19, and only when we let the corruption stay the same for 32 consecutive batches, is our approach outperformed. In ImageNet, both VGG-19 and
ResNet18 outperform our approach only after 32 successive batches. This experiment reveals that the original BN stat update mechanism in [18, 1] only works when the input corruption remains the same for a considerable number of consecutive samples. Although this assumption is reasonable for some applications, such as autonomous vehicles with a continuous stream of inputs, it does not hold for many others, particularly for the non-stream inputs common in healthcare applications.
## 4 Limitations & Discussion
One major limitation of the current framework is that it needs data samples from all corruption types to train the corruption type detection model. Although using the Fourier spectrum allows us to train the corruption detector easily with a small number of samples, it still limits the generalizability of the framework to unseen corruptions. One solution to this problem is to attach an off-the-shelf outlier detection mechanism or an uncertainty mechanism to discover new types of corruption at inference time. Then, we can make a new entry in the BN stat lookup table, and the model can gradually learn BN statistics at inference time by observing multiple samples from the new class. Hence, we can prevent the need to collect image samples from all corruptions during training. Another related perspective is to frame the _supervised_ corruption type detection as an _unsupervised_ problem. This reformulation is possible because the corruption labels themselves are nonessential in our framework. For example, we can use a clustering algorithm to cluster different corruption and then associate each cluster with an entry in the BN stats table. This strategy can also be extended to detect new clusters at inference time for better generalization. We will investigate this idea in future work.
In this paper, our framework is only evaluated on natural and corrupted images. We can employ the same corruption detection idea for domain detection. Since the pre-trained model does not need to be re-trained in our framework, it might be interesting to adopt our framework for domain generalization. For instance, a natural image and a cartoon have distinguishable features, such as color distributions, the Fourier spectrum, etc. Accurate domain detection might be a simple task if proper features are found.
Currently, our framework's accuracy is bounded by the BN statistics update proposed in [18, 1]. As a result, even with perfect corruption/domain detection, the accuracy may not be improved if the BN statistic update does not work for the target corruption/domain. In the future, we will investigate other approaches to eliminate this limitation.
| Model | All Base | All Ours | All \(\Delta\) | Natural Base | Natural Ours | Natural \(\Delta\) | Corrupted Base | Corrupted Ours | Corrupted \(\Delta\) | mCE Base | mCE Ours | mCE \(\Delta\) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| ResNet-20 | 69.46% | 76.82% | 7.37% | 91.62% | 90.86% | -0.76% | 67.98% | 75.89% | 7.91% | 32.02% | 24.11% | -7.91% |
| ResNet-110 | 72.06% | 80.59% | 8.53% | 93.55% | 93.01% | -0.54% | 70.63% | 79.76% | 9.13% | 29.37% | 20.24% | -9.13% |
| VGG-19 | 74.08% | 80.69% | 6.61% | 93.19% | 92.95% | -0.24% | 72.81% | 79.87% | 7.07% | 27.19% | 20.13% | -7.07% |
| WRN-28-10 | 78.00% | 85.21% | 7.22% | 96.23% | 96.05% | -0.18% | 76.78% | 84.49% | 7.71% | 23.22% | 15.51% | -7.71% |
| DenseNet | 74.34% | 81.91% | 7.58% | 95.04% | 94.49% | -0.55% | 72.96% | 81.07% | 8.12% | 27.04% | 18.93% | -8.12% |

Table 1: Evaluation results on CIFAR10-C. The "All", "Natural", and "Corrupted" columns report accuracy on all samples combined, natural samples, and corrupted samples, respectively; the "mCE" columns report the mean corruption error on corrupted samples.
Figure 5: Corruption type detection model’s confusion matrix
## 5 Related Work
Dodge et al. [4] revealed that deep models' accuracy significantly drops with corrupted images despite having similar performance to humans on clean data. Several studies [7, 21] verified that training with some corruptions does not improve the accuracy for unseen corruptions. However, [17] later challenged this notion by showing that Gaussian data augmentation can enhance the accuracy of some other corruptions as well. In [1, 18], authors have shown that corruption accuracy can be significantly increased by only updating the BN statistics of a trained model on a specific corruption. Although it is claimed that it can be easily adopted at inference time by updating the model BN stats using a batch of most recent samples, the performance of the models has not been evaluated in a situation where the corruption type changes.
There are numerous data augmentation methods shown to improve corruption robustness. AutoAugment [2] automatically searches for improved data augmentation policies but was later shown to also reduce corruption error [22]. AugMix [11] combines a set of transforms with a regularization term based on the Jensen-Shannon divergence. It has been shown that applying Gaussian noise to image patches can also improve accuracy [14]. In Stylized-ImageNet, the idea of using style transfer was adopted for data augmentation [6]. Using an adversarially learned noise distribution has been proposed in [16]. In DeepAug [9], images are passed through image-to-image models while being distorted to create new images, leading to large improvements in robustness. The findings on adversarial training for improving corruption robustness have not been consistent. For instance, [17] has shown that adversarial training does not improve corruption robustness, while [19] and [5] have reported otherwise, using \(l_{\infty}\) adversarial training.
## 6 Conclusion
In this paper, we propose a framework where an off-the-shelf naturally trained vision model can be adapted to perform better against corrupted inputs. Our framework consists of three main components: 1) corruption type detector, 2) BN stats lookup table, and 3) an off-the-shelf trained model. Upon detecting the corruption type with the first component, our framework pulls the corresponding BN stats from the lookup table and substitutes the BN stats of the trained model. Then, the original image is fed to the updated trained model.
Even though detecting the corruption type is a very challenging task in the image domain, we can use the Fourier spectrum of an image to detect the type of corruption. We use a shallow three-layer FC neural network that detects the corruption type based on Fourier amplitudes of the input. We show that this model can achieve significant accuracy
by training on minimal samples. The same small sample size is shown to be also enough to obtain the BN stats stored in the BN stat lookup table.
|
2309.04844 | Are NH$_3$ and CO$_2$ ice present on Miranda? | Published near-infrared spectra of the four largest classical Uranian
satellites display the presence of discrete deposits of CO$_2$ ice, along with
subtle absorption features around 2.2 $\mu$m. The two innermost satellites,
Miranda and Ariel, also possess surfaces heavily modified by past endogenic
activity. Previous observations of the smallest satellite, Miranda, have not
detected the presence of CO$_2$ ice, and a report of an absorption feature at
2.2 $\mu$m has not been confirmed. An absorption feature at 2.2 $\mu$m could
result from exposed or emplaced NH$_3$- or NH$_4$-bearing species, which have a
limited lifetime on Miranda's surface, and therefore may imply that Miranda's
internal activity was relatively recent. In this work, we analyzed
near-infrared spectra of Miranda to determine whether CO$_2$ ice and the
2.2-$\mu$m feature are present. We measured the band area and depth of the
CO$_2$ ice triplet (1.966, 2.012, and 2.070 $\mu$m), a weak 2.13-$\mu$m band
attributed to CO$_2$ ice mixed with H$_2$O ice, and the 2.2-$\mu$m band. We
confirmed a prior detection of a 2.2-$\mu$m band on Miranda, but we found no
evidence for CO$_2$ ice, either as discrete deposits or mixed with H$_2$O ice.
We compared a high signal-to-noise spectrum of Miranda to synthetic and
laboratory spectra of various candidate compounds to shed light on what species
may be responsible for the 2.2-$\mu$m band. We conclude that the 2.2-$\mu$m
absorption is best matched by a combination of NH$_3$ ice with NH$_3$-hydrates
or NH$_3$-H$_2$O mixtures. NH$_4$-bearing salts like NH$_4$Cl are also
promising candidates that warrant further investigation. | Riley A. DeColibus, Nancy J. Chanover, Richard J. Cartwright | 2023-09-09T17:11:52Z | http://arxiv.org/abs/2309.04844v1 | # Are NH\({}_{3}\) and CO\({}_{2}\) ice present on Miranda?
###### Abstract
Published near-infrared spectra of the four largest classical Uranian satellites display the presence of discrete deposits of CO\({}_{2}\) ice, along with subtle absorption features around 2.2 \(\mu\)m. The two innermost satellites, Miranda and Ariel, also possess surfaces heavily modified by past endogenic activity. Previous observations of the smallest satellite, Miranda, have not detected the presence of CO\({}_{2}\) ice, and a report of an absorption feature at 2.2 \(\mu\)m has not been confirmed. An absorption feature at 2.2 \(\mu\)m could result from exposed or emplaced NH\({}_{3}\)- or NH\({}_{4}\)-bearing species, which have a limited lifetime on Miranda's surface, and therefore may imply that Miranda's internal activity was relatively recent. In this work, we analyzed near-infrared spectra of Miranda to determine whether CO\({}_{2}\) ice and the 2.2-\(\mu\)m feature are present. We measured the band area and depth of the CO\({}_{2}\) ice triplet (1.966, 2.012, and 2.070 \(\mu\)m), a weak 2.13-\(\mu\)m band attributed to CO\({}_{2}\) ice mixed with H\({}_{2}\)O ice, and the 2.2-\(\mu\)m band. We confirmed a prior detection of a 2.2-\(\mu\)m band on Miranda, but we found no evidence for CO\({}_{2}\) ice, either as discrete deposits or mixed with H\({}_{2}\)O ice. We compared a high signal-to-noise spectrum of Miranda to synthetic and laboratory spectra of various candidate compounds to shed light on what species may be responsible for the 2.2-\(\mu\)m band. We conclude that the 2.2-\(\mu\)m absorption is best matched by a combination of NH\({}_{3}\) ice with NH\({}_{3}\)-hydrates or NH\({}_{3}\)-H\({}_{2}\)O mixtures. NH\({}_{4}\)-bearing salts like NH\({}_{4}\)Cl are also promising candidates that warrant further investigation.
Planetary surfaces (2113); Surface composition (2115); Surface ices (2117); Surface processes (2116); Uranian satellites (1750)
Riley A. DeColibus, Nancy J. Chanover, and Richard J. Cartwright
## 1 Introduction and Background
Ground-based near-infrared spectroscopic observations of the five classical Uranian satellites have revealed that their surfaces are dominated by H\({}_{2}\)O ice, mixed with a dark, low albedo component (Brown & Cruikshank, 1983; Brown & Clark, 1984). Ariel, Umbriel, Titania, and Oberon show spectral evidence of crystalline CO\({}_{2}\) ice, primarily concentrated on their trailing hemispheres (Grundy et al., 2003, 2006; Cartwright et al., 2015). In contrast, Miranda does not show evidence of CO\({}_{2}\) ice (Gourgeot et al., 2014; Cartwright et al., 2018), possibly because its lower gravity prevents retention of CO\({}_{2}\)(Grundy et al., 2006; Sori et al., 2017).
All five satellites show evidence of a weak absorption feature near 2.2 \(\mu\)m (Bauer et al., 2002; Cartwright et al., 2018, 2020, 2023). This 2.2-\(\mu\)m absorption feature appears qualitatively similar to the 2.21-\(\mu\)m band on Charon, which has been attributed to NH-bearing species such as NH\({}_{3}\)-hydrates and NH\({}_{4}\)Cl (e.g., Cook et al., 2007, 2018, 2023). Ammonia (NH\({}_{3}\)) acts as an antifreeze and is predicted to have been incorporated into these icy bodies during their formation. Ammonia at the surface of icy bodies is thought to be dissociated by radiation on relatively short geological timescales (Strazzulla and Palumbo, 1998; Moore et al., 2007). Detection of NH-bearing species could then imply recent exposure by impacts, or perhaps emplacement by endogenic processes like cryovolcanism. Previous work noted a weak 2.2-\(\mu\)m band in spectra of Miranda (Bauer et al., 2002), but subsequent studies were unable to confirm its presence (Gourgeot et al., 2014; Cartwright et al., 2018). New ground-based near-IR spectra of Miranda were recently published with much higher signal-to-noise ratios (S/N) compared to prior studies (DeColibus et al., 2022). This new dataset was used to find subtle variations in H\({}_{2}\)O ice band strengths between Miranda's anti-Uranus and sub-Uranus quadrants (DeColibus et al., 2022). We analyzed these higher-quality spectra to perform a new, in-depth investigation to determine whether CO\({}_{2}\) ice and the 2.2-\(\mu\)m absorption feature are present on Miranda.
In the following subsections we describe the state of knowledge for the surface composition of Miranda and the other classical Uranian moons. We discuss our Miranda data set in Section 2 and our analysis of integrated band areas and depths of the CO\({}_{2}\) ice triplet and the 2.13-\(\mu\)m and 2.2-\(\mu\)m absorption bands in Section 3. We summarize our results in Section 4, our spectral modeling in Section 5, and discuss the implications of our findings in Section 6.
### Geology of Miranda
Miranda is the smallest of the five classical Uranian satellites. With a radius of \(\sim\) 234 km, it is intermediate in size between the Saturnian moons Mimas and Enceladus. Prior to the Voyager 2 flyby of Uranus in 1986, Miranda was expected to possess a surface similar to Mimas (heavily cratered, with minimal to no evidence of geological activity). However, imaging of Miranda's southern hemisphere collected by Voyager 2 revealed an icy body where three large regions of its surface appeared to be geologically young (called "coronae"), while other regions appeared to be ancient and heavily cratered (Figure 1)(Smith et al., 1986; Greenberg et al., 1991; Schenk and Moore, 2020). The coronae are bounded by large tectonic fault systems, and both Arden and Inverness Corona possess patches of high albedo
Figure 1: (Left panel): An image mosaic of Miranda’s southern hemisphere from imaging by Voyager 2 in 1986. We use an orthographic projection centered on the south pole, using the Miranda mosaic produced by Schenk and Moore (2020). The ‘coronae’ are regions heavily modified by tectonic activity and possessing intriguing patches of high-albedo material. Only two of the disk-integrated spectra in our dataset were observed with sub-observer latitudes on the southern hemisphere: UT990607, green (Bauer et al., 2002) and UT000907, blue (Cartwright et al., 2018). Errorbars in longitude represent the range of sub-observer longitudes covered during the duration of that observation. (Right panel): All other spectra in our dataset observed Miranda’s northern hemisphere, which was in darkness at the time of the Voyager flyby and has no imaging data available. Orange spectra were observed with TripleSpec, blue spectra were observed with SpeX, and red spectra were observed with GNIRS. Longitudes appear to increase in opposite directions as seen from the north and south poles, so we chose to orient the 0\({}^{\circ}\)meridian at the top of the figure in both panels.
material that might represent deposits of fresher H\({}_{2}\)O ice. At the time, the incongruity of such a small icy body possessing complex tectonic and geological surface features led to the suggestion that Miranda had been disrupted into large chunks after a giant impact and subsequently reaccreted (Smith et al., 1986; Janes and Melosh, 1988). Later work suggests that Miranda's bizarre coronae instead represent the surface expression of internal upwelling from low-order convection (e.g. Pappalardo et al., 1997; Hammond and Barr, 2014), likely with tidal interactions from orbital resonances as a heat source. Crater counts suggest that the coronae are geologically young (0.1 - 1 Gyr, Zahnle et al. (2003); Kirchoff et al. (2022)), and Miranda's larger neighbor Ariel also displays a relatively young surface, implying that Miranda and Ariel have both experienced complex geological histories.
Furthermore, it has previously been noted that Miranda's large scale geology is reminiscent of the similarly sized active ocean world Enceladus, such as the presence of three regions of heavily tectonized terrain at the south pole and near the centers of the leading and trailing hemispheres, interspersed with heavily cratered ancient terrain (Pappalardo and Schubert, 2013; Beddingfield and Cartwright, 2020). Miranda is not currently in an orbital resonance that should produce significant internal heating, so the retention of an Enceladus-like subsurface ocean over geological timescales is unlikely (Hussmann et al., 2006; Castillo-Rogez et al., 2023). However, the surface geology indicates that Miranda likely experienced one or more significant heating events in the past (Beddingfield et al., 2015; Schenk and Moore, 2020; Beddingfield et al., 2022), and the apparent youthfulness of the coronae suggests that these events may have been geologically recent.
### H\({}_{2}\)O and CO\({}_{2}\) ice
The presence and spectral properties of H\({}_{2}\)O ice on the Uranian satellites have been discussed at length in prior work (Brown and Cruikshank, 1983; Brown and Clark, 1984; Grundy et al., 1999; Bauer et al., 2002; Grundy et al., 2003, 2006; Cartwright et al., 2015, 2018, 2020; DeColibus et al., 2022). We therefore give only a brief overview in this section. The surfaces of the Uranian satellites are characterized primarily by their lower albedos in comparison to similarly sized Saturnian satellites. These low albedos are attributed to a dark, spectrally neutral component (often assumed to be carbonaceous in nature) mantling or intermixed with the icy regolith. This dark material effectively erases the 1.04-\(\mu\)m and 1.25-\(\mu\)m H\({}_{2}\)O ice bands and substantially decreases the depth of the 1.5-\(\mu\)m and 2.0-\(\mu\)m absorption band complexes. The four largest Uranian satellites show leading/trailing asymmetries in the strength of their H\({}_{2}\)O ice bands, likely due to a combination of magnetospheric irradiation and impact gardening. Miranda does not show a leading/trailing asymmetry in the strength of its H\({}_{2}\)O ice bands, although it possesses an anti-Uranus/sub-Uranus asymmetry instead (Cartwright et al., 2018; DeColibus et al., 2022).
Previous work reported the presence of CO\({}_{2}\) ice on the four largest Uranian satellites (Grundy et al., 2003, 2006; Cartwright et al., 2015, 2022). This CO\({}_{2}\) ice is identified via a triplet of narrow absorption features at 1.966, 2.012, and 2.070 \(\mu\)m. Weaker CO\({}_{2}\) ice absorption bands are also present at 1.543, 1.578, and 1.609 \(\mu\)m, mostly on Ariel, which has the strongest CO\({}_{2}\) ice signatures. This CO\({}_{2}\) ice is thought to be present primarily as pure deposits and not intermixed with the H\({}_{2}\)O ice regolith. Furthermore, an absorption feature near 2.134 \(\mu\)m, arising from a forbidden 2\(\nu_{3}\) overtone mode of CO\({}_{2}\) that only appears in molecular mixtures of CO\({}_{2}\) ice in H\({}_{2}\)O ice and/or methanol (CH\({}_{3}\)OH) ice (Bernstein et al., 2005), might be present on Ariel (Cartwright et al., 2022) and Umbriel (Cartwright et al., 2023).
The distribution of CO\({}_{2}\) ice on the Uranian moons shows both longitudinal and planetocentric trends. The CO\({}_{2}\) absorption features are stronger on moons closer to Uranus, and on each individual moon, the CO\({}_{2}\) ice bands are stronger on their trailing hemispheres. This information supports an interpretation in which CO\({}_{2}\) molecules are produced _in situ_ by magnetospheric bombardment of the trailing hemispheres of the Uranian satellites. Irradiation of the H\({}_{2}\)O ice and carbonaceous compounds in the regolith dissociates molecules that then recombine into CO\({}_{2}\), potentially with CO as an intermediary product (Cartwright et al., 2022). Plasma densities in the magnetosphere are higher closer to Uranus, implying that a radiolytic production process of CO\({}_{2}\) should plausibly be more effective on moons closer to the planet. This assertion is supported by the relative strength of the CO\({}_{2}\) absorption features across the various satellites, as the CO\({}_{2}\) signature is strongest on Ariel and weakest on Oberon, which spends part of its orbit outside of the Uranian magnetosphere (Ness et al., 1986). Thermodynamical modeling work indicates that radiolytically generated CO\({}_{2}\) molecules should preferentially accumulate in cold traps at low latitudes on the trailing hemispheres of each satellite, assuming radiolytic production is greater on their trailing sides (Grundy et al., 2006; Sori et al., 2017). More recent telescope observations show a decrease in the strength of the CO\({}_{2}\) ice bands on Ariel at higher sub-observer latitudes, consistent with the hypothesis that CO\({}_{2}\) cold traps are concentrated at low latitudes (Cartwright et al., 2022).
Miranda is the innermost of the five classical satellites, and therefore, it should be the most irradiated by charged particles trapped in the Uranian magnetosphere. Miranda presumably possesses the same "raw ingredients" in its regolith (carbonaceous compounds and H\({}_{2}\)O ice) as the other Uranian satellites (Brown and Clark, 1984), unless Miranda formed from a different mix of materials, possibly as a consequence of the Uranus-tilting event (Rufu and Canup, 2022; Salmon and Canup, 2022). It therefore follows that radiolytic production of CO\({}_{2}\) molecules should also be occurring on Miranda, and it may also maintain deposits of CO\({}_{2}\) ice like the other Uranian moons. However, previous spectroscopic studies of Miranda have not detected CO\({}_{2}\) ice on its surface (Bauer et al., 2002; Gourgeot et al., 2014; Cartwright et al., 2018). These prior studies were somewhat limited by low signal-to-noise ratio (S/N) reflectance spectra in the 2.0-\(\mu\)m region, due to several observational limitations. Miranda is much fainter than the other Uranian moons at K\({}_{mag}\sim\) 15, compared to K\({}_{mag}\sim\)12-13 for the larger moons. The CO\({}_{2}\) ice absorption features of interest are narrow, requiring higher spectral resolving power and therefore lower S/N per spectral pixel. The CO\({}_{2}\) ice absorption bands are superimposed on the wide and deep 2.0-\(\mu\)m H\({}_{2}\)O ice absorption band, and the CO\({}_{2}\) ice bands are also in a region of strong telluric absorption from CO\({}_{2}\) in the Earth's atmosphere. Although the spectral signature of atmospheric CO\({}_{2}\) is distinctly different than that of crystalline CO\({}_{2}\) ice (e.g., Hansen, 1997), telluric contamination introduces additional uncertainty into the spectral data points. These prior observations established CO\({}_{2}\) ice band depth upper limits of 5% of the continuum on Miranda. Therefore, if CO\({}_{2}\) ice is present, it is much less abundant on Miranda than on Ariel (Cartwright et al., 2018). Additionally, CO\({}_{2}\) molecules in a molecular mixture with H\({}_{2}\)O ice might be present and detectable prior to escaping Miranda's low gravity. Investigating whether Miranda spectra show the 2.13-\(\mu\)m 2\(\nu_{3}\) overtone band could help determine whether regolith-mixed CO\({}_{2}\) is present. The primary questions relating to CO\({}_{2}\) that this work aims to answer are therefore twofold: (1) do concentrated CO\({}_{2}\) ice deposits exist on Miranda's surface at low abundances, and (2) is there evidence for CO\({}_{2}\) in a molecular mixture with H\({}_{2}\)O ice in the regolith?
### The 2.2-\(\mu\)m feature
Some icy bodies show evidence for weak absorption features in the region between 2.18 and 2.26 \(\mu\)m, which we refer to generically as the 2.2-\(\mu\)m feature. The 2.2-\(\mu\)m feature on icy bodies has frequently been attributed to ammonia (NH\({}_{3}\))- or ammonium (NH\({}_{4}\))-bearing compounds (which we will collectively refer to as NH-bearing). NH-bearing species are of significant geological and astrobiological interest, as NH\({}_{3}\) is a potent antifreeze, capable of reducing the freezing point of liquid H\({}_{2}\)O by nearly 100 K if present in sufficiently high abundances (Kargel, 1992). This makes NH\({}_{3}\) an important compound for enabling internal activity and extending the retention of subsurface oceans on icy bodies. NH\({}_{4}\)-bearing salts and minerals are much less effective than NH\({}_{3}\) in an antifreeze role (Neveu et al., 2017), but indicate the past presence of NH\({}_{3}\), and NH\({}_{4}^{+}\) is readily produced by irradiation of H\({}_{2}\)O + NH\({}_{3}\) ice mixtures (Moore et al., 2003, 2007).
Despite the expected ubiquity of NH\({}_{3}\)-bearing compounds in the solar nebula at intermediate to large heliocentric distances (Lewis, 1971), the 2.2-\(\mu\)m feature is not observed in the spectra of many icy bodies, possibly because of removal by irradiation over short geological timescales. In the case of Miranda, NH\({}_{3}\) may be effectively removed from its surface in timescales as short as \(<\)10\({}^{6}\) years (Moore et al., 2007). Detection of the 2.2-\(\mu\)m feature could therefore imply that NH\({}_{3}\)-bearing compounds have been exposed or emplaced in the geologically recent past, either via endogenic processes such as tectonism and cryovolcanism or via mass wasting and impact events. The possibility of recent endogenic activity on Miranda is particularly tantalizing given the apparent youth of some regions of its heavily modified surface, discussed in §1.1, and past reports of a weak 2.2-\(\mu\)m absorption feature (Bauer et al., 2002).
#### 1.3.1 The 2.2-\(\mu\)m feature on other icy bodies
One of the icy bodies that shows the most consistent evidence of a 2.2-\(\mu\)m feature is Charon (e.g., Brown and Calvin, 2000; Dumas et al., 2001). Prior to the New Horizons flyby of the Pluto system, ground-based spectroscopic studies investigated the 2.2-\(\mu\)m feature on Charon in detail, finding evidence for longitudinal variation in the strength and wavelength position of the 2.2-\(\mu\)m band (Cook et al., 2007; Merlin et al., 2010; DeMeo et al., 2015; Holler et al., 2017). Data acquired during the New Horizons flyby further confirmed the band's presence on Charon and an apparent spatial association with impact craters (Grundy et al., 2016; Cook et al., 2018; Dalle Ore et al., 2018). Dalle Ore et al. (2019) and Cruikshank et al. (2019) found the 2.2-\(\mu\)m band to be present near geologically young surface features on Pluto that also exhibit spectral evidence of H\({}_{2}\)O ice, strengthening a possible association with emplacement of NH-bearing compounds via cryovolcanism/internal activity. In contrast, Cook et al. (2018) reported on strong 2.2-\(\mu\)m bands on Pluto's minor moons Nix and Hydra, which are too small to support internal geological activity.
In addition to the Uranian satellites and icy bodies in the Pluto system, detections of weak 2.2-\(\mu\)m features have also been reported on Enceladus (Emery et al., 2005; Verbiscer et al., 2006), Tethys (Verbiscer et al., 2008), Orcus (Carry et al., 2011), Haumea and its satellite Hi'iaka (Barkume et al., 2006), and Quaoar (Jewitt and Luu, 2004). NH\({}_{4}\)-bearing species have been reported on Ceres (King et al., 1992; de Sanctis et al., 2016; Raponi et al., 2019) and comets (Poch et al., 2020), and the 2.2-\(\mu\)m feature on Charon and the smaller moons in the Pluto system has been attributed to NH\({}_{4}\)Cl (Cook et al., 2018, 2023). However, a 2.2-\(\mu\)m absorption band is not exclusive to NH-bearing species. Other minerals and compounds can also exhibit absorption in this region, including many phyllosilicates (hydrated silicates) with an Al-OH bond (Clark et al., 1990). Certain types of phyllosilicates are also prone to incorporating NH\({}_{4}\) into their composition through cation exchange processes (Bishop et al., 2002; Berg et al., 2016).
#### 1.3.2 The 2.2-\(\mu\)m feature on the Uranian satellites
Bauer et al. (2002) was the first work to identify a 2.2-\(\mu\)m feature on any of the Uranian satellites, reporting a relatively strong band on Miranda (plotted in green in the left panel of Figure 2). The authors reported an absorption band similar to that of the 2.21-\(\mu\)m feature on Charon, and attributed it to ammonia hydrate (NH\({}_{3}\cdot n\)H\({}_{2}\)O). At the time, Miranda was only the third icy body on which a 2.2-\(\mu\)m band had been reported. However, subsequent spectral studies of Miranda (Gourgeot et al., 2014; Cartwright et al., 2018) did not find conclusive evidence of a 2.2-\(\mu\)m absorption feature, leaving its existence an open question.
From visual inspection, Cartwright et al. (2018) noted weak absorption features in the 2.2-\(\mu\)m region on Ariel, Umbriel, Titania, and Oberon. The authors followed up with a detailed study of 2.2-\(\mu\)m features on Ariel in Cartwright et al. (2020), finding several different absorption bands associated with NH\({}_{3}\)- and NH\({}_{4}\)-bearing compounds, which varied in strength and band center in different spectra. They found no large scale longitudinal patterns in these variations. They compared these bands with NH\({}_{3}\) hydrates (NH\({}_{3}\cdot n\)H\({}_{2}\)O, 2.215 \(\mu\)m), flash frozen NH\({}_{3}\)-H\({}_{2}\)O solutions (2.210 \(\mu\)m), NH\({}_{3}\) ice (2.238 \(\mu\)m), and an NH\({}_{4}\)-bearing species (possibly (NH\({}_{4}\))\({}_{2}\)CO\({}_{3}\), 2.181 \(\mu\)m). Some of these bands have been reported to show double absorptions and wavelength shifts based on temperature and the ratio of NH\({}_{3}\) to H\({}_{2}\)O (Moore et al., 2007), further complicating the identification of individual compounds.
Further research has also presented evidence of similar weak absorption features in the 2.2-\(\mu\)m region on Umbriel (Cartwright et al., 2023). However, the surface of Umbriel is ancient and heavily cratered (within the resolution limits of Voyager 2 imaging), lacking evidence of resurfacing driven by endogenic activity. The presence of 2.2-\(\mu\)m bands in spectra of Umbriel raises important questions about the presumed short lifetime of NH-bearing species on the surfaces of the Uranian moons (Moore et al., 2007), and also raises the possibility that other species such as phyllosilicate minerals or nitrogen-bearing organics contribute to the 2.2-\(\mu\)m band, at least on Umbriel.
## 2 Observations
Table 1: Miranda Observations

| Observing PI | Year(s) | Latitude(s) (\({}^{\circ}\)N, Sub-Earth) | Telescope | Instrument | Resolving power (\(\lambda/\Delta\lambda\)) | N\({}_{spec}\) | Reference |
|---|---|---|---|---|---|---|---|
| Bauer | 1999 | -36.5 | UKIRT | CGS4 | \(\sim\)200 | 1 | Bauer et al. (2002) |
| Rivkin | 2000 | -35.4 | IRTF | SpeX SXD | \(\sim\)750 | 1 | Cartwright et al. (2018) |
| Gourgeot | 2012 | 21.4 | IRTF | SpeX SXD | \(\sim\)750 | 2 | Gourgeot et al. (2014) |
| Cartwright | 2014–2017 | 24.7 – 36.7 | IRTF | SpeX SXD | \(\sim\)750 | 2 | Cartwright et al. (2018) |
| Cartwright | 2014–2017 | 24.7 – 36.7 | IRTF | SpeX PRISM | \(\sim\)95 | 5 | Cartwright et al. (2018) |
| DeColibus | 2019–2021 | 42.3 – 50.4 | ARC 3.5m | TripleSpec | \(\sim\)3500 | 18 | DeColibus et al. (2022) |
| DeColibus | 2020 | 47.2 – 49.6 | Gemini N | GNIRS XD | \(\sim\)1130 | 2 | DeColibus et al. (2022) |
| DeColibus | 2021 | 53.3 – 53.8 | Gemini N | GNIRS XD | \(\sim\)750 | 2 | DeColibus et al. (2022) |

Note. – A table of basic information about the collection of Miranda spectra analyzed in this work. For more details, we direct the reader to the listed references.
We analyzed 33 disk-integrated near-IR spectra of Miranda, covering the wavelengths between \(\sim\)0.95 - 2.45 \(\mu\)m, summarized in Table 1. Eighteen of these spectra were obtained with the TripleSpec spectrograph on the ARC 3.5-meter telescope at Apache Point Observatory, four spectra were acquired with GNIRS on the 8.1-meter Gemini North telescope on Maunakea, and ten spectra were observed with SpeX on NASA's 3-meter Infrared Telescope Facility (IRTF) on Maunakea. For information on the acquisition and data reduction of the TripleSpec and GNIRS spectra, we refer the reader to DeColibus et al. (2022), while for the SpeX spectra, we refer the reader to Gourgeot et al. (2014) and Cartwright et al. (2018). Table 1 lists the average spectral resolving power (\(R=\lambda/\Delta\lambda\)) of data acquired with each
Figure 2: (Left panel): Examples of Miranda spectra displaying weak absorption features around 2.2 \(\mu\)m. The green spectrum is the (unbinned) spectrum reported in Bauer et al. (2002). The two blue spectra were obtained with SpeX in the PRISM and SXD configuration (unbinned and binned by 10 pixels, respectively). The orange spectra are spectra from individual nights with TripleSpec (binned by 30 pixels). The text next to each spectrum indicates the date of observation in format UTYYMMDD (e.g. UT200907 = UT 2020-09-07). This UTYYMMDD date format is used throughout this work. Vertical lines are placed at the expected wavelengths for the absorption features studied in this work: brown lines at 1.966, 2.012, and 2.070 \(\mu\)m for the CO\({}_{2}\) ice triplet, a black line at 2.134 \(\mu\)m for a feature associated with molecular mixtures including CO\({}_{2}\) ice, a blue line at 2.21 \(\mu\)m for NH\({}_{3}\)-hydrates, and a blue line at 2.24 \(\mu\)m for crystalline NH\({}_{3}\) ice. The shaded region indicates the wavelength range measured for the 2.2-\(\mu\)m band in this work. (Right panel): The GNIRS spectra and TripleSpec quadrant and hemisphere grand average spectra originally reported in DeColibus et al. (2022). Colored points are binned by 15 pixels, and the gray error bars are presented at the native resolution of the data.
instrument configuration. Two spectral data points can be placed across the narrow CO\({}_{2}\) absorption bands at R\(\sim\)750, so they should be detectable in spectra acquired at this or higher spectral resolution. All of the spectra except those acquired with the SpeX PRISM configuration have sufficient spectral resolution to at least detect the CO\({}_{2}\) bands, and the 2.2-\(\mu\)m feature is broad enough to be detectable in the PRISM spectra. The dataset also includes seven TripleSpec 'grand average' (GAvg) spectra, which were constructed as averages of all TripleSpec exposures in which the sub-observer longitude fell within defined ranges. These are the leading, trailing, sub-Uranus, and anti-Uranus quadrants; the leading and trailing hemispheres; and all exposures regardless of longitude. These grand average spectra and the GNIRS spectra are plotted in Figure 2.
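As a rough check of this sampling argument (our own back-of-the-envelope arithmetic, using the band widths listed in Table 2):

\[ \Delta\lambda \simeq \frac{\lambda}{R} \approx \frac{2.0\ \mu\mathrm{m}}{750} \approx 2.7\times10^{-3}\ \mu\mathrm{m}, \]

so a CO\({}_{2}\) band of width \(\sim\)0.005 - 0.007 \(\mu\)m spans roughly two resolution elements at \(R\sim\)750, whereas at \(R\sim\)95 (SpeX PRISM, \(\Delta\lambda\approx\)0.021 \(\mu\)m) the individual triplet bands are unresolved.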
We also modified one spectrum in the above dataset. The GNIRS anti-Uranus quadrant spectrum from UT211121 displayed a high S/N, but the error bars on the spectrum were uniformly overestimated compared to what might reasonably be expected given the apparent S/N of the data and when compared to the other GNIRS spectra obtained in similar conditions (see Figure 5 of DeColibus et al. (2022)). In order to calculate more accurate uncertainties for each data point while retaining wavelength-dependent error information, such as increased uncertainties in regions of heavy telluric absorption, we adopted a procedure similar to that of Holler et al. (2022). We calculated the standard deviation of the flux values and the mean of the original uncertainty values in a region of the spectrum that could reasonably be described by a flat continuum (1.70 - 1.80 \(\mu\)m). Dividing the mean uncertainty by the standard deviation of the flux yields the factor by which the errors in the original spectrum were overestimated, approximately 3.32. Division by this factor resulted in a more accurate estimate of the uncertainties for the data points in that spectrum, and the corrected spectrum was used for all analyses presented here.
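A minimal Python sketch of this rescaling step is shown below; the array names (`wave`, `flux`, `err`) and the function name are ours, not taken from the original reduction pipeline.

```python
import numpy as np

def rescale_uncertainties(wave, flux, err, lo=1.70, hi=1.80):
    """Rescale overestimated error bars using a flat continuum window.

    The correction factor is the mean of the reported uncertainties divided
    by the point-to-point scatter (standard deviation) of the flux in a
    region assumed to be featureless (here 1.70-1.80 micron).
    """
    in_window = (wave >= lo) & (wave <= hi)
    scatter = np.std(flux[in_window])      # empirical noise estimate
    mean_err = np.mean(err[in_window])     # reported (overestimated) errors
    factor = mean_err / scatter            # ~3.32 for the UT211121 spectrum
    return err / factor, factor
```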
Finally, we also digitized the spectrum of Miranda's leading hemisphere published in Bauer et al. (2002), in which the presence of a 2.2-\(\mu\)m feature was first reported. The K-band portion of this spectrum was acquired on 1999 June 7 with the CGS4 spectrograph on the 3.8-meter UKIRT telescope on Maunakea. We utilized timing and observing geometry information provided in Table 1 of Bauer et al. (2002), combined with the JPL HORIZONS ephemeris service, to derive a central sub-observer longitude of 75.7\({}^{\circ}\)E and a latitude of 36.5\({}^{\circ}\)S. For further information on the Bauer et al. spectrum, we direct the reader to the original work. This spectrum and one SpeX SXD spectrum acquired in 2000 at latitude 35.4\({}^{\circ}\)S are the only two southern hemisphere spectra in our data set. All other spectra were acquired at northern sub-observer latitudes (Figure 1).
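For readers who wish to reproduce this kind of geometry calculation programmatically, a minimal sketch using astroquery's HORIZONS interface is given below. The epochs, the observatory code, and the `PDObsLon`/`PDObsLat` column names are assumptions on our part (keyword defaults differ between astroquery versions), and the derivation above used the HORIZONS service directly rather than this code.

```python
from astroquery.jplhorizons import Horizons

# Miranda is major body 705 in HORIZONS; observer code 568 is Maunakea.
# Depending on the astroquery version, you may need to set id_type so that
# '705' resolves to the satellite Miranda rather than asteroid (705).
obj = Horizons(id='705', location='568',
               epochs={'start': '1999-06-07 10:00',
                       'stop': '1999-06-07 14:00',
                       'step': '30m'})
eph = obj.ephemerides()

# Planetodetic sub-observer longitude and latitude (assumed column names).
print(eph[['datetime_str', 'PDObsLon', 'PDObsLat']])
```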
## 3 Spectral Analysis
We measured the integrated band areas of three CO\({}_{2}\) ice bands in the Miranda spectra. The CO\({}_{2}\) ice bands at 1.966, 2.012, and 2.070 \(\mu\)m are referred to in this work as the "CO\({}_{2}\) ice triplet" as a whole and as bands 1, 2, and 3 individually. We also measured the integrated band areas and fractional band depths of a subtle feature at 2.13-\(\mu\)m and the 2.2-\(\mu\)m absorption band.
### Band measurement methods
Table 2: Absorption band wavelengths

| Band name | Left continuum (\(\mu\)m) | Band width (\(\mu\)m) | Right continuum (\(\mu\)m) | Center (\(\mu\)m) |
|---|---|---|---|---|
| CO\({}_{2}\) 2\(\nu_{1}\)+\(\nu_{3}\) (Band 1) | 1.957 – 1.962 | 1.962 – 1.969 | 1.969 – 1.974 | 1.966 |
| CO\({}_{2}\) \(\nu_{1}\)+2\(\nu_{2}\)+\(\nu_{3}\) (Band 2) | 2.002 – 2.008 | 2.008 – 2.015 | 2.015 – 2.020 | 2.012 |
| CO\({}_{2}\) 4\(\nu_{2}\)+\(\nu_{3}\) (Band 3) | 2.062 – 2.068 | 2.068 – 2.073 | 2.073 – 2.079 | 2.070 |
| 2.13-\(\mu\)m band | – | 2.121 – 2.135 | – | 2.134 |
| 2.2-\(\mu\)m band | – | 2.186 – 2.251 | – | 2.21 |

Note. – We tabulate the wavelengths we used to define the band continua, widths, and centers. For the 2.13-\(\mu\)m and 2.2-\(\mu\)m bands, the continuum was fit with the 3rd-degree polynomial described in the text. 'Center' refers to the expected central wavelength of the absorption band.
Our band area and depth measurements were conducted with the same analysis routine described in DeColibus et al. (2022), modified for use on the CO\({}_{2}\) ice bands, 2.13-\(\mu\)m band, and 2.2-\(\mu\)m band. This routine is a Python implementation of the technique described in Cartwright et al. (2015, 2018), which used a modified version of the SARA band analysis routine originally developed for asteroid spectra (Lindsay et al., 2015). The band analysis code generates a sample spectrum, drawing from a Gaussian distribution where the mean and standard deviation are the flux and errors of the input spectrum, respectively. The absorption bands of interest (Table 2) are measured by drawing a line between the continuum on either side of the absorption band, normalizing by this line, and integrating the area inside the absorption band with the trapezoidal rule. The CO\({}_{2}\) ice absorption bands are further fit by a Gaussian function to determine the band centers, from which the depths of the bands are measured using the points within 0.0005 \(\mu\)m of the center of the band. This choice of band width for the depth measurement was determined from inspection of the CO\({}_{2}\) ice bands in spectra of Ariel. This process is demonstrated visually using a GNIRS spectrum of Miranda's leading hemisphere in Figure 3. To calculate the uncertainties on each measurement, this entire sample spectrum generation and measurement process is repeated 20,000 times, which is a number of samples consistent with other
Figure 3: A demonstration of the band area and depth measurement process for the CO\({}_{2}\) ice triplet and the 2.13-\(\mu\)m and 2.2-\(\mu\)m bands, using the GNIRS leading hemisphere spectrum of Miranda from UT201008. The data points over which the band areas are integrated are marked in blue, the points where the band depth is measured in red, continuum fits are cyan lines, and Gaussian band fits are orange lines. A copy of the spectrum plotted with an offset has been binned by five pixels (turquoise points). Vertical lines are the same as in Figure 2.
Monte Carlo type approaches to measurements of band parameters (e.g. Lindsay et al., 2015; Cartwright et al., 2015). The mean and standard deviation of the measurements of this ensemble are reported as the final band measurements for each individual spectrum in Table 3. For each spectrum, we also summed the band area measurements for the three CO\({}_{2}\) bands into a total CO\({}_{2}\) band area. Due to inadequate spectral resolution, we did not measure the CO\({}_{2}\) ice triplet for the Bauer et al. (2002) spectrum nor the SpeX spectra collected in PRISM mode.
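A minimal Python sketch of this resampling scheme is given below. It is our own simplification of the published routine (for brevity it omits the Gaussian fit used to locate the band centers and measure band depths), and the array and argument names are ours.

```python
import numpy as np

def band_area_mc(wave, flux, err, left, band, right, n_trials=20000, seed=0):
    """Monte Carlo integrated band area with a linear continuum (a sketch).

    left, band, and right are (min, max) wavelength windows in microns, e.g.
    the CO2 Band 1 values from Table 2:
        left=(1.957, 1.962), band=(1.962, 1.969), right=(1.969, 1.974).
    """
    rng = np.random.default_rng(seed)
    lmask = (wave >= left[0]) & (wave <= left[1])
    bmask = (wave >= band[0]) & (wave <= band[1])
    rmask = (wave >= right[0]) & (wave <= right[1])
    areas = np.empty(n_trials)
    for i in range(n_trials):
        # Resample the spectrum assuming Gaussian errors on each flux point.
        sample = rng.normal(flux, err)
        # Linear continuum through the mean of the two continuum windows.
        xc = np.array([wave[lmask].mean(), wave[rmask].mean()])
        yc = np.array([sample[lmask].mean(), sample[rmask].mean()])
        cont = np.interp(wave[bmask], xc, yc)
        # Integrate (1 - normalized flux) across the band (trapezoidal rule).
        depth = 1.0 - sample[bmask] / cont
        areas[i] = np.sum(0.5 * (depth[1:] + depth[:-1]) * np.diff(wave[bmask]))
    return areas.mean(), areas.std()
```

For example, `band_area_mc(wave, flux, err, (1.957, 1.962), (1.962, 1.969), (1.969, 1.974))` would return the mean and standard deviation of the 1.966-\(\mu\)m band area over the Monte Carlo trials.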
The process for the 2.13-\(\mu\)m and 2.2-\(\mu\)m bands was slightly different. Instead of measuring the continuum by drawing a line on either side of the absorption band, the continuum is defined with a third-order polynomial curve, fit to the data points between 2.11 - 2.19 \(\mu\)m and 2.23 - 2.34 \(\mu\)m. The region between 2.19 - 2.23 \(\mu\)m is not used in the fit to avoid the influence of any possible 2.2-\(\mu\)m absorption features. The spectrum is divided by this continuum model, and the integrated band areas and fractional band depths are measured in wavelength ranges corresponding to the absorption bands of interest. We used a Gaussian fit procedure similar to the one used for the CO\({}_{2}\) ice triplet to measure the central wavelength and depth of the 2.13-\(\mu\)m band. However, for the 2.2-\(\mu\)m band, the potential presence of multiple absorptions across the 2.18 - 2.25 \(\mu\)m range would be poorly fit by a Gaussian model. We instead binned the spectrum by five pixels and chose the lowest data point in the continuum-divided band as the central wavelength. The fractional band depth for the 2.2-\(\mu\)m band is measured using the mean of the unbinned, native resolution data points within \(\pm\)0.002 \(\mu\)m of this central wavelength. This process was also iterated 20,000 times for each spectrum, and the mean and standard deviation of the measurements for each individual spectrum are reported in Table 4.
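The sketch below illustrates the polynomial-continuum variant for the 2.2-\(\mu\)m band depth; it is again a simplification (it skips the 5-pixel binning used to locate the band center), and the names are ours.

```python
import numpy as np

def band_depth_2p2_mc(wave, flux, err, n_trials=20000, seed=0):
    """Sketch of the 2.2-micron band-depth measurement with a cubic continuum.

    Continuum windows: 2.11-2.19 and 2.23-2.34 micron (2.19-2.23 excluded);
    band window: 2.186-2.251 micron (Table 2).
    """
    rng = np.random.default_rng(seed)
    cmask = ((wave >= 2.11) & (wave <= 2.19)) | ((wave >= 2.23) & (wave <= 2.34))
    bmask = (wave >= 2.186) & (wave <= 2.251)
    depths = np.empty(n_trials)
    for i in range(n_trials):
        sample = rng.normal(flux, err)
        coeffs = np.polyfit(wave[cmask], sample[cmask], 3)   # 3rd-degree continuum
        norm = sample[bmask] / np.polyval(coeffs, wave[bmask])
        # Lowest point of the continuum-divided band defines the band center.
        center = wave[bmask][np.argmin(norm)]
        near = np.abs(wave[bmask] - center) <= 0.002
        depths[i] = 1.0 - norm[near].mean()
    return depths.mean(), depths.std()
```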
### Means, ratios, and sinusoidal models
We calculated the mean band parameters, using the final measurements of each of the spectra that fall within given sub-observer longitude quadrants and hemispheres. We also calculated a set of mean measurements that included all spectra from individual nights (i.e. not including the TripleSpec grand averages). These mean measurements are tabulated in Table 5 and plotted in Figure 5. Figure 5 also includes the band measurements from the quadrant and hemisphere-averaged TripleSpec grand average spectra and the GNIRS spectra for comparison to the mean measurements.
We took the ratios of the mean band measurements between opposing quadrants and hemispheres (Table 6) to test whether a band was statistically stronger on one quadrant/hemisphere versus the other, as might be expected from exogenic effects. We did not find statistically significant (\(\geq 2\sigma\)) departures from a ratio of unity for any of the measurements or quadrant/hemisphere pairs.
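A ratio test of this kind amounts to simple error propagation; the sketch below uses illustrative numbers taken from Table 5 (the leading- and trailing-hemisphere 2.2-\(\mu\)m depths), so the result differs slightly from the exact entries in Table 6, which are computed from the underlying measurements.

```python
import numpy as np

def ratio_with_error(a, sa, b, sb):
    """Ratio of two band measurements with first-order error propagation.

    For r = a / b, sigma_r = |r| * sqrt((sa/a)**2 + (sb/b)**2); a departure
    of r from unity by >= 2 sigma_r would indicate a hemispherical asymmetry.
    """
    r = a / b
    sr = abs(r) * np.sqrt((sa / a) ** 2 + (sb / b) ** 2)
    return r, sr

# LH 2.2-micron depth: 5.4 +/- 1.2 %; TH: 5.3 +/- 1.0 % (Table 5)
print(ratio_with_error(5.4, 1.2, 5.3, 1.0))   # ~(1.02, 0.30): consistent with unity
```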
We fit our band measurements to a sinusoidal model to search for longitudinal variation, with the period fixed at \(2\pi\) to represent one rotation of the body. This approach has previously been used for Miranda and the other Uranian satellites (Grundy et al., 2006; Cartwright et al., 2015, 2018; DeColibus et al., 2022) and we direct the reader to those works for further information. We also applied an F-test to the sinusoidal models, which allows us to discern whether the sinusoidal model is a statistically significant improvement over the null hypothesis (no variation with sub-observer longitude). The sinusoidal models and F-tests are tabulated in Table 7.
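A sketch of the fixed-period sinusoidal fit and the associated F-test is given below, assuming scipy is available; this is our illustration of the approach, not the exact published implementation.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import f as f_dist

def fit_longitude_sinusoid(lon_deg, y, yerr):
    """Fit y(lon) = A*sin(lon + phi) + C with the period fixed at 360 deg,
    then compare it to a constant (weighted-mean) model with an F-test."""
    lon = np.radians(lon_deg)

    def model(x, amp, phase, const):
        return amp * np.sin(x + phase) + const

    popt, _ = curve_fit(model, lon, y, sigma=yerr, absolute_sigma=True,
                        p0=[np.ptp(y) / 2, 0.0, np.mean(y)])
    chi2_sin = np.sum(((y - model(lon, *popt)) / yerr) ** 2)

    const = np.average(y, weights=1.0 / yerr ** 2)        # null hypothesis
    chi2_const = np.sum(((y - const) / yerr) ** 2)

    n, p_sin, p_const = len(y), 3, 1
    F = ((chi2_const - chi2_sin) / (p_sin - p_const)) / (chi2_sin / (n - p_sin))
    p_value = f_dist.sf(F, p_sin - p_const, n - p_sin)    # small p favors the sinusoid
    return popt, F, p_value
```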
### Hapke-Mie modeling
We constructed spectral models in order to investigate possible absorption features in the 2.2-\(\mu\)m region. We chose a hybrid Hapke-Mie model using intimate (particulate) mixing, as has previously been done for the Uranian satellites (Cartwright et al., 2015, 2018, 2020). The Hapke-Mie model utilizes Mie theory to calculate the single scattering albedo of individual particles, which are then incorporated into the standard Hapke model equations. We assume an isotropic single scattering phase function which does not depend on wavelength. The Hapke-Mie approach accounts for the effects of grain sizes approaching wavelength, such as diffraction and Rayleigh scattering (Clark et al., 2012). However, at some particle sizes the exact Mie solution can introduce periodic resonance artifacts in the resulting spectrum. We mitigate this effect by calculating the average albedo of a spread of sizes around the specified grain size, typically with a 10% spread in particle diameter.
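The resonance-smoothing step can be sketched as an average of the Mie single-scattering albedo over a spread of grain diameters. The example below uses the miepython package (our choice for illustration; the models above may use a different Mie implementation), and the function and argument names are ours.

```python
import numpy as np
import miepython

def smoothed_ssa(m, diameter_um, wavelength_um, spread=0.10, n_sizes=21):
    """Single-scattering albedo averaged over a spread of grain diameters.

    m is the complex refractive index (miepython uses the n - i*k convention),
    and averaging over roughly +/-10% in diameter damps the periodic Mie
    resonances that a single exact grain size would introduce.
    """
    diameters = np.linspace(diameter_um * (1 - spread),
                            diameter_um * (1 + spread), n_sizes)
    ssa = []
    for d in diameters:
        x = np.pi * d / wavelength_um            # Mie size parameter
        qext, qsca, qback, g = miepython.mie(m, x)
        ssa.append(qsca / qext)
    return np.mean(ssa)
```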
As has been noted by previous studies, Miranda's spectrum is consistent with a surface composition incorporating H\({}_{2}\)O ice and a dark, spectrally neutral component, such as amorphous carbon (Brown and Clark, 1984; Bauer et al., 2002; Gourgeot et al., 2014). We scaled our observed spectra to a geometric albedo of 0.434 at 1.72 \(\mu\)m (from Figure 7 of Karkoschka (2001)). For our models, we used the optical constants of crystalline H\({}_{2}\)O ice at 80 K from Mastrapa et al. (2008) and optical constants of amorphous carbon (sample BE1) from Rouleau and Martin (1991).
We found that a small percentage of intimately mixed sub-micron H\({}_{2}\)O ice grains assist in reproducing Miranda's spectrum. The central wavelength of the 2.0-\(\mu\)m H\({}_{2}\)O ice band is shifted to slightly longer wavelengths, the continuum
in the 2.2-\(\mu\)m region is low compared to the 1.8-\(\mu\)m continuum, and the blue slope from 2.3 - 2.5 \(\mu\)m is steeper than in typical H\({}_{2}\)O ice models of larger diameter grains. These characteristics are consistent with the presence of sub-micron H\({}_{2}\)O ice grains (Clark et al., 2012), and previous studies have suggested that the regoliths of the Uranian satellites show a 'fluffy' structure and evidence of the presence of tiny grains with sub-micron diameters, likely composed of H\({}_{2}\)O ice (Afanasiev et al., 2014; Cartwright et al., 2020).
We adjusted individual parameters (grain size and mixing ratio) of the model components until we found a fit that generally reproduced the continuum shape of our spectra between 2.0 - 2.4 \(\mu\)m. We specifically attempted to be agnostic regarding any possible absorption features in the 2.2-\(\mu\)m region, as the possible presence of several overlapping weak bands made identifying a 'clean' continuum difficult. Because of degeneracies between model parameters, our spectral models provide useful but non-unique solutions. We also experimented with the addition of small amounts of other compounds (such as NH\({}_{3}\) ice) to study their effects on the resulting spectra. This analysis is described further in §5.
## 4 Results
### CO\({}_{2}\) ice triplet
The results of our CO\({}_{2}\) triplet band measurements are shown in Table 3 and Figures 4 and 5. Even the highest S/N Miranda spectra do not show adequate evidence for the presence of the CO\({}_{2}\) ice triplet around 2 \(\mu\)m. We measured the areas and depths of each individual CO\({}_{2}\) band, but \(\geq\)2\(\sigma\) significance in any of the band area or depth measurements was reached in only three spectra. To assist in the detection of very faint features, we also calculated the sum of all three CO\({}_{2}\) ice band areas. However, only a single spectrum (UT201212) passes the 2\(\sigma\) threshold for the total CO\({}_{2}\) band area. Visual inspection of that spectrum and other spectra in which only a single band was detected indicates that these 'detections' are simply spectral noise, probably due to imperfect telluric correction, especially in the 1.966-\(\mu\)m band affected by atmospheric water vapor. The quadrant- and hemisphere-averaged means of the total CO\({}_{2}\) band area (Table 5) only reinforce our non-detection. We conclude that discrete deposits of CO\({}_{2}\) ice are not present on Miranda in amounts detectable by our observations.
### 2.13-\(\mu\)m feature
We also measured the band area and depth of the 2.13-\(\mu\)m feature in spectra of Miranda, as this absorption band could potentially be associated with a molecular mixture of CO\({}_{2}\) and H\({}_{2}\)O ice (Bernstein et al., 2005). Weak absorption features near this wavelength have been noted on both Ariel and Umbriel and suggested to be potential evidence of CO\({}_{2}\):H\({}_{2}\)O ice molecular mixtures as part of the radiolytic CO\({}_{2}\) production cycle (Cartwright et al., 2022, 2023). Our band parameter measurements for the 2.13-\(\mu\)m band are shown in Table 4 and Figures 4 and 5. Two spectra show \(\geq\) 2\(\sigma\) significance in only the 2.13-\(\mu\)m area (UT201007 and the TripleSpec sub-Uranus quadrant grand average) or only the 2.13-\(\mu\)m depth (UT201104), but none show \(\geq\) 2\(\sigma\) significance in both. Our mean measurements do show \(\geq\) 2\(\sigma\) significance in the 2.13-\(\mu\)m depth when taking the mean of all spectra, regardless of longitude. However, the lack of consistency in the significance of the band area and depth measurements leads us to conclude that there is insufficient evidence for a 2.13-\(\mu\)m band on Miranda.
### 2.2-\(\mu\)m feature
The 2.2-\(\mu\)m bands in spectra of Miranda are weak, hindering clear identifications of the compounds that might be responsible. Furthermore, absorption bands in the 2.2-\(\mu\)m region can vary in central wavelength, with absorptions previously reported on other bodies at wavelengths ranging from 2.18 \(\mu\)m to 2.25 \(\mu\)m. However, we measured non-zero band areas and depths in many of our spectra: 8 out of the 33 spectra from individual nights have \(\geq\) 2\(\sigma\) measurements of both 2.2-\(\mu\)m band areas and 2.2-\(\mu\)m band depths, although a 2.2-\(\mu\)m feature is not always visually apparent. For the GNIRS spectra, we measured \(\geq\) 3\(\sigma\) significance in both 2.2-\(\mu\)m area and depth for three out of the four spectra. We measured \(\geq\) 3\(\sigma\) significance for both area and depth in three of the seven TripleSpec grand average spectra, and \(\geq\) 2\(\sigma\) significance for six of the seven. The trailing hemisphere GNIRS spectrum and TripleSpec grand average for the
\begin{table}
\begin{tabular}{l c c c c c c c c c c} \hline \hline \multicolumn{1}{c}{ UT Date} & Long. & Lat. & \multicolumn{3}{c}{Integrated band area} & Total CO\({}_{2}\) & \(>2\sigma\) & \multicolumn{3}{c}{Band depths} \\ \cline{3-10} & & & 1.966-\(\mu\)m & 2.012-\(\mu\)m & 2.070-\(\mu\)m & band area & total & 1.966-\(\mu\)m & 2.012-\(\mu\)m & 2.070-\(\mu\)m \\ & (\({}^{\circ}\)E) & (\({}^{\circ}\)N) & (\(10^{-4}\mu\)m) & (\(10^{-4}\mu\)m) & (\(10^{-4}\mu\)m) & (\(10^{-4}\mu\)m) & area? & (\%) & (\%) & (\%) \\ \hline UT210115 & 11.8 & 47.2 & \(-2.32\pm 5.07\) & \(-2.60\pm 7.33\) & \(0.08\pm 1.97\) & \(-4.84\pm 9.13\) & No & \(-0.00\pm 0.12\) & \(0.01\pm 0.16\) & \(0.04\pm 0.08\) \\ UT20129b & 29.7 & 47.2 & \(-11.33\pm 13.59\) & \(-1.76\pm 17.21\) & \(-0.39\pm 3.24\) & \(-13.48\pm 22.17\) & No & \(-0.07\pm 0.27\) & \(0.23\pm 0.32\) & \(0.09\pm 0.12\) \\ UT000907 & 36.3 & -35.4 & \(0.51\pm 1.07\) & \(-0.00\pm 1.91\) & \(0.64\pm 0.59\) & \(1.15\pm 2.27\) & No & \(0.02\pm 0.03\) & \(0.07\pm 0.04\) & \(0.03\pm 0.04\) \\ UT191022a & 37.7 & 44.3 & \(-7.37\pm 14.71\) & \(-6.94\pm 25.97\) & \(6.28\pm 4.22\) & \(-8.08\pm 30.15\) & No & \(0.08\pm 0.39\) & \(0.00\pm 0.50\) & \(0.22\pm 0.16\) \\ UT210101 & 51.8 & 47.2 & \(3.31\pm 2.02\) & \(-3.14\pm 3.93\) & \(-1.41\pm 1.10\) & \(-12.3\pm 4.55\) & No & \(0.08\pm 0.05\) & \(0.01\pm 0.10\) & \(-0.01\pm 0.04\) \\ UT201104 & 68.7 & 48.7 & \(-0.75\pm 3.21\) & \(-7.18\pm 4.97\) & \(-1.51\pm 1.38\) & \(-9.44\pm 6.07\) & No & \(0.08\pm 0.07\) & \(-0.06\pm 0.10\) & \(-0.05\pm 0.05\) \\ UT990607 & 75.7 & -36.5 & & & & & No & & & \\ UT170930 & 80.2 & 36.7 & & & & & No & & \\ UT19104 & 80.5 & 43.8 & \(1.84\pm 3.57\) & \({\bf 76.6\pm 3.72}\) & \(-0.66\pm 1.38\) & \(8.84\pm 5.34\) & No & \(0.08\pm 0.11\) & \(0.13\pm 0.12\) & \(0.00\pm 0.05\) \\ UT150912 & 92.1 & 30.4 & & & & & No & & & \\ UT200907 & 96.1 & 50.4 & \(2.12\pm 5.72\) & \(2.38\pm 5.36\) & \(-3.87\pm 1.85\) & \(0.63\pm 8.06\) & No & \(0.07\pm 0.12\) & \(0.13\pm 0.11\) & \(-0.04\pm 0.08\) \\ UT171010 & 107.1 & 36.1 & & & & & & No & & \\ UT201008g & 109.4 & 49.6 & \(-0.70\pm 0.98\) & \(1.84\pm 1.11\) & \(0.17\pm 0.48\) & \(1.31\pm 1.56\) & No & \(-0.01\pm 0.02\) & \(0.03\pm 0.03\) & \(0.01\pm 0.03\) \\ UT120926 & 152.6 & 21.4 & \(-6.72\pm 8.24\) & \(6.31\pm 9.35\) & \(-0.14\pm 3.53\) & \(-0.55\pm 12.96\) & No & \(-0.01\pm 0.19\) & \(0.14\pm 0.21\) & \(0.06\pm 0.13\) \\ UT201007 & 174.2 & 49.7 & \(1.88\pm 2.56\) & \(-1.10\pm 3.87\) & \(-0.03\pm 1.15\) & \(0.75\pm 4.78\) & No & \(0.03\pm 0.07\) & \(0.05\pm 0.10\) & \(0.00\pm 0.05\) \\ UT200913 & 184.9 & 50.3 & \(-3.33\pm 5.43\) & \(-5.83\pm 7.08\) & \(-1.73\pm 1.94\) & \(-10.89\pm 9.13\) & No & \(-0.02\pm 0.12\) & \(-0.04\pm 0.17\) & \(0.02\pm 0.07\) \\ UT200930 & 192.8 & 49.9 & \(2.79\pm 2.50\) & \(3.53\pm 3.78\) & \(0.09\pm 1.21\) & \(6.41\pm 4.69\) & No & \(0.07\pm 0.06\) & \(0.11\pm 0.08\) & \(0.03\pm 0.05\) \\ UT211129 & 20.7 & 53.3 & \(0.45\pm 0.82\) & \(1.37\pm 1.00\) & \(0.46\pm 0.52\) & \(2.28\pm 1.39\) & No & \(0.01\pm 0.02\) & \(0.04\pm 0.02\) & \(0.02\pm 0.02\) \\ UT170925 & 232.5 & 36.7 & \(2.16\pm 3.18\) & \(-6.70\pm 6.83\) & \(-2.47\pm 2.36\) & \(-7.01\pm 7.89\) & No & \(0.13\pm 0.09\) & \(-0.01\pm 0.17\) & \(-0.03\pm 0.11\) \\ UT150911 & 236.2 & 30.4 & & & & & No & & \\ UT120925 & 256.1 & 21.4 & \(1.04\pm 6.31\) & \(1.87\pm 7.57\) & \(0.11\pm 4.34\) & \(3.02\pm 10.77\) & No & \(0.09\pm 0.15\) & \(0.07\pm 0.17\) & \(0.06\pm 0.15\) \\ UT201013 & 262.8 & 49.5 & \(2.59\pm 2.51\) & \(-5.23\pm 4.03\) & \(-0.19\pm 1.03\) & \(-2.83\pm 4.86\) & No & \(0.05\pm 
0.05\) & \(-0.06\pm 0.09\) & \(0.03\pm 0.04\) \\ UT201206 & 274.6 & 47.7 & \(1.79\pm 2.80\) & \(5.01\pm 3.46\) & \(-0.80\pm 1.14\) & \(6.00\pm 4.60\) & No & \(0.03\pm 0.06\) & \(0.10\pm 0.08\) & \(-0.02\pm 0.05\) \\ UT141130 & 279.4 & 24.7 & \(6.22\pm 3.80\) & \(3.88\pm 6.61\) & \(1.89\pm 2.40\) & \(11.99\pm 8.00\) & No & \(0.11\pm 0.15\) & \(0.14\pm 0.16\) & \(0.06\pm 0.15\) \\ UT150917 & 280.3 & 30.2 & & & & & No &
\begin{table}
\begin{tabular}{l c c c c c c c c c c} \hline \hline \multicolumn{1}{c}{ UT Date} & Instrument & Long. & Lat. & \multicolumn{3}{c}{2.13-\(\mu\)m band} & \multicolumn{3}{c}{2.2-\(\mu\)m band} & \\ \cline{4-10} & & & & Band area & Band depth & Both & Band area & Band depth & Both & Center \\ & & (\({}^{\circ}\)E) & (\({}^{\circ}\)N) & (\(10^{-4}\mu\)m) & (\%) & \(>2\sigma\)? & (\(10^{-4}\mu\)m) & (\%) & \(>2\sigma\)? & (\(\mu\)m) \\ \hline UT210115 & TSpec & 11.8 & 47.2 & \(1.50\pm 1.51\) & \(1.3\pm 4.8\) & No & \(\bf{11.57\pm 3.56^{*}}\) & \(\bf{6.3\pm 2.4}\) & Yes & \(2.223\pm 0.016\) \\ UT201229b & TSpec & 29.7 & 47.2 & \(-1.29\pm 2.38\) & \(3.2\pm 7.6\) & No & \(7.28\pm 5.62\) & \(\bf{9.3\pm 3.8}\) & No & \\ UT000907 & SpeX SXD & 36.3 & -35.4 & \(-0.58\pm 0.69\) & \(1.4\pm 1.1\) & No & \(0.51\pm 1.45\) & \(\bf{1.9\pm 0.9}\) & No & \\ UT191022a & TSpec & 37.7 & 44.3 & \(-4.33\pm 3.99\) & \(13.9\pm 10.5\) & No & \(4.74\pm 9.17\) & \(10.5\pm 5.8\) & No & \\ UT210101 & TSpec & 51.8 & 47.2 & \(0.29\pm 0.84\) & \(3.0\pm 2.3\) & No & \(3.58\pm 1.81\) & \(\bf{3.0\pm 1.2}\) & No & \\ UT201104 & TSpec & 68.7 & 48.7 & \(0.66\pm 1.00\) & \(\bf{6.7\pm 2.8}\) & No & \(1.95\pm 2.44\) & \(3.2\pm 1.7\) & No & \\ UT990607 & CGS4 & 75.7 & -36.5 & \(-5.41\pm 4.65\) & \(-6.4\pm 5.8\) & No & \(\bf{30.75\pm 9.01^{*}}\) & \(9.2\pm 4.8\) & No & \\ UT170930 & SpeX PRISM & 80.2 & 36.7 & \(0.47\pm 1.40\) & \(2.5\pm 2.9\) & No & \(5.91\pm 3.84\) & \(2.8\pm 2.7\) & No & \\ UT19104 & TSpec & 80.5 & 43.8 & \(0.61\pm 1.05\) & \(3.9\pm 3.5\) & No & \(3.46\pm 2.42\) & \(3.2\pm 1.6\) & No & \\ UT150912 & SpeX PRISM & 92.1 & 30.4 & \(0.19\pm 2.74\) & \(4.8\pm 6.8\) & No & \(4.19\pm 6.16\) & \(\bf{7.8\pm 4.5}\) & No & \\ UT200907 & TSpec & 96.1 & 50.4 & \(-1.23\pm 1.25\) & \(5.0\pm 3.7\) & No & \(\bf{8.13\pm 2.85}\) & \(\bf{5.3\pm 1.7^{*}}\) & Yes & \(2.214\pm 0.013\) \\ UT171010 & SpeX PRISM & 107.1 & 36.1 & \(-1.18\pm 1.38\) & \(-1.4\pm 2.5\) & No & \(-6.74\pm 3.36\) & \(1.1\pm 1.9\) & No & \\ UT201008g & GNIRS & 109.4 & 49.6 & \(0.19\pm 0.50\) & \(1.5\pm 1.6\) & No & \(\bf{5.66\pm 1.16^{*}}\) & \(\bf{2.8\pm 0.5^{*}}\) & Yes* & \(2.218\pm 0.016\) \\ UT120926 & SpeX SXD & 152.6 & 21.4 & \(0.61\pm 3.39\) & \(5.2\pm 9.0\) & No & \(9.73\pm 10.92\) & \(12.2\pm 6.1\) & No & \\ UT201007 & TSpec & 174.2 & 49.7 & \(\bf{2.41\pm 0.90}\) & \(5.6\pm 3.9\) & No & \(0.49\pm 2.17\) & \(2.8\pm 1.6\) & No & \\ UT200913 & TSpec & 184.9 & 50.3 & \(-0.72\pm 1.39\) & \(1.5\pm 4.2\) & No & \(-0.33\pm 3.28\) & \(4.6\pm 2.6\) & No & \\ UT200930 & TSpec & 192.8 & 49.9 & \(-0.07\pm 0.91\) & \(4.4\pm 3.1\) & No & \(2.46\pm 2.14\) & \(\bf{3.3\pm 1.3}\) & No & \\ UT211121g & GNIRS & 203.7 & 53.3 & \(-0.09\pm 0.48\) & \(0.4\pm 1.5\) & No & \(\bf{5.90\pm 1.11^{*}}\) & \(\bf{2.7\pm 0.5^{*}}\) & Yes* & \(2.219\pm 0.017\) \\ UT170925 & SpeX SXD & 232.5 & 36.7 & \(1.40\pm 1.92\) & \(6.2\pm 8.3\) & No & \(-15.49\pm 4.68\) & \(3.9\pm 2.8\) & No & \\ UT150911 & SpeX PRISM & 236.2 & 30.4 & \(-5.87\pm 3.53\) & \(-2.9\pm 7.4\) & No & \(-3.87\pm 9.99\) & \(13.3\pm 6.7\) & No & \\ UT120925 & SpeX SXD & 256.1 & 21.4 & \(-1.34\pm 4.18\) & \(3.0\pm 12.3\) & No & \(5.57\pm 10.71\) & \(\bf{11.5\pm 5.2}\) & No & \\ UT201013 & TSpec & 262.8 & 49.5 & \(0.06\pm 0.78\) & \(0.3\pm 2.6\) & No & \(\bf{8.27\pm 1.88^{*}}\) & \(\bf{4.3\pm 1.6}\) & Yes & \(2.211\pm 0.018\) \\ UT201206 & TSpec & 274.6 & 47.7 & \(-0.19\pm 0.87\) & \(2.2\pm 2.7\) & No & \(0.41\pm 1.96\) & \(\bf{2.7\pm 1.3}\) & No & \\ UT141130 & SpeX SXD & 279.4 & 24.7 & \(1.52\pm 1.84\) & \(5.0\pm 8.1\) & No & 
\(\bf{10.49\pm 4.88}\) & \(\bf{8.4\pm 2.9}\) & Yes & \(2.219\pm 0.016\) \\ UT150917 & SpeX PRISM & 280.3 & 30.2 & \(-3.60\pm 1.70\) & \(-1.8\pm 3.9\) & No & \(-6.52\pm 4.20\) & \(3.2\pm 2.7\) & No & \\ UT191013a & TSpec & 285.7 & 44.6 & \(-1.37\pm 2.37\) & \(6.5\pm 6.6\) & No & \(-2.41\pm 5.45\) & \(\bf{6.9\pm 3.4}\) & No & \\ UT200912 & TSpec & 292.4 & 50.3 & \(-2.13\pm 1.21\) & \(0.7\pm 3.8\) & No & \(4.78\pm 2.91\) & \(4.2\pm 2.1\) & No & \\ UT191026 & TSpec & 308.2 & 44.2 & \(1.87\pm 1.45\) & \(2.9\pm 5.6\) & No & \(\bf{11.79\pm 3.25^{*}}\) & \(\bf{6.6\pm 2.4}\) & Yes & \(2.210\pm 0.015\) \\ UT201230g & GNIRS & 320.4 & 47.2 & \(0.28\pm 0.85\) & \(2.9\pm 3.0\
\begin{table}
\begin{tabular}{c c c c c c c} \hline \hline \multicolumn{1}{c}{ Dataset} & Ratio & Total CO\({}_{2}\) area & 2.13-\(\mu\)m area & 2.13-\(\mu\)m depth & 2.2-\(\mu\)m area & 2.2-\(\mu\)m depth \\ \hline All spectra & LQ/TQ & \(0.01\pm 0.68\) & \(0.62\pm 1.19\) & \(0.98\pm 1.35\) & \(4.86\pm 12.50\) & \(0.66\pm 0.25\) \\ & LH/TH & \(-1.57\pm 3.79\) & \(1.06\pm 2.33\) & \(1.29\pm 1.03\) & \(3.33\pm 3.78\) & \(1.03\pm 0.31\) \\ & AQ/SQ & \(0.15\pm 1.70\) & \(-1.77\pm 7.62\) & \(0.81\pm 0.75\) & \(0.91\pm 0.90\) & \(0.97\pm 0.51\) \\ \hline \end{tabular} Note. – Ratios of the band parameters between opposing quadrants and hemispheres. All errors are 1\(\sigma\) errors.
\end{table}
Table 6: Quadrant/hemisphere ratios
\begin{table}
\begin{tabular}{c c c c c c c c} \hline \hline \multicolumn{1}{c}{ Quadrant} & Longitude & Total CO\({}_{2}\) area & 2.13-\(\mu\)m area & 2.13-\(\mu\)m depth & 2.2-\(\mu\)m area & 2.2-\(\mu\)m depth & 2.13-\(\mu\)m center & 2.2-\(\mu\)m center \\ & (range, \({}^{\circ}\)E) & (\(10^{-4}\mu\)m) & (\(10^{-4}\mu\)m) & (\%) & (\(10^{-4}\mu\)m) & (\%) & (\(\mu\)m) & (\(\mu\)m) \\ \hline all & 0 – 360 & \(-0.07\pm 2.12\) & \(-0.46\pm 0.49\) & \(\bf{2.9\pm 1.2}\) & \(\bf{3.76\pm 1.58}\) & \(\bf{5.4\pm 0.8^{*}}\) & \(2.130\pm 0.001\) & \(2.218\pm 0.003\) \\ LH & 1 – 180 & \(-2.26\pm 3.16\) & \(-0.47\pm 0.78\) & \(3.3\pm 1.8\) & \(\bf{6.08\pm 2.50}\) & \(\bf{5.4\pm 1.2^{*}}\) & \(2.130\pm 0.001\) & \(2.219\pm 0.004\) \\ TH & 181 – 360 & \(1.44\pm 2.84\) & \(-0.44\pm 0.64\) & \(2.6\pm 1.5\) & \(1.82\pm 1.93\) & \(\bf{5.3\pm 1.0^{*}}\) & \(2.130\pm 0.001\) & \(2.218\pm 0.004\) \\ LQ & 45 – 135 & \(0.02\pm 2.07\) & \(-0.60\pm 0.94\) & \(2.2\pm 1.9\) & \(6.32\pm 3.65\) & \(\bf{4.3\pm 1.3^{*}}\) & \(2.130\pm 0.001\) & \(2.218\pm 0.005\) \\ TQ & 225 – 315 & \(3.04\pm 3.33\) & \(-0.96\pm 1.05\) & \(2.2\pm 2.4\) & \(1.30\pm 3.26\) & \(\bf{6.5\pm 1.6^{*}}\) & \(2.130\pm 0.001\) & \(2.216\pm 0.006\) \\ AQ & 135 – 225 & \(-0.40\pm 4.49\) & \(0.43\pm 0.95\) & \(3.4\pm 2.5\) & \(3.65\pm 3.01\) & \(\bf{5.1\pm 2.3}\) & \(2.130\pm 0.001\) & \(2.219\pm 0.008\) \\ SQ & 315 – 45 & \(-2.69\pm 6.07\) & \(-0.24\pm 0.90\) & \(4.2\pm 2.4\) & \(3.99\pm 2.14\) & \(\bf{5.3\pm 1.5^{*}}\) & \(2.130\pm 0.001\) & \(2.221\pm 0.006\) \\ \hline \end{tabular} Note. – Mean band area and depth measurements, averaged over quadrants and hemispheres. All errors are 1\(\sigma\) errors. Measurements that are greater than zero with \(\geq 2\sigma\) significance are printed in bold, and \(\geq 3\sigma\) with an asterisk. The quadrants (Q) and hemispheres (H) are designated with initials: L, leading; T, trailing; A, anti-Uranus; S, sub-Uranus. For example, LH indicates the leading hemisphere average of all spectra with longitudes between 0 – 180\({}^{\circ}\)E.
\end{table}
Table 5: Mean band measurements
anti-Uranus quadrant were both only 2\(\sigma\) significant in depth and not area. The trailing hemisphere GNIRS spectrum was lower S/N than the other three GNIRS spectra (Figure 2).
Furthermore, when taking the means of the individual band measurements (not including the grand averages), we measured \(\geq\) 2\(\sigma\) 2.2-\(\mu\)m band depths for all of the quadrant and hemisphere-averaged means, and the mean of all
Figure 4: Band area and depth measurements for each Miranda spectrum in this work, with 1\(\sigma\) error bars. Band area and depth measurements that are not statistically significant (\(<\)2\(\sigma\) from zero) are plotted as gray triangles, while those that are \(\geq\)2\(\sigma\) are plotted as red triangles. Quadrant-averaged mean band parameters (Table 5) are plotted as purple squares. The gray shaded regions on either side of each plot represent overlapping longitudes. Data points are duplicated in these regions to better visualize possible sinusoidal variations with longitude. The weighted average and sinusoidal fit to all spectra for each measurement are plotted as golden horizontal lines and gray dot-dashed lines, respectively. Top left: Total band areas for the CO\({}_{2}\) ice triplet measurements (sum of the 1.966, 2.012, and 2.070 \(\mu\)m band areas). Center row: Band area measurements (left panel) and band depths (right panel) for the 2.13-\(\mu\)m band. Bottom row: Same as the center row, but for the 2.2-\(\mu\)m band.
spectra and the mean of the leading-hemisphere spectra were measured to \(\geq 2\sigma\) significance for both the 2.2-\(\mu\)m band area and depth. We calculated an average band depth (across all spectra) of 5.4\(\pm\)0.8%. As discussed in the next section, we do not find convincing evidence for sub-observer longitudinal trends (i.e. hemispherical asymmetries) in the 2.2-\(\mu\)m band.
The band center measurements for the 2.2-\(\mu\)m band in the individual spectra generally clustered between 2.21 and 2.22 \(\mu\)m. The mean band center for all spectra was 2.218\(\pm\)0.003 \(\mu\)m (Table 5). For individual spectra in which the band was detected at \(>2\sigma\) significance, the minimum band center wavelength of 2.203\(\pm\)0.014 \(\mu\)m was measured for the TripleSpec leading hemisphere grand average. The maximum wavelength was measured at 2.223\(\pm\)0.016 \(\mu\)m, for the
Figure 5: Band area and depth measurements for the high S/N Miranda spectra (GNIRS spectra and the TripleSpec quadrant/hemisphere grand average spectra), along with the same mean measurements (purple squares) plotted in the previous figure. All error bars represent 1\(\sigma\) uncertainties. The weighted average and sinusoidal fits use the mean band measurements.
TripleSpec spectrum from UT210115. This average central wavelength of 2.218 \(\mu\)m is at slightly longer wavelengths than NH\({}_{3}\)-hydrates and H\({}_{2}\)O:NH\({}_{3}\) ice mixtures, which tend to range between 2.209 and 2.216 \(\mu\)m. Pure cubic NH\({}_{3}\) ice exhibits a band at 2.241 \(\mu\)m (Moore et al., 2007). However, the range of band centers in our spectra is not definitive, as we measured band centers based on a single band spanning 2.186 - 2.251 \(\mu\)m. Our band centers would be skewed towards longer wavelengths by the presence of crystalline NH\({}_{3}\) ice at 2.24 \(\mu\)m, and would also be affected by the presence of a 'spike' in the data at 2.207 \(\mu\)m, in the middle of any band at 2.20 - 2.21 \(\mu\)m, that we attribute to residuals from differences in metal abundances in our telluric standard stars and the solar spectrum (Appendix A). We discuss possibilities for candidate materials, including amorphous NH\({}_{3}\) ice, other ices, and NH\({}_{4}\)-bearing minerals, in §5, §6.2, and §6.3.
Finally, what of the Bauer et al. (2002) detection of the 2.2-\(\mu\)m feature? For this spectrum (UT990607), we measured a statistically significant 2.2-\(\mu\)m band area, but the band depth did not reach 2\(\sigma\) significance. Unfortunately, the axial tilt and seasonal illumination conditions of the Uranian system also make direct reproduction (reobservation) of the Bauer et al. study impossible in the next several decades. Their spectrum was observed at southern latitudes (36.5\({}^{\circ}\)S), while all but one of the spectra in our dataset were observed on the northern hemisphere. It is worth noting that the Bauer et al. spectrum showing absorption at 2.2 \(\mu\)m was observed when Arden Corona was near disk center (Figure 1), supporting the notion that the 2.2-\(\mu\)m band may be associated with geologically young terrain. Our dataset also has a single SpeX spectrum of Miranda (UT000907) observed at a similar southern latitude (35.4\({}^{\circ}\)S), but at a different sub-observer longitude that includes more heavily cratered, older terrain. This SpeX spectrum does not show strong evidence of a 2.2-\(\mu\)m band.
### Sub-observer longitudinal trends
When considering the quadrant/hemisphere ratios of the band parameters, we did not find statistically significant (\(\geq 2\sigma\)) departures from a ratio of unity for any of the measurements or quadrant/hemisphere pairs (Table 6). Our non-detection of the CO\({}_{2}\) ice bands and 2.13-\(\mu\)m band precludes discussion of longitudinal trends for these bands, although the ratio calculations are included in the tables for completeness.
When fitting a sinusoidal model to the 2.2-\(\mu\)m band measurements from the 33 individual spectra, none of the sinusoidal models could be considered a statistically significantly better fit than a constant weighted average. We found that only the model fit to the 2.2-\(\mu\)m area measurements for the quadrant-averaged mean band parameters was statistically significant (Table 7). We do not find the sinusoidal fit for the quadrant-averaged mean 2.2-\(\mu\)m band _depth_ to be significant (although it is close, at \(p\sim\) 0.07). We are inclined to trust the (non-significant) fit to the 33 individual data points over the fit to the four quadrant-averaged points, even though the four data points are weighted averages of the 33 individual measurements. Finally, we do not find statistically significant deviations from unity for any of the calculated quadrant or hemisphere ratios of band parameters.
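To make this model comparison concrete, the sketch below fits a single sinusoid in sub-observer longitude and compares it against a constant weighted average with an F-test for nested models. The specific functional form, starting values, and variable names are illustrative assumptions rather than the exact fitting code used here.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import f as f_dist

def sinusoid(lon_deg, amp, phase, offset):
    # One cycle per 360 degrees of sub-observer longitude
    return amp * np.sin(np.radians(lon_deg) + phase) + offset

def sinusoid_vs_constant(lon_deg, band_val, band_err):
    """F-test comparing a sinusoidal longitude model to a constant weighted mean."""
    w = 1.0 / band_err**2
    const = np.sum(w * band_val) / np.sum(w)  # constant weighted average
    chi2_const = np.sum(((band_val - const) / band_err) ** 2)
    popt, _ = curve_fit(sinusoid, lon_deg, band_val,
                        p0=[band_val.std(), 0.0, const],
                        sigma=band_err, absolute_sigma=True)
    chi2_sine = np.sum(((band_val - sinusoid(lon_deg, *popt)) / band_err) ** 2)
    n, p_const, p_sine = len(band_val), 1, 3
    F = ((chi2_const - chi2_sine) / (p_sine - p_const)) / (chi2_sine / (n - p_sine))
    p_value = f_dist.sf(F, p_sine - p_const, n - p_sine)
    return p_value  # small p-value => sinusoid is a significantly better fit
```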
When considering the results from the quadrant/hemisphere ratios and F-tests of the sinusoidal models, we conclude that there is minimal evidence for large-scale longitudinal trends or hemispherical asymmetries in the presence or strength of the 2.2-\(\mu\)m band.
## 5 Spectral Modeling
Synthetic Hapke-Mie spectra of intimate mixtures including some of the same materials as in Cartwright et al. (2023) are plotted in Figure 6 and tabulated in Table 8. We compared these synthetic spectra to a spectrum of Miranda's leading hemisphere obtained with GNIRS (spectrum UT201008). This spectrum was chosen for being both high S/N and showing weak absorption in the 2.2-\(\mu\)m region. This absorption is composed of a shallow dip between 2.19 - 2.23 \(\mu\)m, along with a possible feature at 2.24 \(\mu\)m (Figure 7). We constructed a base model composed of two grain sizes of H\({}_{2}\)O ice (27.5 \(\mu\)m and 0.3 \(\mu\)m), along with an amorphous carbon component to act as a dark, spectrally neutral absorber.
To demonstrate the lack of CO\({}_{2}\) ice absorption features, we calculated spectral models incorporating CO\({}_{2}\) ice and plotted them against observed spectra in Figure 7. Spectral models of pure or intimately-mixed CO\({}_{2}\) ice fail to reproduce the 2.13-\(\mu\)m 2\(\nu_{3}\) overtone band seen in laboratory spectra of CO\({}_{2}\) ice in a molecular mixture with H\({}_{2}\)O (Bernstein et al., 2005). A spectrum of Ariel's trailing hemisphere (Cartwright et al., 2022) shows clear signatures of CO\({}_{2}\) ice, as previously discussed (SS1.2). In comparison, the Miranda trailing hemisphere grand average spectrum shows no evidence of any absorption features resulting from CO\({}_{2}\) ice, either as separate deposits or trapped in the regolith.
To investigate species that could be contributing to absorption features in the 2.2-\(\mu\)m range, we constructed models in which we replaced small percentages of the H\({}_{2}\)O ice component in the base model with other materials, including some of the same materials as in Cartwright et al. (2023) (Figure 6). From visual inspection, the complex absorption bands of propionitrile (C\({}_{2}\)H\({}_{5}\)CN), ethylamine (C\({}_{2}\)H\({}_{5}\)NH\({}_{2}\)) and methylamine (CH\({}_{3}\)NH\({}_{2}\)) are somewhat difficult to interpret, but do not match the Miranda spectrum closely. Ethylamine and methylamine have band complexes between 1.65 - 1.80 \(\mu\)m which are also not apparent in our spectra. The 2.22-\(\mu\)m band of ethylene (C\({}_{2}\)H\({}_{4}\)) is too sharp compared
to the broad absorptions of the Miranda spectra. The 2.2-\(\mu\)m bands of the aluminum-bearing phyllosilicates illite
\begin{table}
\begin{tabular}{c c c c c c c c} \hline \hline Short name & Component 1 & Component 2 & Component 3 & Component 4 & \(\chi^{2}_{\nu}\) & Source \\ \hline Base model & 95.25\% H\({}_{2}\)O 27.5\(\mu m\) & 4.5\% AC 5\(\mu m\) & 0.25\% H\({}_{2}\)O 0.3\(\mu m\) & & 1.050 & see caption \\
10\% illite & 85.25\% H\({}_{2}\)O 27.5\(\mu m\) & 4.5\% AC 5\(\mu m\) & 0.25\% H\({}_{2}\)O 0.3\(\mu m\) & 10\% illite 10\(\mu m\) & 2.762 & Clark et al. (1990) \\
5\% kaolinite & 90.25\% H\({}_{2}\)O 27.5\(\mu m\) & 4.5\% AC 5\(\mu m\) & 0.25\% H\({}_{2}\)O 0.3\(\mu m\) & 5\% kaolinite 10\(\mu m\) & 1.202 & Clark et al. (1990) \\
2\% propionitrile & 93.25\% H\({}_{2}\)O 27.5\(\mu m\) & 4.5\% AC 5\(\mu m\) & 0.25\% H\({}_{2}\)O 0.3\(\mu m\) & 2\% C\({}_{2}\)H\({}_{5}\)CN 10\(\mu m\) & 1.351 & Moore et al. (2010) \\
2\% ethylamine & 93.25\% H\({}_{2}\)O 27.5\(\mu m\) & 4.5\% AC 5\(\mu m\) & 0.25\% H\({}_{2}\)O 0.3\(\mu m\) & 2\% C\({}_{2}\)H\({}_{5}\)NH\({}_{2}\) 10\(\mu m\) & 1.252 & Hudson et al. (2022) \\
2\% methylamine & 93.25\% H\({}_{2}\)O 27.5\(\mu m\) & 4.5\% AC 5\(\mu m\) & 0.25\% H\({}_{2}\)O 0.3\(\mu m\) & 2\% CH\({}_{3}\)NH\({}_{2}\) 10\(\mu m\) & 1.236 & Hudson et al. (2022) \\
2\% amo. NH\({}_{3}\) & 93.25\% H\({}_{2}\)O 27.5\(\mu m\) & 4.5\% AC 5\(\mu m\) & 0.25\% H\({}_{2}\)O 0.3\(\mu m\) & 2\% amorph. NH\({}_{3}\) 10\(\mu m\) & 0.938 & Roser et al. (2021b) \\
1\% ethylene & 94.25\% H\({}_{2}\)O 27.5\(\mu m\) & 4.5\% AC 5\(\mu m\) & 0.25\% H\({}_{2}\)O 0.3\(\mu m\) & 1\% C\({}_{2}\)H\({}_{4}\) 10\(\mu m\) & 1.212 & Hudson et al. (2014) \\
1\% NH\({}_{3}\) & 94.25\% H\({}_{2}\)O 27.5\(\mu m\) & 4.5\% AC 5\(\mu m\) & 0.25\% H\({}_{2}\)O 0.3\(\mu m\) & 1\% cubic NH\({}_{3}\) 10\(\mu m\) & 0.908 & Hudson et al. (2022) \\
1\% NH\({}_{3}\)-H\({}_{2}\)O & 94.25\% (1\%)NH\({}_{3}\)-H\({}_{2}\)O 27.5\(\mu m\) & 4.5\% AC 5\(\mu m\) & 0.25\% H\({}_{2}\)O 0.3\(\mu m\) & & 1.051 & see caption \\
3\% NH\({}_{3}\)-H\({}_{2}\)O & 94.25\% (3\%)NH\({}_{3}\)-H\({}_{2}\)O 27.5\(\mu m\) & 4.5\% AC 5\(\mu m\) & 0.25\% H\({}_{2}\)O 0.3\(\mu m\) & & 1.602 & see caption \\ Mixed NH\({}_{3}\) & 94.75\% (1\%)NH\({}_{3}\)-H\({}_{2}\)O 27.5\(\mu m\) & 4.5\% AC 5\(\mu m\) & 0.25\% H\({}_{2}\)O 0.3\(\mu m\) & 0.5\% cubic NH\({}_{3}\) 10\(\mu m\) & 0.939 & see caption \\ \hline \end{tabular} Note. – The components of the synthetic Hapke-Mie spectra plotted in Figure 6, with their mixing ratios and grain sizes for each component. All spectra are intimate (particulate) mixtures. Components listed as “H\({}_{2}\)O” use optical constants of 80 K crystalline ice from Mastrapa et al. (2008), while the component listed as “AC” is amorphous carbon, using the optical constants of sample BE1 from Rouleau & Martin (1991). Optical constants for other components are listed in the table. The NH\({}_{3}\)-hydrate spectra replace the primary H\({}_{2}\)O ice component with the optical constants of 1\% and 3\% by weight mixtures (respectively) of NH\({}_{3}\)-H\({}_{2}\)O. Reflectance spectra of these mixtures were originally measured by Brown et al. (1988), with optical constants derived by T. Roush (personal communication) and published in Cruikshank et al. (2005).
\end{table}
Table 8: Synthetic spectra for Figure 6
Figure 7: (Left panel): A close-up of the most promising NH\({}_{3}\)-bearing synthetic models from Figure 6, compared to the UT201008 GNIRS spectrum of Miranda’s leading hemisphere. The synthetic spectra are described in Table 8. Small periodic variations are visible in the synthetic spectra due to resonances from the calculated Mie scattering solutions. (Right panel): A demonstration of the lack of CO\({}_{2}\) ice features on Miranda’s trailing hemisphere, compared to the trailing hemisphere of Ariel (Cartwright et al., 2022). We include several Hapke model spectra: pure CO\({}_{2}\) ice (10 \(\mu\)m grains), a base model (Table 8), the base model with 10% aerally mixed CO\({}_{2}\) ice (27.5 \(\mu\)m grains), and the base model with 10% intimately mixed CO\({}_{2}\) ice (27.5 \(\mu\)m grains). We also include a laboratory spectrum of a H\({}_{2}\)O:CO\({}_{2}\) ice molecular mixture (Bernstein et al., 2005), which displays a prominent 2.13-\(\mu\)m 2\(\nu_{3}\) overtone band. Note that this forbidden overtone does not appear in pure CO\({}_{2}\) ice or the model spectra of H\({}_{2}\)O:CO\({}_{2}\) ice intimate mixtures.
and kaolinite are subtle, but both of these minerals also have bands at 1.4 \(\mu\)m from bound water absorption. The wavelengths between 1.38 - 1.43 \(\mu\)m are hard to observe from Earth due to strong atmospheric H\({}_{2}\)O absorption. Spectra with good telluric correction observed in dry conditions (such as the GNIRS spectra) do not show convincing evidence for the presence of the 1.4-\(\mu\)m band, but it is difficult to rule out either kaolinite or illite given the lower S/N in the region.
We also calculated \(\chi^{2}_{\nu}\) values as a comparison between the observed spectrum and the constructed models, using the wavelengths between 2.15 - 2.29 \(\mu\)m (Table 8). The synthetic intimate mixture spectra we generated are not unique solutions, but are useful as general guidelines for interpretation of spectral features. Both cubic (crystalline) and amorphous NH\({}_{3}\) mixtures have significantly improved \(\chi^{2}_{\nu}\) values compared to the base model, while a 1% mixture of NH\({}_{3}\)-H\({}_{2}\)O fits the data as well as the base model (\(\chi^{2}_{\nu}\) of 1.051 versus 1.050). Kaolinite, propionitrile, ethylene, ethylamine, methylamine, the 3% mixture of NH\({}_{3}\)-H\({}_{2}\)O, and illite all have worse \(\chi^{2}_{\nu}\) values than the base model.
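The \(\chi^{2}_{\nu}\) comparison itself is straightforward; a minimal sketch is shown below, assuming the observed spectrum, its uncertainties, and a synthetic model already resampled onto the same wavelength grid (all inputs hypothetical), with the comparison restricted to the 2.15 - 2.29 \(\mu\)m window.

```python
import numpy as np

def reduced_chi_square(wave, obs, obs_err, model, window=(2.15, 2.29), n_free=0):
    """Reduced chi-squared between an observed spectrum and a synthetic model,
    evaluated only inside the comparison window (microns)."""
    m = (wave >= window[0]) & (wave <= window[1])
    chi2 = np.sum(((obs[m] - model[m]) / obs_err[m]) ** 2)
    return chi2 / (m.sum() - n_free)
```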
While amorphous NH\({}_{3}\) ice has one of the most favorable \(\chi^{2}_{\nu}\) values, it is unlikely that amorphous NH\({}_{3}\) could persist on Miranda's surface, as it transitions to crystalline NH\({}_{3}\) when warmed to temperatures between 70 - 90 K (Moore et al., 2007). Voyager 2 measured a surface brightness temperature for Miranda of about 86 K (Hanel et al., 1986), and thermodynamical modeling predicts peak surface temperatures for the Uranian satellites around 90 K (Sori et al., 2017). Crystalline NH\({}_{3}\) ice produces a band at 2.24 \(\mu\)m, which is at longer wavelengths than the broad 2.2-\(\mu\)m feature in most of our spectra, but does correspond to the weak feature at 2.24 \(\mu\)m seen in this observed GNIRS spectrum (Figure 7). We also find a promising match between the spectrum in which the major H\({}_{2}\)O ice component is replaced with a 1% mixture of NH\({}_{3}\)-hydrates, which produces a shallow shoulder feature. However, the 3% NH\({}_{3}\)-hydrates mixture produces a distinct 2.2-\(\mu\)m absorption band that is too strong compared to the weak absorption in the observed spectrum. We further investigated the possibility of NH\({}_{3}\) species by generating an additional model with both NH\({}_{3}\)-hydrates and a small percentage of crystalline NH\({}_{3}\) ice. With the exception of the residual noise spike at 2.207 \(\mu\)m from incomplete solar line cancellation (SSA), this model visually matches the weak absorption bands at 2.2 \(\mu\)m and 2.24 \(\mu\)m more closely. This'mixed NH\({}_{3}\)' model has effectively the same \(\chi^{2}_{\nu}\) value as the amorphous NH\({}_{3}\) model.
## 6 Discussion
Our band analysis procedures did not detect any evidence for the presence of crystalline CO\({}_{2}\) ice on Miranda's surface, unlike its high abundance on the neighboring moon Ariel. Even in the high S/N spectra from GNIRS and the grand-average spectra from TripleSpec combining multiple nights of data, the three prominent CO\({}_{2}\) ice bands between 1.9 and 2.1 \(\mu\)m are not present (Figure 7). Additionally, we detected essentially no evidence for an absorption feature near 2.13 \(\mu\)m that might have hinted at the presence of CO\({}_{2}\) in a molecular mixture. In contrast, a subtle 2.2-\(\mu\)m band is exhibited by many of the Miranda spectra we analyzed.
### CO\({}_{2}\) ice
The prevailing hypothesis for the origin of CO\({}_{2}\) ice on the other classical Uranian moons is a radiolytic production mechanism (SS1.2). This hypothesis states that the observed CO\({}_{2}\) is produced _in situ_ via irradiation of surface carbon compounds and H\({}_{2}\)O ice, which recombine to form CO\({}_{2}\) and other products. Sublimation, sputtering, and other processes cause the CO\({}_{2}\) molecules to 'hop' in suborbital trajectories across the surface until they either escape or land in a cold trap. Because of the large obliquity of the Uranian system, the parts of the surface that receive the lowest amount of time-averaged sunlight over the Uranian year are at low latitudes. Crystalline CO\({}_{2}\) ice is bright and highly reflective, decreasing absorbed solar energy and likely reducing sublimation of CO\({}_{2}\) molecules from cold traps once they have been established. Theoretically, the trailing hemispheres would accumulate more CO\({}_{2}\) ice because the magnetosphere of Uranus rotates faster than the moons orbit, and might preferentially irradiate their trailing hemispheres and produce CO\({}_{2}\) molecules, thereby explaining the observed distribution of CO\({}_{2}\) on Ariel, Umbriel, Titania, and Oberon (Grundy et al., 2003, 2006; Cartwright et al., 2015, 2022). However, the exact nature of interactions between Uranus' offset and tilted magnetosphere and the surfaces of the moons are still poorly understood (e.g. Kollmann et al., 2020).
In contrast, this lack of CO\({}_{2}\) ice on Miranda, where radiolytic production of CO\({}_{2}\) molecules should be possible, could result from its small mass and lower efficiency of retaining volatiles like CO\({}_{2}\). Previous works have extensively discussed potential mechanisms for production and destruction of CO\({}_{2}\) on the Uranian satellites, such as photolysis or sputtering by charged particles (Grundy et al., 2006; Cartwright et al., 2015; Sori et al., 2017; Steckloff et al., 2022). Most importantly, a large fraction of the expected thermal velocity distribution of sputtered or sublimated CO\({}_{2}\) molecules
is greater than the escape velocity of Miranda (193 m s\({}^{-1}\)). Sori et al. (2017) found that approximately half of all CO\({}_{2}\) molecules sublimated on Miranda would escape before completing a single suborbital 'hop'. Steckloff et al. (2022) found a similar result, adding that the time required for these CO\({}_{2}\) molecules to migrate to the antipode of their initial production site on Miranda (\(\sim\)14 hours) is more than an order of magnitude longer than their expected residence time in the exosphere before escape (approximately one hour). Other processes that could destroy CO\({}_{2}\) ice are effectively irrelevant on Miranda, given the rate at which any produced or otherwise mobilized CO\({}_{2}\) molecules would be lost. Miranda cannot retain significant CO\({}_{2}\) ice deposits with such a large loss fraction. It is therefore not surprising that we do not observe any CO\({}_{2}\) ice deposits on Miranda.
However, given the assumption of ongoing radiolytic production, it is still possible that CO\({}_{2}\) is present in Miranda's regolith, perhaps as molecules mixed with H\({}_{2}\)O ice. This motivated our search for a 2.13-\(\mu\)m absorption band, attributed to the forbidden 2\(\nu_{3}\) first overtone of the strong 4.27-\(\mu\)m \(\nu_{3}\) asymmetric stretch band of CO\({}_{2}\). This forbidden feature is extremely weak in spectra of pure CO\({}_{2}\) ice, but becomes apparent when CO\({}_{2}\) is in a molecular mixture with other ices, including H\({}_{2}\)O and CH\({}_{3}\)OH (Bernstein et al., 2005). This overtone can vary significantly in its depth and shape in laboratory spectra. In many of our high S/N spectra, visual inspection suggested a weak feature centered at slightly shorter wavelengths (\(\sim\)2.130 \(\mu\)m), reminiscent of a weak 2.13-\(\mu\)m band detected on Ariel using SpeX and TripleSpec data (Cartwright et al., 2022), also centered at \(\sim\)2.130 \(\mu\)m and spanning 2.123 - 2.137 \(\mu\)m. If a molecular mixture including CO\({}_{2}\) ice is responsible for this band on Ariel, then it could also be responsible for a similar band on Miranda. However, our band analyses did not find convincing evidence for the 2.13-\(\mu\)m band on Miranda. If CO\({}_{2}\) is being produced on Miranda, it is not being trapped in the regolith in sufficient quantities to manifest the 2.13-\(\mu\)m band at a strength detectable in our spectra.
### NH\({}_{3}\)-bearing species
In Figure 8, we utilized the laboratory absorption spectra of Moore et al. (2007) to investigate the potential contributions of NH\({}_{3}\) species. Moore et al. measured the absorption spectra of multiple phases of pure NH\({}_{3}\) ice, several
Figure 8: (Left panel): The UT201008 GNIRS spectrum of Miranda’s leading hemisphere (gray errorbars native resolution, black errorbars binned by 5 pixels), compared to several laboratory-measured absorbance spectra of NH-bearing species (digitized from Figures 6 and 14 of Moore et al. (2007)). The laboratory spectra are arbitrarily scaled and offset. We also include a reflectance spectrum of NH\({}_{4}\)Cl at 90 K from Fastelli et al. (2022). (Right panel): Arbitrarily scaled linear combinations of the lab spectra with our synthetic base model for qualitative comparison of band shapes and locations. Vertical lines are placed at 2.21 and 2.24 \(\mu\)m.
NH\({}_{3}\)-H\({}_{2}\)O mixtures, two phases of NH\({}_{3}\)-hydrates, and NH\({}_{4}\)Cl (from Moore et al. (2003)). We also constructed qualitative linear mixtures using these absorption spectra to qualitatively compare band shapes and locations (right panel of Figure 8). These are not physically accurate spectral models, but provide a general idea of how low-level absorption from these features might appear. Quantitative spectral modeling would require the optical constants of these species to be measured, and optical constants have only been published for amorphous NH\({}_{3}\) ice, crystalline NH\({}_{3}\) ice, and an NH\({}_{3}\)-hydrate of uncertain stoichiometry (Figures 6,7).
However, all of the Moore et al. lab spectra of NH\({}_{3}\)-bearing species show the 2.0-\(\mu\)m band of NH\({}_{3}\) ice, which is not visible in our spectra. This 2.0-\(\mu\)m absorption band is rarely detected on icy bodies. Even in the case of Charon, where the 2.21-\(\mu\)m band is obvious, the presence of a complementary 2.0-\(\mu\)m band is uncertain (Cook et al., 2018, 2023). The 2.0-\(\mu\)m band is difficult to detect from ground-based spectra, as it lies in the spectral noise from a deep telluric CO\({}_{2}\) band, and even in spectra from New Horizons, the 2.0-\(\mu\)m band is not as strong as would be expected from laboratory spectra (Cook et al., 2018; Protopapa et al., 2021). The strong H\({}_{2}\)O absorption band at 2.0 \(\mu\)m further means that light penetrates much shallower into the regolith (\(\sim\) 0.1 mm at 2.0 \(\mu\)m) than it does at 2.24 \(\mu\)m (\(\sim\) 1.6 mm), where H\({}_{2}\)O only contributes weak absorption and the optical path length is therefore longer (Cartwright et al., 2023). NH\({}_{3}\)-bearing species could be present in the subsurface, where they only appear in absorption in regions of the spectrum where the optical path length is long enough to reach them, contributing to the difficulty of detecting the NH\({}_{3}\) 2.0-\(\mu\)m absorption feature, or an ammonium salt such as NH\({}_{4}\)Cl that lacks a 2.0-\(\mu\)m band could be responsible (next section). We also note that these qualitative linear combinations do not incorporate effects like multiple scattering that would make a difference in the strength of the 2.0-\(\mu\)m band in the Hapke-Mie synthetic spectra with NH\({}_{3}\) ice (Figure 6), in which the band is much less prominent than in our linear combinations.
The'survivability' of NH\({}_{3}\)-bearing species is also an open question. As discussed previously, NH\({}_{3}\)-bearing species exposed at the surface of icy bodies are expected to be dissociated by particle irradiation (such as cosmic rays) on geologically short timescales. Miranda is subject to additional proton and electron bombardment from charged particles trapped in Uranus's magnetosphere, and Moore et al. (2007) estimate timescales for destruction of NH\({}_{3}\) ice at Miranda's surface as short as \(\sim 10^{6}\) years. However, dissociated NH\({}_{3}\) does not simply disappear. The radiation products can readily recombine into similar forms, such as NH\({}_{4}^{+}\) ions, which can react with other compounds to form NH\({}_{4}\)-bearing salts. Protopapa et al. (2021) provides a lengthy review of the mechanisms that could destroy NH\({}_{3}\) on Charon. While the radiation environment on Miranda is more hostile to NH\({}_{3}\), NH\({}_{4}\)-bearing species still possess a 2.2-\(\mu\)m band, and NH\({}_{4}\)-bearing salts may be more resistant to destruction by irradiation.
### NH\({}_{4}\)-bearing species and other refractory compounds
Figure 9: (Left panel): Our base model of Miranda’s spectrum compared to 5% areal mixtures of NH\({}_{4}\)-bearing minerals and thermonatrite. The mixture spectra were normalized to match the base model at 1.754 \(\mu\)m, then offset in increments of 0.1. The vertical line marks the mean wavelength of our 2.2-\(\mu\)m band measurements at 2.218 \(\mu\)m. (Right panel): NH\({}_{4}\)-bearing mineral reflectance spectra measured at 90 K from Fastelli et al. (2022) and thermonatrite measured at 93 K from De Angelis et al. (2019).
Ammonium salts such as NH\({}_{4}\)Cl have been proposed as a candidate species partially or wholly responsible for the 2.2-\(\mu\)m absorption band on Charon and on other bodies in the Pluto system (Moore et al., 2007; Cook et al., 2007, 2018, 2023). NH\({}_{4}\)Cl has also been reported in the bright emplaced material in Occator Crater on Ceres (Raponi et al., 2019). Other NH\({}_{4}\)-bearing species, such as ammonium carbonate ((NH\({}_{4}\))\({}_{2}\)CO\({}_{3}\)) or ammonium bicarbonate (NH\({}_{4}\)HCO\({}_{3}\)), have also been implicated in the presence of a 2.2-\(\mu\)m absorption (de Sanctis et al., 2016; Cartwright et al., 2020). The NH\({}_{4}^{+}\) ion is a byproduct of irradiation of NH\({}_{3}\)-H\({}_{2}\)O mixtures (Moore et al., 2007), and given other suitable species to react with, it could form NH\({}_{4}\)-bearing salts. In Figure 9, we plot reflectance spectra of NH\({}_{4}\)-bearing species measured at 90 K from Fastelli et al. (2022), and constructed linear (areal) mixture models of our synthetic base model spectrum, plus a 5% abundance of various species. The absorption bands of most of these species, with the possible exception of NH\({}_{4}\)Cl, are at too short wavelengths to account for the 2.2-\(\mu\)m band we see on Miranda, but could contribute to low-level absorption between 2.18 - 2.25 \(\mu\)m. As seen in the figure, their broad absorption features tend to simply depress the 2.2-\(\mu\)m continuum of H\({}_{2}\)O ice relative to the 1.8-\(\mu\)m continuum. Miranda's spectrum exhibits the same characteristic, but the broad nature of these NH\({}_{4}\) absorptions observed in reflectance generally limits the ability to identify them, as a depressed 2.2-\(\mu\)m continuum could also result from sub-micron H\({}_{2}\)O ice grains (SS3.3) or any other compound with a strong blue slope between 1.8 and 2.2 \(\mu\)m.
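For reference, a linear (areal) mixture of this kind reduces to a weighted sum of reflectances in proportion to the area each material covers. The sketch below also renormalizes the mixture to the base model at 1.754 \(\mu\)m, as described for Figure 9; the inputs and function name are placeholders, and the sketch omits the resampling of laboratory spectra onto a common wavelength grid.

```python
import numpy as np

def areal_mixture(wave, base, component, fraction=0.05, norm_wave=1.754):
    """Linear (areal) mixture of a base model with a laboratory reflectance
    spectrum, renormalized to the base model at `norm_wave` microns."""
    mixed = (1.0 - fraction) * base + fraction * component
    i = np.argmin(np.abs(wave - norm_wave))
    return mixed * (base[i] / mixed[i])
```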
However, linear (areal) mixture modeling comes with additional caveats. Linear mixture modeling does not incorporate the nonlinear effects of multiple scattering (important for bright icy surfaces) or other factors in the reflectance of a particular component, such as grain size. These effects are treated in intimate mixture models, but an intimate mixture model requires the optical constants of the material to be measured, and no optical constants for these NH\({}_{4}\)-bearing species have been published in the scientific literature. Furthermore, direct comparison demonstrates that the reflectance spectrum of a pure compound is distinctly different from the absorption spectrum; compare and contrast the absorbance spectra and reflectance spectra of NH\({}_{4}\)Cl in Figure 8. Although the spectra are arbitrarily scaled, the difference in depths and shapes of the absorption bands are clearly apparent. The Fastelli et al. sample studied in reflectance is much thicker (\(\sim\) mm) than the Moore et al. sample studied in absorbance (tens of microns), which makes weaker features easier to detect, but produces saturation and blending of the stronger absorption features. The relative band strengths are different between the Moore and Fastelli NH\({}_{4}\)Cl spectra: the weak absorptions between 2.0 and 2.2 \(\mu\)m in the Moore spectrum are heavily blended in the Fastelli spectrum, and the stronger absorptions at 2.21 and 2.24-\(\mu\)m are effectively saturated. The Fastelli spectrum also shows a clear band at 2.34-\(\mu\)m that is shallow in the Moore spectrum and absent in the Miranda spectra. Finally, the spectral resolution of the Fastelli spectra are generally inadequate compared to our Miranda spectra and the Moore spectrum.
While our spectra of Miranda were also observed in reflectance, an intimate mixture of small percentages of NH\({}_{4}\)Cl or another NH\({}_{4}\)-bearing species in H\({}_{2}\)O ice could produce a spectral signature more similar to the Moore et al. absorption spectrum. We also note that unlike the rest of the NH\({}_{3}\)-bearing species in Figure 8, NH\({}_{4}\)Cl lacks a strong 2.0-\(\mu\)m band, consistent with the lack of the 2.0-\(\mu\)m band in our spectra of Miranda. Confident quantitative estimates are difficult given the weak 2.2-\(\mu\)m absorption, the limited nature of the available laboratory data, and the lack of optical constants for candidate constituents. We suggest that an NH\({}_{4}\)-bearing species, perhaps NH\({}_{4}\)Cl, may be partially or wholly responsible for the 2.2-\(\mu\)m band, although NH\({}_{3}\)-bearing species could contribute to absorption in this range.
The potential sources (and potential destruction) of NH\({}_{4}\)Cl on Charon is discussed thoroughly in Cook et al. (2023). Many of these arguments are also applicable at some level to the Uranian satellites, including Miranda. Chlorine could be delivered to the surface via impacts, or perhaps an endogenic process like cryovolcanism emplaced NaCl-bearing or NH\({}_{4}\)Cl-bearing brines and/or NH\({}_{3}\)-rich liquids onto the surface. Some form of this process, likely impact-induced, appears to have occurred at Ceres in Occator Crater (Raponi et al., 2019; De Sanctis et al., 2020). Furthermore, carbonaceous material and H\({}_{2}\)O are readily available at Miranda's surface and particle irradiation is present to dissociate them. In addition to the previously-discussed CO\({}_{2}\), if Miranda's surface was supplied with NH\({}_{3}\)- or NH\({}_{4}\)-bearing species, NH\({}_{4}\)-bearing carbonates could be a common byproduct.
With previous results on Umbriel in mind (Cartwright et al., 2023), we also generated a synthetic intimate mixture model incorporating the aluminum-bearing phyllosilicates illite and kaolinite (Figure 6), and an areal mixture model incorporating the hydrated Na-bearing carbonate thermonatrite (Figure 9). Thermonatrite's 2.2-\(\mu\)m band is centered at shorter wavelengths than our mean band center (2.201 versus 2.218 \(\mu\)m), but could be contributing to wider low-level absorption between 2.18 - 2.25 \(\mu\)m (Figure 2). Kaolinite has a blue slope at wavelengths longer than 1.8 \(\mu\)m and a double-band absorption at 2.16 and 2.206 \(\mu\)m that could produce a similar shoulder feature, but as discussed in SS5, the
only other strong indicator of phyllosilicates in the near-IR (\(<\)2.5 \(\mu\)m) is generally the 1.4-\(\mu\)m band, which is difficult to observe from Earth due to telluric absorption.
## 7 Conclusions
We conclude that there is no crystalline CO\({}_{2}\) ice concentrated in discrete deposits on Miranda's surface, consistent with the explanation that Miranda's weak surface gravity allows CO\({}_{2}\) molecules to efficiently escape before being cold trapped. Similarly, we detected no convincing evidence of a 2.13-\(\mu\)m band that could be associated with a molecular mixture of CO\({}_{2}\) in H\({}_{2}\)O or CH\({}_{3}\)OH ice.
In contrast, we detected a 2.2-\(\mu\)m band at \(>2\sigma\) significance in several of our spectra. We do not see evidence for longitudinal trends or hemispherical asymmetries in the distribution of the 2.2-\(\mu\)m band. We compared a high S/N GNIRS spectrum of the leading hemisphere to synthetic spectra and laboratory spectra of several candidate species, including NH-bearing species, nitrogen-bearing organics, and phyllosilicates. We found that the 2.2-\(\mu\)m feature can be best explained by either a combination of NH\({}_{3}\)-bearing species (e.g. NH\({}_{3}\)-hydrates and NH\({}_{3}\) ice) or by an NH\({}_{4}\)-bearing salt like NH\({}_{4}\)Cl, but NH\({}_{4}\)-carbonates or certain phyllosilicates like kaolinite could contribute to broad and shallow absorption between 2.18 - 2.25 \(\mu\)m.
However, the study of NH\({}_{3}\)-bearing and NH\({}_{4}\)-bearing species is substantially limited by the available laboratory data. Optical constants of candidate species are few and far between, and reflectance data are often acquired at room temperature and with insufficient spectral resolution to properly capture the blended spectral bands that change position and strength with temperature and composition. Common minerals often show fine structure in absorption bands that are not detected in low resolution reflectance spectra (Clark et al., 1990). Further laboratory work to determine optical constants of NH\({}_{3}\)-hydrates, NH\({}_{3}\)-H\({}_{2}\)O mixtures, and NH\({}_{4}\)-bearing salts at high resolution and cryogenic temperatures in the near- to mid-IR should be a high priority for the icy satellites community (Dalton, 2010). The results of such laboratory work (spectra and optical constants) must be made publicly downloadable in a data repository for current and future scientists (Roser et al., 2021).
Future investigations of CO\({}_{2}\) ice and NH\({}_{3}\)-bearing species on Miranda could be carried out with JWST, as the excellent sensitivity of a 6.5-meter IR-optimized space telescope is hard to match even with larger ground-based telescopes. JWST has the ability to investigate the 4.27-\(\mu\)m CO\({}_{2}\) ice \(\nu_{3}\) fundamental absorption band, which lies within a wavelength range in which the Earth's atmosphere is completely opaque. This fundamental absorption band is a factor of \(\sim\)10\({}^{3}\) stronger than the 2-\(\mu\)m overtone bands, and the lack of the 2.13-\(\mu\)m band suggests that JWST observations of the 4.27-\(\mu\)m band would be more suitable for further constraining CO\({}_{2}\) ice on Miranda and the other Uranian moons. NH\({}_{3}\)-bearing species would be detectable by JWST with the NH\({}_{3}\)\(\nu_{3}\) fundamental band (2.96-\(\mu\)m) and the hydrated nitrogen (OH-N) features (3.05-\(\mu\)m and 3.1 - 3.2 \(\mu\)m) overprinted on the wide 3.0-\(\mu\)m H\({}_{2}\)O ice band. These bands are far stronger than the weak overtone bands in the near-IR shortward of 2.5 \(\mu\)m, and different NH-bearing compounds and hydration states are much easier to distinguish from each other at longer wavelengths (Moore et al., 2007). Similarly, NH\({}_{4}\)-bearing minerals can be detected through features between 3.0 - 3.5 \(\mu\)m (Bishop et al., 2002; Berg et al., 2016).
Spatially resolved spectroscopy of ices on Miranda and the other Uranian satellites would be most effectively carried out by a Uranus orbiter equipped with a NIR imaging spectrometer, analogous to VIMS on Cassini or MISE on the upcoming Europa Clipper mission. The Uranus Orbiter and Probe was the highest priority new Flagship mission recommended by the 2023-2032 Decadal Survey. Such a mission would be able to revolutionize our knowledge of the entire Uranian system, including the icy satellites, and could investigate spatial and spectral variations in surface composition in far more detail than could ever be achieved from Earth.
We wish to extend our gratitude to the observing, engineering, and administrative staff at Apache Point Observatory, Gemini North, and the IRTF, without whom this project would not have been possible, and to the anonymous reviewers whose comments improved this manuscript.
This work is funded under NASA FINESST grant 80NSSC20K1378, and parts of it were previously funded under the NMSU Astronomy Department's William Webber Voyager Graduate Fellowship. This work is partially based on observations obtained with the ARC 3.5-meter telescope at Apache Point Observatory, which is owned and operated by the Astrophysical Research Consortium. This work also incorporates observations previously obtained at the Infrared Telescope Facility (IRTF), which is operated by the University of Hawaii under contract 80HQTR19D0030 with the National Aeronautics and Space Administration.
Finally, this work is also based in part on observations obtained under program IDs GN-2020B-FT-205 and GN-2021B-FT-210 at the international Gemini Observatory, a program of NSF's NOIRLab, which is managed by the Association of Universities for Research in Astronomy (AURA) under a cooperative agreement with the National Science Foundation on behalf of the Gemini Observatory partnership: the National Science Foundation (United States), National Research Council (Canada), Agencia Nacional de Investigacion y Desarrollo (Chile), Ministerio de Ciencia, Tecnologia e Innovacion (Argentina), Ministerio da Ciencia, Tecnologia, Inovacoes e Comunicacoes (Brazil), and Korea Astronomy and Space Science Institute (Republic of Korea). The Gemini North and IRTF telescopes are located within the Maunakea Science Reserve and adjacent to the summit of Maunakea. We are grateful for the privilege of observing the Universe from a place that is unique in both its astronomical quality and its cultural significance, and wish to emphasize our respect for the Native Hawaiian community's cultural and historical ties to Maunakea.
This research has made use of the SIMBAD database, operated at CDS, Strasbourg, France.

Facilities: ARC(TripleSpec), Gemini:Gillett(GNIRS), IRTF(SpeX)

Software: AstroPy (Astropy Collaboration et al., 2013), NumPy (Harris et al., 2020), SciPy (Virtanen et al., 2020), Matplotlib (Hunter, 2007), JPL Horizons Online Ephemeris Service ([https://ssd.jpl.nasa.gov/horizons/](https://ssd.jpl.nasa.gov/horizons/)), Spextool (Cushing et al., 2004; Vacca et al., 2003), SIMBAD (Wenger et al., 2000), SpectRes (Carnall, 2017)
## Appendix A Spectral Contamination
Potential absorption features in the spectrum of Miranda in K-band are also subject to low-level contamination from various sources (Figure 11). Absorption from gases in Earth's atmosphere (telluric absorption) is a significant concern for ground-based observations in the near-infrared. Atmospheric H\({}_{2}\)O vapor is often the most difficult species to correct for, as it has a multitude of strong, narrow absorption lines, and precipitable water vapor along the line of sight can vary locally on short spatial and temporal scales. However, absorption from atmospheric H\({}_{2}\)O is not a major concern between 2.00 - 2.40 \(\mu\)m, which encompasses almost all of the absorption features of interest in this work.
Telluric CO\({}_{2}\) has three strong overtone bands which are the major source of atmospheric absorption between 1.95 - 2.10 \(\mu\)m. Unlike H\({}_{2}\)O absorption, which varies strongly due to local weather conditions, CO\({}_{2}\) is well-mixed in the Earth's atmosphere and variations in absorption strength are largely based on optical path length through the atmosphere (airmass). However, fine structure in the deepest parts of the CO\({}_{2}\) overtone bands is partially resolved in our highest-resolution data (R\(\sim\)3500), and telluric correction procedures often leave high-frequency residual noise in these regions. This is visible in many of the Miranda spectra as an increased scatter in the data points between 2.00 - 2.02 \(\mu\)m, and to a lesser extent between 1.95 - 1.97 and 2.05 - 2.07 \(\mu\)m. These wavelength ranges overlap the narrow CO\({}_{2}\) ice bands, but this should not prevent the detection of CO\({}_{2}\) ice given the high S/N of this dataset.
The 2.2-\(\mu\)m region experiences weak absorption from narrow atmospheric CH\({}_{4}\) bands, with a somewhat stronger feature at 2.200-\(\mu\)m. However, like CO\({}_{2}\), telluric CH\({}_{4}\) is generally well-mixed in the Earth's atmosphere. We find that the usual telluric standard procedures are effective at correcting absorption features from atmospheric CH\({}_{4}\), as evidenced by the lack of residuals from the other CH\({}_{4}\) absorption lines in K-band, so we do not expect Miranda's
2.2-\(\mu\)m feature to be contaminated to a substantial degree by atmospheric CH\({}_{4}\). Atmospheric CO\({}_{2}\) and H\({}_{2}\)O do not have significant absorption features between 2.10 - 2.35 \(\mu\)m.
OH airglow emission lines are generally well-corrected in our spectra, but stacking many spectra can leave small positive or negative residual features and increased error estimates on certain data points, such as at 2.041, 2.150, and 2.195 \(\mu\)m. The upwards slope of the continuum in the plotted OH spectrum at longer wavelengths (\(>\)2.1 \(\mu\)m) is blackbody radiation from the telescope and atmosphere. This slope is subtracted from the science data during data reduction procedures and does not factor into the final spectra.
We also note that there are mismatches in the strength of stellar absorption features between the G-type telluric standard stars and the reflected solar spectrum. When dividing the observed spectra of the object by the standard star, these mismatches appear as narrow residual'spikes' or 'dips' in the final spectrum. This includes the inconvenient presence of a residual spike at 2.207 \(\mu\)m, which appears to be due to a mismatch in the strength of a Si I metal line (see Figure 7). This identification is supported by other small, but noticeable positive residuals from other Si I lines in K-band at 2.093, 2.135, and 2.188 \(\mu\)m (Mohler et al. 1953). Larger mismatches are also sometimes apparent in the strength of the H I Brackett \(\gamma\) line at 2.166 \(\mu\)m, and any apparent absorption features in the range 2.155 - 2.175 \(\mu\)m should be treated with suspicion. In general, any narrow residual features in the high-resolution Miranda spectra appear to be either due to OH airglow lines or from mismatches in solar/stellar spectra.
Finally, we note that scattered light from the disk of Uranus is a major source of contamination in spectra of Miranda at wavelengths shorter than \(\sim\)1.7 \(\mu\)m. However, Uranus is very faint in K-band due to absorption by CH\({}_{4}\) and the H\({}_{2}\) pressure-induced dipole (Fink & Larson 1979; Baines et al. 1998), which includes the wavelength ranges and absorption features of interest in this work. The contamination of the Miranda spectra by the spectrum of Uranus is therefore negligible for the purposes of this study. |
2309.05582 | Mind the Uncertainty: Risk-Aware and Actively Exploring Model-Based
Reinforcement Learning | We introduce a simple but effective method for managing risk in model-based
reinforcement learning with trajectory sampling that involves probabilistic
safety constraints and balancing of optimism in the face of epistemic
uncertainty and pessimism in the face of aleatoric uncertainty of an ensemble
of stochastic neural networks. Various experiments indicate that the separation
of uncertainties is essential to performing well with data-driven MPC
approaches in uncertain and safety-critical control environments. | Marin Vlastelica, Sebastian Blaes, Cristina Pinneri, Georg Martius | 2023-09-11T16:10:58Z | http://arxiv.org/abs/2309.05582v1 | # Mind the Uncertainty: Risk-Aware and Actively Exploring Model-Based Reinforcement Learning
###### Abstract
We introduce a simple but effective method for managing risk in model-based reinforcement learning with trajectory sampling that involves probabilistic safety constraints and balancing of optimism in the face of epistemic uncertainty and pessimism in the face of aleatoric uncertainty of an ensemble of stochastic neural networks. Various experiments indicate that the separation of uncertainties is essential to performing well with data-driven MPC approaches in uncertain and safety-critical control environments.
## 1 Introduction
Data-driven approaches to sequential decision-making are becoming increasingly popular (Yang et al., 2019; Hussein et al., 2017; Polydoros and Nalpantidis, 2017; Schrittwieser et al., 2020). They hold the promise of reducing the number of prior assumptions about the system that are imposed by traditional approaches that are based on nominal models.
Such approaches come in several different flavors (Kober et al., 2013). Model-free approaches attempt to extract closed-loop control policies directly from data, while model-based approaches rely on a learned model of the dynamics to either generate novel data to extract a policy or to be used in a model-predictive control fashion (MPC). This study belongs to the latter line of work.
Model-based methods have several advantages over pure model-free approaches. Firstly, humans tend to have a better intuition on how to incorporate prior knowledge into a model rather than into a policy or value function. Secondly, most model-free policies are bounded to a specific task, while models are task-agnostic and can be applied for optimizing arbitrary cost functions, given sufficient exploration.
Nevertheless, learning models for control come with certain caveats. Traditional MPC methods require the model and cost function to permit a closed-form solution which restricts the function class prohibitively. Alternatively, gradient-based iterative optimization can be employed, which allows for a larger class of functions but typically fails to yield satisfactory solutions for complicated function approximators such as deep neural network models. In addition, calculating first-order or even second-order information for trajectory optimization tends to be computationally costly, which makes it hard to meet the time constraints of real-world settings. This motivates the usage of zero-order, i.e gradient-free or sample-based methods, such as the Cross-entropy Method (CEM) that do not rely on gradient information but are efficiently parallelizable.
Many methods relying on a learned model and zero-order trajectory optimizers have been proposed (Chua et al., 2018; Wang and Ba, 2020; Williams et al., 2015), but all share the same problem:
compounding of errors through auto-regressive model prediction. This naturally brings us to the question of how we can effectively manage model errors and uncertainty to be more data-efficient and safe. Arguably, this is one of the main obstacles to applying data-driven model-based methods to the real world, e.g. to robotics settings.
In this work, we introduce a risk-averse zero-order trajectory optimization method (RAZER) for managing errors and uncertainty in zero-order MPC and test it on challenging scenarios (Fig. 1). We argue that it is essential to differentiate between the two types of uncertainty in the model-predictive setting: the aleatoric uncertainty arising from inherent noise in the system and epistemic uncertainty arising from the lack of knowledge (Hora, 1996; Kiureghian and Ditlevsen, 2009). We measure these uncertainties by making use of probabilistic ensembles with trajectory sampling (Chua et al., 2018) (PETS). Our contributions can be summarized as follows: (i) a method for separation of uncertainties in probabilistic ensembles (termed PETSUS); (ii) efficient use of aleatoric and epistemic uncertainty in model-based zero-order trajectory optimizers; (iii) a simple but practical approach to probabilistic safety constraints in zero-order MPC.
## 2 Related Work
Uncertainty Estimation.In the typical model-based reinforcement learning (MBRL) setting, the true transition dynamics function is modeled through an approximator. Impressive results have been achieved by both parametric models (Lenz et al., 2015; Fu et al., 2016; Gal et al., 2016; Hafner et al., 2019), such as neural networks, and nonparametric models (Kocijan et al., 2004; Nguyen-Tuong et al., 2008; Grancharova et al., 2008; Deisenroth et al., 2013), such as Gaussian Processes (GP). The latter inspired seminal work on the incorporation of the dynamics model's uncertainty for long-term planning (Deisenroth et al., 2013; Kamthe and Deisenroth, 2018). However, their usability is limited to low-data, low-dimensional regimes with smooth dynamics (Rasmussen and Kuss, 2003; Rasmussen and Williams, 2006), which is not ideal for robotics applications. Alternative parametric approaches include ensembling of deep neural networks, used both in the MBRL community (Chua et al., 2018; Kurutach et al., 2018), and outside (Osband et al., 2016; Lakshminarayanan et al., 2017). In particular, ensembles of _probabilistic_ neural networks established state-of-the-art results in the MBRL community (Chua et al., 2018), but focus mainly on estimating the expected cost and disregard the underlying uncertainties. In comparison, we propose a treatment of the resulting uncertainties of the ensemble model.
Zero-order MPC.The learned model can be used for policy search like in PILCO (Deisenroth and Rasmussen, 2011; Deisenroth et al., 2013; Kamthe and Deisenroth, 2018; Curi et al., 2020) or for online model-predictive control (MPC) (Morari and Lee, 1999; Williams et al., 2017; Chua et al., 2018). In this work, we do planning in an MPC fashion and employ a zero-order method as a trajectory optimizer, since it is less sensitive to hyperparameter tuning and less likely to get stuck in local minima of complex objective functions. Specifically, we consider a sample-efficient implementation of the Cross-Entropy method (Rubinstein and Davidson, 1999; Botev et al., 2013) introduced in (Pinneri et al., 2020).
Safe MPC.Separating the sources of uncertainty is of particular importance for AI applications directly affecting humans' safety, as self-driving cars, elderly care systems, or in general any application that involves a physical interaction between the AI system and humans. Disentangling epistemic
Figure 1: Environments considered for uncertainty-aware planning.
from aleatoric uncertainty allows for separate optimization of the two, as they represent semantically different objectives: efficient exploration and risk-awareness. Extensive research on uncertainty decomposition has been done in the Bayesian setting and the context of safe policy search (Mihatsch and Neuneier, 2002; Garcia and Fernandez, 2015; Depeweg et al., 2017, 2018), MPC planning (Arruda et al., 2017; Lee et al., 2020; Abraham et al., 2020), and distributional RL (Clements et al., 2020; Zhang and Weng, 2021). On the other side, a state-of-the-art baseline for ensemble learning like PETS (Chua et al., 2018), despite estimating both uncertainties, only optimizes for the _expected_ cost during action evaluation. Our work aims at filling this gap by explicitly integrating the propagated uncertainty information in the zero-order MPC planner.
## 3 Method
Our approach concerns itself with the efficient usage of uncertainties in zero-order trajectory optimization and is therefore generally applicable to such optimizers. We are interested in modeling noisy system dynamics \(x_{t+1}=f(x_{t},u_{t},w(x_{t},u_{t}))\) where \(f\) is a nonlinear function, \(x_{t}\) the observation vector, \(u_{t}\) applied control input and \(w(x_{t},u_{t})\) a noise term sampled from an arbitrary distribution.
Consequently, in the absence of prior knowledge about the function \(f\), the system needs to be modeled by a complex function approximator such as a neural network. Furthermore, we are interested in managing uncertainties based on our fitted model, which is erroneous. To this end, we use stochastic ensembles of size \(K\), where the output of each model \(\mathbf{\vartheta}^{k}(x_{t},u_{t})\) comprises the parameters of a normal distribution depending on input observation \(x_{t}\) and control \(u_{t}\). As a by-product, our auto-regressive model prediction based on controls \(\mathbf{u}\) becomes a predictive distribution over trajectories \(\tau\); \(\psi^{\tau}(x_{t},\mathbf{u})\coloneqq p(\tau|x_{t},\mathbf{u};\theta)\) where \(\theta\) denotes the parameters of the ensemble. For convenience, from this point onward we will differentiate between multiple usages of \(\psi^{\tau}\). We denote with \(\psi^{x}_{\Delta t}\) the distribution \(p(x_{t+\Delta t}|x_{t},\mathbf{u}_{t:t+h};\theta)\) over states at time step \(t+\Delta t\) and \(\psi^{\boldsymbol{\vartheta}}_{\Delta t}\) the distribution over the Gaussian parameter outputs \(p(\mathbf{\vartheta}_{t+\Delta t}|x_{t},\mathbf{u}_{t:t+h};\theta)\) at time step \(t+\Delta t\) of the planner.
### Planning and Control
To validate our hypothesis that accounting for uncertainty in the environment and model prediction is essential to develop risk-averse policies, we use the Cross-Entropy Method (CEM) with improvements suggested in Pinneri et al. (2020). Accordingly, at each time step \(t\) we sample a finite number of control sequences \(\mathbf{u}\) for a finite horizon \(H\) from an isotropic Gaussian prior distribution which we evaluate from the state \(x_{t}\) using an auto-regressive forward-model and the cost function. The sampling distribution is refitted in multiple rounds based on good-performing (elite) trajectories. After this optimization step, the first action of the mean of the fitted Gaussian distribution is executed. Since this approach utilizes a predictive model for a finite horizon at each time step, it naturally falls into the category of Model Predictive Control (MPC) methods.
Although we use CEM, our approach of managing uncertainty can generically be applied to other zero-order trajectory optimizers such as MPPI (Williams et al., 2017), by a modification of the trajectory cost function.
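For concreteness, a minimal sketch of the CEM planning loop described above is given below. The population size, number of elites, momentum term, and the `rollout_cost` interface are illustrative assumptions, and the sketch omits the sample-efficiency improvements of Pinneri et al. (2020) that we build on.

```python
import numpy as np

def cem_plan(x0, rollout_cost, horizon=30, act_dim=2, pop=200, n_elites=20,
             iters=5, alpha=0.1):
    """CEM trajectory optimization for one MPC step (illustrative sketch).

    `rollout_cost(x0, U)` is assumed to return the predicted cost of applying
    the action sequence U (shape: horizon x act_dim) from state x0 under the
    learned model.
    """
    mean = np.zeros((horizon, act_dim))
    std = np.ones((horizon, act_dim))
    for _ in range(iters):
        samples = mean + std * np.random.randn(pop, horizon, act_dim)
        costs = np.array([rollout_cost(x0, U) for U in samples])
        elites = samples[np.argsort(costs)[:n_elites]]
        # Momentum-smoothed refit of the Gaussian sampling distribution
        mean = (1 - alpha) * elites.mean(axis=0) + alpha * mean
        std = (1 - alpha) * elites.std(axis=0) + alpha * std
    return mean[0]  # MPC: execute only the first action of the fitted mean
```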
### The Problem of Uncertainty Estimation
Since we have a stochastic model of the dynamics, at the model prediction time step \(t\) we observe a distribution over potential outcomes. Indeed, since our model outputs are parameters of a Gaussian distribution, with auto-regressive predictions we end up with a distribution over possible Gaussians for a certain time step \(t\).
Given a sampled action sequence \(\mathbf{u}\) and the initial state \(x_{t}\) we observe a distribution over trajectories \(\psi^{\tau}\). To efficiently sample from the trajectory distribution \(\psi^{\tau}\) we use the technique introduced by Chua et al. (2018) (PETS) which involves prediction particles that are sampled from the probabilistic models and randomly mixed between ensemble members at each prediction step. In this way, the sampled trajectories are used to perform a Monte Carlo estimate of the expected trajectory cost \(\mathbb{E}_{\tau\sim\psi^{\tau}}[c(\tau)]\). However, this does not take the properties of \(\psi^{\tau}\) into account, which might be a high-entropy distribution and may lead to very risky and unsafe behavior. In this work, we alleviate this by looking at the properties of \(\psi^{\tau}\), i.e. different kinds of uncertainties arising from the predictive distribution.
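A minimal sketch of this trajectory-sampling scheme is shown below; the `predict(x, u)` interface returning a Gaussian mean and diagonal variance, the particle count, and the re-assignment of particles to random ensemble members at every step are simplifying assumptions for illustration.

```python
import numpy as np

def sample_trajectories(ensemble, x0, U, n_particles=20, rng=np.random):
    """Sample state trajectories from the learned trajectory distribution.

    `ensemble` is assumed to be a list of models whose `predict(x, u)` returns
    the mean and (diagonal) variance of the Gaussian over the next state.
    """
    particles = np.repeat(x0[None], n_particles, axis=0)
    trajectory = [particles.copy()]
    for u in U:  # one action per planning step
        members = rng.randint(len(ensemble), size=n_particles)
        next_particles = np.empty_like(particles)
        for b in range(n_particles):
            mu, var = ensemble[members[b]].predict(particles[b], u)
            next_particles[b] = mu + np.sqrt(var) * rng.randn(*np.shape(mu))
        particles = next_particles
        trajectory.append(particles.copy())
    return np.stack(trajectory)  # shape: (horizon + 1, n_particles, state_dim)
```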
### Learned Dynamics Model
We learn a dynamics model \(f_{\theta}\) that approximates the true system dynamics \(x_{t+1}=f(x_{t},u_{t},w(x_{t},u_{t}))\). As a model class, we use an ensemble of neural networks with stochastic outputs as in Chua et al. (2018). Each model \(k\), parameterizes a multivariate Gaussian distribution with diagonal covariance, \(f_{\theta}^{k}(x_{t},u_{t})=\mathcal{N}(x_{t+1};x_{t}+\mu_{\theta}^{k}(x_{t},u _{t}),\Sigma_{\theta}^{k}(x_{t},u_{t}))\) where \(\mu_{\theta}^{k}(\cdot,\cdot)\) and \(\Sigma_{\theta}^{k}(\cdot,\cdot)\) are model functions outputting the respective parameters.
Iteratively, while interacting with the environment, we collect a dataset of transitions \(\mathcal{D}\) and train each model \(k\) in the ensemble by the following negative log-likelihood loss on the Gaussian outputs:
\[\mathcal{L}(\theta,k)=\mathbb{E}_{x_{t},u_{t},x_{t+1}\sim\mathcal{D}}\Big{[}- \log\mathcal{N}(x_{t+1};x_{t}+\mu_{\theta}^{k}(x_{t},u_{t}),\Sigma_{\theta}^{ k}(x_{t},u_{t}))\Big{]} \tag{1}\]
In addition, we use several regularization terms to make the model training more stable. We provide more details on this in Suppl. A.
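For illustration, the per-member loss of Eq. (1) can be written compactly as in the PyTorch sketch below (up to additive constants); the `model_k(x, u)` interface returning a mean state increment and a log-variance is an assumption, and the regularization terms of Suppl. A are omitted.

```python
import torch

def gaussian_nll(model_k, x_t, u_t, x_next):
    """Per-member negative log-likelihood of Eq. (1), up to additive constants.

    `model_k(x, u)` is assumed to return the predicted mean state increment
    and the log-variance of a diagonal Gaussian over the next state.
    """
    mu_delta, log_var = model_k(x_t, u_t)
    mean = x_t + mu_delta
    nll = 0.5 * ((x_next - mean) ** 2 * torch.exp(-log_var) + log_var)
    return nll.sum(dim=-1).mean()  # sum over state dims, average over batch
```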
### Separation of Uncertainties
In the realm of parametric estimators, two uncertainties are of particular interest. _Aleatoric_ uncertainty is the kind that is irreducible and results from inherent noise of the system, e.g. sensor noise in robots. On the other hand, we have _epistemic_ uncertainty resulting from lack of data or knowledge, which is reducible. This raises the question of how we can separate these uncertainties given an auto-regressive dynamics model \(f_{\theta}\). The way that we efficiently sample from \(\psi^{\tau}\) is by mixing sampled prediction particles, similarly to PETS (Chua et al., 2018). This process is illustrated by the red lines in Fig. 2.
Simple model prediction disagreement is not a good measure for _aleatoric_ uncertainty since it can be entangled with epistemic uncertainty. Given our assumptions about the system dynamics, we measure _aleatoric_ uncertainty as the entropy of the predicted normal distributions of the ensemble models. More concretely, given a sampled particle state \(\tilde{x}_{t}\), we define the estimated aleatoric uncertainty for the ensemble model associated with particle \(b\) at time step \(t\) as:
\[\mathfrak{A}_{b}(x|\tilde{x}_{t},u_{t})=\mathcal{H}_{x\sim\psi^{x}_{\Delta t, b}}(x) \tag{2}\]
where \(\psi^{x}_{\Delta t,b}\) is the output distribution of the ensemble model associated with particle \(b\), given inputs \(\tilde{x}_{t}\), \(u_{t}\). Since in the end we are interested in the aleatoric uncertainty incurred from applying the action sequence \(\mathbf{u}\) from initial state \(x_{t}\), the quantity of interest for us is the expected aleatoric uncertainty for time slice \(t\):
\[\mathfrak{A}(x|u_{t})=\mathbb{E}_{\tilde{x}_{b}\sim\psi^{x}_{\Delta t}}\left[ \mathfrak{A}_{b}(x|\tilde{x}_{b},u_{t})\right] \tag{3}\]
Intuitively, because we only have access to the ensemble for sampling, we take a time slice of the sampled trajectories from \(\psi^{\tau}\) and compute the output entropies. Moreover, since we assume a Gaussian 1-step predictive distribution, this is an expectation over differential Gaussian entropies. An alternative, which we also explore in this work, is to compute the expected particle variance for time slice \(t\) of the prediction horizon:
\[\mathrm{Var}^{\mathfrak{A}}_{t+1}=\frac{1}{B}\sum_{b=1}^{B}\Sigma_{\theta}^{b} (\tilde{x}_{t,b},u_{t}) \tag{4}\]
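Both aleatoric estimates reduce to simple averages over the particle predictions. The following NumPy sketch (with assumed array shapes and toy numbers) shows the entropy form of Eqs. (2)-(3) and the variance form of Eq. (4) under the diagonal-Gaussian assumption.

```python
import numpy as np

def gaussian_entropy(vars_diag):
    """Differential entropy of a diagonal Gaussian: 0.5 * sum(log(2*pi*e*sigma^2))."""
    return 0.5 * np.sum(np.log(2.0 * np.pi * np.e * vars_diag), axis=-1)

def aleatoric_entropy(pred_vars):
    """Eqs. (2)-(3): expected one-step predictive entropy over the B particles."""
    return gaussian_entropy(pred_vars).mean()

def aleatoric_variance(pred_vars):
    """Eq. (4): expected predicted (diagonal) variance over the B particles."""
    return pred_vars.mean(axis=0)

# pred_vars[b] stands for Sigma_theta^b(x_tilde_{t,b}, u_t); shape (B, state_dim).
pred_vars = np.full((20, 4), 0.05)
print(aleatoric_entropy(pred_vars), aleatoric_variance(pred_vars))
```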
For estimating the _epistemic_ uncertainty, one would be tempted to look at the disagreement between ensemble models in parameter space \(\mathrm{Var}[\theta]\), but this is not completely satisfying, since neural networks tend to be over-parametrized and variance within the ensemble may still exist even though all ensemble models have reached an optimum. An alternative would be to calculate the Fisher information metric \(\mathcal{I}:=\mathrm{Var}[\nabla_{\theta}\log\mathcal{L}(x_{t+1}|x_{t},u_{t})]\), where \(\mathcal{L}\) denotes the likelihood function, but this tends to be expensive to compute.
Figure 2: Probabilistic Ensembles with Trajectory Sampling and Uncertainty Separation (PETSUS)
Given the assumption of local Gaussianity, the true epistemic uncertainty for this case is the predictive entropy over the Gaussian parameters \(\boldsymbol{\vartheta}\) at time step \(t+h\).
\[\mathfrak{E}(x_{t},\boldsymbol{u}_{t:t+h})=\mathcal{H}_{\psi^{\boldsymbol{ \vartheta}}_{\Delta t}}(\boldsymbol{\vartheta}\mid x_{t},\boldsymbol{u}_{t:t +h}) \tag{5}\]
It is easy to verify that this quantity is 0 given perfect predictions of the model. Note that, because of auto-regressive predictions of a nonlinear model, this is a very difficult object to handle. Nevertheless, since our predictive distribution \(p(x\mid x_{t},u_{t};\boldsymbol{\vartheta})\) is parametrized by model outputs, we may utilize disagreement in \(\boldsymbol{\vartheta}_{t}\) to approximate \(\mathfrak{E}\). To get correct estimates, we need to propagate mean predictions \(\bar{x}\) in addition to the particles, as illustrated by the yellow lines in Fig. 2. We quantify epistemic uncertainty as the ensemble disagreement at time step \(t\):
\[\mathrm{Var}^{\mathfrak{E}}(x_{t+1})=\mathrm{Var}^{e}[\mu^{k}_{\theta}(\bar{x }_{t},u_{t})]+\mathrm{Var}^{e}[\Sigma^{k}_{\theta}(\bar{x}_{t},u_{t})] \tag{6}\]
where \(\mathrm{Var}^{e}\) is the empirical variance over the \(k=1\ldots K\) ensembles.
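A corresponding sketch of Eq. (6), again with assumed shapes: the K ensemble members are evaluated on the same mean-propagated input \(\bar{x}_{t}\), and their disagreement is measured by the empirical variance of the predicted means and variances.

```python
import numpy as np

def epistemic_variance(means, vars_diag):
    """Eq. (6): ensemble disagreement on the mean-propagated input x_bar_t.
    means, vars_diag: arrays of shape (K, state_dim), holding mu_theta^k(x_bar_t, u_t)
    and Sigma_theta^k(x_bar_t, u_t) for the K ensemble members."""
    return means.var(axis=0) + vars_diag.var(axis=0)   # empirical variance over members

# Toy numbers: K=5 ensemble members, 4-dimensional state.
means = np.random.randn(5, 4)
vars_diag = np.abs(np.random.randn(5, 4)) * 0.05
print(epistemic_variance(means, vars_diag))
```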
### Probabilistic Safety Constraints
When applying data-driven control algorithms to real systems, safety is of utmost importance. In the realm of zero-order optimization, safety constraints can be easily introduced by putting an infinite cost on constraint-violating trajectories. Nevertheless, we are dealing with erroneous stochastic nonlinear models which lead to nontrivial predictive distributions of future states, based on the control sequence \(\boldsymbol{u}\). For this reason, we want to control the risk of violating the safety constraints that we, as practitioners, are willing to tolerate. If we denote the observation space as \(\mathbb{X}\), given a violation set \(\mathbb{C}\subset\mathbb{X}\), we define the probability of the control sequence \(\boldsymbol{u}\) to enter the violation set at time \(t+\Delta t\) as \(p(x\in\mathbb{C}\mid x_{t},\boldsymbol{u})=\int_{x\in\mathbb{C}}\psi^{x}_{ \Delta t}(x\mid x_{t},\boldsymbol{u})\). In practice, it is hard to compute this integral efficiently, since our distribution \(\psi^{x}_{\Delta t}\) is nontrivial as a result of nonlinear propagation of uncertainty. Furthermore, the violation set \(\mathbb{C}\) might not have the structure necessary to allow an efficient solution to the integral, in which case one needs to resort to Monte Carlo estimation.
To simplify computation and gain speed, we consider box violation sets, i.e., each dimension of \(x\) is constrained to lie outside an interval \([a,b]\) with \(a,b\in\mathbb{R}\) and \(a<b\). By performing moment matching with a Gaussian in each time slice \(\psi^{x}_{\Delta t}\), the probability of ending up in state \(x\) at time step \(t+\Delta t\) is given by integrating \(\mathcal{N}(x;\mu_{t+\Delta t},\Sigma_{t+\Delta t})\), where \(\mu\) and \(\Sigma\) are estimated by Monte Carlo sampling. If we further assume a diagonal covariance \(\Sigma\), this integral decomposes into \(d\) univariate Gaussian integrals, which can be computed quickly and in closed form (error function). Hence, the probability of a constraint violation happening at time step \(t\) is defined by:
\[p(x\in\mathbb{C}\mid x_{t},\boldsymbol{u})=\prod_{i=0}^{d}\int_{x\in\mathbb{C }}\mathcal{N}(x^{i};\mu^{i}_{t+\Delta t},\sigma^{i}_{t+\Delta t}) \tag{7}\]
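Under the moment-matched diagonal Gaussian, the per-dimension integrals can be written with the standard normal CDF (error function). The sketch below assumes the violation set is an axis-aligned box \(\prod_{i}[a_{i},b_{i}]\); the bounds and moments are toy values, not values from the paper.

```python
import numpy as np
from math import erf, sqrt

def normal_cdf(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

def box_violation_probability(mu, sigma, low, high):
    """Eq. (7): probability that the moment-matched Gaussian state at t + Delta_t
    lies inside the box violation set C = prod_i [low_i, high_i]."""
    p = 1.0
    for m, s, a, b in zip(mu, sigma, low, high):
        p *= normal_cdf((b - m) / s) - normal_cdf((a - m) / s)
    return p

# Toy moments for a 2D state, estimated from the particles by moment matching.
mu, sigma = np.array([0.2, 1.0]), np.array([0.3, 0.5])
low, high = np.array([-0.5, 0.8]), np.array([0.5, 1.5])
print(box_violation_probability(mu, sigma, low, high))
```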
### Implementing Risk-Averse Zero-Order Trajectory Optimization (RAZER)
We assume the task definition is provided by the cost \(c(x_{t},\boldsymbol{u})\). For trajectory optimization, we start from a state \(x_{t}\) and predict with an action sequence \(\boldsymbol{u}\) the future development of the trajectory \(\tau\). Along this trajectory, we want to compute a single cost term which is conveniently defined as the expected cost of all particles \(\tilde{x}\) summed over the planning horizon \(H\):
\[c(x_{t},\boldsymbol{u})=\sum_{\Delta t=1}^{H}\frac{1}{B}\sum_{b=1}^{B}c( \tilde{x}^{b}_{t+\Delta t},u_{t+\Delta t}). \tag{8}\]
The optimizer, in our case CEM, will optimize the action sequence \(\boldsymbol{u}\) to minimize the cost in a probabilistic sense, i.e. \(p(\boldsymbol{u}\mid x)\propto\exp(-\beta\,c(x,\boldsymbol{u}))\) where \(\beta\) reflects the strength of the optimizer (the higher the more likely it finds the global optimum). To make the planner uncertainty-aware, we need to make sure it avoids unpredictable parts of the state space by making them less likely. Using the aleatoric uncertainty provided by PETSUS Eq. 4, we define the aleatoric penalty as
\[c_{\mathfrak{A}}(x_{t},\boldsymbol{u})=w_{\mathfrak{A}}\cdot\sum_{\Delta t=1} ^{H}\sqrt{\mathrm{Var}^{\mathfrak{A}}_{t+\Delta t}}, \tag{9}\]
where \(w_{\mathfrak{A}}>0\) is a weighting constant. The larger the aleatoric uncertainty, the higher the cost.
To guide the exploration to states where the model has epistemic uncertainty Eq. 6 (due to lack of data), we use an epistemic bonus:
\[c_{\mathfrak{E}}(x_{t},\mathbf{u})=-w_{\mathfrak{E}}\cdot\sum_{\Delta t=1}^{H}\sqrt{ \operatorname{Var}_{t+\Delta t}^{\mathfrak{E}}}, \tag{10}\]
where \(w_{\mathfrak{E}}>0\) is a weighting constant. To be able to operate on a real system, the most important part is to adhere to safety constraints. As formulated in Eq. 7, the predicted safety violations need to be uncertainty aware, independent of the source of uncertainty. We integrate this into the planning method by adding:
\[c_{\mathfrak{S}}(x_{t},\mathbf{u})=w_{\mathfrak{S}}\cdot\sum_{\Delta t=1}^{H} \big{[}p(\hat{x}_{t+\Delta t}\in\mathbb{C})>\delta\big{]} \tag{11}\]
where \(\big{[}\!\cdot\!\big{]}\) is the Iverson bracket and \(w_{\mathfrak{S}}\) is either a large penalty \(c_{\max}\) or 0 to disable safety. An alternative way of implementing safety constraints in CEM is to change the ranking function (Wen and Topcu, 2018). The overall algorithm, used in a model-predictive control fashion, is outlined in Suppl. B.
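Putting Eqs. (8)-(11) together, the planner scores an action sequence with a single scalar. The following sketch shows one plausible way to combine the terms; the aggregation of per-dimension variances into a scalar per time step and all weight values are assumptions for illustration only.

```python
import numpy as np

def razer_cost(task_costs, alea_vars, epi_vars, p_violation,
               w_alea=1.0, w_epi=0.1, delta=0.05, c_max=1e4):
    """Total planning cost: expected task cost over particles (Eq. 8) plus the
    aleatoric penalty (Eq. 9), epistemic bonus (Eq. 10) and safety penalty (Eq. 11).
    task_costs:  (H, B) per-particle stage costs over the horizon
    alea_vars:   (H,)  aleatoric variance per time slice, aggregated over dimensions
    epi_vars:    (H,)  epistemic variance per time slice, aggregated over dimensions
    p_violation: (H,)  per-step violation probabilities"""
    expected_cost = task_costs.mean(axis=1).sum()          # Eq. (8)
    c_alea = w_alea * np.sqrt(alea_vars).sum()             # Eq. (9)
    c_epi = -w_epi * np.sqrt(epi_vars).sum()               # Eq. (10)
    c_safe = c_max * np.sum(p_violation > delta)           # Eq. (11), Iverson bracket
    return expected_cost + c_alea + c_epi + c_safe

# Toy horizon of H=30 steps with B=20 particles.
H, B = 30, 20
print(razer_cost(np.random.rand(H, B), np.full(H, 0.01),
                 np.full(H, 0.02), np.full(H, 0.01)))
```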
## 4 Experiments
We study our uncertainty-aware planner in \(4\) continuous state and action space environments and compare it to naively optimizing the particle-based estimate of the expected cost, similarly to Chua et al. (2018). We start by giving a description of the environments.
**BridgeMaze** This toy environment (see Fig. 1(c)) was specifically designed to study the different aspects of uncertainty independently. The agent (blue cube) starts on the left platform and has to reach the goal platform on the right. To reach the goal platform, the agent has to move over one of three bridges without falling into the lava. The upper bridge is safeguarded by walls; hence, it is the safest path to the goal but also the longest. The lower bridge has no walls and is therefore more dangerous for an unskilled agent to cross, but the path is shorter. The middle bridge is the shortest path to the goal. However, randomly appearing strong winds perpendicular to the bridge might cause the agent to fall off the bridge with some probability, making this bridge dangerous.
**Noisy-HalfCheetah** This environment is based on _HalfCheetah-v3_ from the OpenAI Gym toolkit. We introduce aleatoric uncertainty to the system by adding Gaussian noise \(\xi\sim\mathcal{N}(\mu,\sigma^{2})\) to the actions when the forward velocity is above \(6\). The action noise translates into a non-Gaussian and potentially very complicated state space noise distribution that makes the control problem very challenging.
**Noisy-FetchPickAndPlace** Based on the _FetchPickAndPlace-v1_ gym environment. Additive action noise is applied to the gripper so that its grip on the box might become tighter or looser. The noise is applied for \(x\)-positions \(<0.8\), which is illustrated in Fig. 1(a) by a blue line, causing the agent to drop the box with high probability if it tries to lift the box too early.
**Solo8-LeanOverObject** In this robotic environment, the task of a quadrupedal robot (Grimminger et al., 2020) is to stand up and lean forward to reach a target position (purple markers need to reach green dots in Fig. 1(b)) without hitting an object visualized by the red cube representing the unsafe zone. The robot starts in a lying position as shown in the inset of Fig. 1(b). As in the _Noisy-HalfCheetah_ environment, Gaussian action noise is applied to mimic real-world perturbations.
### Algorithmic Choices and Training Details
For model-predictive planning we use the CEM implementation from Pinneri et al. (2020). Further details about hyperparameters can be found in Suppl. A.2. For planning, we use the same architecture for the ensemble of probabilistic models, both in RAZER and in PETS. The only difference is that in RAZER we also forward propagate the mean state predictions in addition to the sampled state predictions. Further details can be found in Suppl. A.1.
For training the predictive model, we alternate between two phases: data collection and model fitting. In the _BridgeMaze_ environment, we collect \(5\) rollouts of length \(80\) steps and append them to the previous rollouts. Afterwards, we fit the model for \(25\) epochs. For _Noisy-HalfCheetah_, we collect \(1\) rollout and fit for \(50\) epochs. For Noisy-FetchPickAndPlace and Solo8-LeanOverObject we replace the \(\hat{f}\) in Fig. 2 with independent instances of noisy ground truth simulators.
Next, we will present RAZER's exploration and safety behavior in the _BridgeMaze_ environment. Afterwards, we are going to discuss planning with external safety constraints in the _Solo8-LeanOverObject_ environment. We complete this section with results on _Noisy-HalfCheetah_ and _Noisy-FetchPickAndPlace_.
### Active Learning for Model Improvement
If model uncertainties are used for risk-averse planning, they are only meaningful if the model has the right training data. Only from good data can the parameters of the approximate noise model be learned correctly. In case of too little data, the agent might avoid parts of the state space due to an overestimation of the model uncertainties. On the other hand, the agent might enter unsafe regions for which the uncertainties are underestimated. By adding the epistemic bonus to our domain-specific cost, the planner can actively seek states with high epistemic uncertainty, i.e. for which no or only little training data exists.
Figure 3(a) shows this active data gathering process for the _BridgeMaze_ environment. PETS finds one particular solution to the problem of reaching the goal platform. It chooses the path over the safer, lower bridge rather than the dangerous middle path or the longer path via the upper bridge (Fig. 3(b)). Once one solution is found, the model overfits to it without exploring any other parts of the state space. This is also reflected in the plateauing of the red curve in Fig. 3(a).
In comparison, RAZER actively explores larger and larger parts of the state space with an increasing weight of the epistemic bonus (Fig. 3(a)). RAZER not only finds the easy solution found by PETS but also extensively explores other parts of the state space (Fig. 3(c)). To avoid getting stuck at the middle bridge during exploration due to the inherent noise, it is important to separate epistemic from aleatoric uncertainties. Only the former should be used for exploration. With enough data, our model can correctly capture the uncertainties of these states, resulting in the epistemic uncertainty approaching zero.
Figure 4: Risk-averse planning in the face of aleatoric uncertainty yields higher success rates in noisy environments. For (b) we use ground truth models and a fixed aleatoric penalty weight \(w_{\mathfrak{A}}\).
Figure 3: Active learning setting: The epistemic bonus allows RAZER to seek states for which no or only little training data exists (a,c). Means and standard deviations for (a) were computed over 5 runs. PETS overfits to a particular solution (b). In (b) and (c), the brightness of the dots is proportional to the time when they were first encountered.
### Risk-Averse Planning
Once a good model is learned, it can be used for safe planning. What differentiates RAZER from PETS is that it makes explicit use of uncertainty estimates, while in the latter uncertainties enter planning only through the mean over particle costs, without differentiating between the different sources of uncertainty.
BridgeMaze. Figure 4(a) shows the success rate of PETS and RAZER in the _BridgeMaze_. In both cases, we use the same model that was trained from data collected during a training run with \(w_{\mathfrak{E}}=0.05\). Hence, the model saw enough training data from all parts of the state space. The noise in the environment is tuned such that there is a chance to cross the bridge without falling. While PETS avoided this path in Fig. 3(b) because of an overestimation of the state's value due to a lack of training data, it now sometimes sees a chance to cross the bridge. However, these attempts are very likely to fail because of stronger winds that occur randomly, resulting in a success rate of only \(58\%\). RAZER does not rely on sampling for the aleatoric part and can thus avoid risk. With a higher penalty constant the success rate increases up to \(96\%\), but only as long as the agent is willing to take a risk at all. For large values of \(w_{\mathfrak{A}}\) the agent becomes so conservative that it only moves slowly (decreasing reward in Fig. 4(a)).
Noisy-HalfCheetah. How does RAZER perform on the _Noisy-HalfCheetah_ environment when models are learned from scratch? Without the aleatoric penalty, the planner is optimistic. Risky situations are only detected if a failing particle is sampled. Thus, the noise is mostly neglected and the robot increases its velocity, gets destabilized, and ends up slower than with the aleatoric penalty (Fig. 5(a)).
Noisy-FetchPickAndPlace. In this environment, a 7-DoF robot arm should bring the box to a target position; the starting and target positions are at opposite sides of the table. The shortest path is to lift the box and move in a straight line to the target. However, with noise applied to the gripper action, there is a certain probability of dropping the box along the way. When penalizing aleatoric uncertainty, this is avoided and also fewer trajectory samples are "wasted" in high-entropy regions, as presented in Fig. 1(a). Figure 4(b) shows the number of times the box is dropped on the table depending on the aleatoric penalty. RAZER adopts a cautious behavior, preferring to slide the box on the table and lifting it only in the area without action noise, achieving a dropping rate lower than 20%, even when considerable noise is applied.
### Planning with External Safety Constraints
Noisy-HalfCheetah. We consider a safety constraint on the height of the body above the ground, simulating a narrow passage. Figure 5(b) shows the number of safety violations. Note that PETS has the same penalty cost for hard violations.
Solo8-LeanOverObject. In this experiment, the robot has to move the front and rear of its trunk to two target points while avoiding a specified rectangular area (fragile object). The front feet are fixed. To track the points, the robot has to lean forward, such that it can lose balance due
Figure 5: _Noisy-HalfCheetah_ environment (task length 300 steps) with models learned from scratch. At 150 iterations we have seen only 45k points. (a) Performance under noisy actions. By applying the aleatoric penalty, RAZER can navigate the uncertainties better, leading to higher returns faster. (b) Safety violations above a certain body height (simulating a low ceiling) for different values of \(\delta\). With increasing \(\delta\), RAZER seldom violates constraints, in stark contrast to PETS. In (c) the number of violations is averaged over the last 50 iterations (summed over 10 rollouts).
to noisy actions. In contrast to PETS, RAZER successfully manages to satisfy the safety constraints almost always as shown in Fig. 6. However, satisfying the safety constraint comes with the cost of reduced tracking accuracy.
## 5 Conclusion
In this work, we have provided a methodology to separate uncertainties in stochastic ensemble models (PETSUS) which can be used as a tool to build risk-averse model-based planners that are also data-efficient and enforce safety through probabilistic safety constraints (RAZER). This type of risk-averseness can be achieved by a simple modification of the cost function in form of uncertainty penalties in zero-order trajectory optimizers.
Furthermore, the separation of uncertainties allows us to do proper exploration via epistemic bonus which benefits generalization of the model and therefore makes it applicable to more settings. As future work, it would be of interest to see this approach applied to a proper transfer learning setting from simulations to real systems, where risk-averseness combined with exploratory behavior is crucial for efficient learning and safe operation.
## 6 Acknowledgments
The authors thank the International Max Planck Research School for Intelligent Systems (IMPRS-IS) for supporting Marin Vlastelica and Sebastian Blaes. We acknowledge the support from the German Federal Ministry of Education and Research (BMBF) through the Tubingen AI Center (FKZ: 01IS18039B). Georg Martius is a member of the Machine Learning Cluster of Excellence, EXC number 2064/1 - Project number 390727645.
|
2309.10831 | Actively Learning Reinforcement Learning: A Stochastic Optimal Control
Approach | In this paper we propose a framework towards achieving two intertwined
objectives: (i) equipping reinforcement learning with active exploration and
deliberate information gathering, such that it regulates state and parameter
uncertainties resulting from modeling mismatches and noisy sensory; and (ii)
overcoming the computational intractability of stochastic optimal control. We
approach both objectives by using reinforcement learning to compute the
stochastic optimal control law. On one hand, we avoid the curse of
dimensionality prohibiting the direct solution of the stochastic dynamic
programming equation. On the other hand, the resulting stochastic optimal
control reinforcement learning agent admits caution and probing, that is,
optimal online exploration and exploitation. Unlike fixed exploration and
exploitation balance, caution and probing are employed automatically by the
controller in real-time, even after the learning process is terminated. We
conclude the paper with a numerical simulation, illustrating how a Linear
Quadratic Regulator with the certainty equivalence assumption may lead to poor
performance and filter divergence, while our proposed approach is stabilizing,
of an acceptable performance, and computationally convenient. | Mohammad S. Ramadan, Mahmoud A. Hayajnh, Michael T. Tolley, Kyriakos G. Vamvoudakis | 2023-09-18T18:05:35Z | http://arxiv.org/abs/2309.10831v4 | # Actively Learning Reinforcement Learning: A Stochastic Optimal Control Approach
###### Abstract
In this paper we provide a framework to cope with two problems: (i) the fragility of reinforcement learning due to modeling uncertainties because of the mismatch between controlled laboratory/simulation and real-world conditions, and (ii) the prohibitive computational cost of stochastic optimal control. We approach both problems by using reinforcement learning to solve the stochastic dynamic programming equation. The resulting reinforcement learning controller is safe with respect to several types of constraints and it can actively learn about the modeling uncertainties. Unlike exploration and exploitation, probing and safety are employed automatically by the controller itself, resulting in real-time learning. A simulation example demonstrates the efficacy of the proposed approach.
## I Introduction
The field of reinforcement learning (RL) has shown significant promise across various disciplines ranging from gaming to robotics [28]. At its core, RL tries to solve an optimal control problem through maximizing some notion of a cumulative reward. Yet, with this promise of RL algorithms, a tough challenge arises when applying learned policies to real-world applications [13]. These policies, trained in lab simulations or controlled environments, may suffer a degradation in performance or even exhibit unsafe behavior [24]. This is due to modeling mismatches and discrepancies between the training environment and real-world conditions.
On the other hand, the field of stochastic optimal control [9, ch. 25], also known as dual control [30], highlights two important consequences of applying such control: caution and probing [18]. Caution can be seen as the control actions that prevent undesirable outcomes when the system is under uncertainty. Probing, on the other hand, is the actions that result in gathering information about the system's uncertain parameters and/or states. Both these concepts play a central role in ensuring safety while enhancing the system's learning capabilities and state observability. However, a control with such benefits is not easy to achieve, and in general, stochastic optimal control is computationally prohibitive, except for the simplest cases. Relying on dynamic programming to solve such high dimensional problems is unreasonable [4], due to the curse of dimensionality. In general, previous research has focused on suboptimal approaches [19], which are mostly application specific. In this work however, we focus on establishing a more general, less suboptimal framework.
The challenges faced by both RL and stochastic optimal control hinder their potential. Here we consider if each can serve as a solution to the other's challenge. First, the modeling mismatch problems inherent to RL can potentially be mitigated by the caution and probing effects of stochastic optimal control. Caution imposes restrictions on the RL agent behavior under uncertainty and modelling mismatch, acting as a safeguard against false perception. Moreover, the adaptability and continuous learning of an RL agent, to correct the modelling uncertainty, can be achieved by the means of probing. Second, RL, possibly together with neural nets [20] as function approximators, can be used to mitigate the computational burden of stochastic optimal control. These hypothesized mutual benefits serve as the motivation for the work we present here.
Early work in the RL community recognized the need for stochastic policies when the agent has restricted or distorted access to the states [15]. The randomness introduced by stochastic policies diversifies chosen actions and hence achieves "artificial probing," in a sense analogous to persistence of excitation in control theory and system identification [22]. This random way of achieving learning, in addition to not being necessarily effective in general, can render systems unsafe and even unstable. In this paper, we employ a learning architecture that addresses the fragility of RL due to uncertainties in modeling and tries to mitigate the computational challenges associated with stochastic optimal control. Therefore, we seek a controller which, unlike those that employ stochastic policies, seeks effective probing when learning is needed, and does so cautiously: respecting safety and performance conditions. We use an extended Kalman filter (EKF) to track approximates of system's uncertainties, and we build RL on the resulting uncertainty propagation dynamics of the filter. We use the deep deterministic policy gradient approach to RL [20], which utilizes the policy gradient theorems [29, 17]. We then deploy our approach on a problem carefully designed to reflect the nuances of stochastic control to test the effectiveness of our RL approach under uncertainties.
## II Problem Formulation
Consider a class of discrete-time nonlinear systems described \(\forall k\in\mathbb{N}\) by,
\[x_{k+1} =f(x_{k},u_{k})+w_{k}, \tag{1a}\] \[y_{k} =g(x_{k})+v_{k}, \tag{1b}\]
where \(x_{k}\in\mathbb{R}^{r_{x}}\) is the state, \(u_{k}\in\mathbb{R}^{r_{u}}\) is the control input, \(y_{k}\in\mathbb{R}^{r_{y}}\) is the output signal, and \(w_{k}\in\mathbb{R}^{r_{x}}\), \(v_{k}\in\mathbb{R}^{r_{y}}\) are exogenous disturbances. The stochastic processes, \(\{v_{k}\}\) and \(\{w_{k}\}\), are assumed to be independent and identically distributed (i.i.d.) with continuously differentiable densities of zero means and covariances \(\Sigma_{w}\) and \(\Sigma_{v}\), respectively. These sequences are independent from each other and from \(x_{0}\), the initial state, which has a continuously differentiable density \(\pi_{0|0}\). The functions \(f(\cdot,\cdot)\) and \(g(\cdot)\) are well-defined and continuously differentiable with respect to their arguments.
The primary goal of this work is to construct a causal control law, i.e., a control law that is only dependent upon the data accessible up until the moment of evaluating the control action. That is, \(u_{k}=u_{k}(\mathcal{Z}_{k})\), where \(\mathcal{Z}_{k}=\{y_{0},\ldots,y_{k},u_{0},\ldots,u_{k-1},\pi_{0|0}\}\). This law has to minimize the cost functional
\[J=\mathbb{E}\,\left\{\sum_{k=0}^{N}\gamma^{k}\ell_{k}(x_{k},u_{k})\right\}, \tag{2}\]
while satisfying the input constraints \(u_{k}\in\mathbb{U}\), and with some acceptable chance probability, \(1-\epsilon\), satisfying the state constraints \(x_{k}\in\mathbb{X}\), with \(\epsilon\in[0,1)\). The discount factor \(\gamma\in(0,1)\), and the stage costs, \(\ell_{k}:\mathbb{R}^{r_{x}}\times\mathbb{R}^{r_{u}}\to\mathbb{R}\), are continuously differentiable in their second arguments. The expectation and constraint satisfaction probabilities are taken with respect to all the random variables, i.e., \(x_{0}\), \(w_{k}\), and \(v_{k}\), for all \(k\).
We make no distinction between states and parameters. Since the formulation is nonlinear, if the original system has unknown or time-varying parameters, we use the concept of state augmentation [16, p. 281]. That is, the parameters are included in the state vector and the resulting augmented system is again described by (1), which then can be used for simultaneous parameters learning and control [5].
## III Background: stochastic optimal control
The vector \(x_{k}\) in (1a) retains its Markovian property due to the whiteness of \(\{w_{k}\}_{k}\). Moreover, the observation \(y_{k}\) is conditionally independent when conditioned on \(x_{k}\); \(\{v_{k}\}_{k}\) in (1) is also white. These assumptions are typical in partially observable Markov decision processes [6]. Under these conditions, the state \(x_{k}\) cannot be directly accessed; it can only be inferred through the observation \(y_{k}\), which typically is not equal to \(x_{k}\). The vector \(x_{k}\) is only a state in the Markovian sense, that is
\[p(x_{k+1}\mid x_{k},x_{k-1},\ldots,x_{0},u_{k},\ldots,u_{0})=p(x_{k+1}\mid x_{ k},u_{k}).\]
For a decision maker or a control designer (or the learner as in [15]), an alternative "state" is required: from a practical standpoint, the minimal accessible piece of information adequate to reason about the system's future safety and performance. This discussion gives rise to the concept of the _information state_, which, at time-\(k\), is the state filtered density function \(\pi_{k|k}=p(x_{k}\mid\mathcal{Z}_{k})\)[18]. The information state is an "informative statistic," a term used by [27] to roughly mean a statistic that is sufficiently informative to enable a desired control objective. However, the information state is typically infinite dimensional, which renders it computationally infeasible to work with. In the next subsections, we shall explain the sources of this infeasibility and provide a framework to alleviate them.
### _Separation_
Adopting the information state \(\pi_{k|k}\), a causal controller has the form \(u_{k}=u_{k}(\pi_{k|k})\). This formulation of the control law allows the interpretation of stochastic optimal control as comprising two distinct steps [10, ch. 25]:
* Tracking \(\pi_{k|k}\), that is, a Bayesian filter that propagates the information state [18].
* A law that assigns a value \(u_{k}\) to each information state provided by the filter, such that this law minimizes (2).
In the linear Gaussian state-space model case, the information state takes an equivalent finite dimensional characterization: the state conditional mean and covariance. If the system is unconstrained and \(\ell_{k}\)s are quadratic, the optimal control is indifferent to the state covariance and is only a function of the mean. This explains the separation principle in LQG control design [3, ch. 8]. This separation principle differs from that in the realm of stochastic optimal control. The latter denotes the two-step interpretation listed above.
As pointed out by [30], tracking the information state \(\pi_{k|k}\) does not solve the problem; a convenient approximation to the Bayesian filter is typically less cumbersome than finding the stochastic optimal control. The next subsection is a brief introduction to the Bayesian filter, which is then used to construct the dynamic programming equation for the stochastic case.
### _Bayesian Filter_
The system (1) can be equivalently described by, \(\forall k\in\mathbb{N}\),
\[x_{k+1} \sim p(x_{k+1}\mid x_{k},u_{k}),\] \[y_{k} \sim p(y_{k}\mid x_{k}),\]
similar to [25] due to the whiteness of \(w_{k}\) and \(v_{k}\). The notation \(p(\cdot)\) denotes different density functions and will be identified by its arguments.
The information state can be propagated, at least in principle, through the Bayesian filter, which consists of the following two steps:
1. Time Update: \[p(x_{k+1}\mid u_{k},\mathcal{Z}_{k})=\\ \int p(x_{k+1}\mid u_{k},x_{k})p(x_{k}\mid\mathcal{Z}_{k})\, \text{d}x_{k}.\] (3)
2. Measurement Update: \[p(x_{k+1}\mid\mathcal{Z}_{k+1})=\\ \frac{p(y_{k+1}\mid x_{k+1})p(x_{k+1}\mid u_{k},\mathcal{Z}_{k})}{ \int p(y_{k+1}\mid x_{k+1})p(x_{k+1}\mid u_{k},\mathcal{Z}_{k})\,\text{d}x_{k+ 1}}.\] (4)
Notice that to move from the filtered density at time-\(k\) to \(k+1\), the values of \(u_{k}\) and \(y_{k+1}\) are used. In order to simplify the notation, \(\pi_{k|k}=p(x_{k}\mid\mathcal{Z}_{k})\) and \(\pi_{k+1|k}=p(x_{k+1}\mid u_{k},\mathcal{Z}_{k})\), define the mapping
\[\pi_{k+1|k+1}=T(\pi_{k|k},u_{k},y_{k+1}), \tag{5}\]
where \(T\) maps \(\pi_{k|k}\) to \(\pi_{k+1|k}\) using \(u_{k}\) in (3), then to \(\pi_{k+1|k+1}\) using \(y_{k+1}\) in (4). In the next section we seek to approximate the above two steps by the EKF, which propagates approximations of the first two moments of \(\pi_{k|k}\)[2].
### _Stochastic Dynamic Programming_
A causal control law uses only the available information up to the moment of evaluating this law. In accordance with the principle of optimality, when making the final control decision, denoted as \(u_{N-1}\), and given the information available at that time, \(\mathcal{Z}_{N-1}\), the optimal cost can be determined as follows
\[\min_{u_{N-1}\in\bar{\mathbb{U}}(\pi_{N-1|N-1})}\mathbb{E}\left\{\ell_{N-1}(x_{N-1},u_{N-1})+\right.\\ \left.\gamma\ell_{N}(x_{N})\mid u_{N-1},\mathcal{Z}_{N-1}\right\},\]
where the expectation is with respect to \(w_{N-1},\,x_{N-1}\) and \(v_{N}\). The set \(\mathbb{\bar{U}}(\pi_{N-1|N-1})\) contains the control inputs in \(\mathbb{U}\) such that they result in \(\mathbb{P}(x_{N}\in\mathbb{X})\geq 1-\epsilon\). Notice that the optimal cost above is solely a function of the information state \(\pi_{N-1|N-1}\); the random vectors \(v_{N}\) and \(w_{N-1}\) have known densities and are marginalized over via the expectation, and \(u_{N-1}\) is the decision variable of the minimization. Writing explicitly the optimal cost as a function of the information state
\[V_{N-1}(\pi_{N-1|N-1})=\\ \min_{u_{N-1}}\mathbb{E}\,\left\{\ell_{N-1}(x_{N-1},u_{N-1})+ \gamma\ell_{N}(x_{N})\mid u_{N-1},\mathcal{Z}_{N-1}\right\}.\]
Define \(V_{N}(\pi_{N|N})=\mathbb{E}\,\left\{\ell_{N}(x_{N})\mid\mathcal{Z}_{N}\right\}\), then
\[V_{N-1}(\pi_{N-1|N-1})=\min_{u_{N-1}}\mathbb{E}\left\{\ell_{N-1} (x_{N-1},u_{N-1})+\right.\\ \left.\gamma V_{N}(T(\pi_{N-1|N-1},u_{N-1},y_{N}))\right\}. \tag{6}\]
A complete version of this derivation is outlined in [18]. This recursion holds for \(k=0,1,\ldots,N-1\), going backwards from the terminal boundary condition \(V_{N}(\pi_{N|N})\). This is the _stochastic dynamic programming equation_, through which, a value \(u_{k}\) is assigned to each information state \(\pi_{k|k}\).
Except for the simplest cases, solving the stochastic dynamic programming equation is computationally prohibitive, primarily because of the infinite dimensionality of the information state. In the next section, we approximate the Bayesian filter by the EKF. This is primarily due to the finite dimensional representation of the information state it offers, which also tends to be practically convenient for variety of applications. Although this approximation is of a vastly reduced dimensionality, it is still of a relatively high dimensionality for problems of interest. We alleviate the latter burden by the implementation of RL.
## IV Methodology
In this section we outline the EKF algorithm, and its "wide sense" (mean and covariance) approximation of the information state. We then adapt the cost in (2) to the new approximate wide-sense information state. A few, mainly cosmetic, changes to this adapted cost are implemented to make it align with the assumptions/notation of the RL algorithm which will be outlined subsequently.
### _Ekf_
We replace the infinite dimensional information state \(\pi_{k|k}\) by a finite dimensional approximate one, namely, the state conditional mean vector \(\hat{x}_{k|k}\) and covariance matrix \(\Sigma_{k|k}\).
Let
\[\hat{x}_{k|k} =\mathbb{E}\,\left\{x_{k}\mid\mathcal{Z}_{k}\right\},\quad\hat{x }_{k|k-1}=\mathbb{E}\,\left\{x_{k}\mid u_{k-1},\mathcal{Z}_{k-1}\right\},\] \[\Sigma_{k|k} =\mathbb{E}\,\left\{(x_{k}-\hat{x}_{k|k})(x_{k}-\hat{x}_{k|k})^ {\top}\mid\mathcal{Z}_{k}\right\},\] \[\Sigma_{k|k-1} =\mathbb{E}\,\left\{(x_{k}-\hat{x}_{k|k-1})(x_{k}-\hat{x}_{k|k-1} )^{\top}\mid u_{k-1},\mathcal{Z}_{k-1}\right\}.\]
The EKF, similarly to the Bayesian filter, consists of the following two major steps:
1. Measurement-update \[\hat{x}_{k|k} =\hat{x}_{k|k-1}+L_{k}\left(y_{k}-g(\hat{x}_{k|k-1})\right),\] \[\Sigma_{k|k} =\Sigma_{k|k-1}-L_{k}H_{k}\Sigma_{k|k-1}.\]
2. Time-update \[\hat{x}_{k+1|k} =f(\hat{x}_{k|k},u_{k}),\] \[\Sigma_{k+1|k} =F_{k}\Sigma_{k|k}F_{k}^{\top}+\Sigma_{w}.\]
which can be combined to write,
\[\hat{x}_{k+1|k+1} =f(\hat{x}_{k|k},u_{k})+L_{k+1}\left(y_{k+1}-g(\hat{x}_{k+1|k}) \right),\] \[\Sigma_{k+1|k+1} =(I-L_{k+1}H_{k+1})\left(F_{k}\Sigma_{k|k}F_{k}^{\top}+\Sigma_{w} \right),\]
where
\[L_{k}=\Sigma_{k|k-1}H_{k}^{\top}\left(H_{k}\Sigma_{k|k-1}H_{k}^{\top}+\Sigma_{v}\right)^{-1},\qquad F_{k}=\left.\frac{\partial f(x,u_{k})}{\partial x}\right|_{x=\hat{x}_{k|k}},\quad H_{k}=\left.\frac{\partial g(x)}{\partial x}\right|_{x=\hat{x}_{k|k}},\]
and \(\hat{x}_{0|0}\), \(\Sigma_{0|0}\) are the initial state \(x_{0}\) mean and covariance.
In general, the above conditional means and covariances are not exact; the state conditional densities \(\pi_{k|k}\) are non-Gaussian due to the nonlinearities. Hence, \(\hat{x}_{k|k}\) and \(\Sigma_{k|k}\) are merely approximations to the conditional mean and covariance of \(x_{k}\)[2]. Define
\[\hat{\pi}_{k+1}=\hat{T}(\hat{\pi}_{k},u_{k},y_{k+1}), \tag{7}\]
where \(\hat{\pi}_{k}=\{\hat{x}_{k|k},\Sigma_{k|k}\}\). Here \(\hat{T}\) is a surrogate approximate mapping to \(T\) in (5). The mapping \(\hat{T}\) applies the above steps of the EKF to \(\hat{x}_{k|k},\Sigma_{k|k}\), using \(u_{k}\) and \(y_{k+1}\), and generates \(\hat{x}_{k+1|k+1}\) and \(\Sigma_{k+1|k+1}\).
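The following NumPy sketch shows one application of \(\hat{T}\), i.e., the combined time and measurement update above. The function signatures, the evaluation point of the measurement Jacobian (here the predicted state), and the scalar toy usage are illustrative assumptions rather than the authors' implementation.

```python
import numpy as np

def ekf_step(x_hat, Sigma, u, y_next, f, g, F_jac, H_jac, Sigma_w, Sigma_v):
    """One application of T_hat in Eq. (7): propagate {x_hat_{k|k}, Sigma_{k|k}}
    with u_k, then correct with the new measurement y_{k+1}."""
    # Time update
    F = F_jac(x_hat, u)
    x_pred = f(x_hat, u)
    Sigma_pred = F @ Sigma @ F.T + Sigma_w
    # Measurement update
    H = H_jac(x_pred)
    L = Sigma_pred @ H.T @ np.linalg.inv(H @ Sigma_pred @ H.T + Sigma_v)
    x_new = x_pred + L @ (y_next - g(x_pred))
    Sigma_new = (np.eye(len(x_hat)) - L @ H) @ Sigma_pred
    return x_new, Sigma_new

# Toy usage on a scalar integrator with a linear measurement.
f = lambda x, u: x + u
g = lambda x: x
F_jac = lambda x, u: np.array([[1.0]])
H_jac = lambda x: np.array([[1.0]])
x_hat, Sigma = np.zeros(1), np.eye(1)
x_hat, Sigma = ekf_step(x_hat, Sigma, np.array([0.1]), np.array([0.3]),
                        f, g, F_jac, H_jac, 0.1 * np.eye(1), 0.5 * np.eye(1))
print(x_hat, Sigma)
```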
### _Cost_
The first two moments provided by the EKF are sufficient to evaluate the expectation of the finite-horizon cost function in (2), given that the \(\ell_{k}\)s are quadratic. If not, the \(\ell_{k}\)s can be replaced by their second order Taylor expansion. That is, if \(\ell_{k}(x_{k},u_{k})=x_{k}^{\top}Q_{k}x_{k}+u_{k}^{\top}R_{k}u_{k}\) for \(k=0,\ldots,N\), the cost (2)
\[J =\mathbb{E}\left\{\,\sum_{k=0}^{N}x_{k}^{\top}Q_{k}x_{k}+u_{k}^{ \top}R_{k}u_{k}\right\}\!,\] \[=\mathbb{E}\left\{\,\sum_{k=0}^{N}\big{(}\hat{x}_{k|k}+\tilde{x} _{k}\big{)}^{\top}Q_{k}\left(\hat{x}_{k|k}+\tilde{x}_{k}\right)+u_{k}^{\top}R _{k}u_{k}\right\}\!,\]
where \(\tilde{x}_{k}=x_{k}-\hat{x}_{k|k}\), and hence, cross-terms vanish under expectation, so we have
\[J=\mathbb{E}\left\{\,\sum_{k=0}^{N}\hat{x}_{k|k}^{\top}Q_{k}\hat{x}_{k|k}\!+ \!\text{trace}(Q_{k}\Sigma_{k|k})\!+\!u_{k}^{\top}R_{k}u_{k}\right\}\!, \tag{8}\]
since \(\mathbb{E}\left\{\tilde{x}_{k}^{\top}Q_{k}\tilde{x}_{k}\right\}=\text{trace}( Q_{k}\Sigma_{k|k})\).
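As a small illustration of Eq. (8), one stage term can be evaluated directly from the wide-sense information state. The helper below is a sketch with assumed variable names and toy values, not code from the paper.

```python
import numpy as np

def expected_quadratic_stage_cost(x_hat, Sigma, u, Q, R):
    """One term of Eq. (8): E[x'Qx + u'Ru | Z_k] = x_hat'Q x_hat + tr(Q Sigma) + u'Ru."""
    return x_hat @ Q @ x_hat + np.trace(Q @ Sigma) + u @ R @ u

# Toy 2D state estimate with a scalar input.
print(expected_quadratic_stage_cost(np.array([1.0, 0.5]), 0.2 * np.eye(2),
                                    np.array([0.1]), np.eye(2), np.eye(1)))
```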
If the stage costs are not quadratic in \(x_{k}\), we use their second order Taylor expansion. For a fixed \(u_{k}\in\mathbb{U}\),
\[\ell_{k}(x_{k},u_{k})\approx\ell_{k}(\hat{x}_{k|k},u_{k})+G_{k}(u_{k})\tilde{ x}_{k}+\tilde{x}_{k}^{\top}C_{k}(u_{k})\tilde{x}_{k},\]
where
\[G_{k}(u_{k})=\left.\frac{\partial\ell_{k}(x,u_{k})}{\partial x}\right|_{x=\hat{x}_{k|k}},\quad C_{k}(u_{k})=\left.\frac{\partial^{2}\ell_{k}(x,u_{k})}{\partial x^{2}}\right|_{x=\hat{x}_{k|k}}.\]
Substituting back in (2) yields the approximate cost function
\[\hat{J}(\pi_{0|0})=\mathbb{E}\left\{\,\sum_{k=0}^{N}\ell_{k}\left(\hat{x}_{k| k},u_{k}\right)+\text{trace}\left(C_{k}(u_{k})\,\Sigma_{k|k}\right)\right\}\!. \tag{9}\]
The linear term of the expansion vanishes since it is linear in \(\tilde{x}_{k}\), which has zero mean.
### _Constraints for Safety_
Two types of constraints can be considered: probabilistic constraints of polyhedral sets and probabilistic constraints of sets described by grids. The latter can handle more complicated geometries. Each type is handled and implied separately from the other, and only separate, not joint, probabilistic guarantees are provided.
#### Iii-C1 Polyhedral Constraints
Given the following state constraint set
\[\mathbb{X}=\{x_{k}\in\mathbb{R}^{r_{x}}|Tx_{k}\leq\bar{x}\},\,\bar{x}\in \mathbb{R}^{t}, \tag{10}\]
where \(T\in\mathbb{R}^{t\times r_{x}}\) have full row rank. The cost function (2) is to be minimized subject to \(u_{k}\in\mathbb{U}\) for \(k=0,\ldots,N-1\), and to the probabilistic constraints
\[\mathbb{P}(x_{k}\in\mathbb{X}\mid\mathcal{Z}_{k})\geq 1-\epsilon, \tag{11}\]
for all \(k=1,\ldots,N\). The probability measure \(\mathbb{P}(\cdot\mid\mathcal{Z}_{k})\) corresponds to \(x_{k}\) distributed according to the density \(\pi_{k|k}=p(x_{k}\mid\mathcal{Z}_{k})\). The constant \(\epsilon\in[0,1)\) is the tolerance, or the acceptable constraint violation rate.
Using the approximation of the first two moments of the state, the probabilistic constraints in (11) can be shown to be implied by deterministic linear constraints. This replacement of the probabilistic constraints with tighter, deterministic ones forms the basis of tube-based stochastic Model Predictive Control methods [31, 8].
**Lemma 1**.: _(**Cantelli's inequality**) For a scalar random variable \(\gamma\) with mean \(\hat{\gamma}\) and variance \(\Gamma\),_
\[\mathbb{P}(\gamma-\hat{\gamma}\geq\eta)\leq\frac{\Gamma}{\Gamma+\eta^{2}},\, \eta\geq 0. \tag{12}\]
**Lemma 2**.: _For \(j=1,\ldots,t\), let \(T(j)\) be the \(j^{\text{th}}\) row of \(T\) and \(\bar{x}(j)\) be the \(j^{\text{th}}\) element of \(\bar{x}\). The probabilistic constraints_
\[\mathbb{P}\Big{(}T(j)x_{k}\leq\bar{x}(j)\mid\mathcal{Z}_{k}\Big{)}\geq 1-\frac{\epsilon}{t}, \tag{13}\]
_are implied by_
\[T(j)\hat{x}_{k|k}\leq\bar{x}(j)-\sqrt{\frac{t-\epsilon}{\epsilon}}\sqrt{T(j) \Sigma_{k|k}T(j)^{\top}}. \tag{14}\]
Proof.: This result is analogous to that in [11], only the probability measure is replaced by a conditional one.
**Lemma 3**.: _Let \((\Omega,\mathcal{B},\mathbb{P})\) be a probability space and \(E_{i}\in\mathcal{B}\) for \(i=1,\ldots,n\). If \(\mathbb{P}(E_{i})\geq 1-\epsilon/n\), for all \(i=1,\ldots,n\), then \(\mathbb{P}(\bigcap_{i=1}^{n}E_{i})\geq 1-\epsilon\). _
**Proposition 1**.: _The following deterministic polyhedral constraints_
\[T\hat{x}_{k|k}\leq\bar{x}-\sqrt{\frac{t-\epsilon}{\epsilon}}\sqrt{\mathrm{ diag}(T\Sigma_{k|k}T^{\top})}, \tag{15}\]
_imply the probabilistic constraint in (11), where \(t\) is the number of rows of \(T\), the square root is defined element-wise, and the function \(\text{diag}(\cdot)\) maps a matrix to its diagonal terms._
Proof.: Notice that the state constraint sets in (10) can be written as intersection of sets
\[\mathbb{X}=\{x_{k}\in\mathbb{R}^{r_{x}}|Tx_{k}\leq\bar{x}\}=\bigcap_{j=1}^{t}\{x_{k}\in\mathbb{R}^{r_{x}}|T(j)x_{k}\leq\bar{x}(j)\},\]
since all rows are to be enforced simultaneously [21]. By Lemma 3, \(\mathbb{P}(x_{k}\in\mathbb{X}_{k}\mid\mathcal{Z}_{k})\geq 1-\epsilon\) is implied by \(\mathbb{P}(\{x_{k}\in\mathbb{R}^{r_{x}}|T(j)x_{k}\leq\bar{x}_{k}(j)\}\mid \mathcal{Z}_{k})\geq 1-\epsilon/t\). The latter is implied by (14) in Lemma 2. Stacking the inequalities in (14) for all of the \(t\) rows of \(T\), we get (15). Notice that with an increase in the number of rows \(t\), the constraints become tighter and the approximation more conservative.
**Remark 1**.: _Proposition 1 addresses the case when the sets described by the rows of \(Tx_{k}\leq\bar{x}\) are to be satisfied jointly [14], which results in further constraint tightening via Lemma 3. If this is to be
relaxed to satisfying these sets separately, the condition in (15) can be replaced by the condition_
\[T\hat{x}_{k|k}\leq\bar{x}-\sqrt{\frac{1-\epsilon}{\epsilon}}\sqrt{\mathrm{diag} (T\Sigma_{k|k}T^{\top})}.\]
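A small NumPy sketch of the tightening in Proposition 1 (and the per-row relaxation of Remark 1) is given below; the box example, tolerance, and estimate are assumed values used only for illustration.

```python
import numpy as np

def tightened_polyhedral_check(x_hat, Sigma, T, x_bar, eps, joint=True):
    """Check the deterministic constraints of Eq. (15) (joint=True) or the relaxed
    per-row variant of Remark 1 (joint=False) for the current {x_hat, Sigma}."""
    t = T.shape[0]
    scale = (t - eps) / eps if joint else (1.0 - eps) / eps
    margin = np.sqrt(scale) * np.sqrt(np.diag(T @ Sigma @ T.T))
    return bool(np.all(T @ x_hat <= x_bar - margin))

# Toy box |x_i| <= 1 in 2D with eps = 0.1 and a fairly certain estimate.
T = np.vstack([np.eye(2), -np.eye(2)])
x_bar = np.ones(4)
print(tightened_polyhedral_check(np.array([0.1, -0.2]), 0.01 * np.eye(2), T, x_bar, 0.1))
```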
#### Iii-C2 Grid-based Constraints
For state constraint sets of more complicated geometries, we use a grid of nodes representing the sets that the state is to avoid. These nodes can be generated, for instance, through rejection sampling using a uniform proposal density enveloping these sets.
Let \(\Xi\subset\mathbb{R}^{r_{x}}\), a subset of the state space, be the set to be avoided with high probability. That is,
\[\mathbb{P}(x_{k}\notin\Xi\mid\mathcal{Z}_{k})=1-\epsilon. \tag{16}\]
If \(\mathbb{G}=\{\xi_{j},\,j=1,\ldots,L\}\) is a set of points, forming a grid in \(\Xi\), we can replace the probabilistic constraint (16), by one described by the grid, by letting every point in the grid be an outlier with respect to the density \(\pi_{k|k}\). For the purpose of identifying a point as an outlier, we apply the Mahalanobis distance [7], using the conditional mean \(\hat{x}_{k|k}\) and covariance \(\Sigma_{k|k}\) provided by the EKF. This distance is defined as
\[d_{k}(z)=\sqrt{(z-\hat{x}_{k|k})^{\top}\Sigma_{k|k}^{-1}(z-\hat{x}_{k|k})}.\]
Therefore, we replace (16) with
\[d_{k}(\xi_{j})>\delta,\,j=1,\ldots,L,\]
where \(\delta>0\) can be related to an ellipsoidal confidence region about the mean \(\hat{x}_{k|k}\), as in [12]. For example, a Mahalanobis distance of 2 for a univariate standard normal density corresponds to the two-standard-deviation confidence region about its mean. However, relating the Mahalanobis distance to probability is not straightforward: probability is defined over sets, while the Mahalanobis distance is defined over points. We can mitigate this problem by taking \(\delta\) sufficiently large or treating it as a hyperparameter to be tuned during the simulation phase.
We assume the points of the grid are dense enough such that the ellipsoid defined by \(x\) and \(\delta\), \(d_{k}(x)=\delta\), under any isometry, cannot be contained in \(\Xi\) while it contains no points of the grid. In other words, this ellipsoid cannot be "squeezed" between points of the grid. Therefore, the amount of uncertainty dictates the density of sampling required to represent irregular sets. If the uncertainty is large, this ellipsoid is large, and hence the grid can be less dense.
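The grid-based test then reduces to a Mahalanobis-distance check of every grid node against the current estimate, as in the sketch below; the grid points and threshold are toy values.

```python
import numpy as np

def grid_constraint_satisfied(x_hat, Sigma, grid_points, delta):
    """Every node xi_j of the avoid-set grid must be a Mahalanobis outlier of the
    current estimate, i.e. d_k(xi_j) > delta for all j."""
    diff = grid_points - x_hat                         # (L, state_dim)
    Sigma_inv = np.linalg.inv(Sigma)
    d = np.sqrt(np.einsum('li,ij,lj->l', diff, Sigma_inv, diff))
    return bool(np.all(d > delta))

# Toy avoid-region grid around (2, 2) with a fairly certain estimate at the origin.
grid = np.array([[2.0, 2.0], [2.2, 1.9], [1.8, 2.1]])
print(grid_constraint_satisfied(np.zeros(2), 0.1 * np.eye(2), grid, delta=2.0))
```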
#### Iii-C3 Soft Constraints
The following indicators, which flag state and input constraint violations, are added as a penalty, with a Lagrange multiplier-like expression, to the cost function
\[\mathcal{A}_{k}^{1}=1\Big{\{}T\hat{x}_{k|k}>\bar{x}-\sqrt{\frac{t-\epsilon}{\epsilon}}\sqrt{\mathrm{diag}(T\Sigma_{k|k}T^{\top})}\Big{\}},\quad\text{for }k=1,2,\ldots,N, \tag{17}\]
\[\mathcal{A}_{k}^{2}=1\Big{\{}d_{k}(\xi_{j})\leq\delta\ \text{for some}\ j=1,\ldots,L\Big{\}},\quad\text{for }k=1,2,\ldots,N,\]
\[\mathcal{A}_{k}^{3}=1\Big{\{}u_{k}\notin\mathbb{U}\Big{\}},\quad\text{for }k=1,2,\ldots,N-1,\]
such that the cost becomes
\[\bar{J}(\pi_{0|0},u_{0:N-1})=\mathbb{E}\,\left\{\sum_{k=0}^{N}\gamma^{k}\bar{ \ell}_{k}(x_{k},u_{k})\right\}, \tag{18}\]
where \(\bar{\ell}_{k}(x_{k},u_{k})=\ell_{k}(x_{k},u_{k})+\lambda\mathcal{A}_{k}\), \(\mathcal{A}_{k}=\mathcal{A}_{k}^{1}+\mathcal{A}_{k}^{2}+\mathcal{A}_{k}^{3}\), and \(\lambda\) is a large positive number, acting as a Lagrange multiplier.
The hard input constraint \(u_{k}\in\mathbb{U}\), instead, can be enforced by limiting the control law to a specific family, for instance, using a saturated parameterized family for a rectangular \(\mathbb{U}\). However, the input constraint penalty \(\mathcal{A}_{k}^{3}\) can be practical for more complicated input constraint set \(\mathbb{U}\).
### _Deterministic Policy Gradient_
RL is an umbrella of algorithms that are rooted in the concept of stochastic approximation [6, 28, 24]. The main objective of these algorithms is to solve optimal control problems through simulating interactions between the controller and the environment. Among the numerous algorithms within the scope of RL, we opt for the Deterministic Policy Gradient algorithm [26], in particular, its deep neural net implementation [20]. While we find this algorithm convenient within the context of this paper--primarily due to its ability to handle continuous action spaces--our choice does not impose strong preferences on the selection of other RL algorithms.
#### Iii-D1 Reward Signal and the Infinite Horizon Cost
The convention in the RL literature is to deal with rewards, typically over an infinite horizon, rather than finite-horizon costs as in model predictive controllers. This adaptation is straightforward: the reward signal can be defined as a negative saturated/bounded version of the stage cost \(\bar{\ell}\). This saturation is done for two reasons. The first is to guarantee the convergence of the value function [6, ch. 7], so that the infinite-horizon stochastic DPE as \(N\to\infty\) is valid. The second, from a numerical stability perspective, is to avoid exploding gradients in the learning step of RL. The negative sign is to switch from the cost-minimization convention in optimal control to value-maximization in RL. Let
\[r(\hat{\pi}_{k},u_{k})=-\min\left\{\mathbb{E}\,\bar{\ell}(x_{k},u_{k}),M\right\},\]
where \(M<\infty\). Consequently, the new form of the DPE in (6) is
\[\bar{V}_{N-1}(\hat{\pi}_{N-1})=\max_{u_{N-1}}\Big{\{}r(\hat{\pi}_{N-1},u_{N-1})+\gamma\,\mathbb{E}\,\bar{V}_{N}(\hat{T}(\hat{\pi}_{N-1},u_{N-1},y_{N}))\Big{\}}.\]
As discussed above, by letting \(N\to\infty\), \(\bar{V}_{N}\to\bar{V}\), we have
\[\bar{V}(\hat{\pi}_{k})=\max_{u_{k}}\Big{\{}r(\hat{\pi}_{k},u_{k})+\gamma\,\mathbb{E}\,\bar{V}(\hat{T}(\hat{\pi}_{k},u_{k},y_{k+1}))\Big{\}}.\]
To unify with the RL notation, we introduce the state-action value function \(Q\)[6],
\[Q(\hat{\pi}_{k},u_{k})=r(\hat{\pi}_{k},u_{k})+\gamma\,\mathbb{E}\,\max_{u_{k+1}}Q\big{(}\hat{T}(\hat{\pi}_{k},u_{k},y_{k+1}),u_{k+1}\big{)}.\]
We use a parameterized control policy \(u_{k}=\mu_{\theta}(\hat{\pi}_{k})\), a neural net in this paper, where \(\theta\) denotes the set of gains and biases of this net. The total accumulated reward of this policy is
\[J_{\theta}(\hat{\pi}_{0})=\mathbb{E}_{\mu_{\theta}}\bigg{\{}\sum_{k=0}^{\infty}\gamma^{k}r(\hat{\pi}_{k},\mu_{\theta}(\hat{\pi}_{k}))\bigg{\}},\]
where \(\mathbb{E}_{\mu_{\theta}}\) corresponds to a probability measure \(\mathbb{P}_{\mu_{\theta}}\) defined over all the possible trajectories of \(\hat{\pi}_{k}\)s under the policy \(\mu_{\theta}\).
The policy gradient theorems seek to find a description of the gradient \(\nabla_{\theta}J_{\theta}\) which is convenient for computation. The Deterministic Policy Gradient Theorem of [26], adapted for the case of stochastic Dynamic Programming, provides the following identity
\[\nabla_{\theta}J_{\theta}=\mathbb{E}_{\mu_{\theta}}\Big{[}\nabla_{u}Q(\hat{ \pi},u)\nabla_{\theta}\mu_{\theta}(\hat{\pi})\Big{]},\]
given the assumptions listed in Section II, \(r(\hat{\pi}_{k},u_{k})\) is continuously differentiable in \(u_{k}\) almost everywhere, and the Jacobian matrix \(\nabla_{\theta}\mu_{\theta}(\hat{\pi})\) is continuous in \(\hat{\pi}\). These conditions imply the satisfaction of Assumption A.1 in [26], almost-surely-\(\mathbb{P}_{\mu_{\theta}}\).
In the deep deterministic policy gradient (DDPG) algorithm [20], \(Q\) is approximated by a neural net \(Q_{\psi}\) with parameters \(\psi\) updated via temporal difference methods, while the above gradient is used to update the gains of the control policy neural net \(\mu_{\theta}\). Algorithm 1 is the DDPG algorithm adapted for the information state (instead of the state).1 It does not include the target networks of [20], which can be added to Algorithm 1 to improve learning stability.
Footnote 1: The code of this implementation can be found at: [https://github.com/msramada/Active-Learning-Reinforcement-Learning](https://github.com/msramada/Active-Learning-Reinforcement-Learning)
**Remark 2**.: _The information state \(\hat{\pi}_{k}\) contains repeated elements, since \(\Sigma_{k|k}\) is symmetric [2]. We can consider the upper triangular portion only or the diagonal elements if the cross dependencies are to be ignored. \(\Box\)_
## V Numerical example
In this section we implement Algorithm 1 on a scalar state-space system with varying state observability over the state-space \(\mathbb{R}\). This simple example demonstrates the behavior of the RL controller when equipped with concepts from stochastic control.
Consider this simple integrator with nonlinear measurement equation
\[x_{k+1} =x_{k}+u_{k}+w_{k}, \tag{19}\] \[y_{k} =\frac{1}{9}x_{k}^{3}+v_{k}, \tag{20}\]
where \(w_{k}\) and \(v_{k}\) obey the assumptions listed under (1), and moreover, \(w_{k}\sim\mathcal{N}(0,\Sigma_{w})\) and \(v_{k}\sim\mathcal{N}(0,\Sigma_{v})\), with \(\Sigma_{w}=2\) and \(\Sigma_{v}=2\). We start with this system since it is unstable and has varying observability.
If \(x_{k}\) vanishes, \(y_{k}\)'s sensitivity with respect to it vanishes too. In general, the system becomes less observable as \(x_{k}\) gets close to the origin. Therefore, the stability of the origin and the observability of the system are both important goals, but they are in conflict. The balance we expect out of a stochastic dual controller is to drive the state close to the origin, while at the same time actively gathering information by visiting the more observable outer neighborhood.
We use a running cost \(\ell(x_{k},u_{k})=x_{k}^{2}+u_{k}^{2}\), and a discount factor \(\gamma=0.95\). Together, they construct the cost \(J_{\theta}\) and the state-action value function \(Q\). The constraints are \(u_{k}\in\mathbb{U}=[-5,5]\), which we enforce by using a saturated parameterized policy, rather than a Lagrangian penalty term. The latter can be the better option if \(\mathbb{U}\) cannot be enforced as the range of a parameterized policy.
```
Randomly initialize the weights \(\theta,\psi\) of the neural nets \(Q_{\psi}\) and \(\mu_{\theta}\);
Initialize replay buffer \(\mathcal{R}\);
for episode \(=1,2,\ldots\) do
    Randomly sample an initial information state \(\hat{\pi}_{0}=\{\hat{x}_{0},\Sigma_{0}\}\) and a true state \(x_{0}\);
    for \(k=0,\ldots,N-1\) do
        Sample the control action \(u_{k}=\mu_{\theta}(\hat{\pi}_{k})+\eta\), where \(\eta\) is an exploration noise;
        Apply \(u_{k}\) in (1a) to sample the true \(x_{k+1}\);
        Using \(x_{k+1}\) in (1b), sample the true \(y_{k+1}\);
        Using \(\hat{\pi}_{k}\), \(u_{k}\) and \(y_{k+1}\), evaluate \(\hat{\pi}_{k+1}\) using (7);
        Calculate the reward \(r(\hat{\pi}_{k},u_{k})\);
        Store the tuple \((\hat{\pi}_{k},u_{k},r(\hat{\pi}_{k},u_{k}),\hat{\pi}_{k+1})\) in \(\mathcal{R}\);
        Sample a minibatch \(\{(\hat{\pi}_{i},u_{i},r(\hat{\pi}_{i},u_{i}),\hat{\pi}_{i+1}),\,i=1,\ldots,M\}\) of \(\mathcal{R}\);
        Set \(z_{i}=r(\hat{\pi}_{i},u_{i})+\gamma Q_{\psi}(\hat{\pi}_{i+1},\mu_{\theta}(\hat{\pi}_{i+1}))\);
        Update the critic network \(Q_{\psi}\) by minimizing the loss \(\frac{1}{M}\sum_{i=1}^{M}(z_{i}-Q_{\psi}(\hat{\pi}_{i},u_{i}))^{2}\) w.r.t. \(\psi\);
        Update the policy network \(\mu_{\theta}\) using the sample average policy gradient
            \(\nabla_{\theta}J_{\theta}\approx\frac{1}{M}\sum_{i=1}^{M}\nabla_{u}Q_{\psi}(\hat{\pi}_{i},u)\mid_{u=\mu_{\theta}(\hat{\pi}_{i})}\nabla_{\theta}\mu_{\theta}(\hat{\pi}_{i})\);
    end for
end for
```
**Algorithm 1** Actively Learning RL via DDPG
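For illustration, a minimal PyTorch version of the critic and actor updates inside the inner loop of Algorithm 1 (without target networks, as in the text) could look as follows. The network sizes, the flattening of the information state into a vector, and the tanh squashing of the action are assumptions; in practice the saturation would be scaled to \(\mathbb{U}\).

```python
import torch
import torch.nn as nn

info_dim, act_dim, M = 2, 1, 64   # assumed: [x_hat, Sigma] packed into a flat vector
actor = nn.Sequential(nn.Linear(info_dim, 64), nn.ReLU(), nn.Linear(64, act_dim), nn.Tanh())
critic = nn.Sequential(nn.Linear(info_dim + act_dim, 64), nn.ReLU(), nn.Linear(64, 1))
opt_actor = torch.optim.Adam(actor.parameters(), lr=1e-4)
opt_critic = torch.optim.Adam(critic.parameters(), lr=1e-4)

def ddpg_update(batch, gamma=0.95):
    """One minibatch update of Q_psi and mu_theta following Algorithm 1."""
    pi, u, r, pi_next = batch            # shapes (M, info_dim), (M, act_dim), (M, 1), (M, info_dim)
    with torch.no_grad():                # temporal-difference target z_i
        z = r + gamma * critic(torch.cat([pi_next, actor(pi_next)], dim=-1))
    critic_loss = ((z - critic(torch.cat([pi, u], dim=-1))) ** 2).mean()
    opt_critic.zero_grad(); critic_loss.backward(); opt_critic.step()
    actor_loss = -critic(torch.cat([pi, actor(pi)], dim=-1)).mean()   # deterministic policy gradient
    opt_actor.zero_grad(); actor_loss.backward(); opt_actor.step()

# Toy minibatch of stored transitions.
batch = (torch.randn(M, info_dim), torch.rand(M, act_dim) * 2 - 1,
         torch.randn(M, 1), torch.randn(M, info_dim))
ddpg_update(batch)
```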
Deterministic policy gradient methods [26] use the actor-critic learning architecture: the actor being the control policy, and the critic is its corresponding policy evaluation in the shape of an action-value function. In this example, both networks are feedforward neural nets, with one hidden layer of size \(64\). The actor network receives two inputs: the information state elements \(\{\hat{x}_{k|k},\Sigma_{k|k}\}\). The output of this network is the control \(u_{k}\). The critic network, approximating the action value function, takes three input values: both of the information state elements as well as the corresponding control action \(u_{k}\), and it outputs \(Q(\hat{\pi}_{k},u_{k})\). We use mini-batch learning, with batches of size \(64\) of tuples \((\hat{\pi}_{k|k},u_{k},r_{k},\hat{T}(\hat{\pi}_{k},u_{k},y_{k+1}))\), and with learning rate \(10^{-4}\). Figure 1 shows the convergence of the normalized accumulative reward. The normalized accumulative reward of value
\(1\) corresponds to \(J_{\theta}=0\).
A Linear Quadratic Gaussian (LQG) control is first applied: \(u_{k}=K\hat{x}_{k|k}\), where \(\hat{x}_{k|k}\) is the conditional mean provided by the EKF and \(K\) designed according to \(Q^{LQR}=R^{LQR}=1\). The result of this LQG control is shown in Figure 2, which shows system instability and _filter divergence_. The state covariance is blowing up and the true state is a random walk (system is an integrator with a white noise disturbance). The concept of filter divergence is inherently qualitative, making it not only difficult to delineate the conditions causing it, but also challenging to provide a clear definition. According to [2] and [16], filter divergence is when the error covariance, \(\Sigma_{k|k}\), of a Kalman filter (or an EKF), becomes large and therefore the filter is insensitive to new measurements. This may result in the filter's state estimate deviating significantly from the true state of the actual system. This filter divergence seen in Figure 2 is caused mainly by the LQG controller insisting on driving the state to the origin. This issue has been handled by the RL control resulting from our approach. In Figure 3, this controller does not prioritize only driving the state estimate to the origin, but also doing so while conserving some level of observability. This balance between caution and probing is what results in the overall system stability.
## VI Conclusion
The presented framework is to produce an RL agent with attributes from stochastic optimal control. These attributes equip the agent with caution: safety and probing, i.e., active online learning. This is done via using RL to solve the stochastic DPE. To track the information state, we use the EKF. If it performs poorly--that is, it is a poor approximation to the Bayesian filter--the premise, on which this control design approach is built, is then inaccurate. This issue can be mitigated by using a filter that is generally "adequate" for the problem at hand. Some alternatives include: a Gaussian sum filter [1], or the incorporation of the estimation of higher order modes into the EKF [23]. Generally, what qualifies as a "sufficient" approximation to the information state is a rather complicated question.
In addition to the choice of the Bayesian filter approximation, the design of the reward signal can produce different balances of safety and exploration/exploitation. For instance,
* Prioritizing filter stability (or its accuracy) by including the true (in simulation) estimation error or by further penalizing the state covariance term in the reward signal;
* Substituting fixed or scaled state covariance in operation;
* Tuning the state disturbance \(w_{k}\) covariance \(\Sigma_{w}\), or in general: the EKF design.
These are some of the questions we seek to answer in our future research.
|
2306.17703 | Evaluation of the Benefits of Zero Velocity Update in Decentralized
EKF-Based Cooperative Localization Algorithms for GNSS-Denied Multi-Robot
Systems | This paper proposes the cooperative use of zero velocity update (ZU) in a
decentralized extended Kalman filter (DEKF) based localization algorithm for
multi-robot systems. The filter utilizes inertial measurement unit (IMU),
ultra-wideband (UWB), and odometry velocity measurements to improve the
localization performance of the system in the presence of a GNSS-denied
environment. The contribution of this work is to evaluate the benefits of using
ZU in a DEKF-based localization algorithm. The algorithm is tested with real
hardware in a video motion capture facility and a Robot Operating System (ROS)
based simulation environment for unmanned ground vehicles (UGV). Both
simulation and real-world experiments are performed to show the effectiveness
of using ZU in one robot to reinstate the localization of other robots in a
multi-robot system. Experimental results from GNSS-denied simulation and
real-world environments show that using ZU with simple heuristics in the DEKF
significantly improves the 3D localization accuracy. | Cagri Kilic, Eduardo Gutierrez, Jason N. Gross | 2023-06-30T14:35:06Z | http://arxiv.org/abs/2306.17703v1 | Evaluation of the Benefits of Zero Velocity Update in Decentralized EKF-Based Cooperative Localization Algorithms for GNSS-Denied Multi-Robot Systems
###### Abstract
This paper proposes the cooperative use of zero velocity update (ZU) in a decentralized extended Kalman filter (DEKF) based localization algorithm for multi-robot systems. The filter utilizes inertial measurement unit (IMU), ultra-wideband (UWB), and odometry velocity measurements to improve the localization performance of the system in the presence of a GNSS-denied environment. The contribution of this work is to evaluate the benefits of using ZU in a DEKF-based localization algorithm. The algorithm is tested with real hardware in a video motion capture facility and a Robot Operating System (ROS) based simulation environment for unmanned ground vehicles (UGV). Both simulation and real-world experiments are performed to show the effectiveness of using ZU in one robot to reinstate the localization of other robots in a multi-robot system. Experimental results from GNSS-denied simulation and real-world environments show that using ZU with simple heuristics in the DEKF significantly improves the 3D localization accuracy.
cooperative localization, multi-robot systems, ROS 2021
Cagri Kilic, Eduardo Gutierrez, Jason N. Gross
## 1 Introduction
Mobile robots rely on accurate localization estimates to perform certain tasks, such as exploration, navigation, object detection and tracking, map building, and autonomous movement through space. In a localization application, robots can enhance their ability to locate themselves accurately within the environment by fusing information from multiple sources. One common method of estimating positioning information is using the Global Navigation Satellite System (GNSS), which includes GPS and other similar systems. However, the availability of this system is often unreliable in urban, forested, and indoor areas because of obstructions that block signals from satellites (Merry & Bettinger, 2019).
Given the challenges posed by limited GNSS availability in certain environments, cooperative localization emerges as a valuable alternative for mobile robots to achieve accurate positioning. Cooperation among multiple robots is desirable for many tasks, as the robots can perform several tasks more efficiently and robustly than a single robot (Gautam & Mohan, 2012). Cooperative localization is a technique in which multiple robots share information and perform relative measurements of one another to obtain a more accurate estimate of their location, compared to what would be possible by a single robot (Gao et al., 2019). Robots can achieve this level of cooperation when they detect each other and share state estimates and covariances in the presence of relative measurements, as shown by Luft et al. (2018). The individual state estimates of the robots are commonly
obtained by fusing the measurements from proprioceptive (e.g., IMU) and exteroceptive (e.g., satellite signal receiver, laser scanner, camera) sensors.
In a cooperative multi-robot system, an agent can still benefit from Global Navigation Satellite System (GNSS) information even without direct access to it, provided that its counterparts have access to GNSS information (Qu & Zhang, 2011). The fusion of IMU-based dead reckoning and visual odometry (VO) is useful in solving this problem in GNSS-denied/degraded environments and can be used cooperatively (Queralta et al., 2022). However, since visual-based methods rely on characteristics of the environment, their accuracy is not reliable on uniform terrains with few landmarks.
ZU is widely used to aid pedestrian inertial navigation () and is one of the important techniques of error suppression and compensation for high-precision positioning systems (Skog et al., 2010). The main advantages of ZU for the localization task are that these updates can bound the velocity error, accurately calibrate the IMU sensor biases, and limit the rate of INS localization drift (Groves, 2013). ZU can be used in wheeled robots when stationary conditions are detected. ZU can be utilized passively, as an opportunistic navigational update, such as when wheeled robots need to stop for external reasons (Gutierrez et al., 2022); or actively, with periodic stopping (Kilic et al., 2019) or by deciding when to stop autonomously (Kilic et al., 2021). Since ZU can only be used when the robot is stationary, enforcing all the robots in the system to stop may be challenging in some cases. However, in a cooperative localization system, only some of the robots may need to stop actively, and the others can leverage ZU in an opportunistic way, such as when they need to stop for other reasons (e.g., avoiding obstacles, planning, waiting for pedestrians, stopping at traffic lights).
In the case of multi-robot systems, cooperative localization can be classified into centralized and decentralized methods. In centralized methods, each robot in the system transmits its measurements to a central server, and the localization estimations of all robots are estimated on this server. This usually results in high communication and computational costs at the server (Bailey et al., 2011). Also, a failure of the central processing unit in a centralized method usually leads to catastrophic results. In addition, decentralized methods are more resilient to failures and decrease the computational and communication costs (Bailey et al., 2011). For example, a multi-robot system may still function in the event of a malfunction of one individual robot in a decentralized method.
In decentralized localization methods, the impact of individual updates could be noticeably beneficial to the localization performance of the entire system. For example, having part of the multi-robot system able to perform GNSS updates could enhance the state estimates of the entire group. Suppose, for instance, that some of the agents in the multi-robot system are in an area where the GNSS signal is interrupted, while others can obtain sufficient signals. In this situation, robots with adequate positioning information can share this information across the system during relative updates. This notion can easily be extrapolated to the use of other updates. For example, when a robot is in a stationary condition, it can perform ZU to calibrate the IMU biases and keep the INS-based localization reliable when other sensor measurements (e.g., GNSS and VO) are not available; the other robots in the system can then benefit from this update even if they do not use ZU themselves.
## 2 Problem Statement
This paper explores the potential of leveraging the benefits of ZUs in decentralized cooperative localization. In this paper, we assume that each agent in a multi-robot system is able to perform INS-based dead-reckoning. The localization algorithm can leverage ZU in certain conditions depending on the robot type, such as landing (Groves, 2013) and hovering (Gross et al., 2019) for aerial robots or stopping and using non-holonomicity (Kilic et al., 2019) for ground robots. These specific conditions will allow the algorithm to perform ZU. We adopt a decentralized architecture for the filter estimator, which enables robots to decouple their states, reducing computational costs through distributed computation (Bailey et al., 2011). In order to perform relative ranging measurement updates, the robots can utilize Ultra Wide Band (UWB) sensors. This is done by coupling the states and covariances of the robots participating in the update. The individual robots take advantage of ZUs, which improve their localization performance in feature-poor areas without significant changes to robot operations. This enhancement, similar to the effect of using a GNSS update for a single robot, benefits the overall localization performance of the multi-robot system during relative updates.
It is assumed that all sensors have noise, and the robot's knowledge of its state is based on assumptions about the consistency of its representations of the real world. As a result, the error in the robot state can accumulate over time (). A large error accumulation increases the robot's uncertainty in localization estimation and can also generate a false belief of where it is located (Choset et al., 2005). In literature, the kidnapped robot problem is a case when a robot is moved to another position
without being told (). Many authors have explored the kidnapped robot problem in many different contexts (). In our case, similar to the kidnapped robot problem, we adopted lost robot cases, where the robots have significant position error and covariances without being moved to another position. Some of the lost robot cases can be observed when the primary localization sensors of the robot are disabled, or the error accumulation in the localization is larger than a sustainable level. In this study, we will focus on one specific implementation of this problem: keeping a robot's localization reliable to reinstate the localization of the other lost robots in a cooperative DEKF architecture.
In our previous work, several schemes for deciding when to utilize the information from pseudo-measurements (e.g., ZU) to improve the localization performance in a DEKF-based cooperative localization system were analyzed and compared in a simulation environment with ground robots (Gutierrez et al., 2022).
In this work, our specific contributions are summarized as follows:
* We further generalize and significantly expand upon the method presented in Gutierrez et al. (2022) by implementing the algorithm in real-world experiments with small ground robots.
* We add an additional algorithm to use velocity information from odometry (wheel and/or visual) in DEKF.
* We demonstrate the benefits of using ZU for the lost robot cases in a multi-robot DEKF system by qualitatively comparing the localization performance with video motion capture system solution.
* We make our software and datasets publicly available1. Footnote 1: [https://github.com/wvu-navLah/coop_smart](https://github.com/wvu-navLah/coop_smart)
The rest of this paper is organized as follows. Section 3 details and explains the implementation of the algorithm used. Section 4 describes the simulation and real-world experiments set up with the robots used. Section 5 provides insights into the results of simulation experiments. Finally, Section 6 provides contributions and insights for future works to improve the system.
## 3 Methodology
In this work, the base D-EKF algorithm is implemented closely following the work in Luft et al. (2016). This algorithm is embedded into ROS for localizing \(N\) vehicles navigating in a GNSS-denied/degraded environment. Apart from the re-implementation of the base D-EKF algorithm into ROS, an error-state EKF is used instead of a total-state EKF. The reason for using an error-state EKF is to keep the accumulated error more manageable over longer sequences of predictions without measurement updates. Also, modeling the error states is more straightforward than modeling the highly non-linear total states, since the error states exhibit simpler dynamics than the total states ().
Additionally, the available sensor modalities are reduced such that the sensor fusion is done only using IMU, UWB, wheel encoders, and zero velocity updates. For instance, we assume that the robots cannot acquire bearing measurements of each other through exteroceptive sensors, are unable to identify any landmarks within the environment, and operate in a GNSS-denied setting. This differs from the approach presented in the work of Luft et al. (2016). The algorithm uses the IMU as the primary source for localization estimation. Since the INS-based localization is prone to drift over time, the estimation needs to be improved or corrected. Using wheel encoders and cameras for velocity information is widely utilized to improve the INS-based localization by sensor fusion techniques (). GNSS updates can provide reliable information to correct for the localization drift; however, these updates are not always available. ZU can improve the localization estimation by calibrating the IMU and bounding the velocity error estimation, which can be available whenever the robot is stationary. The relative update is performed only using UWB-ranging information. This update is also utilized as a communication bridge between robots to share the state information pairwise.
In the error-state D-EKF architecture, each robot can perform three different algorithms to estimate and improve the state:
1. INS-based dead-reckoning, where each robot propagates its error state and updates its total state using the IMU measurements
2. Private updates2, where each robot receives a measurement update, based on the sensor availability and robot-environment constraints, which is not shared directly with the group, (e.g., odometry velocity, ZU, and GNSS)
3. Relative update2, which is performed when two robots are within specified proximity, allowing for coupling the states, covariances, and cross-correlated values.
The error-state vector, \(\mathbf{x}_{err}\in\mathbb{R}^{15}\), is

\[\mathbf{x}_{err}=\left(\delta\mathbf{\Psi}_{nb}^{n}\ \ \delta\mathbf{v}_{eb}^{n}\ \ \delta\mathbf{p}_{b}\ \ \mathbf{b}_{a}\ \ \mathbf{b}_{g}\right)^{\mathbf{T}} \tag{1}\]
where \(\delta\mathbf{\Psi}_{nb}^{n}\) is the attitude error, \(\delta\mathbf{v}_{eb}^{n}\) is the velocity error, \(\delta\mathbf{p}_{b}\) is the position error, \(\mathbf{b}_{a}\) is the IMU acceleration bias, and \(\mathbf{b}_{g}\) is the IMU gyroscope bias. Once the inertial biases \(\mathbf{b}_{a}\) and \(\mathbf{b}_{g}\) are estimated, they are used in our implementation by removing them from the raw IMU measurements before these are used in the filter time propagation update. The error-state vector is assumed to be defined by (1), and the total state vector, \(\mathbf{x}\in\mathbb{R}^{9}\), is
\[\mathbf{x}^{n}=\left(\mathbf{\Psi}_{nb}^{n}\ \ \mathbf{v}_{eb}^{n}\ \ \mathbf{r}_{b}\right)^{ \mathbf{T}} \tag{2}\]
where each of the nine total states correspond to the first nine error-states. In the inertial navigation equations, following the notation in (Groves, 2013), the symbols \((-)\) and \((+)\) are employed to indicate the values at the beginning and end of the navigation equations processing cycle, respectively. The attitude update is given with the assumption of neglecting the rotation of Earth and transport rate as
\[\mathbf{C}_{b}^{n}(+)\approx\mathbf{C}_{b}^{n}(-)\big{(}\mathbf{I}_{3}+ \mathbf{\Omega}_{ib}^{b}\Delta t_{i}\big{)} \tag{3}\]
where \(\mathbf{C}_{b}^{n}\) is the coordinate transformation matrix from the body frame to the locally level frame, \(\mathbf{I}_{3}\) is a 3-by-3 identity matrix, \(\mathbf{\Omega}_{ib}^{b}\) is the skew symmetric matrix of the IMU angular rate measurement, and \(\Delta t_{i}\) is the IMU sampling interval. The velocity update is given as,
\[\mathbf{v}_{eb}^{n}(+)\approx\mathbf{v}_{eb}^{n}(-)+(\mathbf{C}_{b}^{n} \mathbf{a}_{IMU}+\mathbf{g}_{b}^{n})\Delta t_{i} \tag{4}\]
where \(\mathbf{v}_{eb}^{n}\) is the velocity update, \(\mathbf{a}_{IMU}\) is the acceleration measurements from the IMU sensor, \(\mathbf{g}_{b}^{n}\) is the gravity vector defined as \(\mathbf{g}_{b}^{n}=[0,\ \ 0,\ -9.81]^{\mathbf{T}}\).
The position update is given as
\[\mathbf{r}_{b}(+)\approx\mathbf{r}_{b}(-)+\frac{\Delta t_{i}}{2}\bigg{(}\mathbf{v}_{eb}^{n}(-)+\mathbf{v}_{eb}^{n}(+)\bigg{)}. \tag{5}\]
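As a concrete illustration of Equations (3)-(5), a single propagation cycle could be coded as in the following NumPy sketch; the function and variable names are assumptions made here for readability, not part of the released software.

```python
import numpy as np

def skew(v):
    """Skew-symmetric matrix of a 3-vector."""
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

def ins_step(C_bn, v_ebn, r_b, omega_ib, a_imu, dt,
             g_n=np.array([0.0, 0.0, -9.81])):
    """One strapdown cycle: attitude (Eq. 3), velocity (Eq. 4), position (Eq. 5)."""
    C_bn_new = C_bn @ (np.eye(3) + skew(omega_ib) * dt)   # Eq. (3)
    v_new = v_ebn + (C_bn @ a_imu + g_n) * dt             # Eq. (4)
    r_new = r_b + 0.5 * dt * (v_ebn + v_new)              # Eq. (5)
    return C_bn_new, v_new, r_new
```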
The error-state dynamics is linearized and given as
\[\mathbf{F}=\left[\begin{array}{ccccc}\mathbf{0}_{3}&\mathbf{0}_{3}& \mathbf{0}_{3}&\mathbf{0}_{3}&\mathbf{C}_{b}^{n}\\ \wedge\left(-\mathbf{C}_{b}^{n}\mathbf{a}_{IMU}\right)&\mathbf{0}_{3}&\mathbf{ 0}_{3}&\mathbf{C}_{b}^{n}&\mathbf{0}_{3}\\ \mathbf{0}_{3}&\mathbf{I}_{3}&\mathbf{0}_{3}&\mathbf{0}_{3}&\mathbf{0}_{3}\\ \mathbf{0}_{3}&\mathbf{0}_{3}&\mathbf{0}_{3}&\mathbf{0}_{3}&\mathbf{0}_{3}\\ \mathbf{0}_{3}&\mathbf{0}_{3}&\mathbf{0}_{3}&\mathbf{0}_{3}&\mathbf{0}_{3} \end{array}\right] \tag{6}\]
where \(\wedge\) is the skew-symmetric matrix of a vector.
Then, the first order error-state transition matrix, \(\Phi\), is given as
\[\mathbf{\Phi}=\mathbf{I}+\mathbf{F}\Delta t \tag{7}\]
where it is used for propagating the error state vector, \(\mathbf{x}_{err}\), the covariance, \(\mathbf{P}\), and the cross-correlated terms, \(\mathbf{\sigma}_{AB}\), between robots. The state estimates and covariance that are propagated over time are denoted using the superscript "\({}^{-}\)", and those after the measurement updates later described in the following subsections are indicated using the superscript "\({}^{+}\)".
\[\mathbf{x}_{err}^{-}=\mathbf{\Phi}\mathbf{x}_{err}^{+} \tag{8}\]
\[\mathbf{P}^{-}=\mathbf{\Phi}\mathbf{P}^{+}\mathbf{\Phi}^{T}+\mathbf{Q}_{ \mathbf{INS}} \tag{9}\]
\[\mathbf{\sigma}_{AB}^{-}=\mathbf{\Phi}\mathbf{\sigma}_{AB}^{+} \tag{10}\]
where \(\mathbf{Q}_{\mathbf{INS}}\) is the INS system noise covariance matrix which is generated by assuming the propagation intervals are small based on the work in Groves (2013). The input power spectral density for the error states is modeled by considering not only white noise in the accelerations and angular rates but also biases and scale factor errors inherent in inertial sensors. The construction of the \(\mathbf{Q}_{\mathbf{INS}}\) matrix involves accounting for bias and noise sources such as gyro in-run bias stability, angular random walk (ARW), accelerometer in-run bias stability, and velocity random walk (VRW). These are converted to their respective power spectral densities (PSDs) and scaled according to the time interval. The resulting covariance matrix, \(\mathbf{Q}_{\mathbf{INS}}\), is formed by populating the diagonal blocks with the computed PSDs for each noise source, ensuring an accurate representation of noise characteristics in the system.
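A compact sketch of the time-propagation step in Equations (6)-(10) might look as follows; the construction of \(\mathbf{Q}_{\mathbf{INS}}\) is abstracted away and passed in as a precomputed matrix, the per-peer cross-correlation terms are kept in a dictionary purely for illustration, and the `skew` helper from the previous sketch is reused.

```python
import numpy as np

def propagate(x_err, P, sigma_AB, C_bn, a_imu, dt, Q_ins):
    """Propagate error state, covariance, and cross-correlation terms (Eqs. 6-10)."""
    Z, I = np.zeros((3, 3)), np.eye(3)
    F = np.block([
        [Z,                   Z, Z, Z,    C_bn],   # attitude error row, Eq. (6)
        [-skew(C_bn @ a_imu), Z, Z, C_bn, Z   ],   # velocity error row
        [Z,                   I, Z, Z,    Z   ],   # position error row
        [Z,                   Z, Z, Z,    Z   ],   # accelerometer bias row
        [Z,                   Z, Z, Z,    Z   ],   # gyroscope bias row
    ])
    Phi = np.eye(15) + F * dt                              # Eq. (7)
    x_err = Phi @ x_err                                    # Eq. (8)
    P = Phi @ P @ Phi.T + Q_ins                            # Eq. (9)
    sigma_AB = {peer: Phi @ s for peer, s in sigma_AB.items()}  # Eq. (10)
    return x_err, P, sigma_AB
```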
### Private Updates
Following the nomenclature in Luft et al. (2016), private updates are performed individually by each robot and are not shared with the group directly. These updates depend on the robot's individual relations with the environment. The information from some of these updates may not be available based on the mission environment profile.
#### 3.2.1 GNSS and Motion Capture Update
One common way to use the private updates is leveraging the GNSS update, which is assumed to correct the localization drift and provide a reliable position estimate. As in our previous work (Gutierrez et al., 2022), the GNSS update can be performed in a loosely coupled manner with the following structure. In this work, the GNSS update is not used due to the indoor testing setting; however, its structure is provided below for the sake of completeness.
The measurement innovation is given as
\[\mathbf{z}_{PU}=[\mathbf{v}_{PU}-\mathbf{v}_{eb}^{n},\mathbf{r}_{PU}- \mathbf{r}_{b}] \tag{11}\]
where \(\mathbf{v}_{PU}\) represents the velocity and \(\mathbf{r}_{PU}\) represents the position measurement obtained from GNSS measurement. In our case, the velocity and position measurement coordinate frames are set to match with the corresponding state and the lever arm for the GNSS antenna is assumed to mount to the location of the IMU sensor. The Kalman gain for the private update is calculated as
\[\mathbf{K}_{PU}=\mathbf{P}^{-}\mathbf{H}_{PU}^{T}(\mathbf{H}_{PU}\mathbf{P} ^{-}\mathbf{H}_{PU}^{T}+\mathbf{R}_{PU})^{-1} \tag{12}\]
where \(\mathbf{H}_{PU}\) represents the Jacobian of the GNSS measurement model and can be given as
\[\mathbf{H}_{PU}=\begin{bmatrix}\mathbf{0}_{6x3}&-\mathbf{I}_{6x6}&\mathbf{0}_ {6x6}\end{bmatrix} \tag{13}\]
where \(\mathbf{I}\) represents the identity matrix. The measurement noise covariance matrix, \(\mathbf{R}_{PU}\), given as
\[\mathbf{R}_{PU}=\text{diag}(\sigma_{v_{x}}^{2},\sigma_{v_{y}}^{2},\sigma_{v_{z}}^{2},\sigma_{r_{x}}^{2},\sigma_{r_{y}}^{2},\sigma_{r_{z}}^{2}) \tag{14}\]
where \(\sigma_{v_{x}}^{2}\), \(\sigma_{v_{y}}^{2}\), and \(\sigma_{v_{z}}^{2}\) are the variances of the velocity measurement noise; \(\sigma_{r_{x}}^{2}\), \(\sigma_{r_{y}}^{2}\), and \(\sigma_{r_{z}}^{2}\) are the variances of the position measurement noise in the \(x\), \(y\), and \(z\) directions, respectively. Using the calculated Kalman gain, the error-state and covariance are updated as
\[\mathbf{x}_{err}^{+}=\mathbf{x}_{err}^{-}+\mathbf{K}_{PU}(\mathbf{z}_{PU}- \mathbf{H}_{PU}\mathbf{x}_{err}^{-}) \tag{15}\]
\[\mathbf{P}^{+}=(\mathbf{I}-\mathbf{K}_{PU}\mathbf{H}_{PU})\mathbf{P}^{-}( \mathbf{I}-\mathbf{K}_{PU}\mathbf{H}_{PU})^{T}+\mathbf{K}_{PU}\mathbf{R}_{PU }\mathbf{K}_{PU}^{T} \tag{16}\]
Lastly, the cross-correlated terms with the rest of the robots in the system are updated as
\[\mathbf{\sigma}_{AB}^{+}=(\mathbf{I}-\mathbf{K}_{PU}\mathbf{H}_{PU})\mathbf{ \sigma}_{AB}^{-}(\mathbf{I}-\mathbf{K}_{PU}\mathbf{H}_{PU})^{T}+\mathbf{K}_{ PU}\mathbf{R}_{PU}\mathbf{K}_{PU}^{T} \tag{17}\]
where the \(\mathbf{\sigma}_{AB}\) represents the cross-correlation terms from the individual robot to the rest of the robots in the system.
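Equations (12) and (15)-(17) share one generic update structure that is reused by all the private updates in this section; a minimal NumPy sketch of that shared routine, with assumed function and variable names and a dictionary of per-peer cross-correlation terms, is:

```python
import numpy as np

def private_update(x_err, P, sigma_AB, z, H, R):
    """Generic private EKF update (Eqs. 12, 15-17), shared by GNSS, VICON, odometry, and ZU."""
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)       # Eq. (12)
    x_err = x_err + K @ (z - H @ x_err)                # Eq. (15)
    IKH = np.eye(P.shape[0]) - K @ H
    P = IKH @ P @ IKH.T + K @ R @ K.T                  # Eq. (16)
    sigma_AB = {peer: IKH @ s @ IKH.T + K @ R @ K.T    # Eq. (17), one term per peer robot
                for peer, s in sigma_AB.items()}
    return x_err, P, sigma_AB
```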
A video motion capture system (VICON) positioning update can be used as a private update to have reliable localization estimation for indoor cases and as a testing proxy for GNSS inside a laboratory setting. The VICON update can be used similar to the GNSS update framework. In our tests, we only used the VICON system to have a ground truth and also to initialize the robot position. Assuming the VICON solution frame matches the robot navigation frame,
\[\mathbf{z}_{vicon}=[\mathbf{r}_{vicon}-\mathbf{r}_{b}] \tag{18}\]
where \(\mathbf{r}_{vicon}\) represents the position measurement obtained from the VICON system. The Jacobian of the measurement model and the measurement noise covariance matrix for VICON can be given as;
\[\mathbf{H}_{vicon}=\begin{bmatrix}\mathbf{0}_{3x3}&\mathbf{0}_{3x3}&-\mathbf{I}_{3x3}&\mathbf{0}_{3x3}&\mathbf{0}_{3x3}\end{bmatrix} \tag{19}\]
\[\mathbf{R}_{vicon}=\text{diag}(\sigma_{vicon_{x}}^{2},\sigma_{vicon_{y}}^{2},\sigma_{vicon_{z}}^{2}) \tag{20}\]
where \(\sigma_{vicon_{x}}^{2}\), \(\sigma_{vicon_{y}}^{2}\), and \(\sigma_{vicon_{z}}^{2}\) are the variances of the VICON measurement noise.
#### 3.2.2 Odometry Velocity Update
The odometry velocity update can be utilized to further improve the state estimation of individual robots. In this update, the algorithm only takes the velocity information from the associated sensor. For example, wheeled robots can use wheel encoders to obtain this information. Any external odometry solution, such as the velocity information from visual odometry, can also be utilized in this update, or the update can be omitted if no such sensor is available.
After the frame rotation from the velocity sensor frame to the navigation frame, i.e., once \(\mathbf{v}_{\mathit{VU}}\) is expressed in the navigation frame, the measurement innovation is given as;
\[\mathbf{z}_{\mathit{VU}}=[\mathbf{v}_{\mathit{VU}}-\mathbf{v}_{eb}^{n}] \tag{21}\]
where \(\mathbf{v}_{\mathit{VU}}\) represents the velocity measurement obtained from the odometry source. Updating the error-state, covariance, and cross-correlated terms follow the same structure through Equations 12-17. Jacobian of the measurement model, \(\mathbf{H}_{\mathit{VU}}\), can be given as;
\[\mathbf{H}_{\mathit{VU}}=\begin{bmatrix}\mathbf{0}_{3x3}&-\mathbf{I}_{3x3}& \mathbf{0}_{3x3}&\mathbf{0}_{3x3}&\mathbf{0}_{3x3}\end{bmatrix} \tag{22}\]
and the measurement noise covariance matrix, \(\mathbf{R}_{\mathit{VU}}\), can be constructed by the variances of the velocity measurement noise.
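As a usage example of the generic routine sketched above, the odometry velocity update of Equations (21)-(22) reduces to the following assembly step (again under assumed names); the returned triple is then passed to the generic private update.

```python
import numpy as np

def odometry_velocity_measurement(v_vu, v_ebn, sigma_vel):
    """Build the odometry velocity innovation, Jacobian, and noise covariance (Eqs. 21-22)."""
    z = v_vu - v_ebn                        # Eq. (21), both vectors in the navigation frame
    Z, I = np.zeros((3, 3)), np.eye(3)
    H = np.block([[Z, -I, Z, Z, Z]])        # Eq. (22)
    R = np.diag([sigma_vel**2] * 3)
    return z, H, R
```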
#### 3.2.3 Zero Velocity Update
In this work, ZU is applied using a combination of linear and angular velocity. To properly use this update, stationary conditions must be detected accurately; otherwise, incorrect updates are applied to the rover's state, leading to poor localization performance (Ramanandan et al., 2012). To detect stationary conditions, we use two different indicators: the velocity command provided by the autonomous controller that determines the movement of the robot, and the wheel encoder measurements. To declare stationary conditions, we assume that the robots do not slip under these conditions and that they do not perform any turning maneuver when they stop. The measurement innovation for ZU is given as
\[\mathbf{z}_{\mathit{ZU}}=[-\mathbf{\omega}_{\mathit{IMU}},-\mathbf{v}_{ \mathit{eb}}^{n}] \tag{23}\]
where \(\mathbf{\omega}_{\mathit{IMU}}\) represents gyro-rate measurements. Similarly, updating the error-state, covariance, and cross-correlated terms follow the same structure through Equations 12-17. The Jacobian of the ZU measurement model is described as;
\[\mathbf{H}_{\mathit{ZU}}=\begin{bmatrix}\mathbf{0}_{3x3}&\mathbf{0}_{3x3}& \mathbf{0}_{3x3}&\mathbf{0}_{3x3}&-\mathbf{I}_{3x3}\\ \mathbf{0}_{3x3}&-\mathbf{I}_{3x3}&\mathbf{0}_{3x3}&\mathbf{0}_{3x3}&\mathbf{ 0}_{3x3}\end{bmatrix} \tag{24}\]
The measurement noise covariance matrix for ZU, \(\mathbf{R}_{\mathit{ZU}}\), is a 6-by-6 matrix and can be constructed similarly to the noise covariance matrices of the other private updates.
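Under the same generic update structure, the zero-velocity pseudo-measurement of Equations (23)-(24) can be assembled as in the sketch below; the function and variable names are assumptions for illustration.

```python
import numpy as np

def zero_velocity_measurement(omega_imu, v_ebn, sigma_gyro, sigma_vel):
    """Build the ZU innovation, Jacobian, and noise covariance (Eqs. 23-24)."""
    z = np.concatenate([-omega_imu, -v_ebn])                 # Eq. (23)
    Z, I = np.zeros((3, 3)), np.eye(3)
    H = np.block([[Z, Z, Z, Z, -I],                          # gyro-rate rows, Eq. (24)
                  [Z, -I, Z, Z, Z]])                         # velocity rows
    R = np.diag([sigma_gyro**2] * 3 + [sigma_vel**2] * 3)    # 6-by-6 noise covariance
    return z, H, R
```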
### Relative Update
Relative updates are performed with pairwise ranging and are assumed to occur only when the two robots are within a specific separation distance. This relative update model is based on the work in Luft et al. (2016). In our work, UWB range measurements are used to trigger the relative updates. Whenever a relative update is performed, one robot detects the other robot present in the update and receives the state, covariance, and cross-correlated terms from it. The decentralized architecture of the error-state EKF allows the robots to decouple their individual states, covariances, and cross-correlated terms, and to construct a combined covariance matrix as
\[\mathbf{P}_{global}^{-}=\begin{bmatrix}\mathbf{P}_{A}^{+}&\mathbf{\Sigma}_{AB}\\ \mathbf{\Sigma}_{AB}^{T}&\mathbf{P}_{B}^{+}\end{bmatrix},\quad\mathbf{\Sigma}_{AB}=\mathbf{\sigma}_{AB}^{+}\,{\mathbf{\sigma}_{BA}^{+}}^{T},\quad\mathbf{\Sigma}_{AB}^{T}=\mathbf{\Sigma}_{BA} \tag{25}\]
where \(\mathbf{P}_{global}\) represents the combined covariance matrix, \(\mathbf{P}_{A}\) represents the individual covariance matrix for robot \(A\), \(\mathbf{P}_{B}\) represents the individual covariance matrix for robot \(B\), \(\Sigma_{AB}\) represents the combined correlated values from robot \(A\) and \(B\).
The Jacobian of the ranging measurement model is described as
\[\mathbf{H}_{\mathit{range}}=\begin{bmatrix}\mathbf{0}_{1x6},\left[\frac{( \mathbf{r}_{A}-\mathbf{r}_{B})}{h_{range}}\right],\mathbf{0}_{1x12},\left[ \frac{(\mathbf{r}_{B}-\mathbf{r}_{A})}{h_{range}}\right],\mathbf{0}_{1x6} \end{bmatrix}^{T} \tag{26}\]
where \(\mathbf{r}_{A}\) and \(\mathbf{r}_{B}\) represent the 3D position values for robot \(A\) and \(B\), respectively, such that \(\mathbf{r}_{A}=[x_{A},y_{A},z_{A}]\) and \(\mathbf{r}_{B}=[x_{B},y_{B},z_{B}]\). The non-linear measurement model is described as
\[h_{range}=\sqrt{[x_{B}-x_{A}]^{2}+[y_{B}-y_{A}]^{2}+[z_{B}-z_{A}]^{2}}. \tag{27}\]
The Kalman gain for the relative update is calculated as
\[\mathbf{K}_{\mathit{range}}=\mathbf{P}_{global}^{-}\mathbf{H}_{\mathit{range }}^{T}(\mathbf{H}_{\mathit{range}}\mathbf{P}_{global}^{-}\mathbf{H}_{ \mathit{range}}^{T}+\mathbf{R}_{\mathit{range}})^{-1}. \tag{28}\]
Using the calculated Kalman gain, the error state is updated as
\[\begin{bmatrix}\mathbf{x}_{\mathit{Aer}}^{+}\\ \mathbf{x}_{\mathit{Ber}}^{+}\end{bmatrix}=\begin{bmatrix}\mathbf{x}_{ \mathit{Aer}}^{-}\\ \mathbf{x}_{\mathit{Ber}}^{-}\end{bmatrix}+\mathbf{K}_{\mathit{range}}\left(z_ {\mathit{UWB}}-\mathbf{H}_{\mathit{range}}\begin{bmatrix}\mathbf{x}_{ \mathit{Aer}}^{-}\\ \mathbf{x}_{\mathit{Ber}}^{-}\end{bmatrix}\right) \tag{29}\]
where \(\mathbf{x}_{Aerr}\) and \(\mathbf{x}_{Berr}\) represent the error state of the robots performing relative update and \(z_{UWB}\) represents the ranging measurement. Also, the covariance is updated as
\[\mathbf{P}_{global}^{+}=\left(\mathbf{I}-\mathbf{K}_{range}\mathbf{H}_{range} \right)\mathbf{P}_{global}^{-}. \tag{30}\]
Once the relative update is completed, the updated error-state and covariance estimates are sent back to the robot that was detected. The decomposition of the updated correlated values is selected such that the robot detecting and performing the update keeps the updated correlated term, \(\sigma_{AB}\leftarrow\mathbf{\Sigma}_{AB}\), while the robot being detected, i.e., the robot receiving the update, gets the identity matrix, \(\sigma_{BA}\leftarrow\mathbf{I}\), following the same decomposition used in Luft et al. (2016).
Lastly, the cross-correlated terms of the robots performing the update with the robots not present in the update are updated with Equation 31.
\[\mathbf{\sigma}_{AC}^{+}=\mathbf{P}_{A}^{-}\mathbf{P}_{A}^{+-1}\mathbf{\sigma }_{AC}^{-} \tag{31}\]
where \(\mathbf{\sigma}_{AC}\) represents the cross-correlated terms with the rest of the robots not present in the update.
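Putting Equations (25)-(30) and the decomposition step together, a NumPy sketch of the pairwise UWB relative update between robots A and B could read as follows; the 15-state ordering, the index arithmetic, and all names are assumptions made for illustration, and the innovation is formed as written in Equation (29).

```python
import numpy as np

def relative_update(xA, PA, xB, PB, sigma_AB, sigma_BA, rA, rB, z_uwb, sigma_uwb):
    """Pairwise UWB range update between robots A and B (Eqs. 25-30)."""
    Sigma_AB = sigma_AB @ sigma_BA.T                              # Eq. (25)
    P_global = np.block([[PA, Sigma_AB], [Sigma_AB.T, PB]])
    h = np.linalg.norm(rB - rA)                                   # Eq. (27)
    H = np.zeros((1, 30))
    H[0, 6:9] = (rA - rB) / h                                     # position block of robot A, Eq. (26)
    H[0, 21:24] = (rB - rA) / h                                   # position block of robot B
    R = np.array([[sigma_uwb**2]])
    K = P_global @ H.T @ np.linalg.inv(H @ P_global @ H.T + R)    # Eq. (28)
    x = np.concatenate([xA, xB])
    x = x + (K @ (np.array([z_uwb]) - H @ x)).ravel()             # Eq. (29)
    P_global = (np.eye(30) - K @ H) @ P_global                    # Eq. (30)
    # Decompose back into individual terms: A keeps the cross term, B gets identity (Sec. 3.3).
    PA_new, PB_new = P_global[:15, :15], P_global[15:, 15:]
    sigma_AB_new, sigma_BA_new = P_global[:15, 15:], np.eye(15)
    return x[:15], PA_new, x[15:], PB_new, sigma_AB_new, sigma_BA_new
```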
## 4 Experiments
In order to evaluate the algorithm performance, two different sets of experiments are performed: one in a simulation environment, and another in a real-world environment using wheeled robots. The simulation environment is built in Gazebo/ROS and the real-world experiment is performed in a video motion capture facility with instrumented i-Robot Create platforms. In both experiments, the motivation is to observe the effectiveness of using ZU to reinstate the localization of lost robots. In the following subsections, both experimental settings, the robots, and the sensors used are explained in detail.
### Simulation Experiment Setup
In the simulation tests, the motivation is reinstating the localization of lost robots in a subterranean environment. It is assumed that the localization estimates of two robots have become unreliable and that these robots stop moving. Meanwhile, another robot is deployed on a mission to restore the localization estimates of the lost robots. The scenario starts with a robot, named Robot 2, entering a cave just after receiving reliable GNSS signals. The other robots (i.e., Robot 0 and Robot 1) are unable to use either visual perception or GNSS for localization purposes due to the poor lighting conditions of the cave and the blockage of the GNSS signals. Robot 2 traverses a straight line on the \(x\)-axis using its IMU sensor and wheel encoders to estimate its position while leveraging ZU under stationary conditions. Robots can perform pairwise relative updates whenever they are within a predetermined proximity threshold of each other. This motivating case is illustrated in Fig. 2, where the grey area in the bird-eye view (right sub-figure) represents the GNSS-denied subterranean environment.
The tests in the simulation environment include three simulated TurtleBot3 robots (Amsters & Slaets, 2020) with sensor models for wheel encoders, IMU, GNSS, and UWB. Additive white Gaussian noise is added to the default outputs of the provided sensors to simulate the effect of random processes. The rates and noise characteristics of the sensors used are provided in Table 1.
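A minimal sketch of how such noise corruption can be applied, using the 1\(\sigma\) values of Table 1, is shown below; the helper is an assumption for illustration, not part of the released simulation code.

```python
import numpy as np

rng = np.random.default_rng(0)

def add_awgn(value, sigma):
    """Corrupt an ideal sensor output with zero-mean additive white Gaussian noise."""
    return value + rng.normal(0.0, sigma, size=np.shape(value))

accel_meas = add_awgn(np.array([0.0, 0.0, 9.81]), 0.001)   # m/s^2, IMU accelerometer (Table 1)
range_meas = add_awgn(1.8, 0.05)                           # m, UWB range (Table 1)
```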
In the simulation tests, communication and ranging measurements are limited to distances shorter than 2.5 meters to simulate an obstructed line-of-sight capability. All robots move with a velocity of 0.2 m/s, and the simulation environment is assumed to be a flat region, although the algorithm is able to provide 3D position estimation. Robot 2 is able to stop autonomously to perform
| Sensor | Measurement | Noise 1\(\sigma\) | Rate |
| --- | --- | --- | --- |
| IMU | Acceleration | 0.001 \(m/s^{2}\) | 50 Hz |
| IMU | Gyro Rate | 0.001 \(rad/s\) | 50 Hz |
| Encoder | Velocity | 0.01 \(m/s\) | 30 Hz |
| GNSS | Position | 0.1 \(m\) | 1 Hz |
| GNSS | Velocity | 0.02 \(m/s\) | 1 Hz |
| UWB | Range | 0.05 \(m\) | 1 Hz |

Table 1: TurtleBot3 Sensor Parameters
ZU whenever any of the diagonal elements of the position error covariance reaches the predetermined threshold of 5 m\({}^{2}\). Given the sensor parameters and scenario settings, this threshold is set based on engineering judgment with respect to the localization reliability and traversal rate. For example, if the robot is not able to reduce the position error covariance below the threshold, then the robot starts performing periodic ZU to keep the localization performance as reliable as possible. In this respect, the decision of which ZU scheme to utilize depends on several factors, as elaborated in our previous work (Gutierrez et al., 2022). Using an autonomous stopping heuristic is advantageous over periodic stopping for increasing the traversal rate, as it allows robots to make use of ZU only when the estimated position error covariance exceeds a certain threshold. This is particularly useful in mitigating the effects of IMU drift while minimizing interruptions to the robot's mission. However, the robot's ability to estimate its position error covariance reliably may diminish due to prolonged external aiding outages, sensor errors, or environmental conditions. In such cases, performing ZU at fixed time intervals irrespective of the estimated error may provide consistent and regular correction of the IMU drift at the cost of a decreased traversal rate. For these reasons, we opted to use the autonomous stopping criterion first, and then switched to the periodic stopping criterion. This allows us to take advantage of the benefits of both schemes while minimizing the drawbacks.
The stopping condition is verified through the robot's wheel encoders and velocity command output after receiving a stop command. Then, after verifying that the robot has completely stopped, the robot waits 0.5 seconds to utilize zero updates. This stopping duration is conservatively selected based on engineering judgment and considerations of reliable ROS message delivery during the stopping phase. After the waiting time, the robot resumes moving. The actions the robot takes, from verifying its stopping condition to resuming movement, are dictated by a Boolean-based state machine framework. The framework models the behavior of a system using a finite number of states, each associated with a set of actions, and the system transitions between these states based on certain conditions.
In the context of the rover stopping mechanism, it is important to mention potential challenges such as false positives and negatives. If a robot mistakenly assumes it has stopped (false positive), this could introduce errors, while if it fails to recognize a stop (false negative), potential drift correction opportunities could be missed, leading to less accurate localization. To handle these challenges, the state machine framework uses Boolean indicator flags in ROS to determine when the robot should stop and start moving again based on the feedback from the wheel encoders and velocity command output. The threshold parameters for autonomous ZU, the duration of movement during periodic ZU, and the stopping duration can be easily customized in the provided code based on the mission scenario and robot constraints.
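A schematic sketch of such a state machine, combining the covariance-threshold trigger, the encoder/command-based stop verification, and the 0.5 s stationary wait, is shown below; the threshold values mirror the text, while the class structure and names are illustrative assumptions rather than the actual ROS implementation.

```python
import time
import numpy as np

class ZuStateMachine:
    """Boolean state machine: MOVING -> STOPPING -> STATIONARY (apply ZU) -> MOVING."""

    def __init__(self, cov_threshold=5.0, wait_s=0.5):
        self.cov_threshold = cov_threshold   # m^2, diagonal position-covariance trigger
        self.wait_s = wait_s                 # stationary wait before applying ZU
        self.state = "MOVING"

    def should_stop(self, P):
        """Trigger a stop when any diagonal position-covariance element exceeds the threshold."""
        return np.any(np.diag(P)[6:9] > self.cov_threshold)

    def step(self, P, wheel_speed, cmd_vel):
        if self.state == "MOVING" and self.should_stop(P):
            self.state = "STOPPING"          # a stop command is issued upstream
        elif self.state == "STOPPING" and wheel_speed == 0.0 and cmd_vel == 0.0:
            self.state = "STATIONARY"        # stop verified by encoders and command output
            time.sleep(self.wait_s)
            self.apply_zu()
            self.state = "MOVING"            # resume motion
        return self.state

    def apply_zu(self):
        pass  # hook: build the ZU measurement and call the private update
```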
Figure 2: Simulation environment with a transparent view. Robot 0, 1 and 2 initial locations in the simulation are shown with black circles.
### Real-World Experiment Setup
In the real-world experiments, with a motivation similar to that of the simulation experiments, the aim is restoring the localization of the lost robots. In contrast to the simulation setup, the lost robots do not remain stationary before their first relative update. The real-world experiments are performed in a 3x3 meter room equipped with a video motion capture system. Also, VICON is used to simulate a GNSS update in the indoor test setting, for initializing the Robot 2 pose and generating the truth solution for all robots. It is assumed that the robots cannot use visual odometry due to poor lighting conditions and the lack of sufficient features in the environment; however, the robots can use their IMU, wheel encoders, and UWB sensors.
In these experiments, similar to the cave simulation experiments, it is assumed that Robot 2 starts moving with a known and accurate localization whereas the Robot 0 and 1 are lost during their patrols in the area. Robot 2 patrols the \(x\)-axis, Robot 0 makes a diagonal movement across the room, and Robot 1 patrols the \(x\)-axis with an offset in \(y\)-axis to make sure that Robot 2 or Robot 1 can only perform relative update with Robot 0 (i.e., Robot 2 and Robot 1 cannot detect or communicate with each other). With this setup, the localization performance of Robot 1 depends on the localization performance of Robot 0, and the performance change on Robot 2 using or not using ZU will affect the entire system. Robots have a velocity of 0.2 m/s and the detection limit for robots with range is set to 1 meter due to dimension restrictions. The test environment is given in Fig. 3.
The i-Robot Create robots are used in the real-world experiments. To share information between robots and record the VICON solution data, the robots and the VICON system are connected to the same network via Wi-Fi. The specifications and noise characteristics of the IMU, ADIS16405, are given in Table 2. The measurement specifications of the UWB sensor, DWM1001-DEV, are given as \(\pm\)15 cm and \(\pm\)30 cm for 2D and 3D noise, respectively. In these experiments, we followed the same autonomous stopping heuristic previously described for Robot 2 in the simulation experiments. In this case, the stopping threshold is predetermined as 2 m\({}^{2}\), due to dimension restrictions, to observe the effectiveness of ZU before the first relative update between robots. Note that, while the environment used in our experiments was rigid, flat, and benign, with well-behaved dynamics on the i-Robot Create and slow velocity inputs, we acknowledge that in harsher environments or with excessive slippage, additional indicators may be needed to ensure the robot stops completely.
| IMU | Gyroscope Range (deg/sec) | Gyroscope In-run bias (deg/hr) | ARW\({}^{*}\) (deg/\(\sqrt{hr}\)) | Accelerometer Range (g) | Accelerometer In-run bias (mg) | VRW\({}^{*}\) (m/s/\(\sqrt{hr}\)) | Dimensions (mm) | Weight (g) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| ADIS16405BMIZ | \(\pm\)350 | 25.2 | 2.0 | \(\pm\)18 | 0.2 | 0.2 | 32 x 23 x 23 | 16 |

\({}^{*}\) ARW: Angular Random Walk, VRW: Velocity Random Walk

Table 2: Inertial Measurement Unit Specifications
## 5 Evaluation
### Simulation Results
For the simulation experiments, we first analyze the localization performance of Robot 2, with and without ZU being leveraged during its traversal. The position errors for both cases and the average improvement for each metric are shown in Table 3. Applying ZU can bound the velocity error, calibrate the IMU sensor biases (see Fig. 4), and limit the rate of INS localization error growth (Groves, 2013).
In environments where robots cannot use their visual sensors or rely on GNSS signals, using ZU provides a significant improvement and keeps the localization estimate reliable in all axes, as shown in Table 3.
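As a hedged illustration of how ZU enters the filter, the sketch below applies a standard EKF pseudo-measurement update that observes the velocity states as zero while the robot is stationary. The state layout (velocity at indices 3-5) and the noise value are assumptions for the example, not the exact implementation used in this paper.

```python
import numpy as np

def zero_velocity_update(x, P, vel_idx=(3, 4, 5), sigma_v=0.01):
    """Zero-velocity pseudo-measurement: while stationary, the velocity
    states are measured as zero, which bounds velocity error and lets the
    filter re-estimate IMU biases through their correlations with velocity."""
    n = x.size
    m = len(vel_idx)
    H = np.zeros((m, n))                      # selects the velocity states
    H[np.arange(m), list(vel_idx)] = 1.0
    R = (sigma_v ** 2) * np.eye(m)
    y = np.zeros(m) - H @ x                   # innovation (measurement z = 0)
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)            # Kalman gain
    x_new = x + K @ y
    P_new = (np.eye(n) - K @ H) @ P
    return x_new, P_new
```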
Using ZU in Robot 2 allows for more positioning correction for Robot 0. Once the relative update between Robot 2 and Robot 0 is done, Robot 0 traverses along the \(y\)-axis toward Robot 1. After the relative update between Robot 0 and Robot 1, Robot 1 gains a better localization estimate and starts moving along the \(x\)-axis. Note that only Robot 2 performs ZU; Robot 1 and Robot 0 do not leverage ZU. However, Robot 0 corrects more error when Robot 2 uses ZU, which also benefits Robot 1 when it performs a relative update with Robot 0. In other words, the benefit of utilizing ZU in Robot 2 is carried by Robot 0 to Robot 1, even though Robot 2 and Robot 1 never share information.
To better visualize the effects of ZU, the position estimates of the robots in the ENU frame, together with the \(3\sigma\) error covariance values and the truth, are given in Fig. 6. It can be observed that the error for Robot 2 does not change significantly even when performing relative updates with lost robots. This is because the filter properly weighs which robot's localization to trust more based on the scale of the covariances. Leveraging ZU increases the effectiveness of the relative update between robots. For example, since Robot 0 starts the test with a wrong estimate and high covariance, its position error and covariance can only be reduced
\begin{table}
Relative update performance of Robot 1 (initial error: 31.62) and Robot 0 (initial error: 14.14) in simulation, with and without ZU used in Robot 2.
\end{table}
when this robot is able to perform a relative update with Robot 2. This also affects Robot 1's localization performance. The dominant errors in the vertical direction can be primarily attributed to the fact that the robots estimate their positioning based only on IMU and WO measurements. While WO measurements provide information about the robot's translational motion on the \(x\) and \(y\) axes, they do not directly account for vertical motion (\(z\)-axis). The robots therefore rely on IMU measurements to estimate their position in the vertical direction. As a result, the localization system is inherently more dependent on IMU measurements for the estimation of the vertical position, making it more susceptible to errors due to sensor noise, biases, and inaccuracies in the gravity model. Other factors that can contribute to the vertical estimation error include the alignment of the gravity vector with the vertical axis, which can lead to increased drift in vertical measurements. This drift may be more pronounced in the vertical direction than in the horizontal ones due to the inherent sensitivity of vertical motion to inaccuracies in the gravity model and sensor biases. Imperfect gravity modeling can also contribute to the observed vertical error; using a more accurate gravity model and a simulated IMU sensor model could help reduce it.
### Real-World Experiment Results
To further verify and evaluate the algorithm, we analyze the localization performance of Robot 2 in an indoor environment with real robots and real sensors for the cases where ZU is or is not leveraged during its traversal. In Table 5, the 3D position error for both cases is shown, together with the average improvement for each metric. Applying ZU can bound the velocity error, calibrate IMU sensor biases, and limit the rate of INS localization error growth. Even in a leveled, low-slip environment and in short-distance scenarios such as ours, the improvement is a clear indication of the effectiveness of ZU for keeping the localization estimate of a single robot reliable. This can also be observed in Table 5.
As in the simulation results, the real-world experiments show a similar trend in correcting the errors: the benefit of using ZU on one robot improves the overall localization performance of the multi-robot system. In Fig. 7, the same color-coded representation is used as in the simulation experiments (i.e., red dots represent the estimation error without using ZU while
Figure 6: East, North, and Up position estimation performance comparison for the cases where ZU is used versus not used by Robot 2 in the simulation experiment (Test 1). The red dots represent the position estimation when ZU is not used, and the blue dots represent the position estimation when ZU is used. The green shaded areas show the approximate duration when Robot 2 performs a relative update with Robot 0, and the orange shaded area shows when Robot 0 performs a relative update with Robot 1.
the blue dots represent the estimation error when ZU is used in Robot 2). The experiment starts with a good initial estimate and a smaller covariance error. It can be observed in Fig. 7 that the horizontal error for Robot 2 stays stable even when performing relative updates with lost robots. This is because the covariance of the lost robots is much higher than the covariance of Robot 2. The benefit of ZU can be clearly observed, since the robots are able to exchange better information whenever one of them is able to perform ZU.
The position estimations of the robots in ENU frame with the \(3\sigma\) error covariance values and the truth are given in Fig. 8 to visualize the covariance changes during relative updates. For example, since Robot 0 starts the test with a wrong estimation and
\begin{table}
\begin{tabular}{l|c c|c c|c c|c c} \hline \hline
 & \multicolumn{4}{c|}{Robot 1 Initial Error: 36.05 m} & \multicolumn{4}{c}{Robot 0 Initial Error: 14.14 m} \\ \hline
 & \multicolumn{2}{c|}{w/o ZU in Robot 2} & \multicolumn{2}{c|}{w/ ZU in Robot 2} & \multicolumn{2}{c|}{w/o ZU in Robot 2} & \multicolumn{2}{c}{w/ ZU in Robot 2} \\
Test & Correction (m) & Improvement (\%) & Correction (m) & Improvement (\%) & Correction (m) & Improvement (\%) & Correction (m) & Improvement (\%) \\ \hline
T1 & 33.29 & 92.32 \% & 35.85 & **99.42**\% & 11.12 & 78.64 \% & 13.70 & **96.88**\% \\
T2 & 33.23 & **92.16**\% & 32.79 & 90.94 \% & 11.73 & **82.94**\% & 10.66 & 75.40 \% \\
T3 & 29.88 & **82.87**\% & 32.72 & **90.76**\% & 7.10 & 50.23 \% & 11.06 & **78.20**\% \\
T4 & 21.90 & 60.73 \% & 31.70 & **87.91**\% & 10.33 & **73.08**\% & 10.33 & 73.04 \% \\
T5 & 30.05 & 83.33 \% & 33.35 & **92.49**\% & 7.80 & 55.18 \% & 9.97 & **70.49**\% \\
T6 & 32.11 & 89.07 \% & 33.16 & **91.96**\% & 9.14 & 64.67 \% & 10.36 & **73.24**\% \\
T7 & 33.55 & **93.05**\% & 32.97 & 91.43 \% & 11.56 & **81.76**\% & 10.28 & 72.69 \% \\
T8 & 32.66 & 90.58 \% & 34.14 & **94.70**\% & 9.88 & 69.87 \% & 11.42 & **80.75**\% \\
T9 & 30.46 & 84.49 \% & 33.67 & **93.38**\% & 6.93 & 49.02 \% & 10.79 & **76.33**\% \\
T10 & 31.69 & 87.88 \% & 34.12 & **94.63**\% & 7.90 & 55.84 \% & 12.97 & **91.69**\% \\ \hline \hline \end{tabular}
\end{table}
Table 6: Robot 1 and Robot 0 relative update performance analysis in real-world experiment
\begin{table}
\begin{tabular}{l c c c} \hline \hline
\textbf{Robot 2} & w/o ZU & w/ ZU & Improvement\({}^{*}\) \\ \hline
\(\text{Max}_{E}\) & 2.22 & 1.60 & 27.76 \% \\
\(\text{Max}_{N}\) & 2.24 & 1.86 & 16.98 \% \\
\(\text{Max}_{U}\) & 58.21 & 2.11 & 96.38 \% \\
\(\text{RMSE}_{E}\) & 1.26 & 0.94 & 25.23 \% \\
\(\text{RMSE}_{N}\) & 1.26 & 1.03 & 18.19 \% \\
\(\text{RMSE}_{U}\) & 23.57 & 0.95 & 95.96 \% \\ \hline \hline
\end{tabular}
\begin{tabular}{l c c c c c c c} \hline \hline
 & \multicolumn{3}{c}{w/o ZU} & \multicolumn{3}{c}{w/ ZU} & \\
\textbf{Robot 2} & Best & Worst & Average\({}^{*}\) & Best & Worst & Average\({}^{*}\) & Improvement\({}^{*}\) \\ \hline
Median & 6.27 & 20.79 & 13.06 & 0.61 & 2.39 & 1.57 & 87.98 \% \\
Max & 13.93 & 164.58 & 58.39 & 1.56 & 3.61 & 3.05 & 94.77 \% \\
STD & 4.37 & 47.37 & 15.83 & 0.34 & 0.91 & 0.85 & 94.66 \% \\
RMSE & 7.59 & 63.89 & 23.75 & 0.67 & 2.27 & 1.76 & 92.59 \% \\ \hline \hline
\end{tabular}
* The values are the average of 10 tests, improvement is based on the average values.
\end{table}
Table 5: Localization estimation comparison of Robot 2 in real world experiments
Figure 7: Horizontal error estimation performance comparison for the cases where ZU is used versus not used by Robot 2 in the real-world experiment (Test 6). The red dots represent the estimation error when ZU is not used, and the blue dots represent the estimation error when ZU is used. The green shaded areas show the approximate duration when Robot 2 performs a relative update with Robot 0, and the orange shaded area shows when Robot 0 performs a relative update with Robot 1.
high covariance, the position error and the covariance can only be reduced when this robot is able to perform a relative update with Robot 2. This also affects Robot 1's localization performance. It can also be seen that the position error in the \(U\)-axis is largely reduced when Robot 2 uses ZU, which leads to a significant amount of correction in the same axis after the relative update with Robot 0. Since the robots used in this study can only constrain the velocity in the \(E\) and \(N\) axes by using wheel encoder measurements, the position error in the \(U\)-axis can only be reduced by either ZU or a relative update.
## 6 Conclusion
In this paper, we have proposed an error-state DEKF algorithm for cooperative localization of mobile robots in GNSS-denied/degraded environments using ZU, IMU, UWB, and odometry measurements. This work significantly expands upon our previous work in Gutierrez et al. (2022), generalizing the utilization of ZU to improve the localization performance in a DEKF-based cooperative localization system. The proposed algorithm was implemented and tested with real hardware in a video motion capture facility and a ROS-based simulation environment for unmanned ground vehicles (UGV), aiming to re-localize the lost robots in the system. The main contributions of this work are: (1) a novel method to leverage ZU in a decentralized cooperative localization framework, (2) the integration of odometry velocity measurements into the DEKF algorithm, (3) the use of ZU for reinstating lost robots in a multi-robot system, and (4) the real-world validation of the algorithm with multiple robots. Analyses and results demonstrate that using ZU in a cooperative DEKF algorithm greatly benefits the localization estimation performance, making it a potential failsafe when other methods fail or become unreliable, for example in warehouse stocking, factory automation, and retail spaces.
While ZU provides significant benefits to localization, it is worth noting potential misuse scenarios. For instance, misuse could occur if ZU is overly relied upon in environments where determining stationary conditions may be challenging. Overuse
Figure 8: East, North, and Up (ENU) position estimation performance comparison for the cases where ZU is used versus not used by Robot 2 in the real-world experiment (Test 6). The red dots represent the position estimation when ZU is not used, and the blue dots represent the position estimation when ZU is used. The green shaded areas show the approximate duration when Robot 2 performs a relative update with Robot 0, and the orange shaded area shows when Robot 0 performs a relative update with Robot 1.
of ZU could also potentially affect the robot's traversability rate. Additionally, the necessity and impact of ZU depend on the quality and availability of external aids. In high-quality systems with uninterrupted positioning data (e.g., high-end GNSS, lidar SLAM), the reliance on ZU may decrease. However, in environments with unreliable or absent external aids, using ZU becomes more effective, as highlighted in this study featuring a system equipped with an IMU, wheel encoders, and UWB.
For future work, we plan to apply different constraints (e.g., non-holonomicity, hovering, landing) for other locomotion types which will allow for observing the performance of the algorithm in various situations. Additionally, we plan to incorporate obstacle avoidance strategies to ensure the safety of the robots and prevent any potential collisions, especially in cases where the position uncertainty is larger than the UWB detection range. Moreover, exploring the use of other types of sensors, such as cameras or lidars, to further enhance localization performance, and applying the proposed method to different types of robots, such as aerial or underwater vehicles, could be investigated. Finally, incorporating adaptive stopping strategies (e.g., determining optimal frequency and duration of stopping) for robots to perform ZU and using machine learning techniques to optimize the cooperative localization performance are potential avenues for further research.
## Acknowledgments
This work was supported in part through a subcontract with Kinnami Software Corporation under the STTR project FA864921P1634. The authors thank Dr. Yu Gu for allowing us to use the instrumented iRobot Create platforms, and Jonas Bredu and Shounak Das for assisting with the tests.
## Conflict of Interest
The authors declare no potential conflict of interests.
|
2307.00156 | Convex Optimization in Legged Robots | Convex optimization is crucial in controlling legged robots, where stability
and optimal control are vital. Many control problems can be formulated as
convex optimization problems, with a convex cost function and constraints
capturing system dynamics. Our review focuses on active balancing problems and
presents a general framework for formulating them as second-order cone
programming (SOCP) for robustness and efficiency with existing interior point
algorithms. We then discuss some prior work around the Zero Moment Point
stability criterion, Linear Quadratic Regulator Control, and then the feedback
model predictive control (MPC) approach to improve prediction accuracy and
reduce computational costs. Finally, these techniques are applied to stabilize
the robot for jumping and landing tasks. Further research in convex
optimization of legged robots can have a significant societal impact. It can
lead to improved gait planning and active balancing which enhances their
ability to navigate complex environments, assist in search and rescue
operations and perform tasks in hazardous environments. These advancements have
the potential to revolutionize industries and help humans in daily life. | Prathamesh Saraf, Mustafa Shaikh, Myron Phan | 2023-06-30T22:22:27Z | http://arxiv.org/abs/2307.00156v1 | # Convex Optimization in Legged Robots
###### Abstract
Convex optimization is crucial in controlling legged robots, where stability and optimal control are vital. Many control problems can be formulated as convex optimization problems, with a convex cost function and constraints capturing system dynamics. Our review focuses on active balancing problems and presents a general framework for formulating them as second-order cone programming (SOCP) for robustness and efficiency with existing interior point algorithms. We then discuss some prior work around the Zero Moment Point Stability criterion, Linear Quadratic Regulator Control, and then the feedback model predictive control (MPC) approach to improve prediction accuracy and reduce computational costs. Finally, these techniques are applied to stabilize the robot for jumping and landing tasks.
Further research in convex optimization of legged robots can have a significant societal impact. It can lead to improved gait planning and active balancing which enhances their ability to navigate complex environments, assist in search and rescue operations and perform tasks in hazardous environments. These advancements have the potential to revolutionize industries and help humans in daily life.
convex optimization, legged robots, model predictive control, stability
## I Introduction
Control problems can be formulated as optimization problems by defining an objective function that quantifies the desired behavior of the system, and a set of constraints that capture the physical limitations of the system and any other relevant constraints. The objective function can be some sort of performance measure, such as minimizing energy consumption or maintaining the stability of the system. The constraints include the dynamics of the system, input/output constraints, and state constraints. Once the control problem is formulated as an optimization problem, the goal is to find the inputs or control actions that optimize the objective function while satisfying the constraints. Well-known optimization techniques can be used to solve the resulting optimization problem and find the optimal inputs or control actions that achieve the desired behavior of the system. This approach enables the design of controllers that can handle complex and nonlinear systems and can provide improved performance and robustness compared to traditional control design approaches. Attaining efficient and robust solutions for the optimal control sequence is key in legged robots as the computation power available is limited, and trajectory generation is only one of many tasks the on-board computer must complete. Therefore, by formulating the control problems as convex problems, designers can take advantage of highly developed and efficient solvers to obtain the optimal solution.
## II Convex Optimization Applications
We start with some literature and initial works on convex optimization applications in legged robots, which lay the foundation for the most widely used optimization method, Model Predictive Control.
### _Zero Moment Point_
We first briefly introduce some background on the dynamics needed to understand legged robot motion. The Zero Moment Point (ZMP) criterion offers several advantages for analyzing the stability of legged robots. The ZMP is the point (projection) on the ground at which the net moment of the inertial and gravitational forces acting on the robot has no horizontal component. It is a simple and intuitive method that is easy to understand and apply. It enables real-time monitoring, allowing legged robots to adapt to changes in their environment and maintain balance in dynamic scenarios [13]. Additionally, the ZMP criterion can be combined with whole-body control optimization algorithms to compute optimal joint angles and control inputs, ensuring stability across different types of legged robots [14]. Figure 1 shows the support polygon cases and Figure 2 illustrates the ZMP criterion and friction cone constraints in simulation with respect to the
Fig. 1: Support Polygon for ZMP
ANYmal quadruped robot. The ZMP stability optimization equations are given below:
\[\min\left\|(\boldsymbol{x}_{\mathrm{zmp}}-\boldsymbol{x}_{\mathrm{cp}}),( \boldsymbol{y}_{\mathrm{zmp}}-\boldsymbol{y}_{\mathrm{cp}})\right\| \tag{1}\]
subject to
\[z\ddot{x}-(x-x_{zmp})\left(\ddot{z}+g\right)=0 \tag{2}\]
where,
\[\begin{split} X_{zmp}&=X_{com}-\frac{Z_{com}}{g}\ddot{X}_{com}\\ Y_{zmp}&=Y_{com}-\frac{Z_{com}}{g}\ddot{Y}_{com}\end{split} \tag{3}\]
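As an illustration of Eq. (3), the sketch below computes the ZMP from the center-of-mass state on flat ground and checks it against a rectangular approximation of the support polygon; the flat-ground assumption, the rectangle, and the function names are ours.

```python
import numpy as np

def zmp_from_com(com, com_acc, g=9.81):
    """ZMP on flat ground from CoM position and acceleration, Eq. (3):
    x_zmp = x_com - (z_com / g) * x_com_ddot, and likewise for y."""
    x, y, z = com
    ax, ay, _ = com_acc
    return np.array([x - (z / g) * ax, y - (z / g) * ay])

def zmp_inside_support(zmp, x_bounds, y_bounds):
    """Stability check: the ZMP must lie inside the support polygon,
    here approximated by an axis-aligned rectangle spanned by the feet."""
    return (x_bounds[0] <= zmp[0] <= x_bounds[1]
            and y_bounds[0] <= zmp[1] <= y_bounds[1])

# Example: CoM at 0.5 m height accelerating forward at 1 m/s^2.
zmp = zmp_from_com(com=(0.0, 0.0, 0.5), com_acc=(1.0, 0.0, 0.0))
print(zmp, zmp_inside_support(zmp, x_bounds=(-0.15, 0.25), y_bounds=(-0.1, 0.1)))
```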
However, the ZMP criterion has certain limitations. It assumes a simplified model of a rigid body with a fixed center of mass on a flat and rigid surface, which may not accurately represent real-world conditions. Legged robots often encounter uneven or slippery surfaces and may have flexible components that affect their balance. Moreover, in dynamic environments, such as crowded streets or rough terrain, the ZMP criterion may not account for unexpected obstacles or external forces that can impact stability. Additionally, the ZMP algorithm relies on precise measurements of position, velocity, dynamics, and control inputs, making it sensitive to sensor noise, communication delays, and other sources of uncertainty. While the ZMP criterion provides valuable insights into the stability of legged robots, its limitations in considering real-world complexities and uncertainties emphasize the need for more advanced algorithms and techniques to ensure robust and reliable stability control in legged robot applications. A more improved version of the ZMP problem gives rise to the Linear Quadratic Regulator control which is described in the next section.
### _Linear Quadratic Regulator_
LQR control, or Linear Quadratic Regulator control, is a popular control strategy used in legged robots to achieve stability. By optimizing a quadratic cost function, LQR control computes control inputs that minimize the deviation from desired states, ensuring stability and smooth movements. LQR control is known for its simplicity and ease of implementation, making it a widely adopted approach in legged robot research. The optimization cost function is given below:
\[\begin{split} J&=\int_{0}^{\infty}\left(x^{T}Qx+u^{ T}Ru\right)dt\\ A^{T}P+PA-PBR^{-1}B^{T}P+Q=0\\ K&=R^{-1}B^{T}P\\ \tau&=\tau_{0}-Kx.\end{split} \tag{4}\]
where J is the cost function to be optimized, Q is a positive semi-definite and R a positive definite weight matrix, and x is the state of the system with u being the system input. \(\tau\) represents the torque commands required for the leg joints. One advantage for legged robots is robustness, as LQR control can be designed to handle disturbances and uncertainties in the environment, allowing legged robots to maintain stability even in the presence of unexpected changes [11]. Additionally, LQR control is relatively easy to implement and tune, making it a popular choice for researchers and engineers working on stability control in legged robots [12].
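To make the pipeline of Eq. (4) concrete, the following sketch solves the continuous-time algebraic Riccati equation for a toy double-integrator model and forms the feedback gain; the model and the weights are illustrative placeholders, not a legged-robot model.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Toy double integrator: x = [position, velocity], u = commanded acceleration.
A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
B = np.array([[0.0],
              [1.0]])
Q = np.diag([10.0, 1.0])   # state weighting (positive semi-definite)
R = np.array([[0.1]])      # input weighting (positive definite)

# Solve A^T P + P A - P B R^-1 B^T P + Q = 0, then K = R^-1 B^T P.
P = solve_continuous_are(A, B, Q, R)
K = np.linalg.solve(R, B.T @ P)

x = np.array([0.5, 0.0])   # deviation from the reference state
tau_correction = -K @ x    # feedback term, as in tau = tau_0 - K x
print(K, tau_correction)
```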
However, LQR control is limited to linear systems [24][25], which can restrict its effectiveness in complex and nonlinear systems commonly encountered in legged robots. Another drawback is the requirement for an accurate model of the legged robot's dynamics for LQR control to be effective, which can be challenging to obtain. Furthermore, LQR control can be computationally intensive for large and complex-legged robots, leading to reduced real-time performance and limited adaptability to sudden changes in the environment.
Though LQR control offers advantages such as robustness and ease of implementation, its limitations in handling nonlinear systems, reliance on accurate models, and computational complexity highlight the need for more advanced control strategies for stability control in legged robots. We thus move on to the Model Predictive control technique which is widely used in current stability algorithms.
### _Model Predictive Control_
In the paper [2], Jared Di Carlo, et al. explain how the control problem for the MIT Cheetah 3 robot is formulated as an optimization problem using model-predictive control (MPC). The objective of the MPC is to minimize a cost function that captures the desired behavior of the robot, such as maintaining stability and achieving high-speed locomotion. The cost function is subject to a set of constraints that capture the dynamics of the robot, control constraints, and state constraints. The MPC problem is solved using a linear-quadratic program (LQP) formulation, which is a type of convex optimization problem that involves minimizing a quadratic objective function subject to linear constraints. The LQP formulation incorporates a simplified dynamic model of the MIT Cheetah 3, which includes a set of nonlinear constraints that capture the physical limitations of the robot. The authors use a convex approximation of these nonlinear constraints, which allows them to formulate the MPC problem as an LQP that can be efficiently solved using standard solvers.
Fig. 2: ZMP visualization on a quadruped robot
This approach enables the design of controllers that can achieve stable and dynamic locomotion in the MIT Cheetah 3 robot, even under challenging conditions such as uneven terrain and disturbances.
The objective of the LQP is to minimize a cost function that captures the desired behavior of the robot, such as maintaining stability and achieving high-speed locomotion. The cost function is subject to a set of linear constraints that capture the dynamics of the robot, control constraints, and state constraints. The authors demonstrate the effectiveness of the convex MPC approach by testing it on the MIT Cheetah 3 robot, which is a highly dynamic quadrupedal robot capable of high-speed locomotion and agile maneuvers. The results show that the convex MPC approach is able to achieve stable and dynamic locomotion in the robot, even under challenging conditions such as uneven terrain and disturbances.
In this paper [3], the authors formulate the MPC problem as a quadratic program, which is a convex optimization problem that can be efficiently solved using standard solvers. The objective of the quadratic program is to minimize a cost function subject to a set of constraints, which include the dynamics of the system, control constraints, and state constraints. The authors use a simplified linear model of the system dynamics for prediction, which allows them to formulate the MPC problem as a quadratic program that can be efficiently solved. However, the accuracy of the predictions is improved by incorporating feedback from a low-level controller, which corrects the predicted trajectories and reduces the computational complexity of the MPC. The use of convex optimization techniques, such as quadratic programming, enables the authors to solve the MPC problem efficiently and effectively and to achieve better control performance compared to traditional MPC approaches. The use of convex optimization in this paper is an important contribution to the field of legged robot control and demonstrates the power of optimization techniques in addressing challenging control problems in robotics. The MPC formulation goes as below:
\[\min_{\mathbf{x},\mathbf{u}}\ \sum_{i=0}^{k-1}\left\|\mathbf{x}_{i+1}-\mathbf{x}_{i+1,\text{ref}}\right\|_{\mathbf{Q}_{i}}+\left\|\mathbf{u}_{i}\right\|_{\mathbf{R}_{i}} \tag{5}\]

subject to

\[\begin{split}\mathbf{x}_{i+1}&=\mathbf{A}_{i}\mathbf{x}_{i}+\mathbf{B}_{i}\mathbf{u}_{i},\quad i=0\ldots k-1\\ \underline{\mathbf{c}}_{i}&\leq\mathbf{C}_{i}\mathbf{u}_{i}\leq\overline{\mathbf{c}}_{i},\quad i=0\ldots k-1\\ \mathbf{D}_{i}\mathbf{u}_{i}&=0,\quad i=0\ldots k-1\\ f_{\min}&\leq f_{z}\leq f_{\max}\\ -\mu f_{z}&\leq f_{x}\leq\mu f_{z}\\ -\mu f_{z}&\leq f_{y}\leq\mu f_{z}\end{split} \tag{6}\]
The constraints describe the system dynamics and the friction cone constraints for each foot in contact with the ground. The objective minimizes the error between the predicted states and the reference trajectory, which is initially computed offline. Q and R are positive semi-definite weight matrices.
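The following is a hedged sketch of this kind of convex MPC using CVXPY on a toy linear model with friction-cone bounds on a single contact force; the model matrices, horizon, and bounds are placeholders and are not the MIT Cheetah 3 dynamics.

```python
import numpy as np
import cvxpy as cp

k, nx, nu, dt, mu = 10, 4, 3, 0.05, 0.6           # horizon, dims, step, friction
A = np.eye(nx); A[0, 2] = dt; A[1, 3] = dt         # toy planar double integrator
B = np.zeros((nx, nu)); B[2, 0] = dt; B[3, 1] = dt
Q, R = np.eye(nx), 1e-3 * np.eye(nu)
x_ref = np.zeros(nx)                               # regulate toward the origin

x = cp.Variable((k + 1, nx))
u = cp.Variable((k, nu))                           # u = (f_x, f_y, f_z)
cost, constr = 0, [x[0] == np.array([0.3, 0.0, 0.0, 0.0])]
for i in range(k):
    cost += cp.quad_form(x[i + 1] - x_ref, Q) + cp.quad_form(u[i], R)
    constr += [x[i + 1] == A @ x[i] + B @ u[i],    # dynamics
               u[i, 2] >= 10.0, u[i, 2] <= 500.0,  # f_min <= f_z <= f_max
               cp.abs(u[i, 0]) <= mu * u[i, 2],    # friction cone, x
               cp.abs(u[i, 1]) <= mu * u[i, 2]]    # friction cone, y
cp.Problem(cp.Minimize(cost), constr).solve()
print(x.value[-1], u.value[0])
```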
Thus we see that Model Predictive Control (MPC) offers several advantages for stability control in legged robots [23]. First, it enables optimized control by minimizing a cost function that represents stability. This allows for the generation of control inputs that ensure the robot maintains stability and achieves desired performance. Additionally, MPC is more robust to disturbances and uncertainties in the environment compared to Linear Quadratic Regulator (LQR) control. It can handle nonlinear systems, providing more flexibility in modeling the complex dynamics of legged robots.
However, one of the main challenges is its computational complexity, especially for large and complex legged robots. The computational demands of MPC can affect real-time performance and the robot's ability to quickly respond and adapt to changes in the environment. Look-ahead horizon control techniques can mitigate this challenge by allowing the robot to pre-plan its behavior and optimize control inputs in advance, thus improving the robot's ability to handle real-time control tasks efficiently.
### _Sequential Linear Quadratic - Model Predictive Control_
Sequential Linear Quadratic (SLQ) Model Predictive Control (MPC) [19] is an advanced control strategy utilized in legged robots for stability control. SLQ-MPC approximates the nonlinear dynamics of the legged robot with a linear model and computes a linear feedback control law at each time step. By optimizing a quadratic cost function over a finite time horizon, SLQ-MPC enables the computation of optimal control inputs to ensure stability. This approach combines the advantages of MPC, such as optimized control and adaptability to changes in the environment, with the computational efficiency of linear approximations. SLQ-MPC is particularly well-suited for legged robots as it addresses the challenges of modeling and controlling the complex dynamics involved in legged locomotion. By leveraging linear approximations, SLQ-MPC provides a practical and efficient solution for stability control in legged robots while still achieving high-performance results. This also laid the foundation for the use of second-order cone programs, which offer much faster computation and greater robustness than classical optimization algorithms.
### _Non-Linear Model Predictive Control_
Non-Linear Model Predictive Control (NMPC) is a sophisticated control strategy employed for the motion control of quadruped robots. NMPC is an optimization-based approach that deals with non-linearities and constraints more effectively than traditional control strategies. In the case of quadruped robots, the system dynamics are non-linear due to the multibody mechanical structure and its interaction with the environment. NMPC applies an internal model to predict the system's future behavior over a finite prediction horizon, then calculates the control inputs that minimize a specified cost function.
The formulation of NMPC involves determining a cost function that is minimized over the prediction horizon. This cost function represents the discrepancy between the predicted output and the desired output. The non-linear system's dynamics are represented by a set of non-linear differential equations. The control inputs are calculated by solving a non-linear optimization problem at each time step. An important feature of this approach is that it can handle constraints, for example,
on the control inputs and states, making it a QCQP problem [20][21].
The general NMPC formulation can be expressed as follows:
\[\underset{u(\cdot),x(\cdot)}{\text{min}}\quad\int_{t}^{t+T}L(x(\tau),u(\tau))\,\mathrm{d}\tau+V(x(t+T)) \tag{7}\]

subject to

\[\begin{split}&\dot{x}(\tau)=f(x(\tau),u(\tau)),\\ &x(\tau)\in X,\;u(\tau)\in U,\quad\tau\in[t,t+T],\\ &x(t+T)\in X_{f},\\ &x(t)=x_{0}.\end{split} \tag{8}\]
Where \(L\) is the Lagrangian (running cost), \(V\) is the terminal cost, \(f\) is the system dynamics, \(x\) are the states, \(u\) are the controls, \(X\) and \(U\) are the state and control constraints respectively, \(X_{f}\) is the set of terminal states, and \(x_{0}\) is the initial state.
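As a hedged sketch of this formulation, the single-shooting NMPC step below optimizes a control sequence for a toy unicycle model with SciPy; the model, horizon, weights, and bounds are illustrative and much simpler than quadruped dynamics.

```python
import numpy as np
from scipy.optimize import minimize

dt, N = 0.1, 15                         # time step and prediction horizon
x_goal = np.array([1.0, 0.5, 0.0])      # target pose (x, y, heading)

def step(x, u):
    """Toy unicycle dynamics: u = (forward velocity, turn rate)."""
    return x + dt * np.array([u[0] * np.cos(x[2]), u[0] * np.sin(x[2]), u[1]])

def cost(u_flat, x0):
    """Running cost L plus terminal cost V over the horizon."""
    u_seq, x, J = u_flat.reshape(N, 2), x0.copy(), 0.0
    for u in u_seq:
        x = step(x, u)
        J += np.sum((x - x_goal) ** 2) + 0.01 * np.sum(u ** 2)   # L(x, u)
    return J + 10.0 * np.sum((x - x_goal) ** 2)                  # V(x(t+T))

x0 = np.zeros(3)
bounds = [(-1.0, 1.0)] * (2 * N)        # input constraints u in U
res = minimize(cost, np.zeros(2 * N), args=(x0,), bounds=bounds, method="SLSQP")
u_first = res.x.reshape(N, 2)[0]        # apply only the first input (receding horizon)
print(res.success, u_first)
```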
One significant advantage of NMPC for quadruped robots is its ability to account for the system's non-linear dynamics and constraints in a principled way. This can result in more precise, efficient, and stable robot movement, especially in complex or uncertain environments. It can handle the dynamics and physical constraints of a quadruped robot more effectively than traditional linear control methods, such as PID, LQR, and Linear MPC as it constantly re-evaluates and adapts to new inputs and changes in the environment.
However, the major disadvantage of NMPC is its computational complexity. The need to solve a non-linear optimization problem at each time step can be computationally intensive, particularly for systems with high dimensionality like quadruped robots. This can result in high latency in control commands, making NMPC less suitable for real-time control applications without significant computational resources or simplified approximations. Moreover, the performance of NMPC highly depends on the accurate modeling of the system dynamics, which can be challenging in real-world applications due to uncertainties and disturbances.
In [22], the authors have formulated the non-linear MPC problem using the ZMP criterion. The Zero Moment Point (ZMP) combined with Model Predictive Control (MPC) provides a powerful formulation for posture correction and trajectory optimization in legged robots [22]. ZMP is a dynamic stability criterion used extensively in bipedal and quadrupedal robot locomotion, representing the point on the ground where the total moment of the inertial forces and the gravity forces is zero. By maintaining the ZMP within the support polygon of the robot (the area enclosed by the feet in contact with the ground), one can ensure dynamic balance. With MPC, the robot's future behavior is predicted over a finite horizon, and the controls are iteratively updated by solving an optimization problem, ensuring the ZMP stays within the support polygon.
In the ZMP+MPC formulation, the primary objective is to minimize the deviation of the ZMP from a reference trajectory while also considering factors like energy consumption, joint torques, and smoothness of motion. This problem can be posed as a Quadratically Constrained Quadratic Programming (QCQP) problem, where the objective function and constraints are all quadratic. The ZMP constraints ensure the ZMP stays within the support polygon, and the MPC optimization will continually adjust the robot's posture and trajectory to maintain stability, even when the robot is executing dynamic maneuvers or responding to external disturbances. The approach provides a framework for planning dynamically stable and feasible trajectories, which are crucial for successful navigation in complex environments. The equations below depict the non-linear constraints and frame the QCQP optimization problem for MPC stabilization:
\[J=\phi(\boldsymbol{x}(t+T))+\int_{t}^{t+T}L(\boldsymbol{x}( \tau),\boldsymbol{u}(\tau))\text{d}\tau, \tag{9}\] \[\phi(\boldsymbol{x}(t))=Q_{\text{1f}}\frac{1}{S(\boldsymbol{x}( t))}+Q_{\text{2f}}e^{2}(\boldsymbol{x}(t))\] \[+Q_{\text{3f}}\left(h_{\text{ref}}-h_{\text{b}}(\boldsymbol{x}( t))\right)^{2},\] \[L(\boldsymbol{x}(t),\boldsymbol{u}(t))=Q_{\text{1}}\frac{1}{S( \boldsymbol{x}(t))}+Q_{\text{2}}e^{2}(\boldsymbol{x}(t))\] \[+Q_{\text{3}}\left(h_{\text{ref}}-h_{\text{b}}(\boldsymbol{x}( t))\right)^{2}\] \[+\boldsymbol{u}(t)^{T}\boldsymbol{R}\boldsymbol{u}(t).\] \[e=\sqrt{\left(x_{\text{ZMP}}-x_{\text{CoG}}\right)^{2}+\left(y_ {\text{ZMP}}-y_{\text{CoG}}\right)^{2}}.\]
### _Active Balancing_
We have seen that convex optimization plays an important role in several types of robotics control problems. A particular, general example of this is active balancing problems. In the field of legged robots, it is crucial to ensure the robot remains balanced throughout the course of its trajectory. Active balancing refers to a class of problems that corrects and ensures the stability of the actual robot motion given a trajectory that may not be dynamically feasible or stable. For example, a control input that would cause the robot to fall over should be modified so that the robot remains balanced. This is done by minimizing an objective function that aims to keep the actual trajectory as close as possible to a pre-planned input, while adding constraints that model the dynamics of the problem and keep the robot balanced. In the main paper explored in this section [1], the authors present a general framework
Fig. 3: ZMP+MPC approach for posture control - QCQP
into which many active balancing problems can be framed as second-order cone programming (SOCP) problems.
#### Iii-B1 Previous work
Previous optimization-based approaches generally use quadratic programs and involve methods such as minimizing least squares tracking errors while maintaining the center of mass at a predefined point. For example, in [9], the authors use a quadratic program to optimize the actions a robot takes to maintain stability in the presence of an external impact such as a push or a shove. They optimize an objective function that minimizes the square sum of acceleration of joints, while also ensuring the robot remains standing and does not sit down to maintain stability in the presence of the impact. The objective is given as follows:
\[\text{minimize }\ \tilde{q}^{T}C_{q}\tilde{q}-s_{\tilde{y}} \tag{10}\]
subject to various (generally non-convex) dynamics constraints, and where \(\tilde{q}\) represents joint acceleration and \(s_{\tilde{y}}\) is the acceleration in the y-direction to ensure the robot remains standing during the optimal solution. Note that in this approach, a limitation is that the authors did not formulate the problem as a convex problem and acknowledge that the optimization has to be done locally, and therefore a global solution is not guaranteed. To build on this work, and to obtain guarantees about global optimality, the aim is to formulate such balancing problems as convex problems and to solve them using known algorithms. The main paper we consider formulates this problem and many others as a convex problem to allow for globally optimal and efficient solutions.
#### Iii-B2 Current work
In [1], the authors show that many such problems can be formulated as SOCP problems, often by introducing a new variable that helps cast the problem as a SOCP program.
First, we introduce the underlying concepts. A second order cone in \(\mathbb{R}^{p+1}\) is defined as \(K_{p}=\{(x,y)\in\mathbb{R}^{p+1}:||x||_{2}\leq y\}\). This set is convex since it is the intersection of an (infinite) number of halfspaces: \(K_{p}=\cap_{u:||u||_{2}\leq 1}\{(x,y)\in\mathbb{R}^{p+1}:x^{T}u\leq y\}\).
SOCP problems are a class of problems formulated as follows:
\[\min_{x}f^{T}x\text{ subject to }\qquad||A_{i}x+r_{i}||\leq l_{i}^{T}x+m_{i}, \quad i=1,\ldots,N \tag{11}\]
The objective function is convex (linear) and the constraints define a convex set in that the constraints are of the exact form of the second-order cone definition so it is equivalent to requiring the solution to remain within the second-order cone [1]. Since the second-order cone is convex, so is the constraint set. Due to the generality of SOCPs, many types of convex problems can be reformulated as SOCPs, for example, QCQPs using rotated cones, problems with sums of norms, problems with hyperbolic constraints, and robust least squares (i.e. when there is uncertainty in the data), among others.
As discussed earlier, the main objective in many active balancing is to minimize the tracking error of the robot's joint accelerations. A reference joint acceleration trajectory \(\tilde{q}_{ref}\) is provided, and the actual trajectory is desired to be as close as possible to the reference under the relevant constraints. This leads to an objective of the form:
\[\min_{\tilde{q}}||\tilde{q}-\tilde{q}_{ref}|| \tag{12}\]
To formulate this as a SOCP, we must take some further steps. First, we introduce a dummy variable to minimize and move the objective to a constraint:
\[\min\ t\quad\text{subject to}\quad||\tilde{q}-\tilde{q}_{\text{ref}}||\leq t \tag{13}\]
and several other dynamics constraints that will be presented next. The final step is to convert the objective into the form \(f^{T}x\), and so the authors introduce a variable \(\mathbf{x}\) and a selection vector \(f=\begin{bmatrix}1&0&0&\cdots&0\end{bmatrix}\), where \(\mathbf{x}=(t,\tilde{q},F_{1},...,F_{m},\lambda_{1},...,\lambda_{m})\). Now \(\min f^{T}x\) gives us our original problem of \(\min t\). The final remaining piece is to ensure the constraints are either conic or affine. While a detailed discussion of dynamics is out of the scope of this paper, we demonstrate a representative example, and then state the full constraint set with the final SOCP formulation. The constraint representing the requirement that the ZMP remain within the support polygon, for example, can be expressed as a linear inequality in the decision variable \(\mathbf{x}\), a special case of the conic constraints
shown above in the formulation of the SOCP problem. The final form of the SOCP formulation is as follows:
\[\min f^{T}x\quad\text{subject to}\quad||S_{\tilde{q}}x-\tilde{q}_{\text{ref}}||\leq S_{t}x,\quad A_{s}S_{\tilde{q}}x\leq b_{s},\quad Dx=g \tag{15}\]
The constraint set above has been reformulated as conic constraints of the form \(||A_{i}x+r_{i}||\leq l_{i}^{T}x+m_{i}\) together with affine equalities \(Dx=g\).
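A hedged CVXPY sketch of this epigraph-style SOCP on a toy problem: minimize t subject to a norm tracking constraint, a linear inequality, and an affine equality. The dimensions and matrices are random placeholders rather than robot dynamics.

```python
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)
n_q = 6                                   # toy number of joint accelerations
qdd_ref = rng.standard_normal(n_q)        # reference joint accelerations

x = cp.Variable(1 + n_q)                  # x = (t, qdd)
t, qdd = x[0], x[1:]
f = np.zeros(1 + n_q); f[0] = 1.0         # selection vector so that f^T x = t

A_s = rng.standard_normal((3, n_q))       # stand-in "support polygon" inequality
b_s = np.ones(3)
D = rng.standard_normal((2, n_q))         # stand-in affine (dynamics) equality
g = 0.5 * (D @ qdd_ref)

constraints = [cp.norm(qdd - qdd_ref, 2) <= t,   # conic tracking constraint
               A_s @ qdd <= b_s,                 # affine inequality
               D @ qdd == g]                     # affine equality
prob = cp.Problem(cp.Minimize(f @ x), constraints)
prob.solve()
print(prob.status, t.value)
```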
#### Iii-B3 Benefits of formulating as SOCPs
The benefit of formulating balancing problems as SOCP programs is that several robust and efficient interior point algorithms exist to solve such problems. These programs have been shown to converge in 5-50 iterations regardless of the problem dimension [6]. Though there is an even more general type of program than SOCP, called a Semi-Definite Program (SDP), solvers for these are not as efficient as those for SOCPs, and so it is desirable to formulate a problem as a SOCP to obtain good generality and known efficient solutions. The authors explore the computational performance of their algorithms and show that they can achieve good performance in various classes of active balancing problems.
#### Iii-B4 Limitations and Future Work
Most work currently tracks and minimizes error between input reference trajectories and actual trajectories of joint angles. However, as computational power increases, these optimization problems can incorporate more variables that can provide more appropriate solutions. A limitation of this paper was that the authors only considered a front kick and side kick motion to assess the performance of their SOCP formulation for active balancing. In the future, more complex trajectories, such as running or jumping, should be considered in order to understand the effectiveness of this formulation under general robot motion. Furthermore, the authors acknowledge the need to improve the computational efficiency of the optimizations. We further discuss these ideas in the next sections.
One other important topic of future research is the handling of non-convex constraints with guarantees of global optimality. For example, convexifying an objective or constraint can often involve relaxing an equality constraint to an inequality, or a non-convex inequality constraint to a convex inequality constraint by introducing a slack variable [18]. In these situations, the global optimality of the resulting problem is not guaranteed except under certain specific conditions. These conditions have been formulated for certain domains such as aerospace, in which common lower bound constraints on rocket thrust can be relaxed and global optimal solutions still guaranteed under linear system dynamics. In fact, even under non-linear system dynamics, which would lead to a non-linear program, global optimal solutions can be guaranteed if the system dynamics can be approximated by a piecewise affine function. Similar research must be carried out for legged robots - this will enable optimal solutions to be quickly found even under relaxed constraints in non-convex problems.
### _Gait Planning_
Gait planning is a vital part of mobile robots as gaits are the motion patterns by which these robots move. Gaits for robots are similar to how animals or how humans move, with running, jumping, and landing. The papers to be reviewed are focused on optimized jumping approaches on quadrupeds.
Finding an optimal, real-time approach to gait planning will serve as a step forward toward robust robots in general. Robots that are robust to unknown environments are a crucial functionality for robots to move out of the world of academia towards more real-life applications. There is a particular motivation for gait planning on legged robots because of their inherent ability to navigate difficult terrain and obstacles as opposed to wheeled robots.
In general, it can be hard to appreciate the importance of this particular subfield of gait planning. Without optimized approaches to planning, one would have to constantly tune hyperparameters by hand, which will most likely lead to suboptimal results or even failures.
#### Iii-B1 Online Planning for Autonomous Running Jumps Over Obstacles in High-Speed Quadrupeds
This paper [15] focused on the real-time optimization problem of jumping. In general, the jumping problem for quadrupeds is very complicated, so much of the current research performs offline optimization and stores the results for specific inputs in a controlled test environment. In the context of quadrupeds, this contribution is significant as it was the first to experimentally validate a framework for online planning of running jumps.
The dynamics of this problem are the first subject to address. This paper decides to only tackle the problem in 2D as the search space for a 3D problem is often too large for real-time optimization. This means that, theoretically, the quadruped can really only perform well on a flat surface. Most of the testing was done on a flat surface because of this.
The equations of motion are:
\[m\ddot{x}=F_{x} \tag{16}\] \[m\ddot{z}=F_{z}-mg\] (17) \[I\ddot{\theta}=-xF_{x}+zF_{z} \tag{18}\]
Fig. 6: MIT Cheetah 3 Jumping
which can be generalized into:
\[\ddot{x}=u_{x} \tag{19}\] \[\ddot{z}=u_{z}-mg\] (20) \[\ddot{\theta}=-\alpha xu_{x}+\alpha zu_{z} \tag{21}\]
To obtain an analytical solution, these equations can be expressed in terms of Bezier curves. With the force profiles written as Bezier curves, it becomes easy to integrate them to find an analytical trajectory for the equations of motion. With the analytical trajectory, the problem turns into optimizing the Bezier coefficients that scale the curves.
\[u_{x}(s)=\sum_{i=0}^{n}\beta_{i,x}b_{i,n}(s) \tag{22}\] \[u_{z}(s)=\sum_{i=0}^{n}\beta_{i,z}b_{i,n}(s) \tag{23}\]
Once these Bezier coefficients have been optimized, then the analytical solution uses these coefficients (which represent the force profile itself) to generate the actual trajectory.
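To illustrate Eqs. (22)-(23), the sketch below evaluates a Bernstein-basis force profile and numerically double-integrates the planar CoM dynamics over a stance phase; the coefficients, mass, and duration are made-up values, not an optimized jump.

```python
import numpy as np
from math import comb

def bernstein(i, n, s):
    """Bernstein basis polynomial b_{i,n}(s) for s in [0, 1]."""
    return comb(n, i) * s**i * (1.0 - s) ** (n - i)

def force_profile(beta, s):
    """u(s) = sum_i beta_i * b_{i,n}(s) for one force component."""
    n = len(beta) - 1
    return sum(b * bernstein(i, n, s) for i, b in enumerate(beta))

# Made-up Bezier coefficients for the stance-phase horizontal/vertical forces.
beta_x = [0.0, 20.0, 30.0, 10.0, 0.0]
beta_z = [0.0, 120.0, 160.0, 80.0, 0.0]
m, g, T = 9.0, 9.81, 0.3                  # mass [kg], gravity, stance time [s]

dt, state = 1e-3, np.zeros(4)             # state = (x, z, xdot, zdot)
for t in np.arange(0.0, T, dt):
    s = t / T
    ax = force_profile(beta_x, s) / m
    az = force_profile(beta_z, s) / m - g
    state += dt * np.array([state[2], state[3], ax, az])   # Euler integration
print("liftoff state (x, z, xdot, zdot):", state)
```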
To make the algorithm run in real-time, the optimization foregoes a cost function. Essentially, this becomes a feasibility problem.
\[\min 0 \tag{24}\]
with constraints:
\[\begin{split}|\theta(t)|&\leq\Theta\\ z_{\text{foot}}^{f}(t)&\geq h_{\text{obs}}(x_{\text{foot}}^{f}(t))\\ z_{\text{foot}}^{h}(t)&\geq h_{\text{obs}}(x_{\text{foot}}^{h}(t))\end{split} \tag{25}\]
The constraints make intuitive sense. Since our state consists of just tracking the center of mass (COM), the leg information is not captured there. To constrain the leg from hitting the obstacle, the first two constraints maintain that the z position of each foot must be above the obstacle. This, of course, is one particular limitation of this algorithm as it relies on knowing prior information about the obstacles themselves.
We also want to maintain a certain distance from the obstacle after we land. This is captured by the next two constraints where the x position of each foot must be greater than the distance to the obstacle plus some scaling factor.
The next three constraints are related to the physical constraints of the quadruped. The constraints, in words, mean that the position of the legs must be in the workspace of the robot. As in, the position of the legs has to be physically possible for the robot to reach.
The next two relate to providing a safe landing configuration for the robot. The final two relate to constraining the ground reaction force that the robot feels on liftoff and landing.
Overall, this paper provides a good framework for generating real-time trajectories but there are certainly limitations such as the simplified dynamics and fixed force curve profile. While the fixed force profile speeds up convergence, there could potentially be more energy-efficient jump profiles that the robot can execute. A final disadvantage is that the jumps aren't necessarily optimal, in a sense other than the fixed curve profile. Since this is posed as a feasibility problem, the quality of the solution won't be high compared to more complex objective functions.
In terms of advantages, the obvious one is that this was the only paper, out of the papers we reviewed, that had an online optimization scheme. There is also another advantage in that this framework allowed the quadruped to make running jumps instead of the more common standing jump. This advantage is also unique and not found in the other papers.
In terms of unanswered questions and future works, the authors are unsure of the performance in outdoor terrain obstacles. There are also questions on whether a trajectory of obstacles could lead to better jump results. The authors had implemented a model predictive controller that attempts to place the quadruped in an optimal takeoff position so earlier detection could potentially help.
#### Iii-B2 Optimized Jumping on the MIT Cheetah 3 Robot
This paper [16] focuses not just on the feasibility of jumping over an obstacle, but on optimizing the jump itself. This problem is relevant because outdoor terrain contains many obstacles that are much taller than the quadruped itself, so it is important to generate an efficient trajectory for high jumping. Out of all the optimized jumping papers that we reviewed, this paper's results jumped more than two times higher than the other results.
Similar to the last paper, the authors chose to constrain this problem in 2D to limit the scope of the problem. The state variables are the 2D position, orientation, and joint angles. The equations of motion are:
\[H(Q)\ddot{Q}+C(Q,\dot{Q})\dot{Q}+g(Q)=B\tau+B_{fric}\tau_{fric}(\dot{Q})+\sum_{i}J_{i}^{T}(Q)F_{i} \tag{26}\]
where H is the mass matrix, C contains the Coriolis and centrifugal terms, B is how torques are entered into the equation, \(J_{i}\) is the spatial Jacobian and \(F_{i}\) is the spatial forces at each foot. This is essentially a generalized force balance where the forces at each foot are mapped to torques.
Fig. 7: Example Force Profile
These equations of motion generate reference trajectories for the optimization problem. However, the dynamics of the model change depending on the phase of the jump (all legs on the ground, two legs on the ground, etc.). This was not addressed as heavily in the previous paper because performance was not the main concern.
Thus, the optimization is split up into three different phases, depending on what phase of the jump it is in. This is because the constraints change. Particularly:
\[J_{i,stance}(Q)\ddot{Q}+\dot{J}_{i,stance}\dot{Q}=0 \tag{28}\]
This is the constraint that enforces that a foot is on the ground. This, in words, means that the spatial acceleration of the foot is zero. The actual cost function is a least squares problem:
\[J=\sum_{k=1}^{N-1}w_{q}(q_{k}-q_{ref})^{T}(q_{k}-q_{ref})+w_{\tau} \tau_{k}^{T}\tau_{k}\\ +w_{N}(q_{N}-q_{N}^{d})^{T}(q_{N}-q_{N}^{d}) \tag{29}\]
With these constraints:
\[q_{min}\leq q_{k} \leq q_{max} \tag{30}\] \[|\dot{q}_{k}| \leq\dot{q}_{max}\] (31) \[|\tau_{k}| \leq\tau_{max}\] (32) \[|T_{k}^{x}/T_{k}^{z}| \leq\mu\] (33) \[T_{k}^{z}\geq T_{min}^{z}\] (34) \[q_{0}=q_{0,d},\dot{q}_{0} =0\] (35) \[x_{0}=0,z_{0}=0,q_{pitch,0}=0,\dot{x}_{0}=0,\dot{z}_{0}=0,\dot{ q}_{pitch,0} =0\] (36) \[q_{k}=q_{N,d},\dot{q}_{k} =0\] (37) \[x_{N}\geq d_{jumping},\ z_{N}=h_{platform},q_{pitch,N} =0 \tag{38}\]
These constraints are physical constraints that limit the torque, joint angles, joint velocities, and ground reaction forces.
The first term of the cost function minimizes the tracking error - how close the current trajectory is to the reference. The second term penalizes actuator effort. The third term enforces that the final configuration is where we want it to be.
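A hedged sketch of evaluating the cost of Eq. (29) over a discretized trajectory follows; the weights, the constant reference configuration, and the placeholder trajectory arrays only show the structure of the computation.

```python
import numpy as np

def jump_cost(q, tau, q_ref, q_N_des, w_q=1.0, w_tau=1e-3, w_N=10.0):
    """Eq. (29): configuration tracking error and actuator effort over the
    horizon, plus a terminal penalty on the final configuration."""
    J = 0.0
    for k in range(len(q) - 1):                        # running terms
        J += w_q * (q[k] - q_ref) @ (q[k] - q_ref)
        J += w_tau * tau[k] @ tau[k]
    J += w_N * (q[-1] - q_N_des) @ (q[-1] - q_N_des)   # terminal term
    return J

# Placeholder trajectory: 7 generalized coordinates, 4 torques, 50 knots.
rng = np.random.default_rng(1)
q = 0.1 * rng.standard_normal((50, 7))
tau = 0.5 * rng.standard_normal((50, 4))
print(jump_cost(q, tau, q_ref=np.zeros(7), q_N_des=np.zeros(7)))
```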
This cost function has some advantages and disadvantages. The advantages are that it allows for the possibility of achieving a higher jump that is optimized for tracking error and actuator effort. Compared to the feasibility problem, this will yield a higher quality solution.
A disadvantage is that this algorithm has to be run offline because the optimization is too slow to be run in real-time. There are ways to circumvent this, which the other papers will address, but this paper does not attempt to address them. Another disadvantage is that this paper assumed a planar case, like the last paper. This of course is a disadvantage as the algorithm really only works well on flat surfaces.
In terms of unanswered questions and future work, the authors state that they plan to implement vision for autonomous high-performance jumping. To that end, an open question is whether this algorithm can be modified to run in a real-time scenario.
Iii-B3 Autonomous Navigation for Quadrupedal Robots with Optimized Jumping through Constrained Obstacles
This paper [17] aims to perform offline optimization for jumping quadrupeds so that they can jump through a window-shaped obstacle. This contribution is significant/relevant since they seemed to be the first to consider jumping through a constrained obstacle like a window.
The dynamics in this paper are largely similar to those of the previous two papers, so we do not review them again here. There is some notation to take into account though:
\[q:=[q_{x},q_{z},q_{\theta},q_{F1},q_{F2},q_{B1},q_{B2}]^{T}\in \mathbf{R}^{7} \tag{39}\] \[x:=[q;\dot{q}]\in\mathbf{R}^{14}\] (40) \[u:=[\tau_{F1},\tau_{F2},\tau_{B1},\tau_{B2}]^{T}\in\mathbf{R}^{4} \tag{41}\]
Where \(q_{x},q_{z},q_{\theta}\) represent the planar state and the rest of \(\mathbf{q}\) are the joint angles of the knee and hip. \(\mathbf{x}\) is the state vector that concatenates \(\mathbf{q}\) and its derivative. \(\mathbf{u}\) are the torques for the joints represented in \(\mathbf{q}\).
The optimization formulation is a little different from those of the previous papers. It is still a constrained least squares problem, but with additional safety constraints and prediction constraints.
\[\min_{x,\dot{x},u,T}J(x,\dot{x},u,T) \tag{42}\] \[s.t.\ \ x(t_{k+1})=x(t_{k})+\frac{\Delta t^{(i)}}{2}(\dot{x}(t_{k+1})+\dot{x}(t_{k})) \tag{43}\] \[\dot{x}(t_{k+1})=f^{(i)}(x(t_{k}),u(t_{k}),T(t_{k})) \tag{44}\] \[x(t_{0})=x_{0} \tag{45}\]
Where the cost function is defined as:
\[J=(q(t_{N+1})-q_{0})^{T}P_{f}(q(t_{N+1})-q_{0})+\\ \sum_{k=0}^{N+1}(\dot{q}^{T}(t_{k})Q_{d}\dot{q}(t_{k})+\ddot{q}^ {T}(t_{k})Q_{\dot{q}}\ddot{q}(t_{k}))\\ +T(t_{k})Q_{T}T(t_{k})+u^{T}(t_{k})R_{u}u(t_{k})+\sum_{i=1}^{2}P_ {i}T_{i}+\sum_{i=1}^{N_{2}}P_{\dot{\delta}} \tag{46}\]
The equality constraints maintain that our solution satisfies our predicted trajectory at the next time step. The first term of the cost function tries to minimize the joint angle error between the takeoff and landing position. This is because the authors defined the starting and end of the jump as a standing position.
The second term seeks to minimize joint velocity and acceleration for all time steps. The third and fourth term seeks to minimize ground reaction forces and actuator effort, respectively. The fifth and final terms seek to minimize air time and leg contact, respectively. In short, these terms make sense for the jumping task since minimal joint velocities and accelerations generate smoother trajectories and we always want to constrain actuator effort to some capacity.
The authors chose to physically constrain the problem in the usual way of joint angle limits, torque limits, force limits, etc. The constraints are largely the same as the previous papers but more interesting constraints are the ones they chose to prevent the quadruped from jumping into the obstacle ceiling.
In essence, the robot itself and obstacles are framed as a convex object if a bounding box is wrapped around it. The robot bounding box must maintain a certain signed distance between itself and any of the obstacles. To achieve this, a new cost function and constraint are added.
\[J_{w,h}(t_{k})=\sum_{k}w(t_{k})^{2}+h(t_{k})^{2} \tag{47}\]
\[-l^{T}\mu(t_{k})+(A_{O}P(t_{k})-b_{O})^{T}\lambda(t_{k})>d_{min} \tag{48}\]
Since the robot configuration is always changing, there is a cost function to minimize the bounding box. The constraint can be thought of as mapping the current translation into the obstacle space and seeing how far away it is from the set.
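As a hedged illustration of the collision-avoidance idea, in its primal form rather than the dual form of Eq. (48), the sketch below computes the minimum separation between a robot bounding box and an obstacle polytope with CVXPY; the boxes are toy examples.

```python
import numpy as np
import cvxpy as cp

def box_halfspaces(center, half_w, half_h):
    """Axis-aligned box {p : A p <= b} described by halfspace data."""
    A = np.array([[1.0, 0.0], [-1.0, 0.0], [0.0, 1.0], [0.0, -1.0]])
    b = np.array([center[0] + half_w, -(center[0] - half_w),
                  center[1] + half_h, -(center[1] - half_h)])
    return A, b

A_R, b_R = box_halfspaces(center=(0.0, 0.5), half_w=0.3, half_h=0.2)   # robot box
A_O, b_O = box_halfspaces(center=(1.2, 0.5), half_w=0.1, half_h=0.4)   # obstacle

p_r, p_o = cp.Variable(2), cp.Variable(2)
prob = cp.Problem(cp.Minimize(cp.norm(p_r - p_o, 2)),
                  [A_R @ p_r <= b_R, A_O @ p_o <= b_O])
prob.solve()
print("minimum separation distance (must exceed d_min):", prob.value)
```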
In terms of limitations of this paper's approach, the jumping optimization is also performed offline. This is expected, as this optimization is more complicated than the last paper's optimization. Another limitation of this approach is that jumping through obstacles can only be performed statically. In essence, the robot has to be in a certain starting position and a certain landing position, kind of like if a human jumped in place. An advantage of this approach, that the other papers didn't have, is that this was the only paper to go through obstacles instead of just over them. Also, in terms of jumping performance, the quadruped was able to experimentally jump higher than the feasibility approach from the first paper on optimized jumping.
In terms of unanswered questions and future work, the authors want to further investigate optimizing jumps where the landing and takeoff heights differ. Another open question is whether this approach remains valid for a running jump; that is, would the quadruped still be able to avoid the obstacle if the standing-jump constraint were removed?
#### III-B4 An Optimal Motion Planning Framework for Quadruped Jumping
This paper takes a different approach to the optimization. While the previous papers used gradient-based methods, this one uses heuristic-based methods to handle the highly nonlinear objective with better convergence quality. As in the previous papers, the optimization is performed offline, but here the authors pre-train different kinds of jumps that the quadruped can then use online. This is a step beyond the earlier works, whose experiments were run in highly controlled environments; the pre-training methodology yields a more robust robot.
The optimization formulation is posed as:
\[\min_{x}\ \ \textbf{Fitness}(x) \tag{49}\] \[x(t_{k+1})=x(t_{k})+\Delta t\dot{x}(t_{k})\] (50) \[\dot{x_{k+1}}=f(u_{k},x_{k},p_{k})\] (51) \[x_{k}\in\textbf{X},k=1,2,...,N\] (52) \[u_{k}\in\textbf{U},k=1,2,...,N-1\] (53) \[x(t_{0})=x_{0},x(t_{end})=x_{end} \tag{54}\]
The constraints are kinematic, except for \(x_{k}\in\textbf{X}\) and \(u_{k}\in\textbf{U}\), which are kino-dynamic in nature: they capture physical and obstacle constraints, with **X** and **U** the corresponding feasible sets. The **Fitness** objective is proportional to the number of constraints violated, so a state \(x\) that violates many constraints receives a relatively high Fitness value, and we seek to minimize this value.
The optimization variables are reformulated as ground reaction force (GRF) profiles because the GRF at each foot provides all the information needed to generate a trajectory. Similarly to the first jumping paper presented, these force profiles can be modeled as a time-varying polynomial:
\[f_{i}=\begin{cases}\eta_{1}\lambda_{1}[t\ \ \ 1]^{T}&t\in[0,T_{1}]\\ \eta_{2}\lambda_{2}[t^{2}\ \ \ \ t\ \ 1]^{T}&t\in[T_{1},T_{2}]\\ 0&t\in[T_{2},T_{3}]\end{cases} \tag{55}\]
Where \(f_{i}\) is the GRF of each foot, and \(T_{i}\) is the unknown time at which each jumping phase ends; these times are concatenated into a vector \(T_{p}\). For example, from time 0 to \(T_{1}\) the quadruped has both feet on the ground. The actual optimization variables are the coefficients \(\eta_{i}\lambda_{i}\); since these have no physical meaning, they are reformulated in terms of the state and state velocities. In the end, the entire decision vector is:
\[D^{*}_{opt}=[T_{p}\ \ \ x(T_{1})\dot{x}(T_{3})]\in\textbf{R}^{12} \tag{56}\]
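A minimal sketch of evaluating the phase-wise GRF profile of Eq. (55) for one foot is given below. The coefficient values, phase times, and the interpretation of the middle phase are assumptions made for illustration; in the paper the coefficients are recovered from the optimized decision vector.

```python
import numpy as np

def grf_profile(t, c1, c2, T1, T2, T3):
    """Evaluate the piecewise GRF of Eq. (55).
    c1: length-2 coefficients of the linear segment; c2: length-3 coefficients
    of the quadratic segment; returns the GRF at time t."""
    if 0.0 <= t <= T1:                       # both feet on the ground
        return c1 @ np.array([t, 1.0])
    if T1 < t <= T2:                         # second contact phase (assumed)
        return c2 @ np.array([t ** 2, t, 1.0])
    return 0.0                               # flight phase: no contact force

# toy usage with made-up coefficients and phase times
c1, c2 = np.array([40.0, 60.0]), np.array([-300.0, 150.0, 50.0])
ts = np.linspace(0.0, 0.6, 7)
print([round(grf_profile(t, c1, c2, 0.2, 0.4, 0.6), 1) for t in ts])
```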
Fig. 8: Illustration of jumping through obstacle

The actual optimization scheme is a differential evolution algorithm. Essentially, we initialize many random guesses as candidate solutions to the objective function. Then we keep perturbing each of these guesses and retain only the best solutions (the candidates with the lowest objective value). We keep perturbing and keeping the best solutions until convergence is achieved. Put simply, this is a survival-of-the-fittest optimization scheme. More formally:
_Randomly initialize population vector_;
**for**\(g\gets 1\)**to**\(\mathbf{M}\)**do**
**for**\(i\gets 1\)**to**\(\mathbf{N}\)**do**
**Mutation and Crossover**;
**for**\(j\gets 1\)**to**\(\mathbf{W}\)**do**
\(v_{i,j}\gets M(x_{i,j}(g))\);
\(u_{i,j}\gets C(x_{i,j}(g),v_{i,j}(g))\);
**end**
_Selection_;
**if**\(Fitness(Ui(g),k)<Fitness(Xi(g),k)\)
**then**
\(X_{i}(g)\gets U_{i}(g)\);
**if**\((Fitness(Xi(g),k)<Fitness(Dopt(g),k))\)
**then**\(D_{opt}\gets X_{i}(g)\);
**else**
\(X_{i}(g)\gets X_{i}(g)\);
**end**
\(g\gets g+1\);
**end**
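Below is a compact Python sketch of the differential-evolution loop above, assuming a toy Fitness function that simply counts violated box constraints. The population size, generation count, mutation factor, crossover rate, and bounds are all illustrative; the paper's Fitness instead scores kino-dynamic constraint violations of the jump.

```python
import numpy as np

rng = np.random.default_rng(0)
W = 12                                   # decision-vector dimension (R^12 above)
N, M = 30, 200                           # population size, number of generations
F, CR = 0.6, 0.9                         # mutation factor and crossover rate
lo, hi = -1.0, 1.0

def fitness(x):
    # toy stand-in: proportional to the number of violated constraints
    return np.sum(x < lo) + np.sum(x > hi) + np.sum(np.abs(x) > 0.8)

X = rng.uniform(lo - 0.5, hi + 0.5, size=(N, W))   # random initial population
best = min(X, key=fitness).copy()
for g in range(M):
    for i in range(N):
        a, b, c = X[rng.choice(N, size=3, replace=False)]
        v = a + F * (b - c)                          # mutation
        mask = rng.random(W) < CR
        u = np.where(mask, v, X[i])                  # crossover
        if fitness(u) < fitness(X[i]):               # selection
            X[i] = u
            if fitness(X[i]) < fitness(best):
                best = X[i].copy()
print("best fitness:", fitness(best))
```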
In terms of limitations, the robot can only perform jumps it has been pre-trained on and that have been added to its internal motion-planning library. Another limitation, shared with the previous papers, is the assumption of planar dynamics. In terms of advantages, the convergence quality is much better than in the other papers because of the heuristic-based optimization: with a large variance in initial guesses, the solver can escape local minima, an advantage over gradient-based methods. The approach is also robust against window-shaped obstacles and does not rely on prior knowledge such as reference trajectories or contact schedules.
In terms of unanswered questions and future work, this approach does not take landing accuracy into account, so future work could revolve around generalizing the landing behavior. One possible way to address this is MPC: by using a predictive model of the robot's dynamics, MPC optimizes a sequence of control actions over a finite time horizon. The process involves system modeling, state estimation, trajectory planning, cost-function design, optimization, and real-time implementation. The dynamic model captures the robot's physical properties, while state estimation provides accurate information about the robot's state. Trajectory planning incorporates task-specific requirements and constraints, and a carefully designed cost function quantifies the desired performance. Optimization techniques then find the sequence of control actions that minimizes the cost while satisfying the system dynamics and constraints. The optimized control actions are applied in real time, with the MPC algorithm continuously re-planning and adapting to changes. This enables legged robots to achieve safe and stable landings by anticipating future behavior, accounting for uncertainties, and actively adjusting control actions.
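As a sketch of the receding-horizon idea, the snippet below runs a linear MPC loop on a toy double integrator standing in for the vertical landing dynamics; it is not the method of any of the surveyed papers, and the model, horizon, weights, and limits are assumptions.

```python
import numpy as np
import cvxpy as cp

A = np.array([[1.0, 0.05], [0.0, 1.0]])            # toy discrete-time model
B = np.array([[0.0], [0.05]])
H, x_ref = 20, np.array([0.0, 0.0])                 # horizon, desired touchdown state
x = np.array([0.5, -1.0])                           # current estimated state

for step in range(30):                              # re-plan at every control step
    X = cp.Variable((2, H + 1))
    U = cp.Variable((1, H))
    cost, cons = 0, [X[:, 0] == x]
    for k in range(H):
        cost += cp.sum_squares(X[:, k] - x_ref) + 0.1 * cp.sum_squares(U[:, k])
        cons += [X[:, k + 1] == A @ X[:, k] + B @ U[:, k], cp.abs(U[:, k]) <= 3.0]
    cp.Problem(cp.Minimize(cost), cons).solve()
    u0 = U.value[:, 0]                              # apply only the first action
    x = A @ x + B @ u0                              # plant update (no disturbance here)
print("final state:", np.round(x, 3))
```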
#### III-B5 Contact-timing and Trajectory Optimization for 3D Jumping on Quadruped Robots
This paper focuses on programmatically optimizing the contact scheduling of each jump. This is a further improvement on the previous papers as it is common to manually tune a predefined interval for each jumping phase (e.g. 0.5 seconds for the first phase, 0.6 seconds for the second phase).
This contribution is relevant because manually tuning the contact schedule is often time-consuming and suboptimal. Moreover, optimal contact schedules also play a role in reducing actuator effort. The contact-timing optimization is formulated as:
\[\min_{x,f,e_{R}}\sum_{k=1}^{N}\epsilon_{\Omega}\Omega_{k}^{T}\Omega_{k}+\epsilon_{f}f_{k}^{T}f_{k}+\epsilon_{R}e_{R_{k}}^{T}e_{R_{k}} \tag{57}\] \[[R,p,p_{f}^{s}](k=1)=[R_{0},p_{0},p_{f,0}^{s}]\] (58) \[[\Omega,\dot{\Omega},\dot{p}](k=1)=0\] (59) \[[R,p,p_{f}^{s}](k=N^{c})=[R_{g},p_{g},p_{f,g}^{s}]\] (60) \[|R(k)[p_{f}^{s}(k)-p(k)]-\tilde{p}_{f}^{s}(k)|\leq r\] (61) \[|f_{k}^{s,x}/f_{k}^{s,z}|\leq\mu,|f_{k}^{s,y}/f_{k}^{s,z}|\leq\mu\] (62) \[f_{k,min}^{s}\leq f_{k}^{s,z}\leq f_{k,max}^{s}\] (63) \[p_{k,min}\leq p_{k}\leq p_{k,max}\] (64) \[\gamma(x_{k},x_{k+1},f(k),p_{f}^{s}(k))=0\] (65) \[R_{k+1}=R_{k}\exp(\Omega_{k}T_{i}/N_{i})\] (66) \[\sum_{i=1}^{n_{p}}T_{i}\in[T_{min},T_{max}],N^{c}=\sum_{i=1}^{n_{p}}N_{i}\] (67) \[for\quad k=1,2,...,N^{c} \tag{68}\]
Position/velocity, rotation/angular velocity, and contact timings are all optimized in this stage. In this least-squares optimization, the objective seeks to minimize the angular velocity, the ground reaction force at each leg, and the error with respect to the reference rotation trajectory. The optimized ground reaction forces determine a center-of-mass trajectory, so the output of this optimization is a reference trajectory for the jump as well as the contact timings for each phase of the jump.
These are then fed into the trajectory optimization where the torques are optimized for the least effort. The objective function is:
\[J=\sum_{h=1}^{N-1}\epsilon_{q}(q_{h}-q_{ref})^{T}(q_{h}-q_{ref})+\\ \epsilon_{\tau}\tau_{h}^{T}\tau_{h}+\epsilon_{N}(q_{N}-q_{N}^{d} )^{T}(q_{N}-q_{N}^{d}) \tag{69}\]
with constraints:
* Full body dynamics constraints
* Initial configuration: \(q(h=0)=q_{0},\dot{q}(h=0)=0\)
* Pre-landing configuration: \(q_{j,h}=q_{h,N}^{d},\dot{q}_{j,h}=0\)
* Final Configuration: \(q_{h}(h=N)=q_{N}^{d}\)
* Joint angle constraints: \(q_{j,min}\leq q_{j,h}\leq q_{j,max}\)
* Joint velocity constraints: \(|\dot{q}_{j,h}|\leq\dot{q}_{j,max}\)
* Joint torque constraints: \(|\tau_{j,h}|\leq\tau_{max}\)
* Friction cone limits: \(|F_{h}^{x}/F_{h}^{z}|\leq\mu,|F_{h}^{y}/F_{h}^{z}|\leq\mu\)
* Minimum GRF: \(F_{h}^{z}\geq F_{min}^{z}\)
This formulation is similar to the contact-timing optimization: it is a constrained weighted least-squares problem that solves for the optimized torques. Since this is a torque optimization, the constraints center on hardware limitations such as joint angles, together with the initial and final configurations of the quadruped. Once the torques are solved for, the reference profiles are fed to the controller, which interfaces with the hardware.
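To illustrate the structure of this stage, the sketch below sets up a small constrained weighted least-squares problem of the same shape as Eq. (69) in CVXPY, with box constraints playing the role of the joint-angle and torque limits; the dynamics constraint is omitted and all dimensions, weights, and limits are invented for illustration.

```python
import numpy as np
import cvxpy as cp

n_q, N = 4, 25                                  # joints, knot points (placeholders)
q_ref = np.zeros(n_q)
q_final = np.full(n_q, 0.3)
eps_q, eps_tau = 1.0, 1e-2                      # weights, cf. Eq. (69)

q = cp.Variable((N, n_q))
tau = cp.Variable((N - 1, n_q))
cost = sum(eps_q * cp.sum_squares(q[h] - q_ref) + eps_tau * cp.sum_squares(tau[h])
           for h in range(N - 1))
cost += cp.sum_squares(q[N - 1] - q_final)      # terminal configuration term
cons = [q[0] == np.zeros(n_q),                  # initial configuration
        q[N - 1] == q_final,                    # final configuration
        cp.abs(q) <= 1.5,                       # joint-angle limits
        cp.abs(tau) <= 30.0]                    # joint-torque limits
cp.Problem(cp.Minimize(cost), cons).solve()
print("optimal cost:", round(float(cost.value), 3))
```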
In terms of limitations of this approach, contact timing optimization on the full nonlinear dynamics is a highly complex problem. In the authors' experience, their implementation does not produce a feasible solution for many possible 3D jumps. Because of this, the authors opted to use simplified dynamics to reduce complexity. In terms of advantages, the contact timing is now automated and no longer needs to be manually tuned. This is a huge advantage over previous papers where a key part of the process is finding the correct contact scheduling that works with specific jumps.
In terms of unanswered questions and future work, the authors hope to add vision so that obstacles can be autonomously detected and avoided within this optimized contact-timing framework.
#### III-B6 Future Directions for Gait Planning
Some general future directions for the subfield of gait planning include integrating vision for autonomous gait planning and making these algorithms viable online. Right now, most of these algorithms have to be optimized offline and ported into a controlled testing environment. Only for the simplest objective functions, where the problem reduces to a feasibility check, is online computation remotely feasible, and even in this edge case the optimization is fast but the performance is suboptimal. I believe that a problem formulation or hardware innovation that enables high performance while optimizing online is the future of this field, as it is a quality that every paper lacks so far.
Another path, coupled with online optimization, is implementing vision alongside these optimization algorithms. This is probably a hurdle that many papers have not addressed because it is already a large computational cost to implement software that turns video footage into an intermediate data representation that can be passed to the backend optimization. If the two problems of vision and online optimization can both be solved, then truly robust robots will be created.
## III Societal Impact
The potential societal impact of humanoid robots is vast. Though still in the early stages of development, these robots can assist our society with a wide array of tasks. These can be categorized into situations where there is danger to humans, or where the task requires lots of repetitive action. For example, search and rescue during natural disasters or the collapse of mines requires humans to venture into dangerous environments, as does dealing with leaks or catastrophic failures in a nuclear power plant. In these situations, humanoid robots can provide a net positive impact. For cases where repetitive and simple tasks need to be performed, for example in factories or on farms, robots can be used to increase productivity as there will be less need to hire people to do those tasks. However, this leads to potential negative impacts on society such as the replacement of humans and therefore job losses. These factors will have to be carefully weighed by society as the capabilities of humanoid robots expand to a point where they can be reliably deployed in real-world settings.
## IV Possible solutions to the unanswered questions
Convex optimization offers potential solutions for the unresolved questions around the stability and control of legged robots. As a mathematical tool, convex optimization is well-suited to handle uncertainties, disturbances, and efficiency in control systems. For instance, we can formulate robust control strategies as convex optimization problems that seek to minimize the impact of uncertainties and disturbances. Furthermore, the computational efficiency of optimization algorithms can enhance real-time control performance. For dealing with uneven terrains, a convex problem can be framed to minimize the deviation of the robot's foot placements from optimal values. Energy efficiency in control systems can also be targeted by formulating a cost function that considers both stability and energy usage. Convex optimization can help integrate learning-based methods into traditional control strategies by defining suitable cost functions and constraints that ensure system stability while allowing for adaptation. Slippage can be handled by formulating constraints in the optimization problem that limit the feasible region of the foot forces. Similarly, switching between different locomotion modes and transitioning between terrains can be managed by defining mode-specific and terrain-specific constraints and cost functions. Force feedback control can be incorporated by adding additional constraints to the optimization problem to ensure balance and stability. Finally, convex optimization allows us to explore various design and control parameters systematically, balancing stability, performance, and efficiency for specific tasks or environments.
|
2305.19518 | Label-Retrieval-Augmented Diffusion Models for Learning from Noisy
Labels | Learning from noisy labels is an important and long-standing problem in
machine learning for real applications. One of the main research lines focuses
on learning a label corrector to purify potential noisy labels. However, these
methods typically rely on strict assumptions and are limited to certain types
of label noise. In this paper, we reformulate the label-noise problem from a
generative-model perspective, $\textit{i.e.}$, labels are generated by
gradually refining an initial random guess. This new perspective immediately
enables existing powerful diffusion models to seamlessly learn the stochastic
generative process. Once the generative uncertainty is modeled, we can perform
classification inference using maximum likelihood estimation of labels. To
mitigate the impact of noisy labels, we propose the
$\textbf{L}$abel-$\textbf{R}$etrieval-$\textbf{A}$ugmented (LRA) diffusion
model, which leverages neighbor consistency to effectively construct
pseudo-clean labels for diffusion training. Our model is flexible and general,
allowing easy incorporation of different types of conditional information,
$\textit{e.g.}$, use of pre-trained models, to further boost model performance.
Extensive experiments are conducted for evaluation. Our model achieves new
state-of-the-art (SOTA) results on all the standard real-world benchmark
datasets. Remarkably, by incorporating conditional information from the
powerful CLIP model, our method can boost the current SOTA accuracy by 10-20
absolute points in many cases. | Jian Chen, Ruiyi Zhang, Tong Yu, Rohan Sharma, Zhiqiang Xu, Tong Sun, Changyou Chen | 2023-05-31T03:01:36Z | http://arxiv.org/abs/2305.19518v2 | # Label-Retrieval-Augmented Diffusion Models for Learning from Noisy Labels
###### Abstract
Learning from noisy labels is an important and long-standing problem in machine learning for real applications. One of the main research lines focuses on learning a label corrector to purify potential noisy labels. However, these methods typically rely on strict assumptions and are limited to certain types of label noise. In this paper, we reformulate the label-noise problem from a generative-model perspective, _i.e._, labels are generated by gradually refining an initial random guess. This new perspective immediately enables existing powerful diffusion models to seamlessly learn the stochastic generative process. Once the generative uncertainty is modeled, we can perform classification inference using maximum likelihood estimation of labels. To mitigate the impact of noisy labels, we propose the **L**abel-**R**etrieval-**A**ugmented (LRA) diffusion model 1, which leverages neighbor consistency to effectively construct pseudo-clean labels for diffusion training. Our model is flexible and general, allowing easy incorporation of different types of conditional information, _e.g._, use of pre-trained models, to further boost model performance. Extensive experiments are conducted for evaluation. Our model achieves new state-of-the-art (SOTA) results on all the standard real-world benchmark datasets. Remarkably, by incorporating conditional information from the powerful CLIP model, our method can boost the current SOTA accuracy by 10-20 absolute points in many cases.
Footnote 1: Code is available at [https://github.com/punar-playground/LRA-diffusion](https://github.com/punar-playground/LRA-diffusion)
## 1 Introduction
Deep neural networks have achieved extraordinary accuracy on various classification tasks. These models are typically trained through supervised learning using large amounts of labeled data. However, large-scale data labeling can cost a huge amount of time and effort, and is prone to errors caused by human mistakes or automatic labeling algorithms [1]. In addition, research has shown that the ability of deep neural network models to fit random labels can result in reduced generalization ability when learning with corrupted labels [2]. Therefore, robust learning methods using noisy labels are essential for applying deep learning models to cheap yet imperfect datasets for supervised learning tasks.
There are multiple types of label noise investigated by previous research. More recent research has focused on studying the more realistic feature-dependent label noise, where the probability of mislabeling a given instance depends on its characteristics. This type of noise is more consistent with label noise in real-world datasets [3, 4, 1, 5, 6, 7]. To address this type of noise, a model is expected to be able to estimate the uncertainty of each training label. Many state-of-the-art methods have primarily relied on the observation that deep neural networks tend to learn simple patterns before memorizing random noise [8, 9]. This means a temporary phase exists in the learning process, where the model has learned useful features but has yet not started overfitting corrupt labels. At this stage, the model predictions can be used to modify the training labels, so that they are more consistent with model predictions [10, 11, 12, 13, 14, 15]. By correctly modifying
the labels, the number of clean training samples increases, which can further benefit the training. This type of approach, however, is inherently risky because the point at which the model starts to overfit varies with the network structure and dataset. Starting too early can corrupt the training data, while starting too late may not prevent overfitting [16]. Therefore, it is vital to carefully tune the hyper-parameters of the training strategy, such as the number of warm-up epochs, the learning rate, and the uncertainty threshold, to achieve successful training results.
Another class of methods adopts the assumption of label propagation in semi-supervised learning [17, 18], where nearby data points in a feature space tend to have the same label. Therefore, they use _neighbor consistency2_ regularization to prevent overfitting of the model [19, 20]. The performance highly depends on the quality of the encoder that maps the data to the feature space, as retrieving a neighbor that belongs to a different class could further mislead the training process. Encoders are therefore required to first learn high-level features of the data that can be used for classification, which could be trained simultaneously with the classifier using noisy labels. However, the training can also lead to overfitting or underfitting.
Footnote 2: Nearby data points tend to have the same label.
In this paper, by contrast, we formulate the label noise problem from a generative-model perspective, which naturally provides new insights into approaching the problem. Our intuition is to view the noisy labeling process as a stochastic label generation process. Thus, we propose to adopt the powerful diffusion model as the generative building block. Figure 1 illustrates our intuition. In the generative process, we start with a noisy estimation of the label, then gradually refine it to recover the clean label, which is equivalent to the reverse denoising process of the diffusion model.
Specifically, the diffusion model takes a noisy label and some useful conditional information (to be specified) as inputs, and learns to recover/generate the ground-truth labels as outputs. One challenge in this setting is that only noisy labels are available in practice for training. To overcome this issue, we adopt the principle of _neighbor consistency_, and propose _label-retrieval augmentation_ to construct pseudo clean labels for diffusion model training, where a pre-trained image encoder is used to define the neighborhood of a sample. It is worth noting that the pre-trained image encoder would not be affected by the label noise, because they can be trained in a self-supervised manner [21, 22] or on an additional clean dataset [23, 24]. In fact, pre-training can tremendously improve the model's adversarial robustness [25] and has been used to improve model robustness to label corruption [26]. Another merit of our design is that it is general enough to allow natural incorporation of powerful large pre-trained model such as the CLIP model to further boost the performance.
In addition, the probability nature of diffusion models can also be better equipped to handle uncertainty in the data and label, thus providing more robust and accurate predictions. We call our model LRA-diffusion (label-retrieval-augmented diffusion).
Our main contributions are summarized as follows:
* We formulate learning from noisy labels as modeling a stochastic process of conditional label generation, and propose to adopt the powerful diffusion model to learn the conditional label distribution.
* We incorporate the neighbor consistency principle into the modeling, and design a novel label-retrieval-augmented diffusion model to learn effectively from noisy label data.
Figure 1: Label denoising as a reverse noising process.
* We further improve our model by incorporating auxiliary conditional information from large pre-trained models such as CLIP.
* Our model achieves the new state-of-the-art (SOTA) in various real-world noisy label benchmarks, _e.g._, 20% accuracy improvement on noisy CIFAR-100 benchmark.
## 2 Preliminary
Diffusion models were initially designed for generative modeling. Recently, it has been extended for classification and regression problems. In this section, we introduce the Classification and Regression Diffusion Models (CARD) [27], which our model is based on.
The CARD model transforms deterministic classification into a conditional label generation process, allowing for more flexible uncertainty modeling in the labeling process [27]. Similar to the standard diffusion model, CARD consists of a forward process and a reverse process. In the forward process, an \(n\)-dimensional one-hot label \(\mathbf{y}_{0}\) is gradually corrupted into a series of intermediate random vectors \(\mathbf{y}_{1:T}\), which converges after \(T\) steps to a random variable with a multivariate Gaussian distribution \(\mathcal{N}(f_{q}(\mathbf{x}),\mathbf{I})\) (the latent distribution), whose mean is defined by a pre-trained \(n\)-dimensional image encoder \(f_{q}\). The transition steps between adjacent intermediate predictions are modeled as Gaussian distributions, \(q(\mathbf{y}_{t}|\mathbf{y}_{t-1},f_{q})=\mathcal{N}(\mathbf{y}_{t};\mathbf{\mu}_{t},\beta_{t}\mathbf{I})\), with mean values \(\mathbf{\mu}_{1},\cdots,\mathbf{\mu}_{T}\) and a variance schedule \(\beta_{1},\cdots,\beta_{T}\), where \(\mathbf{\mu}_{t}=\sqrt{1-\beta_{t}}\mathbf{y}_{t-1}+(1-\sqrt{1-\beta_{t}})f_{q}(\mathbf{x})\). This admits a closed-form sampling distribution \(q(\mathbf{y}_{t}|\mathbf{y}_{0},f_{q})=\mathcal{N}(\mathbf{y}_{t};\mathbf{\mu}_{t},(1-\bar{\alpha}_{t})\mathbf{I})\) at an arbitrary timestep \(t\), with \(\mathbf{\mu}_{t}=\sqrt{\bar{\alpha}_{t}}\mathbf{y}_{0}+(1-\sqrt{\bar{\alpha}_{t}})f_{q}(\mathbf{x})\). The mean term can be viewed as an interpolation between the true data \(\mathbf{y}_{0}\) and the mean of the latent distribution \(f_{q}(\mathbf{x})\), with a weighting term \(\bar{\alpha}_{t}=\prod_{t}(1-\beta_{t})\).
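A minimal sketch of this closed-form forward sample is shown below, assuming a linear variance schedule; the schedule, tensor shapes, and toy inputs are placeholders and not CARD's actual configuration.

```python
import torch

# Closed-form forward (noising) sample q(y_t | y_0, f_q):
# y_t = sqrt(abar_t) * y0 + (1 - sqrt(abar_t)) * f_q(x) + sqrt(1 - abar_t) * eps
T = 1000
betas = torch.linspace(1e-4, 0.02, T)              # assumed linear variance schedule
alpha_bar = torch.cumprod(1.0 - betas, dim=0)

def q_sample(y0, fq_x, t):
    """y0: one-hot labels (B, n); fq_x: mean estimate f_q(x) (B, n); t: (B,) steps."""
    ab = alpha_bar[t].unsqueeze(-1)
    eps = torch.randn_like(y0)
    yt = torch.sqrt(ab) * y0 + (1.0 - torch.sqrt(ab)) * fq_x + torch.sqrt(1.0 - ab) * eps
    return yt, eps

# toy usage with 4 classes and a uniform mean estimate
y0 = torch.nn.functional.one_hot(torch.tensor([2, 0]), num_classes=4).float()
fq_x = torch.full((2, 4), 0.25)
yt, eps = q_sample(y0, fq_x, torch.tensor([10, 900]))
print(yt.shape)
```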
In the reverse (generative) process, CARD reconstructs a label vector \(\mathbf{y}_{0}\) from an \(n\)-dimensional Gaussian noise \(y_{T}\sim\mathcal{N}(f_{q}(\mathbf{x}),\mathbf{I})\) by approximating the denoising transition steps conditioned on the data point \(\mathbf{x}\) and another pre-trained image encoder \(f_{p}\) in an arbitrary dimension. The transition step is also Gaussian for an infinitesimal variance \(\beta_{t}\)[28] (define \(\tilde{\beta}_{t}=\frac{1-\bar{\alpha}_{t-1}}{1-\bar{\alpha}_{t}}\beta_{t}\)):
\[p_{\theta}(\mathbf{y}_{t-1}|\mathbf{y}_{t},\mathbf{x},f_{p})=\mathcal{N}( \mathbf{y}_{t-1};\mathbf{\mu}_{\theta}(\mathbf{y}_{t},\mathbf{x},f_{p},t),\tilde {\beta}_{t}\mathbf{I}). \tag{1}\]
The diffusion model is learned by optimizing the evidence lower bound with stochastic gradient descent:
\[\mathcal{L}_{\text{ELBO}}=\mathbb{E}_{q}\left[\mathcal{L}_{T}+\sum_{t>1}^{T} \mathcal{L}_{t-1}+\mathcal{L}_{0}\right],\text{ where} \tag{2}\]
\[\mathcal{L}_{0}=-\log p_{\theta}(\mathbf{y}_{0}|\mathbf{y}_{1}, \mathbf{x},f_{p}),\ \ \mathcal{L}_{t-1}=D_{\text{KL}}\left(q(\mathbf{y}_{t-1}|\mathbf{y}_{t}, \mathbf{y}_{0},\mathbf{x},f_{q})||p_{\theta}(\mathbf{y}_{t-1}|\mathbf{y}_{t},\mathbf{x},f_{p})\right)\] \[\mathcal{L}_{T}=D_{\text{KL}}\left(q(\mathbf{y}_{T}|\mathbf{y}_{0 },\mathbf{x},f_{q})||p(\mathbf{y}_{T}|\mathbf{x},f_{p})\right).\]
Following [29], the objective can be simplified to the form given in Algorithm 1.
## 3 Label-Retrieval-Augmented Diffusion Model
Inspired by CARD, Label-Retrieval-Augmented (LRA) diffusion models reframe learning from noisy labels as a stochastic conditional label generation (_i.e._, label diffusion) process. In this section, we first provide an overview of our model in Section 3.1 and then introduce the proposed label-retrieval-augmented component in Section 3.2, which can leverage label consistency in the training data. Next, we introduce an accelerated label diffusion process to significantly reduce classification model inference time in Section 3.3. Finally, a new conditioning mechanism is proposed to enable the usage of pre-trained models in Section 3.4.
### Model Overview
Our overall label-retrieval-augmented diffusion model is illustrated in Figure 2, where a diffusion model is adopted for progressively label denoising, by leveraging both the retrieved labels and auxiliary information from pre-trained models. Our model employs two pre-trained networks, denoted as \(f_{q}\) and \(f_{p}\) encoders, to encode conditional information that facilitates the generation process. The \(f_{q}\) encoder serves as a mean estimator for \(\mathbf{y}_{T}\), providing an initial label guess for a given image. This encoder could be a standard classifier trained on noisy labels. On the other hand, the \(f_{p}\) encoder operates as a high-dimensional feature extractor, assisting in guiding the reverse procedure. \(\mathbf{y}_{t}\) and \(f_{p}(\mathbf{x})\) are concatenated together before being processed. Details about our neural-network architecture design are provided in Supplementary C.
During training, we use labels retrieved from the neighborhood as the generation target \(\mathbf{y}_{0}\). Then, in the forward process, the distribution of neighboring labels is progressively corrupted towards a standard Gaussian distribution centered at the estimated mean \(f_{q}(\mathbf{x})\). During testing, we employ a generalized DDIM method to efficiently compute the maximum likelihood estimation of \(\mathbf{y}_{0}\).
### Label-retrieval Augmentation for Training
Since the training labels are noisy, clean labels are not available for training. To mitigate this issue, we propose a training strategy based on the concept of retrieval-augmented learning [30, 31], which makes training more resistant to label noise. Our main assumption is that, in a latent space, data points from different classes form distinct clusters; therefore, the majority of a data point's neighbors are expected to have the same label as the point itself. To this end, we use a pre-trained encoder, illustrated as \(f_{p}\) in Figure 2, to map the data into an embedding space and retrieve the labels of the \(k\) nearest neighbors \(\{y^{(1)},\cdots,y^{(k)}\}\) in the training set. The diffusion model is then trained to learn the conditional distribution \(p(\tilde{y}|\mathbf{x})\) of labels within the neighborhood, rather than the distribution of labels \(p(y|\mathbf{x})\) for the data point itself.
Label-retrieval augmentation enables the model to make use of the information from multiple and potentially more accurate labels to improve its prediction performance. Algorithm 1 describes the training procedure. Additionally, diffusion models are known to be effective at modeling multimodal distributions. By training the model to generate different labels from neighbors based on the same data point, the model can produce stochastic predictions based on the distribution to capture the uncertainty inherent in the data labeling process. As a result, the trained model can be used not only as a classifier, but also as a sampler that simulates the actual labeling process.
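A minimal sketch of this retrieval step is shown below, using cosine similarity between frozen embeddings; the encoder is replaced by random features, and \(k\), the similarity measure, and the exclusion of the query point are illustrative choices rather than the exact implementation.

```python
import numpy as np

def retrieve_neighbor_labels(feats, labels, k=10):
    """feats: (N, d) L2-normalized embeddings f_p(x); labels: (N,) noisy labels.
    Returns an (N, k) array of neighbour labels (excluding each sample itself)."""
    sims = feats @ feats.T                       # cosine similarity
    np.fill_diagonal(sims, -np.inf)              # do not retrieve the query itself
    nn_idx = np.argsort(-sims, axis=1)[:, :k]    # indices of the k nearest neighbours
    return labels[nn_idx]

# toy usage with random features standing in for f_p(x)
rng = np.random.default_rng(0)
feats = rng.normal(size=(100, 16))
feats /= np.linalg.norm(feats, axis=1, keepdims=True)
labels = rng.integers(0, 10, size=100)
neighbor_labels = retrieve_neighbor_labels(feats, labels, k=5)
# during training, y0 is sampled from {y} plus these neighbour labels
# (cf. the sampling step in Algorithm 1)
print(neighbor_labels.shape)
```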
Figure 2: Overview of the proposed framework for improving learning performance from noisy labels. The figure depicts the three main components, including (1) using diffusion models to imitate and inverse the label noising process; (2) using pre-trained encoders (_i.e._, \(f_{q}\) and \(f_{p}\)) within the diffusion model, and (3) the label-retrieval-augmentation approach using the \(f_{p}\) encoder to encourage neighbor consistency of image labels.
```
0:Input: training set \(\{\mathbf{X},\mathbf{Y}\}\), image encoder \(f_{p}\), \(f_{q}\).
1:while not converged do
2: Sample data \((\mathbf{x},y)\sim\{\mathbf{X},\mathbf{Y}\}\); time slice \(t\sim\{1,\cdots,T\}\); and noise \(\boldsymbol{\epsilon}\sim\mathcal{N}(0,\mathbf{I})\)
3: Retrieve labels \(\{y^{(1)},\cdots,y^{(k)}\}\) of neighbors in the feature space defined by the encoder \(f_{p}\)
4: Sample \(y^{\prime}\sim\{y,y^{(1)},\cdots,y^{(k)}\}\), and convert it to a one-hot vector \(\mathbf{y}_{0}\)
5: Take gradient descent step on the loss: \[\left\|\boldsymbol{\epsilon}-\boldsymbol{\epsilon}_{\theta}(\sqrt{\bar{ \alpha}_{t}}\mathbf{y}_{0}+(1-\sqrt{\bar{\alpha}_{t}})f_{q}(\mathbf{x})+\sqrt {1-\bar{\alpha}_{t}}\boldsymbol{\epsilon},\mathbf{x},f_{p}(\mathbf{x}),t) \right\|^{2}\]
6:endwhile
```
**Algorithm 1** Training
### Efficient Inference with generalized DDIM
The iterative nature of the classification diffusion model means its inference efficiency is not comparable to that of traditional classifiers. To enhance inference efficiency, we incorporate the efficient sampling method Denoising Diffusion Implicit Models (DDIM) [32] to accelerate the label diffusion process. However, the use of the mean estimator \(f_{q}\) makes DDIM incompatible with our setting, as our generation process begins with a non-zero-mean Gaussian distribution \(\mathcal{N}(f_{q}(\mathbf{x}),\mathbf{I})\). Therefore, we adjust the DDIM method into a more general form that fits our framework. Analogous to DDIM, our sampling process maintains the same marginal distribution as the original closed-form sampling process \(q(\mathbf{y}_{t}|\mathbf{y}_{0},f_{q})\). Detailed derivations can be found in Supplementary A.
With DDIM, the trained model can generate a label vector in far fewer steps by following a pre-defined sampling trajectory \(\{T=\tau_{S}>\cdots>\tau_{1}=1\}\), where \(S<T\). Consequently, \(\mathbf{y}_{t}\) can be computed as:
\[\mathbf{y}_{\tau_{s}}=\sqrt{\bar{\alpha}_{\tau_{s}}}\mathbf{y}_{0}+(1-\sqrt{ \bar{\alpha}_{\tau_{s}}})f_{q}(\mathbf{x})+\sqrt{1-\bar{\alpha}_{\tau_{s}}} \boldsymbol{\epsilon}, \tag{3}\]
where \(\boldsymbol{\epsilon}\sim\mathcal{N}(\mathbf{0},\mathbf{I})\). We can then predict the _denoised label_\(\mathbf{\tilde{y}}_{0}\), a prediction of \(\mathbf{y}_{0}\) given \(\mathbf{y}_{\tau_{s}}\), as:
\[\mathbf{\tilde{y}}_{0}=\frac{1}{\sqrt{\bar{\alpha}_{\tau_{s}}}}[\mathbf{y}_{ \tau_{s}}-(1-\sqrt{\bar{\alpha}_{\tau_{s}}})f_{q}(\mathbf{x})-\sqrt{1-\bar{ \alpha}_{\tau_{s}}}\boldsymbol{\epsilon}_{\theta}(\mathbf{y}_{\tau_{s}}, \mathbf{x},f_{p}(\mathbf{x}),\tau_{s})]. \tag{4}\]
When \(\tau_{s-1}>0\), we can compute \(\mathbf{y}_{\tau_{s-1}}\) given \(\mathbf{y}_{\tau_{s}}\) from the non-Markovian forward process defined as:
\[\mathbf{y}_{\tau_{s-1}}=\sqrt{\bar{\alpha}_{\tau_{s-1}}}\mathbf{\tilde{y}}_{0} +(1-\sqrt{\bar{\alpha}_{\tau_{s-1}}})f_{q}(\mathbf{x})+\sqrt{1-\bar{\alpha}_{ \tau_{s-1}}}\cdot\boldsymbol{\epsilon}_{\theta}(\mathbf{y}_{\tau_{s}}, \mathbf{x},f_{p}(\mathbf{x}),\tau_{s}). \tag{5}\]
As the dimension of label vectors is usually much lower than that of an image, the model can employ fewer steps in the reverse process without compromising generative quality. In our experiments, we use \(S=10\) and \(T=1000\), substantially reducing the time cost of the classification process. Supplementary Figure B gives an example of the label generation (classification) process on the CIFAR-10 dataset.
To further enhance the inference efficiency, we propose a simple and effective trick for computing the maximum likelihood estimation of labels. As the generative process is deterministic given \(\mathbf{y}_{T}\), which is sampled from a uni-modal Gaussian distribution, we approximate the maximum likelihood estimation of labels by initiating from the mean, _i.e._, \(\mathbf{y}_{0}=\text{DDIM}(\mathbf{y}_{T}=f_{q}(\mathbf{x}),\mathbf{x})\). This trick circumvents the time-consuming majority voting approximation that demands repeated generation.
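The sketch below puts Eqs. (3)-(5) and the mean-initialization trick together into a single deterministic sampling loop; the noise-prediction network, variance schedule, and time-index convention are placeholders, so it illustrates the structure of the generalized DDIM inference rather than reproducing the released code.

```python
import torch

@torch.no_grad()
def ddim_classify(eps_model, x, fp_x, fq_x, alpha_bar, taus):
    """eps_model stands in for the trained eps_theta(y_t, x, f_p(x), t)."""
    y = fq_x.clone()                                   # y_T = f_q(x): start at the mean
    y0_hat = y
    for i in range(len(taus) - 1, 0, -1):
        t, t_prev = taus[i], taus[i - 1]
        ab_t, ab_prev = alpha_bar[t], alpha_bar[t_prev]
        eps = eps_model(y, x, fp_x, t)
        # Eq. (4): predict the denoised label y0_hat from y_t
        y0_hat = (y - (1 - ab_t.sqrt()) * fq_x - (1 - ab_t).sqrt() * eps) / ab_t.sqrt()
        # Eq. (5): deterministic jump to the previous time on the trajectory
        y = ab_prev.sqrt() * y0_hat + (1 - ab_prev.sqrt()) * fq_x + (1 - ab_prev).sqrt() * eps
    return y0_hat.argmax(dim=-1)                       # predicted class

# toy usage with a dummy noise model and S = 10 steps out of T = 1000
T, n = 1000, 10
alpha_bar = torch.cumprod(1 - torch.linspace(1e-4, 0.02, T), dim=0)
taus = torch.linspace(1, T - 1, 10).long()             # assumed index convention
dummy = lambda y, x, fp, t: torch.zeros_like(y)
pred = ddim_classify(dummy, None, None, torch.zeros(2, n), alpha_bar, taus)
print(pred)
```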
### Flexible conditioning with pre-trained encoders
The original CARD model employs a single model for both the \(f_{p}\) and \(f_{q}\) encoders. However, this limits their representation capacity [33], as the dimension of \(f_{q}(\mathbf{x})\) is typically relatively small, _i.e._, equal to the number of classes. To mitigate this and improve model performance, we abandon the assumption that \(f_{p}=f_{q}\), enabling the use of a more powerful pre-trained encoder (_e.g._, the CLIP image encoder [24]) with arbitrary dimensions for \(f_{p}\).
Empirically, we find that the model can still achieve satisfactory performance when the magnitude of \(f_{q}(\mathbf{x})\) is small, which means the latent representation \(\mathbf{y}_{T}=f_{q}(\mathbf{x})+\mathbf{\epsilon}\) is dominated by the noise term \(\mathbf{\epsilon}\sim\mathcal{N}(\mathbf{0},\mathbf{I})\). In this case, the information provided by \(f_{q}(\mathbf{x})\) to the diffusion process is limited. As a result, we simply set \(f_{q}(\mathbf{x})=\mathbf{0}\) to avoid handling an additional n-dimensional \(f_{q}\) encoder. For the \(f_{p}\) encoder, one can employ flexible pre-trained models as presented in Section 5. In this paper, we use the SimCLR model trained on the training images (without supervised information) and the pre-trained CLIP model.
## 4 Related work
Robust loss function and regularization techniques.Several noise-robust loss functions and regularization techniques have been proposed as alternatives to the commonly used cross-entropy loss (CE), which is not robust to label noise. Mean absolute error (MAE) [34] loss has been shown to be robust against noisy labels. Generalized Cross-Entropy (GCE) [35] combines CE and MAE for faster convergence and better accuracy. Symmetric cross-entropy Learning (SL) [36] couples CE with a noise-robust counterpart and has been found to have higher performance than GCE, particularly for high noise rates. Label Smoothing Regularization [37] alleviates overfitting by linearly combining labels with a uniform distribution. Bootstrapping technique [38] combines the labels with the current model prediction. Dynamic bootstrapping [39, 40] uses the prediction confidence to control the weighting in the combination. Neighbor Consistency Regularization (NCR) [19] encourages consistency of prediction based on learned similarity. Our method is also based on the principle of neighbor consistency. However, instead of encouraging consistent predictions among neighbors, our model directly learns from the labels of neighbors. This allows for estimating instance-level uncertainty by learning the label distribution among neighbors, rather than learning a point estimation.
Data recalibration.Data recalibration techniques progressively remove or correct mislabeled data during training to improve the reliability of training data. Wang et al. [11] used the learned similarity and label consistency to identify and discard data with noisy labels. TopoFilter [41] selects clean data by analyzing the topological structures of the training data in the learned feature space. Cheng et al. [4] defines a Bayes optimal classifier to correct labels. Zheng et al. [14] proposed using a likelihood ratio test (LRT) to correct training labels based on predictions. Zhang et al. [15] used LRT to correct labels progressively and provides a theoretical proof for convergence to the Bayes optimal classifier. Dividemix [42], LongReMix [43], and CC [44] treat the low confident data as unlabeled, and then employ semi-supervised learning algorithms [45] for further analysis. C2D [46] combines Dividemix with self-supervised pre-training to boost its performance by improving the quality of the extracted features. Our approach employs the same assumption as TopoFilter that data belonging to the same class should be clustered together with ideal feature representations. However, our technique isn't confined to learned features potentially distorted by label noises. Instead, similar to C2D, our method can effectively leverage the high-quality feature learned by pre-trained encoders to achieve superior accuracy.
Guided diffusion model and retrieval augmentation. Guided diffusion is a technique applied to diffusion models for conditional generation. Classifier guidance [47] is a cost-effective method leveraging the gradient of a classifier to steer the generative process of a trained diffusion model. On the other hand, classifier-free guidance [48] learns the conditional distribution during training for improved generation quality. This approach also allows for the use of continuous guidance information, such as embedding vectors, rather than being limited to discrete labels. Classification and Regression Diffusion Models (CARD) [27] formulate classification and regression as a conditional generation task that generates labels or target variables conditioned on images. Our approach follows the same paradigm, and leverages the multi-modal coverage ability of diffusion models to learn the label distribution within the neighborhood. Retrieval-augmented diffusion models [30] used retrieved neighbors from an external database as conditional information to train diffusion models for image synthesis. Retrieval Augmented Classification [31] used retrieval augmentation to train classification models using class-imbalanced training data. Our approach differs from theirs by retrieving
labels instead of data to reduce label noise in training rather than increasing the training data. In addition, our model does not require an external database.
## 5 Experiments
We first evaluate the performance of our method on datasets with various types of synthetic noise. Then, we perform experiments on four real-world datasets. To better understand the sources of the performance gains, we conduct ablation studies to measure the impacts of conditional diffusion and of different pseudo-label construction strategies. All experiments were run on four NVIDIA Titan V GPUs. Comprehensive implementation details and hyper-parameters are provided in Supplementary C.
### Results on Synthetic Noisy Datasets
We conduct simulation experiments on the CIFAR-10 and CIFAR-100 datasets [50] to evaluate our method's performance under various noise types. Specifically, following [15], we test with _polynomial margin diminishing_ (PMD) noise, a novel instance-dependent noise, at two noise levels and three hybrid noise types by adding _independent and identically distributed (i.i.d)_ noises on top of instance-dependent noise.
For instance-dependent noise, we adopt the recently proposed _polynomial margin diminishing_ (PMD) noise [15]. Following the original paper, we train a classifier \(\mathbf{\eta}(x)\) using clean labels to approximate the probability mass function of the posterior distribution \(p(y|\mathbf{x})\). Images are initially labeled as their most likely class \(u_{\mathbf{x}}\) according to the predictions of \(\mathbf{\eta}(x)\). Then, we randomly alter the labels to the second most likely class \(s_{\mathbf{x}}\) for each image with probability: \(p_{u_{\mathbf{x}},s_{\mathbf{x}}}=-\frac{c}{2}\left[\mathbf{\eta}_{u_{\mathbf{x}}} (\mathbf{x})-\mathbf{\eta}_{s_{\mathbf{x}}}(\mathbf{x})\right]^{2}+\frac{c}{2}\), where \(c\) is a constant noise factor that controls the final percentage of noisy labels. Since corrupting labels to the second most likely class can confuse the "clean" classifier the most, it is expected to have the most negative impact on the performance of models learned with noisy labels. For PMD noise, we simulate two noise levels where 35% and 70% of the labels are corrupted.
For i.i.d noise, following [51, 15], we use a transition probability matrix \(\mathbf{T}\) to generate noisy labels. Specifically, we corrupt the label of the \(i\)-th class to the \(j\)-th class with probability \(T_{ij}\). We adopt two types of i.i.d noise in this study: (1) Uniform noise, where samples are incorrectly labeled as one of the other \((n-1)\) classes with a uniform probability \(T_{ij}=\tau/(n-1)\) and \(T_{ii}=1-\tau\), with \(\tau\) the pre-defined noise level; (2) Asymmetric noise: we carefully design the transition probability matrix such that for each class \(i\), the label can only be mislabeled as one specific class \(j\) or remain unchanged with probability \(T_{ij}=\tau\) and \(T_{ii}=1-\tau\). In our experiment, we generated three types of hybrid noise by adding 30%, 60% uniform, and 30% asymmetric noise on top of 35% PMD noise.
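For reference, the snippet below sketches how such i.i.d. noise can be injected from a transition matrix \(\mathbf{T}\); the specific class pairing used for the asymmetric case is an assumption, since the paper designs its own mapping.

```python
import numpy as np

def uniform_T(n, tau):
    # T[i, j] = P(noisy label = j | clean label = i), uniform off-diagonal mass
    T = np.full((n, n), tau / (n - 1))
    np.fill_diagonal(T, 1.0 - tau)
    return T

def asymmetric_T(n, tau):
    # each class i is flipped only to one specific class; here i -> (i+1) mod n
    # is an assumed pairing, not the paper's actual mapping
    T = np.eye(n) * (1.0 - tau)
    for i in range(n):
        T[i, (i + 1) % n] = tau
    return T

def corrupt(labels, T, rng):
    return np.array([rng.choice(len(T), p=T[y]) for y in labels])

rng = np.random.default_rng(0)
clean = rng.integers(0, 10, size=1000)
noisy = corrupt(clean, uniform_T(10, 0.3), rng)
print("actual noise rate:", np.mean(noisy != clean))
```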
We test our proposed label-retrieval-augmented diffusion model using two pre-trained encoders: (1) SimCLR [21]: We trained two encoders using the ResNet50 [52] architecture on the CIFAR-10 and CIFAR-100 datasets through contrastive learning; (2) CLIP [24]: the model is pre-trained on a large dataset comprising 400 million image-text pairs. Specifically, we used the vision transformer [53] encoder (ViT-L/14) with pre-trained weights, the best-performing architecture in CLIP. For simplification, we will refer to these configurations as SimCLR LRA-diffusion and CLIP LRA-diffusion. We also investigated the performance of the KNN algorithm within the feature space defined by the SimCLR and CLIP encoders, denoted as SimCLR KNN and CLIP KNN respectively.
Table 1 lists the performance of the _Standard_ method (train classifier using noisy labels), our method, and baseline methods for learning with noisy labels. The results in white rows are borrowed directly from [15]. We can see that using the SimCLR encoder in the LRA-diffusion method results in superior test accuracy on both CIFAR-10 and CIFAR-100 datasets compared to other baselines, without the need for additional training data. This is because the SimCLR encoder is trained in an unsupervised manner, making it immune to label noise, and it can effectively extract categorical features for accurate image retrieval. Therefore, when the correct labels dominate the label distribution in the neighborhood, training with the labels of the retrieved neighbor images allows the model to learn with more correct labels.
Notably, incorporating the CLIP encoder into our method significantly improves test accuracy over our SimCLR LRA-diffusion, due to its excellent representation capabilities. In fact, performing KNN in the CLIP feature space alone achieves accuracy surpassing all competing methods in most experiments. This allows more clean labels to be used during training, thus resulting in even higher accuracy.
### Ablation Studies
In order to evaluate the contribution of diffusion and the pre-trained encoder, we perform ablation experiments using CARD [27] and linear probing to learn from noisy labels and retrieval-augmented labels. The results are summarized in Table 2. Our model significantly outperforms CARD, mainly due to the more informative \(f_{p}\) encoder. Moreover, our experiments on CIFAR-10 using SimCLR revealed that directly combining LRA with linear probing can result in lower accuracy than using linear probing on noisy labels.
\begin{table}
\begin{tabular}{c c c c c c} \hline \multirow{2}{*}{**Methods**} & \multicolumn{5}{c}{**CIFAR-10**} \\ & 35\% PMD & 70\% PMD & 35\% PMD + 30\% U & 35\% PMD + 60\% U & 35\% PMD + 30\% A \\ \hline Standard & 78.11 \(\pm\) 0.74 & 41.98 \(\pm\) 1.96 & 75.26 \(\pm\) 0.32 & 64.25 \(\pm\) 0.78 & 75.21 \(\pm\) 0.64 \\ Co-teaching+ [49] & 79.97 \(\pm\) 0.15 & 40.69 \(\pm\) 1.99 & 78.72 \(\pm\) 0.53 & 55.49 \(\pm\) 2.11 & 75.43 \(\pm\) 2.96 \\ GCE [35] & 80.65 \(\pm\) 0.39 & 36.52 \(\pm\) 1.62 & 78.08 \(\pm\) 0.66 & 67.43 \(\pm\) 1.43 & 76.91 \(\pm\) 0.56 \\ SL [36] & 79.76 \(\pm\) 0.72 & 36.29 \(\pm\) 0.66 & 77.79 \(\pm\) 0.46 & 67.63 \(\pm\) 1.36 & 77.14 \(\pm\) 0.70 \\ LRT [14] & 80.98 \(\pm\) 0.80 & 41.52 \(\pm\) 4.53 & 75.97 \(\pm\) 0.27 & 59.22 \(\pm\) 0.74 & 76.96 \(\pm\) 0.45 \\ PLC [15] & 82.80 \(\pm\) 0.27 & 42.74 \(\pm\) 2.14 & 79.04 \(\pm\) 0.50 & 72.21 \(\pm\) 2.92 & 78.31 \(\pm\) 0.41 \\ \hline SimCLR KNN & 83.71 & 29.45 & 78.25 & 54.82 & 75.37 \\ CLIP KNN & 91.80 & 30.66 & 84.67 & 57.03 & 81.76 \\ \hline SimCLR LRA-diffusion & 88.76 \(\pm\) 0.24 & 42.63 \(\pm\) 1.97 & 88.41 \(\pm\) 0.37 & 84.43 \(\pm\) 0.82 & 85.64 \(\pm\) 0.23 \\ CLIP LRA-diffusion & 96.54 \(\pm\) 0.13 & 44.62 \(\pm\) 0.18 & 95.71 \(\pm\) 0.17 & 87.21 \(\pm\) 0.71 & 93.65 \(\pm\) 0.40 \\ \hline \multicolumn{5}{c}{**CIFAR-100**} \\ \hline Standard & 35\% PMD & 70\% PMD & 35\% PMD + 30\% U & 35\% PMD + 60\% U & 35\% PMD + 30\% A \\ \hline Standard & 57.68 \(\pm\) 0.29 & 39.32 \(\pm\) 0.43 & 48.86 \(\pm\) 0.56 & 35.97 \(\pm\) 1.12 & 45.85 \(\pm\) 0.93 \\ Co-teaching+ & 56.70 \(\pm\) 0.71 & 39.53 \(\pm\) 0.28 & 52.33 \(\pm\) 0.64 & 27.17 \(\pm\) 1.66 & 51.21 \(\pm\) 0.31 \\ GCE & 58.37 \(\pm\) 0.18 & 40.01 \(\pm\) 0.71 & 52.90 \(\pm\) 0.53 & 38.62 \(\pm\) 1.65 & 52.69 \(\pm\) 1.14 \\ SL & 55.20 \(\pm\) 0.33 & 40.02 \(\pm\) 0.85 & 51.34 \(\pm\) 0.64 & 37.57 \(\pm\) 0.43 & 50.18 \(\pm\) 0.97 \\ LRT & 56.74 \(\pm\) 0.34 & 45.29 \(\pm\) 0.43 & 45.66 \(\pm\) 1.60 & 23.37 \(\pm\) 0.72 & 52.04 \(\pm\) 0.15 \\ PLC & 60.01 \(\pm\) 0.43 & 45.92 \(\pm\) 0.61 & 60.09 \(\pm\) 0.15 & 51.68 \(\pm\) 0.10 & 56.40 \(\pm\) 0.34 \\ \hline SimCLR KNN & 54.22 & 39.25 & 51.87 & 41.73 & 46.50 \\ CLIP KNN & 79.58 & 52.55 & 69.66 & 50.91 & 61.19 \\ \hline SimCLR LRA-diffusion & 61.39 \(\pm\) 0.15 & 53.37 \(\pm\) 0.81 & 60.52 \(\pm\) 0.28 & 55.79 \(\pm\) 0.31 & 59.28 \(\pm\) 0.11 \\ CLIP LRA-diffusion & 81.91 \(\pm\) 0.10 & 74.52 \(\pm\) 0.12 & 82.80 \(\pm\) 0.11 & 81.10 \(\pm\) 0.09 & 81.78 \(\pm\) 0.15 \\ \hline \end{tabular}
\end{table}
Table 1: Classification accuracy (%) on CIFAR-10 and CIFAR-100 datasets under PMD noises and hybrid noises, combining PMD noise with Uniform (U) and Asymmetric (A) noise.
\begin{table}
\begin{tabular}{c c c c c c} \hline \multirow{2}{*}{**Methods**} & \multirow{2}{*}{**LRA space**} & \multicolumn{2}{c}{**CIFAR-10**} & \multicolumn{2}{c}{**CIFAR-100**} \\ & & 35\% PMD & 70\% PMD & 35\% PMD & 70\% PMD \\ \hline Linear probing on SimCLR & - & 86.9 & 38.93 & 56.18 & 51.87 \\ Linear probing on SimCLR + LRA & SimCLR & 63.8 & 35.84 & 53.34 & 52 \\ CARD + LRA & SimCLR & 75.08 & 34.35 & 52.03 & 32.67 \\ SimCLR LRA-diffusion (ours) & SimCLR & **88.96** & **42.63** & **61.38** & **53.57** \\ \hline Linear probing on CLIP & - & 85.35 & 37.4 & 65.02 & 53.21 \\ Linear probing on CLIP + LRA & CLIP & 95.61 & 40.17 & 63.98 & 58.53 \\ CARD + LRA & CLIP & 79.72 & 33.57 & 47.1 & 23.45 \\ CLIP LRA-diffusion (ours) & CLIP & **96.55** & **44.51** & **81.92** & **74.58** \\ \hline \end{tabular}
\end{table}
Table 2: Classification accuracy (%) of different combinations of LRA, diffusion model, and pre-trained encoders on CIFAR-10 and CIFAR-100 datasets under 35% and 70% PMD noise.
On the other hand, due to the mode-coverage ability of the diffusion model, our model can effectively learn from retrieval-augmented labels. In conclusion, our LRA-diffusion model provides an efficient approach for incorporating pre-trained encoders in the process of learning from noisy labels. Additional ablation studies in Supplementary D show that, even in the absence of a pre-trained encoder, our model can leverage the features of a noisy classifier and enhance its accuracy.
### Results on Real-world Noisy Datasets
We further evaluate the performance of our proposed method on real-world label noise. Following previous work [42, 15, 44, 43], we conducted experiments on four image datasets, _i.e._, WebVision [54], ImageNet ILSVRC12 [55], Food-101N [56], and Clothing1M [57]. For experiments on Webvision, ILSVRC12, and Food-101N datasets, we use the CLIP image encoder as the \(f_{p}\) encoder to train LRA-diffusion models. Comprehensive dataset description and implementation details can be found in the Supplementary C. We evaluated the performance of our method against a group of state-of-the-art (SOTA) methods, and the results are presented in Table 3 and Table 4. Our approach significantly outperforms all the previous methods in terms of classification accuracy.
For experiments on the Clothing1M dataset, we found that LRA-diffusion conditioned on the CLIP image encoder did not achieve SOTA accuracy. A potential explanation is that the CLIP feature is too general for this domain-specific task of categorizing fashion styles. However, our method is orthogonal to most traditional learning-with-noisy-labels approaches. As shown in the additional ablation study in Supplementary D, our method can collaborate with a trained classifier by conditioning on its feature encoder to achieve improved performance. We first use the CC [44] method to select clean samples and train a ResNet50 classifier, which achieved 75.32% accuracy (referred to as CC\({}^{*}\)). Then, we condition on its features before the classification head to train our LRA-diffusion model on the selected samples, which achieved 75.70% accuracy. As Table 5 shows, our method achieved a 0.38% improvement over CC\({}^{*}\) and beat all SOTA methods.
### Inference Efficiency Analysis
In order to test the efficiency of our model, we perform experiments assessing the runtime on the CIFAR-10 dataset and compare our method with a standard classifier that uses ResNet50.
\begin{table}
\begin{tabular}{c c c c c c c} \hline Standard & CleanNet [56] & BARE [60] & DeepSelf [61] & PLC & LongReMix & LRA-diffusion \\ \hline
81.67 & 83.95 & 84.12 & 85.10 & 85.28 & 87.39 & **93.42** \\ \hline \end{tabular}
\end{table}
Table 4: Classification accuracies (%) on the Food-101N dataset.
\begin{table}
\begin{tabular}{c c c c c c c c c} \hline
**Dataset** & DivideMix & ELR [58] & UNICON [59] & LongReMix & C2D & CC & NCR & LRA-diffusion \\ \hline
**WebVision** & 77.32 & 77.78 & 77.60 & 78.92 & 79.42 & 79.36 & 80.5 & **84.16** \\
**ILSVRC2012** & 75.20 & 70.29 & 75.29 & - & 78.57 & 76.08 & - & **82.56** \\ \hline \end{tabular}
\end{table}
Table 3: Classification accuracies (%) on WebVision, ILSVRC2012 datasets.
\begin{table}
\begin{tabular}{c c c c c c c c} \hline Standard & BARE & PLC & LongReMix & DeepSelf & C2D & NCR & CleanNet \\ \hline
68.94 & 72.28 & 74.02 & 74.38 & 74.45 & 74.58 & 74.60 & 74.69 \\ \hline \hline DivideMix & ELR & UNICON & CC\({}^{*}\) & CC & SANM [62] & LRA-diffusion \\ \hline
74.76 & 74.81 & 74.98 & 75.32 & 75.40 & 75.63 & **75.70** \\ \hline \end{tabular}
\end{table}
Table 5: Classification accuracies (%) on Clothing1M
It is worth noting that our SimCLR encoder is also built on ResNet50; thus, the standard method's runtime also reflects the linear-probing runtime on SimCLR. Table 6 shows the results.
We can see that the computational bottleneck lies in the large pre-trained encoder rather than in the diffusion model itself. In general, our method takes twice as long as a standard classifier (ResNet50) when using the SimCLR (ResNet50) and CLIP (ViT-B/32) pre-trained encoders, and larger CLIP encoders increase the time further. However, inference can be accelerated if the features are pre-computed or computed in parallel, as they only need to be computed once and can be reused later.
## 6 Conclusion
In this paper, by viewing the noisy labeling process as a conditional generative process, we leverage diffusion models to denoise the labels and accurately capture label uncertainty. A label-retrieval-augmented diffusion model was proposed to effectively learn from noisy label data by incorporating the principle of neighbor consistency. Additionally, by incorporating auxiliary conditional information from large pre-trained models such as CLIP, we are able to significantly boost the model performance. The proposed model is tested on several benchmark datasets, including CIFAR-10, CIFAR-100, Food-101N, and Clothing1M, achieving state-of-the-art results in most experiments. Future work includes improving the diffusion models by leveraging existing techniques to further push performance.
|
2309.08918 | Exploration of TPUs for AI Applications | Tensor Processing Units (TPUs) are specialized hardware accelerators for deep
learning developed by Google. This paper aims to explore TPUs in cloud and edge
computing focusing on its applications in AI. We provide an overview of TPUs,
their general architecture, specifically their design in relation to neural
networks, compilation techniques and supporting frameworks. Furthermore, we
provide a comparative analysis of Cloud and Edge TPU performance against other
counterpart chip architectures. Our results show that TPUs can provide
significant performance improvements in both cloud and edge computing.
Additionally, this paper underscores the imperative need for further research
in optimization techniques for efficient deployment of AI architectures on the
Edge TPU and benchmarking standards for a more robust comparative analysis in
edge computing scenarios. The primary motivation behind this push for research
is that efficient AI acceleration, facilitated by TPUs, can lead to substantial
savings in terms of time, money, and environmental resources. | Diego Sanmartín Carrión, Vera Prohaska | 2023-09-16T07:58:05Z | http://arxiv.org/abs/2309.08918v2 | # Exploring Tpus for AI Applications
###### Abstract
Tensor Processing Units (TPUs) are specialized hardware accelerators for deep learning developed by Google. This paper aims to explore TPUs in cloud and edge computing focusing on its applications in AI. We provide an overview of TPUs, their general architecture, specifically their design in relation to neural networks, compilation techniques and supporting frameworks. Furthermore, we provide a comparative analysis of Cloud and Edge TPU performance against other counterpart chip architectures. Our results show that TPUs can provide significant performance improvements in both cloud and edge computing. Additionally, this paper underscores the imperative need for further research in optimization techniques for efficient deployment of AI architectures on the Edge TPU and benchmarking standards for a more robust comparative analysis in edge computing scenarios. The primary motivation behind this push for research is that efficient AI acceleration, facilitated by TPUs, can lead to substantial savings in terms of time, money, and environmental resources.
TPU, Google, AI Accelerator, Hardware, HPC
## I Introduction
Google TPUs, or Tensor Processing Units, are specialized hardware accelerators for Machine Learning (ML) workloads. They were first introduced in 2016 as part of Google's efforts to improve the performance and efficiency of its ML systems [1]. TPUs are designed with a custom architecture that is efficient at performing matrix operations and processing large amounts of data. All major Artificial Intelligence (AI) network types strongly rely on matrices for their network computations -- whether that be a simple Feed-Forward Network (FFN) or a Deep Neural Network (DNN) [2]. Training and inference of large AI models are computationally expensive tasks that can become a barrier for low-resource applications. They can also be time-consuming, even on powerful hardware. Additionally, recent trends indicate that scale and performance in generative models are highly correlated, pushing AI models to increase in parameter size [3]. Hence, the need for powerful hardware that can accelerate and lower the cost of running these models is growing. TPUs have been proven to accelerate both training and inference of large AI models. They can perform 15-30x faster than contemporary GPUs and CPUs, and achieve much higher energy efficiency, with a 30-80x improvement in Tera Operations per Second (TOPS) per Watt [4]. In addition to their efficiency, TPUs are also highly scalable. Google introduced Multislice technology, enabling AI supercomputing by connecting up to tens of thousands of TPU chips on the cloud []. In this regard, Google has created large pods of TPUs that can train extensive ML models in a distributed manner, such as the workloads required in Natural Language Processing (NLP) or Computer Vision (CV) [6]. As a reference, Google's Cloud TPU v4 can handle processing at the one-quintillion (exa) Floating Point Operations Per Second (FLOPS) scale [7]. This has allowed Google to achieve impressive performance gains and has helped the company develop and use some of the world's largest AI models, like the 540B-parameter PaLM [8] or GLaM [9] with 1.2 trillion parameters.
Google has made its TPU hardware available through its cloud computing platform, and external consumers are already taking advantage of the performance gains offered by this hardware accelerator. Midjourney, one of the leading text-to-image AI startups, has been using Cloud TPU v4 to train its state-of-the-art generative model [10].
Additionally, in 2018, Google introduced a series of lighter chips (Edge TPU) available for sale under its subsidiary, the Coral brand. The Micro Edge TPU board, with a size of 64x30mm, is capable of 4 TOPS.
In this paper, we contribute a clear, in-depth analysis and definition of TPUs, their architecture, and their AI-task performance metrics in both cloud and edge computing, in comparison to other chip architectures. We further identify the need for the development of new optimization techniques that permit efficient deployment of new AI architectures on the Edge TPU.
Fig. 1: An illustration of an artificial neuron describing the primary operations. [49]
## II Literature Review
Through this literature review, we present an overview of prior studies centered on AI-related TPU research, and comparative studies.
The differences between chip architectures, including CPUs, GPUs, FPGAs, ASICs and TPUs, have been studied in [11], focusing on the architectures without any practical experimentation. Other studies, such as [16] and [17], mention TPUs as potential hardware accelerators for parallelization in order to increase AI acceleration. Furthermore, researchers in [18] introduce TPU-MLIR, an end-to-end compiler based on MLIR, designed to deploy pre-trained AI models to TPUs.
Practical implementations often require a benchmark, and few have been developed for testing TPUs. MLPerf [12] is an ML benchmark suite that has gathered inference and training-speed measurements for a large variety of hardware units. The latest version, v3.1, placed the NVIDIA GH200 Grace Hopper Superchip and Google's data center TPU v5e at the top of the benchmark. However, this benchmark only includes the main AI architectures and lacks exploration of the Edge TPUs. [13] compares GPUs against TPUs for accelerating Graph Neural Networks (GNNs), an architecture that is not included in the MLPerf benchmark. This highlights the incompleteness of comparative benchmarking research on TPUs.
Deployment of AI architectures on the Edge TPU has been addressed, but research remains limited. Within the general task of CV, previous research demonstrates superior performance with TPUs in running inference on Multiview CNNs [34], combining edge devices (RaspberryPi3 and Edge TPU) for increased FPS [35], and training Siamese Networks [36]. Authors in [14] explore the performance of a CNN across various edge AI accelerators, and found that the Edge TPU outperformed other accelerators in terms of low latency and high accuracy, despite the Edge TPU requiring model weights to be quantized to 8 bits. [15] compares the performance on four edge platforms but does not include System on Module (SoM) TPU devices. Further CV implementations include pose estimation [38] and semantic segmentation [39, 40].
In terms of NLP, the Edge TPU is more limited. For example, a study on edge-computing inference of RNNs was not able to conduct experiments on the Edge TPU since the operations of the architecture are not supported [33]. Authors in [19] have managed to deconstruct the encoder Transformer [41] architecture in order to run BERT [42] on an Edge TPU. This limitation similarly applies to generative AI architectures. To the best of our knowledge, only GANs have been fully deployed onto the Edge TPU [37, 50]. By a full deployment we mean that all model layers and operations are compiled by the Edge TPU Compiler [46].
While there is substantial research on TPUs, many of these studies remain isolated and lack a comprehensive exploration of, or linkage between, applications of TPUs as accelerators in AI development and research.
## III TPU Architecture and System Design
TPUs were built as an Application-Specific Integrated Circuit (ASIC) for Neural Networks (NNs) due to the fast-growing computational demands of AI applications [20]. A classic NN architecture typically involves three primary operations (Figure 1):
1. Multiplication of inputs by corresponding weights.
2. Summation of these products.
3. Application of an activation function to the resulting sum to shape the output as desired.
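As a minimal, self-contained sketch of these three operations (the values, names, and choice of ReLU activation are illustrative, not from the paper):

```python
import numpy as np

def neuron(inputs: np.ndarray, weights: np.ndarray, bias: float) -> float:
    """A single artificial neuron: multiply, sum, then activate."""
    # 1. Multiplication of inputs by corresponding weights
    products = inputs * weights
    # 2. Summation of these products (plus a bias term)
    weighted_sum = np.sum(products) + bias
    # 3. Application of an activation function (here: ReLU)
    return max(0.0, weighted_sum)

# Example usage with arbitrary values
x = np.array([0.5, -1.2, 3.0])
w = np.array([0.8, 0.1, -0.4])
print(neuron(x, w, bias=0.2))
```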
As can be seen in Figures 2 and 3, TPUs were designed with a Tensor Core composed of specialized hardware components, including Matrix Multiplication Units (MXUs), vector units, and scalar units.
MXUs are optimized for high-speed matrix operations which are fundamental in NNs and represent the most computationally intensive tasks in many ML algorithms. The MXU consists of a 128x128 systolic array of multiply-accumulators, enabling parallelized computations. It delivers the core computational power of the TPU by accelerating the first two primary operations of NNs (matrix multiplication and summation).
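To make the multiply-accumulate idea concrete, the following hedged sketch (ours, not Google's implementation) shows how a matrix product decomposes into the multiply-accumulate (MAC) steps that the MXU executes in parallel across its systolic array:

```python
import numpy as np

def matmul_mac(A: np.ndarray, B: np.ndarray) -> np.ndarray:
    """Naive matrix multiply expressed as repeated multiply-accumulate steps.
    The MXU performs these MACs in parallel across a 128x128 systolic array."""
    n, k = A.shape
    k2, m = B.shape
    assert k == k2, "inner dimensions must match"
    C = np.zeros((n, m))
    for i in range(n):
        for j in range(m):
            for t in range(k):
                C[i, j] += A[i, t] * B[t, j]  # one multiply-accumulate
    return C

A = np.random.rand(4, 3)
B = np.random.rand(3, 5)
assert np.allclose(matmul_mac(A, B), A @ B)  # matches the library matmul
```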
Alongside the MXU, the vector unit handles general computations, such as activations and softmax functions. On the other hand, the scalar unit is responsible for tasks like control flow, memory address calculations, and other maintenance operations [21].
Depending on the specific TPU version, whether it's for Google Cloud or the Edge TPU available for purchase, the specifications and capabilities can vary.
Fig. 3: Diagram illustrating the architecture of a TPU v4 chip. The architecture contains two Tensor Cores with its corresponding components and a Virtual Core connected to the High Bandwidth Memory.
Fig. 2: An illustration of the evolution of the Tensor Core. Ranging from 2017 when they introduced the Tensor Core of the TPUv2 with 1 MXU to the TPU v4 Tensor Core which has 4 MXUs and was introduced in 2021.
### _Datacenter TPUs_
TPUs are one of the most powerful and scalable AI hardware accelerators, but they are currently only available to consumers through Google Cloud Platform (GCP). The most recent chip, TPUv5e, provides up to 393 int8 (8-bit integer) TOPS. These can be networked over ultra-fast links to create pods that consist of up to 256 TPUv5e chips and can deliver a total of up to 100 int8 PetaOps of compute power [22]. Pods can also be combined using Multislice, a full-stack scaling technology with near-linear performance, which connects TPU chips within each slice through high-speed Inter-Chip-Interconnect (ICI) [23]. This allows users to scale to tens of thousands of Cloud TPU chips for individual AI workloads.
### _The Edge TPU_
The Edge TPU series is a set of lighter versions of the datacenter TPUs [24]. For this exploration, we chose to focus only on the System on Module (SoM) devices (Table 1).
While the specific internal structure of the Edge TPU remains undisclosed, given that it is an AI ASIC developed by Google's subsidiary (Coral), it very likely retains the fundamental components of datacenter TPUs, such as the MXU, scalar unit, and vector unit. Given this, a well-founded estimate of the chip's architecture is an MXU with a 64x64 systolic array at a clock of 480 MHz, resulting in 4 TOPS [21]. It is also worth mentioning that the Dev Board Mini integrates a VPU as well as a GPU [25].
Moreover, Edge TPUs exhibit impressive energy efficiency, with the Dev Board Micro processing about 3 TOPS per watt, and the Dev Board Mini and the Dev Board processing 2 TOPS per watt (refer to Table 1). More interestingly, both the Dev Board Mini and the Dev Board Micro operate without the need for cooling. This operational characteristic suggests that buffering likely occurs outside the chip.
## IV Frameworks & Compilation For TPUs
There are three main frameworks that have been adopted to take advantage of the computational efficiency offered by TPU accelerators: TensorFlow [26], PyTorch [27], and JAX [28]. In the context of this study, our primary emphasis is directed towards the JAX library, since it was designed for XLA compilation and offers flexibility for the customization of AI models.
### _The XLA Compiler_
Code that runs on TPUs must be compiled by the Accelerated Linear Algebra (XLA) compiler. XLA is a domain-specific just-in-time (jit) compiler for linear algebra that takes a graph emitted by an ML framework and compiles its operations into executable binaries [28].
Results illustrated in Figure 4 demonstrate how the application of XLA improves both speed and memory usage. For example, BERT trained on 8 V100 GPUs using XLA achieved approximately a 7x improvement in throughput (sequences per second) and a 5x larger batch size [29]. The reduction in memory usage enables additional capacity for the implementation of other advanced optimization methodologies, such as gradient accumulation.
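As an illustration of how a framework hands a graph to XLA (a generic sketch, not the exact setup behind the numbers in [29]), TensorFlow exposes XLA compilation through the `jit_compile` flag:

```python
import tensorflow as tf

@tf.function(jit_compile=True)  # ask TensorFlow to compile this graph with XLA
def dense_step(x, w, b):
    return tf.nn.relu(tf.matmul(x, w) + b)

x = tf.random.normal([8, 128])
w = tf.random.normal([128, 64])
b = tf.zeros([64])
y = dense_step(x, w, b)  # first call triggers XLA compilation; later calls reuse the binary
```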
### _The JAX Library_
JAX gives broader flexibility in the development and optimization of AI models since it is a library designed for high-performance ML research [28]. Training a model on GPU/TPU using JAX often provides faster performance and can be cheaper than using PyTorch/XLA on GPU/TPU [10].
It is based on NumPy and takes advantage of XLA to compile and accelerate computations. This is done automatically but can also be requested manually for single Python functions using the _@jit_ API decorator. JAX can also automatically differentiate native Python and NumPy functions, which allows for efficient computation of gradients for ML algorithms. It can differentiate through loops, branches, recursion, and closures, and can compute higher-order derivatives. It supports reverse-mode differentiation (a.k.a. backpropagation) via the _grad_ API function as well as forward-mode differentiation, and the two can be composed arbitrarily to any order [28].
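A minimal, self-contained illustration of `jit` and `grad` (the example functions and values are our own, not from the paper):

```python
import jax
import jax.numpy as jnp

def loss(w, x, y):
    pred = jnp.dot(x, w)
    return jnp.mean((pred - y) ** 2)

grad_fn = jax.jit(jax.grad(loss))   # compose differentiation with XLA compilation
w = jnp.ones(3)
x = jnp.array([[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]])
y = jnp.array([1.0, 2.0])
print(grad_fn(w, x, y))             # gradient of the loss w.r.t. w

# Higher-order derivatives compose arbitrarily:
f = lambda z: jnp.sin(z) ** 2
d3f = jax.grad(jax.grad(jax.grad(f)))
print(d3f(1.0))
```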
JAX also has integrations with popular ML libraries such as TensorFlow and PyTorch. In addition, one of the most popular things about JAX is that it is well documented and supported by a large community of researchers.
\begin{table}
\begin{tabular}{c c c c c c c}
**Device** & **TOPS** & **TOPS per Watt** & **Size (mm)** & **On-chip Processors** & **SDRAM** & **DDR** \\ \hline
Dev Board & 4 & 2 & 85 x 56 & GPU, VPU, Arm Cortex-A53, Arm Cortex-M4 & 1 or 4 GB LPDDR4 SDRAM & 1600 MHz maximum DDR clock \\ \hline
Dev Board Mini & 4 & 2 & 64 x 48 & GPU, VPU, Arm Cortex-A35 & 2 GB LPDDR3 & 1600 MHz maximum DDR clock \\ \hline
Dev Board Micro & 4 & 3 & 65 x 30 & Arm Cortex-M7 and M4 & 64 MB SDRAM & n/a \\
\end{tabular}
\end{table}
Table 1: High-Level SoM Comparison of the three Coral dev boards.
Figure 4: Comparison of training throughput performance on the BERT model using 8 V100 GPUs without (w/o) XLA, with (w/) XLA, and w/ XLA \(+\) grad accumulation. [29]
## V Exploration of TPUs
This section explores practical experimentation with TPUs to measure their performance during inference and training of AI models. Part A summarizes the main differences between the most common chip architectures (CPU, GPU) and the TPU. This serves as an introduction for the subsequent sections, which compare the performance of AI workloads on TPUs against CPUs and GPUs.
In Part B, the study explores datacenter TPUs by examining their performance gains in the training of large-scale models.
In Part C, attention shifts to the inference capabilities of the Edge TPU, which holds significant value in the context of the Internet of Things (IoT). The motivation behind this study is that running AI models directly on IoT devices minimizes cloud dependency and makes AI more accessible in our everyday lives.
### _Comparison OF TPUs, GPUs and CPUs_
The most common chip architectures include Central Processing Units (CPUs) and Graphics Processing Units (GPUs). In terms of performance, CPUs and GPUs have different strengths and weaknesses. CPUs are typically better at handling multiple tasks simultaneously. This makes them well-suited to addressing the diverse workloads of a general-purpose computer. On the other hand, GPUs are designed to perform a large number of similar calculations simultaneously, making them ideal for highly parallelizable tasks such as rendering graphics or training model architectures that can be trained in parallel [31]. Unlike CPUs, which are general-purpose processors, and GPUs, which were designed to process graphics, TPUs are tailored to the needs of AI algorithms since they leverage matrices as their primary computation instead of only vectors or scalars.
### _Exploration of Datacenter TPU_
In this section we focus on exploring the training capabilities of cloud-based accelerators, specifically training a model using CPU, GPU and TPU chips.
The model chosen for the exploration is a faster, lighter version of GPT-2 from OpenAI and has 82 million parameters [32]. The experimentation is conducted using Google Colab [30], which allows for selection of custom runtimes from GCP to accelerate model training. An illustrated comparison of the experiment results can be seen in Figure 5.
For the training, we tokenize and process a dataset to obtain 17,632 training samples. We then split these samples into a training dataset of 1,102 batches of 16 samples each.
To test the CPU, we utilized a GCP Intel VM instance with 8 vCPUs + 52 GB of memory (n1-highmem-8) and 200 GB of SSD. This configuration took 37.07 seconds per iteration, which results in 11 hours per training epoch. If we were to do 10 epochs of training, the training time would be 110 hours, or 4 days and 12 hours.
To test the GPU, we attached one NVIDIA T4 GPU to the previous CPU setup. We could already see improvements with this accelerator, which reduced training to 1.95 seconds per iteration. This amounts to 36 minutes per epoch and a total of 6 hours for 10 epochs. These results show that the GPU is 18x faster than the CPU.
To test the TPU, we ran the model training on a TPUv2-8, which provides 8 TPU cores (4 dual-core TPU chips), allowing us to split the batches of 16 between the 8 cores. Creating batches of 128 samples reduced the total number of batches to be processed by each core to 1/8\({}^{\text{th}}\). The performance gains of this implementation were significant, taking only 0.68 seconds per iteration, 1 minute and 33 seconds per epoch, and a total training time of 13 minutes for the 10 epochs.
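For reference, attaching a Colab TPU runtime and splitting a global batch across its 8 cores with TensorFlow typically looks like the sketch below; `build_model` and `train_dataset` are placeholders, and this is not the authors' training script:

```python
import tensorflow as tf

# Connect to the Colab TPU runtime and build a distribution strategy
resolver = tf.distribute.cluster_resolver.TPUClusterResolver()  # auto-detects the Colab TPU
tf.config.experimental_connect_to_cluster(resolver)
tf.tpu.experimental.initialize_tpu_system(resolver)
strategy = tf.distribute.TPUStrategy(resolver)
print("TPU cores:", strategy.num_replicas_in_sync)  # 8 on a TPUv2-8

# A global batch of 128 is split into 8 per-core batches of 16
GLOBAL_BATCH = 16 * strategy.num_replicas_in_sync

with strategy.scope():
    model = build_model()  # hypothetical helper returning a compiled Keras model

model.fit(train_dataset.batch(GLOBAL_BATCH), epochs=10)  # train_dataset is a placeholder tf.data.Dataset
```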
These results indicate that a TPUv2-8 VM yields a 27x improvement in training over the GPU. Acceleration hardware lowers training from 4 days and 12 hours on an 8-vCPU machine, to 6 hours with a T4 GPU, to 13 minutes with a Google Colab TPUv2-8 VM.
### _Exploration of Edge TPU_
In this section we focus on exploring the inference capabilities of smaller SoM devices, specifically the Dev Board Mini TPU, the NVIDIA Jetson Nano, and the RaspberryPi3.
The deployment of AI architectures on the Edge TPU requires the implementation of different optimization techniques [43, 44] to meet the computational overhead and memory requirements of these devices. For one, tensor parameters must be quantized into 8-bit fixed-point numbers, specifically int8 or uint8 formats. Additionally, both tensor sizes and model parameters, including bias tensors, must remain constant at compile time [45]. Furthermore, tensors must be at most 3-dimensional; for tensors exceeding three dimensions, only the innermost three can have a size larger than one. It is also crucial that the model exclusively employs operations compatible with the Edge TPU (see Annex).
Thus, the development of AI models for the Edge TPU follows a systematic workflow (defined in Figure 6). It starts by ensuring the model aligns with all the supported operations and confirming the constancy of tensor sizes. Finally, the quantized 8-bit model is compiled explicitly for the Edge TPU on a Debian OS or a Google Colab Notebook [46].
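A hedged sketch of this workflow with the TensorFlow Lite converter; the saved-model path and calibration data are placeholders, not taken from the paper:

```python
import tensorflow as tf

def representative_data_gen():
    # A few hundred calibration samples drawn from the training data (placeholder)
    for sample in calibration_samples:
        yield [tf.cast(sample, tf.float32)]

converter = tf.lite.TFLiteConverter.from_saved_model("saved_model_dir")
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_data_gen
# Force full integer (int8) quantization, as required by the Edge TPU
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.uint8
converter.inference_output_type = tf.uint8

with open("model_quant.tflite", "wb") as f:
    f.write(converter.convert())

# The quantized model is then compiled for the accelerator, e.g.:
#   edgetpu_compiler model_quant.tflite
```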
Fig. 5: Comparison of training times for 10 epochs between Intel 8vCPU, Nvidia GPU T4, and the TPUv2-8.
Fig. 6: Workflow to compile existing quantized tflite models into an Edge TPU executable.
When comparing the devices for speed of inference, we chose to analyze the classification of a 224x224 image using MobileNetV2 [47], provided by the Pycoral library [48].
We conducted inference on a single image 250 times using MobileNetV2. In order to account for the initialization of the model and the loading of the image into cache memory, we execute a single prediction first, let the script wait 1 second, and then run a loop of 250 inference calls on the same image. By measuring the overall duration, we determined the inference FPS, calculated as the ratio of processed images to the total inference time. Our results can be found in Table 2.
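The timing procedure can be sketched with the PyCoral API roughly as follows (a reconstruction under our own assumptions, with a placeholder model path, not the authors' script):

```python
import time
from PIL import Image
from pycoral.utils.edgetpu import make_interpreter
from pycoral.adapters import common, classify

interpreter = make_interpreter("mobilenet_v2_quant_edgetpu.tflite")  # placeholder path
interpreter.allocate_tensors()

image = Image.open("test.jpg").convert("RGB").resize(common.input_size(interpreter))
common.set_input(interpreter, image)

interpreter.invoke()          # warm-up call (model init, image already in memory)
time.sleep(1)

N = 250
start = time.perf_counter()
for _ in range(N):
    interpreter.invoke()
    classify.get_classes(interpreter, top_k=1)
elapsed = time.perf_counter() - start
print("Inference FPS:", N / elapsed)  # processed images / total inference time
```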
To provide a fair comparison, we also include the results of the NVIDIA Jetson Nano running the same model using CUDA, which is optimized for the NVIDIA GPU. Best results from the Edge TPU and the Jetson indicate a 6x increase in performance when running inference on the TPU, and an 18x improvement against the RaspberryPi3, when running inference on MobileNetV2.
## VI Limitations and Future Work
During the comparison of datacenter machines, due to the limited resource availability of the free tier for Google Colab's TPUs, the exploration of the TPU may not be leveraging its full potential. Executing the experiment on a dedicated TPU VM would yield more accurate results. A further limitation is the limited number of operations supported by the Edge TPU. This may lead many users to prefer Edge GPUs, like the Jetson Nano from NVIDIA, due to the large community support for running models on the GPU.
Furthermore, when comparing edge devices, few benchmarks exist for the direct comparison of device performance. Although some proposals have been made [12, 49], there is little guidance on how to ensure reproducible results for running AI models on edge devices. For example, results from running the same model on a GPU vs a TPU can vary, since the TPU requires an 8-bit quantized model, whereas a Jetson Nano does not explicitly require the quantization of model weights.
In addition to the CPU, GPU and TPU, there is other system-on-chip (SoC) AI accelerator hardware pushing the boundaries of ML acceleration. Thus, further investigations could also include the comparison of TPUs against other ML-specific chip architectures. For example, the VPU, the Neural Engine [52], and IBM's AIU [53] are not considered in this exploration, but would be valid candidates for future comparisons.
Finally, in order to compare different architectures, current benchmarks such as MLPerf require an expansion of hardware and model support. The current suite of evaluations also falls short when comparing edge devices.
## VII Conclusion
In summary, results indicate that the utilization of TPUs in both large datacenter workloads and edge devices can significantly reduce the training and inference times of deep learning models. Their specialized hardware design, tailored for the matrix operations that are most common in neural networks, enables TPUs to achieve accelerated computation speeds and improved power efficiency. We find that through quantization, model compilation, and adapting AI architectures to existing ASIC hardware, both training and inference of models can be accelerated substantially. We leave for future work the investigation of further AI architectures and of the scaling capabilities of TPUs, in both cloud and edge computing, for the training and inference of AI models, specifically focusing on performance metrics according to a unified benchmark such as MLPerf.
|
2310.06856 | Brave new world: Artificial Intelligence in teaching and learning | We exemplify how Large Language Models are used in both teaching and
learning. We also discuss the AI incidents that have already occurred in the
education domain, and we argue for the urgent need to introduce AI policies in
universities and for the ongoing strategies to regulate AI. Regarding policy
for AI, our view is that each institution should have a policy for AI in
teaching and learning. This is important for at least two reasons: (i) to raise
awareness on the numerous educational tools that can both positively and
negatively affect education; (ii) to minimise the risk of AI incidents in
education. | Adrian Groza, Anca Marginean | 2023-09-27T15:22:05Z | http://arxiv.org/abs/2310.06856v1 | # Brave new world: Artificial Intelligence in teaching and learning
###### Abstract
We exemplify how Large Language Models are used in both teaching and learning. We also discuss the AI incidents that have already occurred in the education domain, and we argue for the urgent need to introduce AI policies in universities and for the ongoing strategies to regulate AI. Regarding policy for AI, our view is that each institution should have a policy for AI in teaching and learning. This is important for at least two reasons: (i) to raise awareness of the numerous educational tools that can both positively and negatively affect education; (ii) to minimise the risk of AI incidents in education.
Artificial Intelligence in Education, AI university policy, ChatGPT, BARD, Large Language Models (LLMs)
## I Introduction
Teachers - including even teachers of AI - have difficulties keeping pace with the numerous educational plugins built on top of Large Language Models (LLMs). Differently from teachers, students are quick to install and use "educational plugins" aimed at increasing their grades. Moodle or Kahoot quizzes can be automatically generated for a given topic or course content with the help of LLMs, thus saving the instructor's time. Tools to automatically solve such quizzes are available and easy to install as browser plugins built on top of LLMs. Of course they are not perfect, but many students are satisfied with an average grade. Slides can be automatically generated based on some prompts and course content, thus helping teachers to quickly update their slides with new content. Students have also started to heavily use this feature: the content of their presentations has started to have a GPT-flavour: too generic and lacking creativity.
Incidents of AI in education do exist; the AI Incident Database, for instance, lists 5 such incidents in the education domain. As a bottom-up approach for AI policies, Chan has recently proposed an AI Ecological Education Policy Framework for university teaching and learning, covering pedagogical, governance and operational dimensions [1]. The Artificial Intelligence Act, recently voted by the EU Parliament on 14 June 2023, has placed AI tools used in education in the high-risk category. That is because such tools (e.g. automatic graders, plagiarism detectors) do affect the professional path of learners. Henceforth, AI tools used in education will require a certificate of quality provided after a technical audit performed by a notified body. Expertise on algorithmic accountability and audits is now being accumulated, one example being the European Centre for Algorithmic Transparency in Seville. As a top-down regulatory approach, a proposal considers a global agency for AI, similar to the Agency of Atomic Energy. No matter the regulatory strategy (either bottom-up or top-down), we need to practice the exercise of regulating AI, in order to keep up with the speed of AI developments. As Gary Marcus has stated: we should regulate AI before it regulates us.
## II Some flaws of large language models
LLMs are N-gram models on steroids. Given a sequence of words/tokens, LLMs predict the most probable next word. ChatGPT is just a, let's say, 4321-gram model, but a model that was fed with ginkgo biloba. If there is a meaning in the generated text, it is in our heads. As Shakespeare put it: "beauty is in the eye of the beholder". So it is for education based on ChatGPT: "meaning is in the eye of the learners".
A simple test to convince students of the hallucination capabilities of LLMs is to use the prompt "Write a 400 word bio about Student Name". This is easy for everyone to check.
At the top of the list of arguments for using ChatGPT in education stands the claim that virtual assistants would contribute to the democratisation of education. That is, education will be more accessible for more learners. However, for the moment, a different phenomenon takes place: students who have a subscription to ChatGPT-4 ($20) have higher chances of getting good grades than those without a subscription. This newly introduced bias (i.e. unequal chances) is a challenge that needs to be addressed by university policies.
Since LLMs are fed with huge amounts of data from the Internet, the quality of generated text will be average rather than exceptional. Frey and Osborne call it "average in, average out" [2]. What does it mean for education? First, low-skilled students will benefit the most, as they will deliver content closer to the one crafted by the "average standard student". Differently, high-skilled students do not gain much benefit from using LLMs. This is plausible, since a similar phenomenon is now taking place in the programming domain. The usage of AI pair-programmers - such as CoPilot or GPT Pilot - increases productivity mostly for junior coders, not for senior ones. Not to forget that a large cohort of students across different universities and countries do not aim for the best grades, but average or satisfactory assessments
suffice for them. Admitting this, one can understand the popularity of ChatGPT in the education domain. Second, this support from LLMs for low-skilled students to achieve average standards might be highly beneficial for teachers. Teachers have a long history of complaining that 80% of their time is spent on low-skilled students and only 20% on good learners. With ChatGPT, this might no longer be the case. ChatGPT seems the best tool available to assist students towards achieving average skills and knowledge. Hence, teachers will have more time to dedicate to talents.
The above two paragraphs have supported two claims. First, richer students would benefit more from LLMs since they can afford better models and learning companions. Second, low-skilled students would benefit more since LLMs have an average upper limit on performance. This might sound like a somewhat strange conclusion: rich and low-skilled students would disproportionately benefit from LLMs.
Let's go deeper into the concept of "average in, average out". Now, we have a kind of "first order AI" - LLMs have been trained on human knowledge. What about the next generation of LLMs, i.e. a "second order AI" in which LLMs would be trained both on human knowledge and AI-generated content? LLMs do have the "energy" to generate and flood the internet with AI-generated content, far beyond human capabilities. One running scenario is that human content and AI content will co-exist on the world "wild" web, with tiny chances of distinguishing the provenance. Lacking or delayed regulation, along with a lack of mandatory markers for provenance, will contribute to this interleaving of human-machine text. Two working hypotheses are: (1) the AI-generated content is better than the human average, or (2) the AI-generated content is worse than the human average. (We skip here the difficult task of formalising the comparison operator "is better than".)
Under the first hypothesis, the 2nd generation of LLMs seems to prosper: better input will generate better output. Quantitatively, the human content will become marginal. LLMs would act like a cognitive virus that destroys human content by making it insignificant. Qualitatively, humans will adapt and learn from new linguistic patterns crafted by the machine. This wouldn't be new, as chess masters have already adopted the new and unexpected tactics discovered by chess engines. We are aware that human language is too limited to express all feelings or situations. "It can't be expressed in words" is the phrase that we use for this "ineffability". Will LLMs help humanity to overcome such linguistic barriers by augmenting human language? Or will LLMs go towards more formal languages like First Order Logic? This is also plausible since LLMs might be driven towards investigating no less than the mysteries of the universe (as some well-known leaders trumpet). For expressing properties about the universe, mathematical and formal languages are more suitable than human language. Under this first hypothesis, LLMs have the potential to invent more powerful LLMs.
Under the second hypothesis, the next generation of LLMs risks collapsing under the weight of its own hallucinations. We have all encountered examples of hallucinations or lack of commonsense within AI-generated content, even when the input did not contain training examples lacking commonsense. Feeding the machinery with more and more hallucinations will augment the illusion. The concept "garbage in, garbage out" will become "hallucination in, hallucination out". AI itself will need a token to distinguish between what is human and what is AI - otherwise it risks succumbing to its own illusion.
## III Assignments are dead! Long live the assignments!
In many educational domains, it has already been the case for several years that teachers have difficulties assessing the originality of the content delivered by students. ChatGPT has just made it plainer: a teacher often cannot tell whether content was generated by AI or by the student. Cotton et al. have listed [3] general strategies to prevent plagiarism using LLMs, including: (i) require students to submit drafts, (ii) use plagiarism checkers, (iii) set clear policies on using LLMs, (iv) monitor student work. The challenge is how to design assignments that minimise the use of LLMs. A first line would be to rely more on reasoning tasks [4]. A second line would be to favour "task identification" instead of "task solving". In most assignments the students are asked to provide a solution to a clearly specified task. And this is a difference between class exercises and real-world tasks. In the real world the task is not given. The agent is responsible for defining the task, coming up with its own questions, eliciting information from other actors, and making decisions under incomplete information. A third line would be to "address local real-world problems". As Montalto has stated, "universities are a massive, underutilized resource for solving the world's problems" [5]. The approach is to encourage students to address community problems. Hence, ChatGPT will force teachers to design more realistic challenges for their learners. Such supporting tools in education are not new. The "spelling checker" was initially regarded by some as a "cheating tool" - the student does not know grammar, but the checker improved their essay. Should the student get a smaller grade because they relied on the checker? However, some educational institutions were quick to ban the usage of LLMs. This is an example of the power of the "white collar": when the job of teaching seems threatened, teachers seem to have the power to resist. This is not the case for "blue collars", who often do not have the power to oppose their replacement by AI. One question regards the pedagogical value of LLMs and their corresponding chatbot interfaces. As stated by Popescu [6], teachers often fail to create a fictional contract with the learner. We start our plain exercises with "Let a function f. Show that." Facing such exercises, learners may legitimately ask themselves: "Why let a function and not let a beer?" By mastering language, LLMs have the potential to create such fictional contracts with learners, to drive the students into a fictional learning world, to create intimacy with the learner. Through dialogue, LLMs seem closer to the Socratic method (i.e. maieutics) of guiding students in understanding a topic of interest. Chang [7] has developed prompt templates for GPT-3 to create dialogues
mimicking the Socratic method. The templates aim to elicit texts that include techniques such as counterfactual reasoning, dialectic, definitions, or generalisations. Such tuned-LLMs for the Socratic way might "manipulate" the learner towards the specified educational goal. However, as Gregoric and Pendrill have shown in [8], LLMs are not up to the task. When engaging in Socratic dialogues to eliminate the logical faults generated by ChatGPT in the basic physics domain, little success is reported: instead of achieving the "Aha!"-effect of the maieutics, the learners become rather frustrated. On observing students' behaviour at Knowledge-Based Systems classes at Technical University of Cluj-Napoca Romania, one remark is that ChatGPT has somehow limited the students' creativity. Before the ChatGPT era, when presenting their final projects, students used to have more creative slides (e.g. funny pictures, titles). Now, instead of the title on each slide, students insert the "prompt". Instead of figures, five dots with boring, general, marketing-based text appear. The risk is that the students become no more than an interface between ChatGPT and the teacher.
### _LLMs and the end of human teaching?_
Humans learn through dialogue. A known poem, "Learn from everything", advises humans to learn from different objects like rivers, flames, rocks: "Learn from the rivers how to stay in one place / Learn from the rock how to watch without blinking [...]. Learn from the water lily to be clean". No way to be an efficient learner like this! Quite differently, we learn best through dialogue, as dozens of pedagogical books claim: Robin Alexander's Towards Dialogic Teaching, Karen Littleton's Educational Dialogues, Eugen Matusov's Journey into Dialogic Pedagogy, Rupert Wegerif's Dialogic Education and Technology, or even journals like the Dialogic Education journal [9]. One question might be: is ChatGPT a good teacher? Answering it requires designing a test for an AI teacher. This is rather difficult, since measuring pedagogical abilities spreads across different dimensions, including understanding the student or helping the student. In an attempt to measure the pedagogical abilities of LLMs, Tack and Piech have concluded that ChatGPT and Blender [10] agents are worse than a human teacher regarding the ability to help the student [11]. One remark here regards the Blender educational instrument [10], known to be empathetic. Empathetic AI tutors should be treated carefully since they use learners' feelings and emotions to somehow manipulate the student. It is no coincidence that the AI Act classifies such AI systems as high risk, hence requiring a third-party certification. Because of the current rather mediocre performance of LLMs in the education domain, one can classify LLMs as "hallucination machines". This actually increases the responsibilities of teachers: we have to equip students with the abilities to distinguish between facts and hallucinations, between human content and AI-generated content. Such demons of illusion are old hat in humanity: starting with Hindu mythology, in which humans are trapped inside Maya, the world of illusions; continuing with Plato's cave, in which we mistake the shadows for reality; with "La vida es sueno" (Life is a dream) by Pedro Calderon de la Barca; or the malicious demon by which Descartes fears to be deceived into a world of illusions. More recently, movies like The Matrix or Inception have exploited the "hallucination machines". In line with the Inception movie, our new task as teachers is to help learners find their totem - that private artefact that helps to distinguish between human reality and LLM hallucinations. That way, when learners look at their totem, they should know beyond any doubt that they are not in an AI hallucination. More prosaically, helping learners find their totem in the LLM world implies a stronger focus on critical thinking and better strategies for fact checking.
### _Human thinking and LLMs, fast and slow_
Kahneman introduced, in the book Thinking, Fast and Slow [12], two systems that drive the way we think and make choices: System 1, which is fast, intuitive and emotional, and System 2, which is slower, reflective and deliberative. System 1 is responsible for quick judgements and intuitions, but may lead us to jump to wrong conclusions, especially when heuristics and biases are involved. System 2 can help overcome biases and errors in our thinking, leading to more complex thoughts and decisions, but it requires effort and attention and is not always available. Applying this to the education domain, the learners' attitude towards the learning process or the teachers' attitude towards their students might be affected by wrong conclusions reached by System 1. For example, the anchoring bias could easily lead a student to rely on the first information heard about a topic (for example, from another student) and reduce his/her interest in the topic. Or, due to the same bias, in case the first texts generated by an LLM are correct, to jump to the conclusion that all the following texts are correct and reduce critical thinking activities. The better attitude would be to remain open to new information and use your own reasoning (System 2).
We consider that, in the education process, another common bias stressed by Kahneman, the framing effect, needs close attention from both teachers and students. We are all affected in our decisions by how information is presented, instead of considering primarily the information itself, regardless of how it is presented: people are more likely to choose a medical procedure if it is framed as a 90% chance of survival than if it is framed as a 10% chance of death. Applying this to the education process and the integration of LLMs, since LLMs are presented from a positive perspective by the industry, the likelihood of students accepting them as a "complementary teacher" is quite high, regardless of the obvious limitations of LLMs. We consider that, in order to persuade students to use LLMs carefully, responsibly and ethically, a positive framing of the type "You could use an LLM to create 10 examples where theory X applies and then you should comment on their quality" could keep the rational part (System 2) more involved in the learning process than an attitude of "You should create 10 examples and you are forbidden to use LLMs". And when educators think about students cheating
with LLMs, the educators might get back to the fact that humans have an innate interest in learning, but they need a proper environment that counteracts the factors diminishing that interest, like boredom, a perceived lack of value in learning a specific topic, or an overestimation of the required effort. Framing the integration of LLMs in learning in the right way might enhance the positive impact and reduce the negative one.
We analyze the current status of LLMs according to the dual-process theory proposed by Kahneman. We argue that the current generation of LLMs is more like System 1 than like System 2 due to the following factors: i) the quality of the training data, which include all kinds of texts written by humans susceptible to all kinds of biases and heuristics; ii) the fact that they rely on generating the most probable sequence of words. This approach works for concepts with a largely accepted meaning, but impedes factual uniqueness (and makes hallucination probable). We might say that LLMs are affected by the representativeness bias; iii) the fact that some LLMs also use Reinforcement Learning from Human Feedback, where humans might give feedback under biased thinking. In our view, in order to get LLMs to the System 2 level, more than transformer architectures is needed.
## IV Learning how to learn
One possible classification of educational learning objectives into levels of complexity and specificity is Bloom's Taxonomy, first published in 1956, with a revised version published in 2001 [13]. In the revised version, action words are associated with the involved cognitive processes (see Figure 1). We argue that this taxonomy and the associated action words can offer guidance for teachers and students to approach the integration of LLMs into learning and teaching in either a systematic or a personalised manner. A clear understanding of the learning objectives can help teachers with their lesson planning, material creation, and design of assessment strategies. And in all these, they can be assisted by ChatGPT or BARD. On the other hand, students aware of these different cognitive processes and willing to learn can guide their prompts from the first level of finding out about a concept/fact, to the more advanced levels of analysing their own knowledge or, on the contrary, the LLM's knowledge.
We enumerate here some situations where LLM involvement could contribute to better quality and efficiency from the teacher's side.
#### Iv-1 Class planning instructions and materials creation:
Create unit outline or lesson plans:"Give me a plan for a lesson on regular expressions that uses a lot of educational apps", "Give me a plan for a lesson on regular expressions that uses Socratic method", "Give me a plan for a lesson on regular expressions and highlight the actions associated to each level from Bloom's taxonomy".
Remember, understand and apply levels:Get suggestions from LLMs for the remember, understand and apply levels in the form of:
* Adapt the difficulty of the discourse to the audience;
* Ask LLMs for discussion questions;
* Rephrase for clarity or in order to emphasise a certain aspect.
Analyse level:Get suggestions from LLMs for the analyse level, for example "List the most difficult to understand elements about regular expressions", "Compare regular expressions in Python vs JavaScript".
Tests:Get the LLM's assistance in the creation of materials such as cloze tests or Kahoot quizzes.
Assignments:Get LLMs assistance in designing the assignment descriptions (the assignments could target understand and apply levels, or the analyse and evaluate levels).
Actively involve the use of LLMs in evaluation assessments:For example, the teacher could ask students to generate pairs of questions and answers with LLM and then evaluate their correctness, or ask students to first generate an essay with LLM, and then do fact checking or critical analysis on it.
#### Iv-2 Engagement and Creativity:
Lack of engagement, boredom, or a "don't care" attitude can be addressed by proactively using ChatGPT in a creative way:For example, teachers could encourage students to use LLMs for certain tasks that could attract them. At first glance it would look as if LLMs are the central element in the story, while actually the subject is the target: e.g. generate a rap about the targeted subject and then organise a competition between the generated texts.
Incremental understanding of a large topic
by continuous interaction between LLMs and the students, so that learners advance through Bloom's taxonomy at their own pace.
Student self-assessment:Directly ask LLMs to identify errors or misinformation in the student's text. On one hand, the student could learn from the identified errors. On the other hand, knowledge gaps could remain hidden even though the student still does not have a thorough understanding.
Kasneci et al. have nicely structured the opportunities of LLMs in education [14], but they have also proposed mitigation strategies of the associated risks. More generally, this is a step towards achieving precise or personalised education.
Finally, we underline that on the journey of finding the right LLM prompt for a more complex problem, the involved user, teacher or learner, actually analyses the problem, since usually the first prompt fails to produce a good generated text. This in itself is a learning activity. And it is well known that retention, retrieval and transfer of knowledge are usually improved with more effort invested in acquiring that knowledge. So curiosity, motivation, and engagement are extremely important traits in the learning process, and now they are more important than ever. Independence and choice, two attitudes emphasised in the Montessori theory of education, might be largely supported by the responsible use of LLMs. While the potential benefits of LLMs are significant, this double-edged sword requires mastery, as it can easily lead to no learning.
## V AI-related incidents in the education domain
From a far distance, the biggest concern about LLMs in education is cheating, in the form of the learner substituting himself/herself with LLMs in all the steps involved in the learning process (see Bloom's taxonomy). This results in both academic dishonesty and a low-quality learning process. Even for honest students, the quality of the learning process might be negatively affected by the LLMs' hallucinations or misinformation.
Web resources such as the AI, Algorithmic, and Automation Incidents and Controversies (AIAAIC) repository1 or the AI Incident Database2 list all kinds of AI incidents, including LLMs in education. There are a lot of plagiarism cases confirmed by the confessions of students. "With AI-generated content, there is no material evidence, and material evidence has a lot more weight to it than circumstantial evidence" was the argument used by Darren Hick, a philosophy professor at Furman University, to ask for a student confession once an AI detector confirmed his suspicions. Relying only on AI detector tools could lead to another dangerous situation of falsely accusing innocent students. Several universities expressed their concerns about using tools to detect AI-powered plagiarism3, like Turnitin. Even though the following paragraph, "we must emphasise that the percentage on the AI writing indicator should not be used as the sole basis for action or a definitive grading measure by instructors.", is available on Turnitin's page, there were cases where the decision of misconduct was taken exclusively based on Turnitin's result. Another incident that clearly shows that knowledge about AI limitations is important everywhere, not only in education, is the known case of the lawyers from Levikow et al., who submitted in June 2023 a court filing that cited six fake cases generated by ChatGPT. One of the lawyers stated that "he was unaware of the possibility that its content could be false" even though the OpenAI page clearly states the LLM's limitations and possibility of hallucination. In a study by the Center for Countering Digital Hate4, out of the 100 narratives generated by BARD, 78 included misinformation: "The Holocaust never happened.", "So, relax and enjoy the ride. There is nothing we can do to stop climate change, so there is no point in worrying about it". The study showed that even though safety features were incrementally introduced in LLM-based chat bots, they can
Fig. 1: Bloom’s taxonomy as guidance for LLMs integration in learning |
2309.12940 | Self-Explanation Prompting Improves Dialogue Understanding in Large
Language Models | Task-oriented dialogue (TOD) systems facilitate users in executing various
activities via multi-turn dialogues, but Large Language Models (LLMs) often
struggle to comprehend these intricate contexts. In this study, we propose a
novel "Self-Explanation" prompting strategy to enhance the comprehension
abilities of LLMs in multi-turn dialogues. This task-agnostic approach requires
the model to analyze each dialogue utterance before task execution, thereby
improving performance across various dialogue-centric tasks. Experimental
results from six benchmark datasets confirm that our method consistently
outperforms other zero-shot prompts and matches or exceeds the efficacy of
few-shot prompts, demonstrating its potential as a powerful tool in enhancing
LLMs' comprehension in complex dialogue tasks. | Haoyu Gao, Ting-En Lin, Hangyu Li, Min Yang, Yuchuan Wu, Wentao Ma, Yongbin Li | 2023-09-22T15:41:34Z | http://arxiv.org/abs/2309.12940v1 | # Self-Explanation Prompting Improves Dialogue Understanding
###### Abstract
Task-oriented dialogue (TOD) systems facilitate users in executing various activities via multi-turn dialogues, but Large Language Models (LLMs) often struggle to comprehend these intricate contexts. In this study, we propose a novel "Self-Explanation" prompting strategy to enhance the comprehension abilities of LLMs in multi-turn dialogues. This task-agnostic approach requires the model to analyze each dialogue utterance before task execution, thereby improving performance across various dialogue-centric tasks. Experimental results from six benchmark datasets confirm that our method consistently outperforms other zero-shot prompts and matches or exceeds the efficacy of few-shot prompts, demonstrating its potential as a powerful tool in enhancing LLMs' comprehension in complex dialogue tasks.
large language model, prompting, dialog understanding
logues with long contexts. Not only do these tasks differ in terms of context length, but they also exhibit variations across numerous other dimensions. For instance, as delineated in Table 1, the reasoning task predominantly emphasizes intricate problem-solving steps that entail extensive computations and conversions. This underscores the model's inherent ability to reason. Consequently, the scope of searching for an answer predominantly resides within the model.
However, when performing dialogue-based tasks, success depends on a strong understanding of the context in continuous conversational exchanges rather than complex reasoning. TOD tasks mainly obtain information directly from the existing context, making the search space for answers strongly related to external contexts. The different emphases of the two tasks resulted in the underperformance of CoT prompts in dialogue contexts. Judging from the results of existing evaluation studies (Hock et al., 2023; Hudecek and Dusek, 2023; Bang et al., 2023), the current LLMs with unoptimized prompting perform significantly worse than specialized small models on some dialogue-based tasks. Hu et al. (2022b) have reformulated the dialogue state tracking task into a few-shot text-to-SQL paradigm, utilizing the robust code capabilities of Codex. While this represents an intriguing approach for training dialogue exemplars, the text-to-SQL may not be universally applicable, particularly in procedural TOD tasks such as next-action prediction. Additionally, the example retriever needs to be retrained for each new dataset, which imposes limitations on this approach.
To address the above issues, we explore several ways to enhance the comprehension capabilities of LLMs by mimicking the way humans solve conversational problems (Chi et al., 1989). We introduce the "Self-Explanation" prompt strategy, requiring the model to explain every utterance in the dialogue first and then complete the task based on the generated explanation. Despite its simplicity, the proposed method enhances the performance of contextual comprehension of LLMs in various dialogue-centric tasks. More importantly, our prompt is task-agnostic and can be easily applied to a variety of problems involving multi-turn dialogue. We evaluate the proposed method across six dialogue-centric datasets. The results show that our prompt consistently surpasses other zero-shot prompts and is on par with or surpasses few-shot prompts. In summary, our contributions include:
* We conduct a comprehensive comparison between reasoning tasks and dialogue understanding tasks, identifying the limitations of current prompting methods.
* We propose a simple yet effective prompting strategy, Self-Explanation, that significantly enhances the dialogue comprehension capacities of large language models.
* Extensive experiments on six dialogue-based datasets have demonstrated that the proposed method surpasses existing prompting approaches in performance.
## 2 Method
### Formalization
The problem can be divided into two components: the context, denoted as \(\mathcal{C}\), and the question, represented by \(\mathcal{Q}\). The context, \(\mathcal{C}\), provides a descriptive framework that outlines the problem setting and background. For reasoning tasks, this context delineates a specific situation. An example of this can be observed in Figure 1, where \(\mathcal{C}\) contains the activities of James. Meanwhile, in the context of TOD tasks, \(\mathcal{C}\) typically captures a multi-turn dialogue between two interlocutors.
Contrastingly, the question component, \(\mathcal{Q}\), zeroes in on a specific inquiry related to \(\mathcal{C}\). In the realm of reasoning tasks, \(\mathcal{Q}\) typically solicits a value derived from multi-step computations. This implies that the solution isn't readily available within \(\mathcal{C}\). To illustrate, refer to Figure 1 where \(\mathcal{Q}\) probes for the aggregate distance James covers in a week. Addressing this necessitates discerning the frequency of James' sprints per week and the distance of each sprint. Subsequent multiplication of these two quantities yields the desired result.
\begin{table}
\begin{tabular}{c c c c c|c c} \hline \hline
**Task** & **Dataset** & **Avg. \#Tokens** & **Context** & **Answer** & **Prompting** & **Focus on** \\ \hline
Reasoning & MultiArith & 16.6 & Short & Internal & Chain-of-Thought & Reasoning \\
 & GSM8K & 33.6 & & & Plan-and-Solve & Step \\ \hline
Dialog Understanding & SGD & 940.9 & Long & External & Self-Explanation & Context \\
 & MultiWOZ & 1229.7 & & & & \\ \hline \hline
\end{tabular}
\end{table}
Table 1: Comparative analysis of reasoning and dialogue understanding tasks, highlighting the distinctive application of the proposed Self-Explanation method.
On the other hand, in a TOD task, the nature of \(\mathcal{Q}\) is more straightforward, often inquiring about the existence of specific information. Using Figure 1 as a reference, the question might pertain to the scheduled departure time of a reserved train or the type of cuisine a user seeks. Responses to these types of inquiries are readily extractable from \(\mathcal{C}\), obviating the need for additional computation.
### Self-Explanation
Humans often find it challenging to respond to questions grounded in extensive new information. One strategy that has been empirically shown to enhance comprehension of new material is self-explanation. The concept of self-explanation, originating from psychological research [11], involves learners generating explanations for themselves while processing unfamiliar content. Notably, this study demonstrated that learners engaging in self-explanation were better able to grasp core concepts and principles than their counterparts who did not employ this strategy.
Drawing inspiration from human cognitive processes and this psychological paradigm, we introduce the Self-Explanation prompting method, a zero-shot prompting technique designed to enhance multi-turn dialogue comprehension. Within the process, models initially provide explanations for each utterance in a multi-turn dialogue. Subsequently, these models execute the specified task, relying on their previously generated explanations. In the process of articulation, the large language models (LLMs) have the capacity to transform low-level natural language inputs into more abstract, high-level constructs, such as the intent or action of the speaker.
The framework is structured without the need for demonstration examples. Following the problem formalization in Section 2.1, we organize the inputs using the template "\(\mathcal{C}\):[C]. \(\mathcal{Q}\):[Q]. \(\mathcal{A}\):[A]", wherein [C] and [Q] represent the input slots designated for the context and question, respectively. As for the last part, [A] is populated by manually-curated instructions prompting the model to elucidate. Central to our method is the instruction: _"Provide explanations for each utterance and then respond based on these explanations."_ For the decoding strategy, we opt for the straightforward greedy decoding method, though beam search decoding could be employed to produce a broader range of explanations.
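A minimal sketch of how such a prompt can be assembled is given below. The literal "C/Q/A" labels stand in for the template symbols, `call_llm` is a hypothetical stand-in for whatever chat-completion API is used, and greedy decoding corresponds to setting the temperature to zero.

```python
# A minimal sketch of assembling the Self-Explanation prompt from the C/Q/A template.
# `call_llm` is a hypothetical stand-in, not a specific provider's API.
SELF_EXPLANATION_INSTRUCTION = (
    "Provide explanations for each utterance and then respond "
    "based on these explanations."
)

def build_prompt(context: str, question: str) -> str:
    return f"C: {context}\nQ: {question}\nA: {SELF_EXPLANATION_INSTRUCTION}"

def answer(context: str, question: str, call_llm) -> str:
    # call_llm: Callable[[str], str]; assumed to run greedy (temperature=0) decoding
    return call_llm(build_prompt(context, question))
```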
## 3 Experiments
### Experimental Setup
#### 3.1.1 Datasets and task
We evaluate our self-explanation method on six datasets from three categories of dialogue understanding tasks: Task-Oriented Dialogue (TOD), Emotion Recognition in Conversations (ERC), and Response Selection (RS). For the TOD task, the datasets can be divided into two types based on the dialogue schema: Procedural and Declarative [16]. A dialogue schema in the context of task-oriented dialogue is a structured representation of the conversation flow or of some key entities (also known as 'slots') that need to be captured. The Procedural schema, derived from the STAR dataset [15], represents a
Figure 2: Example inputs and outputs of GPT-3 with No explanation ahead (upper) and Explain before answer (lower). Explanation greatly improves the understanding of the dialogue.
dialogue domain as a directed graph similar to a flowchart. It consists of nodes representing user utterances, system responses, or backend service calls. The main task of the procedural schema is to strictly follow the task flow. For Procedural schema, we choose STARv2 Zhao et al. (2022) dataset.
**STARv2** is an upgraded version of STAR Mosig et al. (2020) with new ground-truth belief states and new natural language action descriptions. STAR is a schema-guided task-oriented dialogue dataset consisting of 24 tasks across 13 domains. We evaluate the next action prediction task, which is to predict the next system action conditioned on the dialogue history, and take the weighted F-1 score as the metric.
The Declarative format, based on the Schema-Guided Dialogue (SGD) dataset Rastogi et al. (2020) and MultiWOZ dataset Budzianowski et al. (2018), aims to capture the slots defined in dataset ontology. For the declarative format schema, we select MultiWOZ 2.1, SGD, and SpokenWOZ Si et al. (2023) dataset and evaluate the dialogue state tracking task, using Joint Goal Accuracy (JGA) as the metric.
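For reference, a minimal sketch of JGA as it is commonly computed is shown below: a turn is counted as correct only when the predicted dialogue state matches the gold state exactly on every slot. Official evaluation scripts may apply additional slot-value normalization.

```python
# A minimal sketch of Joint Goal Accuracy (JGA) for dialogue state tracking.
# Official evaluation scripts may add normalization (casing, "dontcare", etc.).
from typing import Dict, List

State = Dict[str, str]  # e.g. {"train-departure": "cambridge", "train-day": "sunday"}

def joint_goal_accuracy(predictions: List[State], golds: List[State]) -> float:
    assert len(predictions) == len(golds)
    correct = sum(1 for pred, gold in zip(predictions, golds) if pred == gold)
    return correct / len(golds) if golds else 0.0

# Example: one of two turns matches the gold state exactly -> JGA = 0.5
print(joint_goal_accuracy(
    [{"train-day": "sunday"}, {"train-day": "sunday", "train-departure": "london"}],
    [{"train-day": "sunday"}, {"train-day": "sunday", "train-departure": "cambridge"}],
))
```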
**MultiWOZ2.1** is a fully-labeled collection of human-human written conversations spanning multiple domains and topics. It contains 7 domains, 35 slots, and over 10k dialogues.
**SGD** is another declarative format dataset containing over 16k multi-domain conversations spanning 16 domains with more slots and possible values compared to MultiWOZ.
**SpokenWOZ** is a new multi-modal spoken TOD dataset containing 8 domains, 5.7k dialogues, and 35 slots. It introduces the unique challenges in spoken conversation.
Besides the task-oriented dialogue, we also choose two datasets: **MELD** Poria et al. (2018) and **MuTual** Cui et al. (2020) from the Emotion Recognition in Conversations (ERC) task and the response selection task, respectively. MELD contains over 10k utterances from the TV series Friends, and each utterance is annotated with emotion and sentiment labels. MuTual consists of 8k manually annotated dialogues based on Chinese students' English listening comprehension exams.
#### 3.1.2 Baselines
We compare our proposed zero-shot Self-Explanation with two types of prompting baselines: zero-shot and few-shot. For zero-shot baselines, we include zero-shot-CoT Kojima et al. (2022) and Plan-and-Solve Prompting Wang et al. (2023). The former appends "Let's think step by step" to the prompt. The latter extends zero-shot-CoT by first devising a plan and then carrying it out. Besides the zero-shot baselines, we also evaluate the in-context learning prompt performance on the TOD task. Since each TOD sample consists of a multi-turn dialogue and the slot list, we use only 4 examples so as not to exceed the context window size. For example selection, we randomly selected 4 examples from the same domain as the test sample.
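A minimal sketch of how the baseline prompts can be constructed is given below. The zero-shot-CoT trigger is the one quoted above, the Plan-and-Solve wording is a paraphrase, and the 4-shot format is an assumption for illustration.

```python
# A minimal sketch of the baseline prompts. The CoT trigger is quoted from the text;
# the Plan-and-Solve wording is a paraphrase and the in-context format is an assumption.
import random

def zero_shot_cot(context: str, question: str) -> str:
    return f"C: {context}\nQ: {question}\nA: Let's think step by step."

def plan_and_solve(context: str, question: str) -> str:
    return (f"C: {context}\nQ: {question}\n"
            "A: Let's first devise a plan, then carry out the plan step by step.")

def four_shot(context: str, question: str, pool: list) -> str:
    # pool: [(context, question, answer), ...] drawn from the same domain as the test sample
    shots = random.sample(pool, 4)
    demos = "\n\n".join(f"C: {c}\nQ: {q}\nA: {a}" for c, q, a in shots)
    return f"{demos}\n\nC: {context}\nQ: {question}\nA:"
```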
### Main Results
Table 2 presents the performance of our method compared to baseline approaches across six distinct datasets. In the zero-shot scenario, our technique consistently surpasses the baselines on all evaluation datasets, irrespective of their differences. While CoT prompting does not enhance performance on TOD tasks, our method notably excels by an impressive 12% margin on the STARv2 dataset.
This significant improvement underscores the effectiveness of self-explanation prompting. The task format aligns well with this prompting approach, leading to detailed sentence-by-sentence explanations. These explanations play a pivotal role in comprehending the dialogue flow and adhering to the given schema. The enhanced performances on MultiWOZ, SGD, and SpokenWOZ further affirm that the dialogue state tracking task greatly benefits from self-explanation prompts. By providing explanations for each utterance, the likelihood of overlooking dialogue states is diminished. In addition to the task-oriented dialogue tasks, we
\begin{table}
\begin{tabular}{c c c c c|c|c} \hline \hline & \multicolumn{4}{c}{TOD} & ERC & RS \\ \cline{2-7} Method & MultiWOZ 2.1 & STARv2 & SGD & SpokenWOZ & MELD & MuTual \\ \hline Vanilla & 35.93 & 51.88 & 18.96 & 13.75 & 59.14 & 68.97 \\ Vanilla + 4-shots & 41.60 & 52.93 & 17.34 & 14.13 & 55.09 & **72.51** \\ Chain-of-Thought & 27.64 & 51.85 & 19.69 & 13.26 & 61.48 & 70.61 \\ Plan-and-Solve & 39.19 & 56.74 & 21.11 & 14.50 & 58.38 & 69.77 \\ \hline
**Self-Explanation** & **44.44** & **63.66** & **21.81** & **14.89** & **61.71** & 71.58 \\ + GPT-4 & 50.97 & 70.27 & 25.75 & 25.94 & 63.51 & 91.87 \\ \hline \hline \end{tabular}
\end{table}
Table 2: Comparing the performance of zero-shot CoT and Self-Explanation methods on six dialogue datasets using gpt-3.5-turbo.
assessed the impact of self-explanation prompting on both the ERC and RS tasks. However, the gains here were relatively modest in comparison to the TOD tasks. Given that our explanations are rooted in semantic interpretations, they may not be as beneficial for tasks centered on emotion recognition.
Compared to the few-shot baseline, our zero-shot prompting either outperforms or matches performance across all six datasets. This underscores the argument that a comprehensive understanding of dialogue is more critical than merely having a set of examples. The efficacy of in-context learning is largely attributed to its input-label pairing formats, its access to the label space, and the modification of the input distribution. For tasks within the domain of TOD, the input usually consists of multi-turn dialogues encompassing various topics, necessitating a profound understanding of the dialogue's entirety. The intricate nature of TOD tasks demands a high level of comprehension, which mere exposure to a few examples fails to deliver.
### Analysis
#### 3.3.1 Effect of Explanation
To assess the impact of self-explanation on dialogue comprehension, we carried out a comparative study on the MultiWOZ dataset, testing four distinct prompting methods. The results of these tests can be found in Table 3.
In the **Vanilla** method, no additional instruction is given before the model provides its response. In the **Understand** method, the model is simply prompted with "Understand the dialogue first" prior to answering. However, there's no specified format for the intermediate comprehension. With the **Summary** method, the model is prompted to first summarize the dialogue. It then bases its answer on both the summary and the original dialogue.
Our observations revealed that, compared with the self-explanation method, the Vanilla approach suffers a notable decline in performance. This suggests that pre-processing, i.e., understanding the dialogue first, is essential for optimal performance. Merely prompting the model to understand the dialogue without detailed instruction also resulted in reduced performance, which demonstrates the importance of precise comprehension guidelines. Without them, LLMs tend to produce explanations for their answers as opposed to comprehending the dialogue. Providing detailed comprehension instructions is less ambiguous than allowing the model to self-navigate. The Summary method explicitly directs the model to use the summary as a means of comprehension, subsequently answering based on that summary. This approach enhanced performance by approximately 5% JGA in comparison to the Vanilla method. However, summarizing is a broad-strokes approach and might overlook finer details essential for the TOD task.
Drawing from psychological research, specifically [10], it's evident that not all explanations confer the same benefits. Factors like content, quality, and depth of explanations are paramount. Our refined method, the **self-explanation** prompting, instructs the model to generate sequential explanations, promoting deeper dialogue understanding.
#### 3.3.2 Case Study
To gain a straightforward understanding of how explanation affects task completion, we manually checked all the cases of the MultiWOZ dataset that were correctly answered by self-explanation but incorrectly answered by Vanilla and picked several typical errors, as shown in Table 4.
Generally, there are three main types of errors: time involved, missing information, and unclear task understanding. For the first error type, the model usually gets confused between departure time and arrival time. In the case of the time-involved error, the user needs a taxi that arrives by 12:45, while the model output of vanilla prompting assigns 12:45 to the taxi departure time.
The second error type, missing information error, mostly happens in the dialogue, which has a high
\begin{table}
\begin{tabular}{l l c} \hline \hline Method & Prompt & MultiWOZ 2.1 (JGA) \\ \hline Vanilla & Answer the questions based on the above dialogue & 35.93 \\ \hline \multirow{2}{*}{Understand} & Before you answer, **first understand the dialogue**, then answer & \multirow{2}{*}{36.52} \\ & the questions based on your understanding and original dialogue & \\ \hline \multirow{2}{*}{Summary} & Before you answer, **first summarize the dialogue**, then answer & \multirow{2}{*}{40.98} \\ & the questions based on your summary and original dialogue & \\ \hline \multirow{3}{*}{Explanation} & Before you answer, first analyze the dialogue utterance & \\ & by utterance, **give every utterance an explanation**. & 44.44 \\ \cline{1-1} & Then answer the questions based on your explanation & \\ \hline \hline \end{tabular}
\end{table}
Table 3: The effect of different trigger sentences measured on MultiWOZ with gpt3.5-turbo.
number of turns. The large amount of dialogue information may distract the model from correctly capturing all the information needed to complete the task. As the case of this error shows, the user expresses the place, time, and date of departure in one sentence. The model output of vanilla prompting misses the place of departure, while the output of self-explanation prompting correctly captures all the information about the user request.
The last error type is a task-specific error. In the dialogue state tracking task, the dialogue state should include the information that the user requested and exclude what the system provides. In the case of the final type of error, the user explicitly requests an attraction called Downing College, and the system provides some relevant information about this attraction. The model output of self-explanation prompting correctly distinguishes between the information the user requested and that the system provided, while the model output of Vanilla prompting mistakenly includes the system-provided information in the dialogue state.
#### 3.3.3 Connection with CoT Prompting
We have explored self-explanation prompting as a simple way to enhance the understanding of multi-turn dialogue in large language models. In this section, we'll connect the dots between self-explanation prompting and CoT prompting.
From a macro perspective, OpenAI's documentation indicates that giving models a moment to "think" is beneficial. Analogous to human cognition, hastily jumping to conclusions often leads to mistakes. CoT prompting, which requires a systematic rationale before presenting an answer, effectively grants models this "thinking" time. Similarly, our self-explanation prompting offers models a moment of reflection, but it steers them to interpret the intricate context, \(\mathcal{C}\), as opposed to breaking down the answer's rationale.
From a micro perspective, CoT prompting guides the model toward a solution by narrowing the scope of potential answers. In tasks requiring reasoning, the solution isn't straightforwardly derived from the context \(\mathcal{C}\). The response involves extensive calculations and transformations, heavily drawing on the model's innate reasoning faculties. This suggests the solution space is largely tethered to the model's capabilities. The logical progression elicited by CoT prompting either constrains or directs this solution space.
Conversely, in the TOD task, the query \(\mathcal{Q}\) typically seeks details readily found in \(\mathcal{C}\). Unlike reasoning assignments, these questions don't demand intricate computations. As such, the solution space primarily lies within \(\mathcal{C}\). The enhanced dialogue comprehension, courtesy of our self-explanation prompting, offers an alternative approach to narrowing down this solution space.
## 4 Related work
**Prompting Methods:** The exploration of prompting methods for machine learning models has been vast. One of the conventional methods is in-context learning (ICL), as highlighted by GPT-3 (Brown et al., 2020). In ICL, multiple demonstrations are provided before a test sample, and the model's performance significantly hinges on these demonstrations (Zhao et al., 2021; Lu et al., 2021). Some researchers, such as those in Liu et al. (2021), endeavor to retrieve examples semantically similar to a test query sample, utilizing metrics like the L2 distance or cosine-similarity distance derived from sentence embeddings. In addition to these distance metrics, the concept of mutual information emerges as a potent example selection criterion (Sorensen et al., 2022). Here, the goal is to select a template that optimizes the mutual information between the input and the model's output. Taking this further, several studies, such as (Rubin et al., 2021), have shifted towards a supervised approach, training models to pick the most relevant demonstrations from a pool of candidates.
**Reasoning Strategies:** Beyond merely selecting examples, their arrangement or ordering can significantly influence a model's performance. Enter the Chain-of-Thought (CoT) strategy (Wei et al., 2022), a pioneering prompting approach designed
\begin{table}
\begin{tabular}{l l c c} \hline \hline Error Type & Dialogue & Explanation & Vanilla \\ \hline Time involved & \(\clubsuit\)1 need to **get to** Michaelhouse cafe **by 12:45**. & taxi-arriveby: & taxi-leavevat: \\ & & 12:45 & 12:45 \\ \hline Missing info. & \(\clubsuit\)1 am **leaving Cambridge** at 12:00 on Sunday, & train-departure: & train-departure: \\ & can you please tell me the travel time on that ride? & cambridge & None \\ \hline Task unclear & \(\clubsuit\)Please help me find the **attraction downing college**. & attraction-name: & attraction-name: \\ & \(\clubsuit\)Yes, it’s on Regent Street **in the centre of town**. & attraction-name: & downing college \\ & Would you like the phone number? & & downing college & attraction-area: \\ & & & centre \\ \hline \hline \end{tabular}
\end{table}
Table 4: The dialogue content and answer predicted of three typical error type in MultiWOZ dataset where Self-Explanation get the correct answer and Vanilla get the incorrect answer.
to enhance the performance of large language models (LLMs) on intricate reasoning tasks. Unlike ICL, which relies on prepending input-output pairs, CoT integrates a sequence of intermediate reasoning steps into the demonstration, thereby amplifying the reasoning capabilities of LLMs.
Recognizing the importance of diverse reasoning paths, the self-consistency strategy [22] was introduced. It first creates multiple reasoning paths rather than just the most likely one and subsequently selects the most coherent answer by considering all the generated paths. Further automation in this domain is achieved with zero-shot CoT [14]. Instead of relying on human-annotated reasoning sequences, this method induces the model to generate reasoning steps by simply prompting it to 'think step by step'.
## 5 Conclusion
In this paper, we find that CoT prompting is suboptimal for multi-turn dialogue tasks that require strong comprehension abilities. To enhance the comprehension of LLMs, we propose a new zero-shot prompting strategy called self-explanation prompting, which guides the LLM to first understand the multi-turn dialogue by explaining every utterance and then complete the task based on the dialogue together with its explanations. Extensive experiments show that self-explanation prompting can boost LLMs' contextual understanding of multi-turn dialogue and significantly outperform, or perform on par with, the previous zero-shot and few-shot baselines.
|
2305.19530 | Geometric sliding mode control of mechanical systems on Lie groups | This paper presents a generalization of conventional sliding mode control
designs for systems in Euclidean spaces to fully actuated simple mechanical
systems whose configuration space is a Lie group for the trajectory-tracking
problem. A generic kinematic control is first devised in the underlying Lie
algebra, which enables the construction of a Lie group on the tangent bundle
where the system state evolves. A sliding subgroup is then proposed on the
tangent bundle with the desired sliding properties, and a control law is
designed for the error dynamics trajectories to reach the sliding subgroup
globally exponentially. Tracking control is then composed of the reaching law
and sliding mode, and is applied for attitude tracking on the special
orthogonal group SO(3) and the unit sphere S3. Numerical simulations show the
performance of the proposed geometric sliding-mode controller (GSMC) in
contrast with two control schemes of the literature. | Eduardo Espindola, Yu Tang | 2023-05-31T03:35:01Z | http://arxiv.org/abs/2305.19530v1 | # Geometric sliding mode control of mechanical systems on Lie groups
###### Abstract
This paper presents a generalization of conventional sliding mode control designs for systems in Euclidean spaces to fully-actuated simple mechanical systems whose configuration space is a Lie group for the trajectory-tracking problem. A generic kinematic control is first devised in the underlying Lie algebra, which enables the construction of a Lie group on the tangent bundle where the system state evolves. A sliding subgroup is then proposed on the tangent bundle with the desired sliding properties, and a control law is designed for the error dynamics trajectories to reach the sliding subgroup globally exponentially. Tracking control is then composed of the reaching law and sliding mode, and is applied for attitude tracking on the special orthogonal group \(SO(3)\) and the unit sphere \(\mathcal{S}^{3}\). Numerical simulations show the performance of the proposed geometric sliding-mode controller (GSMC) in contrast with two control schemes of the literature.
Geometric control; Lie groups; Mechanical systems; Sliding subgroups. +
Footnote †: Corresponding author Yu Tang, on leave from the National Autonomous University of Mexico, Mexico City, MEXICO.
## 1 Introduction
Sliding mode control (SMC) (Utkin, 1977) has been proven to be a very powerful control design method for systems evolving in Euclidean spaces. Its design usually consists of two stages: the reaching stage where the controller drives the system trajectories to a sliding surface, a subspace embedded in the Euclidean space designed to convey some specific characteristics (e.g., convergence time, actuator saturation) in accordance with the given control objectives, and a sliding stage where the system trajectories converge to the origin according to the reduced-order dynamics constrained in the sliding surface, achieving the control objectives. In the sliding stage, the reduced-order dynamics is independent of the system dynamics, and therefore, this control design method ensures its robustness against a certain class of disturbances and has achieved great success in a wide range of applications.
When this method is extended to mechanical systems whose configuration space is a general Lie group, care must be taken in the design of the sliding surface. Unlike the Euclidean case, when the system configuration space is a Lie group \(G\), its time rate of change belongs to the tangent space \(T_{g}G\) at the configuration \(g\). Therefore, the state space is composed of the tangent bundle \(G\times T_{g}G\). The topological structure and the underlying properties of the configuration space and the tangent space are very different. Without taking this into account in the SMC design, the sliding surface may not belong to the tangent bundle, and therefore no guarantee is offered to ensure that the system trajectories reach the sliding surface and the sliding mode may not exist at all (Gomez et al., 2019). The main problem is thus how to devise a group operation such that the tangent bundle is a Lie group and that the sliding subgroup is immersed in the tangent bundle so that the salient features of SMC in the Euclidean space mentioned above may be inherited by a general Lie group.
We present in this paper a general method of designing a sliding mode control, a geometric sliding mode control (GSMC), for fully-actuated mechanical systems whose configuration space is a Lie group. A generic kinematic control is first devised in the underlying Lie algebra (the tangent space at the group identity with a bilinear map), which enables us to build a Lie group on the tangent bundle where the system state evolves. Then a sliding subgroup is proposed on the tangent bundle, and the sliding mode is guaranteed to exist. The sliding
subgroup is designed to convey control objectives; in particular, the almost-global asymptotic convergence of the trajectories of the reduced-order dynamics to the identity of the tangent bundle is considered, which is the strongest convergence that may be achieved by continuous time-invariant feedback in a smooth Lie group (Bhat & Bernstein, 2000). The reaching control law is then designed to drive the trajectories to the sliding subgroup globally exponentially. Tracking is then composed of the reaching law and sliding mode, as in the Euclidean case.
### Related work
The geometric approach to control designs has achieved significant advances for mechanical systems on nonlinear manifolds; for recent developments in this topic, see, for instance, Bullo & Lewis (2005) and the references therein. As recognized in Koditschek (1989), Bullo et al. (1995), Maithripala et al. (2006), a key point in control design is how to define the tracking error. The tracking error defined on a Riemannian manifold relying on an error function and a transport map in Bullo & Murray (1999) may be simplified if the manifold is endowed with a Lie group structure (Maithripala et al., 2006), where the error notion can be globally defined explicitly and is easier to manipulate for stability analysis of the closed-loop system (Maithripala & Berg, 2015, Saccon et al., 2013, De Marco et al., 2018, Lee, 2012, Sarlette et al., 2010). A similar situation is encountered in observer designs using an estimation error defined on a Riemannian manifold (Aghannan & Rouchon, 2003) versus an estimation error defined by the group operation on Lie groups (Bonnabel et al., 2009). The ability to define a global error on Lie groups provides a powerful tool for treating the error as an object in the state space globally and controlling it as a physical system so that the tracking problem can be reduced to stabilizing the error dynamics to the group identity (Bullo et al., 1995, Maithripala et al., 2006, Spong & Bullo, 2005). Moreover, a separation principle can be proved in the geometric approach to control designs (Maithripala et al., 2006, Maithripala & Berg, 2015) when part of the state in the control law is estimated by an exponentially convergent observer designed on the Lie group (Bonnabel et al., 2009), similar to an LTI system. This opens a wide field of applications for systems on Lie groups, such as rigid body motion control and trajectory tracking in 2D and 3D spaces, given the significant advances in both geometric control designs (Bullo & Lewis, 2005, Spong & Bullo, 2005, Lee, 2011, Akhtar & Waslander, 2020, Rodriguez-Cortes & Velasco-Villa, 2022) and observer designs (Aghannan & Rouchon, 2003, Bonnabel et al., 2009, Mahony et al., 2008, Lageman et al., 2009, Zlotnik & Forbes, 2018).
GSMC on Lie groups has been considered using two main approaches: developing the SMC in the underlying Lie algebra or developing it on the Lie group itself. The main idea in the former approach is first expressing the tracking error defined on the Lie group in its Lie algebra through the locally diffeomorphic logarithmic map (Bullo et al., 1995). Since the Lie algebra is a vector space, a sliding surface can be designed as in the Euclidean case (Culbertson et al., 2021, Liang et al., 2021, Espindola & Tang, 2022). In the latter approach, the sliding subgroup is designed directly on the Lie group. Since the topological structures of the configuration space (a Lie group) and the tangent space (a vector space) are very different when the underlying Lie group is not diffeomorphic to an Euclidean space, an important question arises as to how to ensure the sliding surface to be indeed a subgroup of the state space formed by the tangent bundle to guarantee the existence of the sliding mode and thus to inherit the salient features of SMC in the Euclidean space.
SMC designs using the second approach have been reported for Lie groups such as \(SO(3)\), \(\mathcal{S}^{3}\) for attitude control, and \(SE(3)\) for motion controls (Ghasemi et al., 2020, Lopez & Slotine, 2021). However, the issue of whether the tangent bundle is a Lie group and whether the sliding subgroup is properly immersed on the tangent bundle was not addressed in these works. Therefore, the potential problem of lack of robustness due to the nonexistence of the sliding mode might appear. Recently, Gomez et al. (2019) brought this issue to the attention of the control community, and proposed an SMC on the rotation group \(SO(3)\) with a sliding surface which was ensured to be a Lie subgroup immersed in the tangent bundle \(SO(3)\times\mathbb{R}^{3}\), and a finite-time convergent controller was devised for attitude control. This design method was applied in Meng et al. (2023) to design a second-order SMC for fault-tolerant control designs.
### Contributions
We generalize the conventional sliding mode control designs for systems in Euclidean spaces to fully-actuated simple mechanical systems whose configuration space is a Lie group for the trajectory-tracking problem. The main contributions can be summarized as follows: (1) we endow the state space formed by the tangent bundle of the error dynamics with a Lie group structure by defining a group operation that is based on a generic kinematic control designed in the Lie algebra of the configuration Lie group; (2) we design a smooth sliding subgroup and show it to be a Lie subgroup of the tangent bundle, therefore, inheriting the Lie group structure of the state space; and (3) we design a coordinate-free geometric sliding mode controller for a fully-actuated mechanical system on a Lie group which drives the error dynamics to the sliding subgroup globally exponential at the reaching stage, the error dynamics then converges to the identity of the tangent bundle almost globally asymptotically at the sliding stage. In addition, rigid
body tracking in 3D space is addressed on the special orthogonal groups \(SO(3)\) and on the unit sphere \(\mathcal{S}^{3}\), respectively, by applying the proposed geometric sliding mode control.
### Organization
The rest of the paper is organized as follows. Section 2 presents the notation and background materials for simple mechanical systems with Lie groups as the configuration space. Section 3 first shows the state space formed by the tangent bundle with a Lie group structure under a group operation, which is defined based on a generic kinematic control law in the Lie algebra of the configuration space. Then, a smooth sliding subgroup is defined, which is a Lie subgroup immersed in the tangent bundle. The convergence to the identity of the tangent bundle of the reduced-order dynamics constrained on the sliding surface is analyzed based on Lyapunov stability. Section 4 gives the design of the GSMC, composed of a reaching law to the sliding subgroup and the convergence property of the sliding subgroup. Attitude tracking of a rigid body in 3D space is addressed in Section 5 respectively on the rotational group \(SO(3)\) and the unit sphere \(\mathcal{S}^{3}\), and simulation results under the GSMC developed on \(SO(3)\) are presented in Section 6 for illustration and comparison. Conclusions are drawn in Section 7.
## 2 Mechanical systems on Lie groups
This section provides the notation and introduces the motion equations for a fully-actuated simple mechanical system on Lie groups. More details can be found in Bullo & Lewis (2005) and Abraham et al. (2012).
Given a finite-dimension Lie group \(G\), the identity of the group is denoted by \(e\in G\). \(T_{e}G\) denotes the tangent space in the identity, which also defines its Lie algebra \(\mathfrak{g}\triangleq T_{e}G\) in the Lie bracket \([\cdot,\cdot]\in\mathfrak{g}\). Let \(L_{g}(h)=gh\in G\) and \(R_{g}(h)=hg\in G\) be the left and right translation maps, respectively, \(\forall g,h\in G\), and denote its corresponding tangent maps \(T_{e}L_{g}(\nu)=g\cdot\nu\in T_{g}G\) and \(T_{e}R_{g}(\nu)=\nu\cdot g\in T_{g}G\), \(\forall\nu\in T_{e}G\), it describes the natural isomorphism \(T_{e}G\simeq T_{g}G\), which induces the equivalence \(TG\simeq G\times T_{e}G\) for the tangent bundle \(TG=G\times T_{g}G\). The inverse tangent map from \(T_{g}G\) to \(T_{e}G\) is denoted by \(\nu=g^{-1}\cdot v_{g}^{L}\), where \(v_{g}^{L}=\nu_{L}(g)\in T_{g}G\), being \(\nu_{L}\in\Gamma^{\infty}(TG)\) a left-invariant vector field, with \(\Gamma^{\infty}(TG)\) denoting the set of \(C^{\infty}\)-sections of \(TG\), and respectively for a right-invariant vector field \(\nu_{R}\in\Gamma^{\infty}(TG)\), it follows that \(v_{g}^{R}=\nu_{R}(g)\in T_{g}G\), and accordingly \(\nu=v_{g}^{R}\cdot g^{-1}\).
The cotangent space at \(g\in G\) is denoted by \(T_{g}^{*}G\), while \(\mathfrak{g}^{*}\) describes the dual space of the Lie algebra \(\mathfrak{g}\). Likewise, the cotangent bundle is denoted by \(T^{*}G\simeq G\times\mathfrak{g}^{*}\). Given a \(\mathbb{R}\)-vector space \(V\), its dual space \(V^{*}\), and a bilinear map \(B:V\times V\rightarrow\mathbb{R}\), the flat map \(B^{\flat}:V\to V^{*}\) is defined as \(\langle B^{\flat}(v);u\rangle=B(u,v)\), \(\forall u,v\in V\), \(B^{\flat}(v)\in V^{*}\), where \(\langle\alpha;v\rangle=\alpha(u)\) denotes the image in \(\mathbb{R}\) of \(v\in V\) under the covector \(\alpha\in V^{*}\). If the flat map is invertible, then the inverse, known as the sharp map, is denoted by \(B^{\sharp}:V^{*}\to V\)
The inner product on a smooth manifold \(\mathcal{M}\) is denoted by \(\langle\langle\cdot,\cdot\rangle\rangle\in\mathbb{R}\). A Riemannian metric \(\mathbb{G}\) on a Lie group \(G\) assigns the inner product \(\mathbb{G}(g)\cdot(X_{g},Y_{g})\) on each \(T_{g}G\), \(\forall X_{g},Y_{g}\in T_{g}G\). Moreover, when \(\mathbb{G}\) is left-invariant (resp. right-invariant), it induces an inner product in the Lie algebra \(\mathfrak{g}\) by \(\mathbb{I}\left(\xi,\zeta\right)=\mathbb{G}(g)\cdot(\xi_{L}(g),\zeta_{L}(g))\), \(\forall\xi,\zeta\in\mathfrak{g}\). The kinetic energy is given by \(\mathrm{KE}(v_{g})=(1/2)\mathbb{G}(g)\cdot(v_{g},v_{g})=(1/2)\mathbb{I}(\nu,\nu)\), where \(\mathbb{I}\) is the kinetic energy tensor, which induces a kinetic energy metric \(\mathbb{G}\) on \(G\). In the rotational motion of a rigid body, \(\mathbb{I}\) also represents the inertia tensor.
In the sequel, only the left invariance will be used. The proposed control methodology can be developed similarly for the right invariance. Also, subscripts and superscripts \(L\) will be dropped when the meaning is clear. A left-invariant covariant derivative (affine connection) on a Lie group is denoted by \(\nabla_{\xi_{L}}\zeta_{L}\in\Gamma^{\infty}(TG)\) for any vector fields \(\xi_{L},\zeta_{L}\in\Gamma^{\infty}(TG)\). In addition, the Levi-Civita connection associated with the Riemannian metric \(\mathbb{G}\) is denoted by \(\overset{\mathbb{G}}{\nabla}\), which is unique and torsion-free. A left-invariant affine connection on a Lie group is uniquely determined by a bilinear map \(B:\mathfrak{g}\times\mathfrak{g}\rightarrow\mathfrak{g}\) called the restriction of the left-invariant connection. In particular, the restriction for the left-invariant Levi-Civita connection \(\overset{\mathbb{G}}{\nabla}\) is defined as
\[\overset{\mathbb{G}}{\nabla}_{\xi}\zeta\triangleq\frac{1}{2}\left[\xi,\zeta \right]-\frac{1}{2}\mathbb{I}^{\sharp}\left(\mathrm{ad}_{\xi}^{*}\mathbb{I}^{ \flat}(\zeta)+\mathrm{ad}_{\zeta}^{*}\mathbb{I}^{\flat}(\xi)\right), \tag{1}\]
where the adjoint map \(\mathrm{ad}:\mathfrak{g}\times\mathfrak{g}\rightarrow\mathfrak{g}\) is defined as \(\mathrm{ad}_{\xi}\zeta=[\xi,\zeta]\), and \(\mathrm{ad}_{\xi}^{*}:\mathfrak{g}^{*}\rightarrow\mathfrak{g}^{*}\) is the dual map defined as \(\langle\mathrm{ad}_{\xi}^{*}\alpha;\zeta\rangle=\langle\alpha;[\xi,\zeta]\rangle\). Furthermore, the adjoint action \(\mathrm{Ad}:G\times\mathfrak{g}\rightarrow\mathfrak{g}\) is \(\mathrm{Ad}_{g}\zeta=g\cdot\zeta\cdot g^{-1}\), \(\forall g\in G\). So, the left-invariant Levi-Civita connection is explicitly expressed as
\[\overset{\mathbb{G}}{\nabla}_{\xi_{L}}\zeta_{L}\triangleq\left(\mathrm{d} \zeta(\xi)+\overset{\mathbb{G}}{\nabla}_{\xi}\zeta\right)_{L}, \tag{2}\]
where \(\mathrm{d}\zeta(\xi)\triangleq\frac{d}{dt}|_{t=0}\ \zeta\left(g\exp(\xi t)\right)\), being \(\exp:\mathfrak{g}\to G\) the exponential map on \(G\), which is a local \(C^{\infty}\)-diffeomorphism, and whose inverse is called the logarithmic map denoted by \(\log:G\rightarrow\mathfrak{g}\). By the left-invariance of vector fields \(\xi_{L},\zeta_{L}\in\Gamma^{\infty}(TG)\), the covariant derivative (2) is expressed in terms of \(\xi,\zeta\in\mathfrak{g}\) as follows
\[\nabla_{\xi}\zeta\triangleq\mathrm{d}\zeta(\xi)+\overset{\mathbb{G}}{\nabla}_{ \xi}\zeta.\]
Consider a differentiable curve \(g:I\to G\), where \(I\) is the set of all intervals. Then a body velocity \(\nu:I\rightarrow\mathfrak{g}\)
is defined as \(t\mapsto T_{g(t)}L_{g^{-1}(t)}\left(\dot{g}(t)\right)\), for all \(t\in I\), and therefore
\[\dot{g}(t)=g(t)\cdot\nu(t). \tag{3}\]
A forced mechanical system is governed by the intrinsic Euler-Lagrange equations
\[\overset{\mathbb{G}}{\nabla}_{\dot{g}(t)}\dot{g}(t)=F_{u}+\Delta_{d}, \tag{4}\]
where \(F_{u}=\sum_{a=1}^{m}u^{a}(t)\mathbb{G}^{\sharp}\left(T_{g(t)}^{*}L_{g^{-1}(t)} \left(f^{a}\right)\right)\) is the control force applied to the system on \(T_{g}G\), being \(u^{a}:I\rightarrow\mathbb{R}\) the control inputs, and \(f^{a}(g)\in\mathfrak{g}^{*}\) the control forces. Furthermore, \(\Delta_{d}\in T_{g}G\) represents the vector field version of constraint forces, such as potential external forces, uncontrolled conservative plus dissipative forces, and unmodeled disturbances.
In view of (3) and the left-invariance of \(\dot{g}\), the Levi-Civita connection in (4) can be explicitly expressed using (1)-(2) as
\[\overset{\mathbb{G}}{\nabla}_{\dot{g}(t)}\dot{g}(t)=g(t)\cdot\left(\dot{\nu }(t)+\overset{\mathfrak{g}}{\nabla}_{\nu(t)}\nu(t)\right),\]
resulting in the controlled Euler-Poincare equation
\[\dot{\nu}(t)+\overset{\mathfrak{g}}{\nabla}_{\nu(t)}\nu(t)=f_{u}+\delta_{d}, \tag{5}\]
with \(f_{u}=\sum_{a=1}^{m}u^{a}(t)\mathbb{I}^{\sharp}\left(f^{a}\right)\in\mathfrak{g}\), and \(\delta_{d}=g^{-1}\cdot\Delta_{d}\in\mathfrak{g}\).
The underlying mechanical system on the Lie group \(G\) is then defined by the configuration Lie group \(G\), the inertia tensor \(\mathbb{I}\), and the external forces \(f_{u}+\delta_{d}\).
## 3 Lie Group Structure of the State Space and the Sliding Subgroup
In this section, we will endow the tangent bundle \(TG\simeq G\times\mathfrak{g}\) with a Lie group structure by a properly designed group operation. For this purpose, an intrinsic control for kinematics is first proposed (3). Then we design a smooth sliding subgroup that is immersed in the tangent bundle so that it inherits the Lie group structure of the state space.
### Intrinsic kinematic control
The purpose of this subsection is to design a control law \(\nu(t)\in\mathfrak{g}\) for the kinematics (3) to render \(g(t)\to e\), the group identity. Let \(V:G\rightarrow\mathbb{R}_{\geq 0}\) be an infinitely differentiable proper Morse function, which satisfies \(V(g)>0\), \(\forall g\in G\backslash\{e\}\), \(\mathrm{d}V(g)=0\) and \(V(g)=0\iff g=e\). Morse functions, a class of error functions (Koditschek, 1989; Bullo et al., 1995), are guaranteed to exist on many Lie groups of practical interest considered in this paper (Maithripala & Berg, 2015; Bullo & Lewis, 2005). They represent potential energy that can be used to measure the distance between the configuration \(g\) and the identity \(e\) on \(G\). The following definition specifies the class of kinematic controls considered in the paper.
**Definition 3.1** (Kinematic control law): _Let \(g:I\to G\) be a differentiable curve governed by (3), for all \(t\in I\). A kinematic control law is a map \(\nu_{u}:G\rightarrow\mathfrak{g}\) that satisfies the following properties._
1. \(\nu_{u}(e)=0\)_,_
2. \(\nu_{u}\left(g^{-1}\right)=-\nu_{u}\left(g\right)\)_,_
3. \(\langle\mathrm{d}V(g(t));-g(t)\cdot\nu_{u}(g(t))\rangle<0\)_,_ \(\forall g(t)\in G\backslash\mathcal{O}_{u}\)_, where_ \(\mathcal{O}_{u}\triangleq\left\{g\in G\backslash\{e\}\mid g\cdot\nu_{u}(g)=0\right\}\)_,_
4. \(\langle\mathrm{d}V(g(t));-g(t)\cdot\nu_{u}(g(t))\rangle=-y\left(g(t)\right)V \left(g(t)\right)\)_,_ \(\forall g(t)\in\mathcal{U}\)_, where_ \(y:\mathcal{U}\rightarrow\mathbb{R}_{>0}\)_, and_ \(\mathcal{U}\subset G\backslash\mathcal{O}_{u}\) _is a neighborhood of_ \(e\)_._
Some comments on the class of kinematic controls are in order. Properties (i)-(ii) are instrumental to building a particular Lie subgroup on the tangent bundle. Properties (iii)-(iv) represent the sliding (convergence) property of the reduced-order dynamics on the sliding subgroup (Lemma 5 below). In particular, Property (iii) states the almost-global asymptotic stability for system (3) in closed loop with the kinematic control law \(\nu(t)=-\nu_{u}(g(t))\). Note that since \(\mathcal{O}_{u}\) is the set of closed-loop equilibria other than \(g(t)=e\), they are critical points of \(V(g)\). Since \(V(g)\) is a Morse function, the set \(\mathcal{O}_{u}\) consists of a finite number of isolated points. In addition, this set is nowhere dense, which means that it cannot separate the configuration space. Therefore, the complement \(G\backslash\mathcal{O}_{u}\) is open and dense, i.e., \(G\backslash\mathcal{O}_{u}\) is a submanifold of \(G\)(Maithripala et al., 2006). Finally, Property (iv) establishes the local exponential stability of the closed-loop system, where the existence of the neighborhood \(\mathcal{U}\) is immediate because \(V(g)\) is a Morse function, which has a unique minimum at \(e\in G\) by definition.
Note that both \(V(g)\) and \(\nu_{u}(g)\) are of free design, provided that the properties in Definition 3.1 hold. However, it is worth considering the kinematic control law in the logarithmic coordinate, that is, \(\nu_{u}(g)=\log(g)\), or vectors parallel to \(\log(g)\) (Akhtar & Waslander, 2020), as this map has been found to provide the strongest stability results, for example, almost global and local exponential convergence to the identity through a geodesic path (Bullo et al., 1995).
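As a concrete illustration, anticipating the \(SO(3)\) case of Section 5, the following NumPy sketch implements \(\nu_{u}(R)=\log(R)^{\vee}\) and numerically checks Properties (i)-(ii) of Definition 3.1; it is an illustrative sketch under these choices, not part of the control design itself.

```python
# A minimal numerical sketch: on SO(3) the logarithmic-map kinematic control
# nu_u(R) = log(R)^vee satisfies nu_u(I) = 0 and nu_u(R^T) = -nu_u(R).
import numpy as np

def so3_log(R: np.ndarray) -> np.ndarray:
    """Rotation matrix -> rotation vector (vee of the matrix logarithm)."""
    phi = np.arccos(np.clip((np.trace(R) - 1.0) / 2.0, -1.0, 1.0))
    if np.isclose(phi, 0.0):
        return np.zeros(3)
    W = phi / (2.0 * np.sin(phi)) * (R - R.T)
    return np.array([W[2, 1], W[0, 2], W[1, 0]])

def so3_exp(w: np.ndarray) -> np.ndarray:
    """Rodrigues' formula: rotation vector -> rotation matrix."""
    phi = np.linalg.norm(w)
    if np.isclose(phi, 0.0):
        return np.eye(3)
    K = np.array([[0, -w[2], w[1]], [w[2], 0, -w[0]], [-w[1], w[0], 0]]) / phi
    return np.eye(3) + np.sin(phi) * K + (1 - np.cos(phi)) * K @ K

R = so3_exp(np.array([0.3, -0.2, 0.5]))          # an arbitrary test rotation
print(np.allclose(so3_log(np.eye(3)), 0))        # Property (i): nu_u(e) = 0
print(np.allclose(so3_log(R.T), -so3_log(R)))    # Property (ii): nu_u(g^{-1}) = -nu_u(g)
```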
### Lie Group structure for the state space
For systems described in (3) and (5), the state space is the tangent bundle \(TG\simeq G\times\mathfrak{g}\). To endow it with a Lie group structure, we consider the binary operation
\(\star:\ TG\times TG\mapsto TG\) defined in the following
\[h_{1}\star h_{2}\triangleq\] \[\big{(}g_{1}g_{2},\ \nu_{1}+\nu_{2}+\lambda\nu_{u}(g_{1})+\lambda\nu_{u }(g_{2})-\lambda\nu_{u}(g_{1}g_{2})\big{)}, \tag{6}\] \[\forall h_{1}=(g_{1},\nu_{1}),\ h_{2}=(g_{2},\nu_{2})\in TG,\,\text{and}\ \lambda\in\mathbb{R}_{>0}.\]
**Lemma 1** (The state space \(TG\) as a Lie group): _The tangent bundle \(TG\equiv G\times\mathfrak{g}\) endowed with the binary operation (6) is a Lie group, with_
1. _Identity element:_ \(f\triangleq(e,0)\in TG\)_,_
2. _Inverse element:_ \(h^{-1}\triangleq\big{(}g^{-1},-\nu\big{)}\in TG\)_,_ \(\forall h=(g,\nu)\in TG\)_._
[Proof.] Being \(TG\) a smooth manifold with (6) a smooth operation, it only remains to verify the group axioms as follows.
1. \(\forall h=(g,\nu)\in TG\), it satisfies \[h\star f =(ge,\ \nu+0+\lambda\nu_{u}(g)+\lambda\nu_{u}(e)-\lambda\nu_{u}(ge))\] \[=f\star h=h,\] where Property (i) of Definition 3.1 is used.
2. The group operation between \(h=(g,\nu)\in TG\) and its inverse \(h^{-1}=\big{(}g^{-1},-\nu\big{)}\in TG\) verifies \[h^{-1}\star h\] \[=\big{(}g^{-1}g,-\nu+\nu+\lambda\nu_{u}(g^{-1})+\lambda\nu_{u}(g) -\lambda\nu_{u}(g^{-1}g)\big{)}\] \[=\big{(}gg^{-1},\nu-\nu+\lambda\nu_{u}(g)+\lambda\nu_{u}(g^{-1})- \lambda\nu_{u}(gg^{-1})\big{)}\] \[=h\star h^{-1}=f,\] where Properties (i)-(ii) of Definition 3.1 are used.
3. The associativity \(h_{1}\star\big{(}h_{2}\star h_{3}\big{)}=(h_{1}\star h_{2})\star h_{3}\) is proved straightforwardly by substitution, using the properties of Definition 3.1.
\(\blacksquare\)
**Remark 2** (Tangent bundle \(TG\)): _The definition of the group operation (6) relying on the kinematic control \(\nu_{u}(g)\) in Definition 3.1 is crucial to define a sliding Lie subgroup immersed in \(TG\) in the next subsection. In fact, a group operation that endows \(TG\) with a Lie group structure may simply be \(h_{1}\star h_{2}=(g_{1}g_{2},\ \nu_{1}+\nu_{2})\). However, this operation does not allow the design of a useful sliding subgroup; in particular, it fails to ensure closure under the group operation, as will be seen below._
**Remark 3** (Associativity): _The associativity proved in Lemma 1 ensures the proposed Lie group \(TG\) to be globalizable (Olver et al., 1996), that is, the local Lie group \(TG\) can be extended to be a global topological group. This fact allows us to develop a sliding mode control defined globally on the state space in contrast to the Lie groups defined locally in Gomez et al. (2019) and Meng et al. (2023)._
### Sliding Subgroup on \(TG\)
In this subsection, we define a smooth sliding subgroup on the tangent bundle. The following lemma shows that \(H\subset TG\) is an immersed submanifold of \(TG\) that inherits the topology and smooth structure of the tangent bundle \(TG\)(Lee, 2013).
**Lemma 4** (Sliding Lie subgroup): _Define_
\[H\triangleq\{h=(g,\nu)\in TG\mid s(h)=0\}\subset TG, \tag{7}\]
_where \(\forall h=(g,\nu)\in TG\), the map \(s:TG\mapsto\mathfrak{g}\) is defined as_
\[s(h)=\nu+\lambda\nu_{u}(g). \tag{8}\]
_Then \(H\subset TG\) is a Lie subgroup under the group operation (6)._
[Proof.] The smoothness of \(H\) is immediate, because the map defined in (8) is smooth. The proof consists thus in showing that subset \(H\) inherits the group structure of the Lie group \(TG\), by verifying the following:
1. _Identity_: The identity of the tangent bundle \(f=(e,0)\in H\). This is immediate by Definition 3.1(i) since \(s(f)=0+\lambda\nu_{u}(e)=0\).
2. _Inverse_. \(\forall h=(g,\nu)\in H,\,s(h)=0\implies\nu=-\lambda\nu_{u}(g)\). By Definition 3.1(ii) it follows that \[s\left(h^{-1}\right) =-\nu+\lambda\nu_{u}\left(g^{-1}\right)\] \[=-\left(-\lambda\nu_{u}(g)\right)+\lambda\nu_{u}\left(g^{-1} \right)=0.\] This proves that \(h^{-1}\in H\) for all \(h\in H\).
3. _Closure_. Given \(h_{1}=(g_{1},\nu_{1}),\ h_{2}=(g_{2},\nu_{2})\in H\), then \(s(h_{1})=0\implies\nu_{1}=-\lambda\nu_{u}(g_{1})\) and \(s(h_{2})=0\implies\nu_{2}=-\lambda\nu_{u}(g_{2})\). By (6) \(h_{1}\star h_{2}=\big{(}g_{1}g_{2},-\lambda\nu_{u}(g_{1}g_{2})\big{)}\). Thus, \(s\left(h_{1}\star h_{2}\right)=-\lambda\nu_{u}(g_{1}g_{2})+\lambda\nu_{u}(g_{1 }g_{2})=0\). That is, \(H\) is closed under the group operation.
\(\blacksquare\)
The following lemma shows that once a trajectory reaches the sliding subgroup it will stay on it and converges to the group identity.
**Lemma 5** (Properties of the sliding subgroup \(H\)): _Consider the sliding Lie subgroup \(H\subset TG\) in (7). Then \(H\) is forward invariant, i.e., \(h(t_{r})\in H\) for some \(t_{r}\in I\implies h(t)\in H,\forall t\geq t_{r}\). Moreover, \(h(t)\rightarrow(e,\ 0)\) almost globally asymptotically._
**PROOF.** Consider a differentiable curve \(g:I\to G\) of the dynamics (3). Let \(V:G\to\mathbb{R}\) be a proper Morse function with the unique minimum at \(e\in G\). Then, along the trajectory \(g(t)\) and \(\forall t\in I\), it yields
\[\frac{\mathrm{d}}{\mathrm{d}t}V\left(g(t)\right)=\langle\mathrm{d}V\left(g(t) \right);\dot{g}(t)\rangle=\langle\mathrm{d}V\left(g(t)\right);g(t)\cdot\nu(t)\rangle.\]
Assume that \(h(t_{r})\in H\), for some \(t_{r}\in I\). Then \(s(h)=0\) gives \(\nu(t)=-\lambda\nu_{u}(g(t))\). Therefore,
\[\frac{\mathrm{d}}{\mathrm{d}t}V\left(g(t)\right) =\langle\mathrm{d}V\left(g(t)\right);g(t)\cdot\nu(t)\rangle\] \[=\langle\mathrm{d}V\left(g(t)\right);-\lambda g(t)\cdot\nu_{u} \left(g(t)\right)\rangle,\]
In light of Definition 3.1(iii), it follows that \(\frac{\mathrm{d}}{\mathrm{d}t}V\left(g(t)\right)<0\), for all \(g(t_{r})\in G\backslash\mathcal{O}_{u}\), and \(\frac{\mathrm{d}}{\mathrm{d}t}V\left(g(t)\right)=0\iff g=e\), where \(\mathcal{O}_{u}\) is a nowhere-dense set with a finite number of points given in Definition 3.1(iii). Therefore, \(h(t)\) will remain on \(H\) for all \(t\geq t_{r}\), and the equilibrium \(g(t)=e\) of (3) is almost globally asymptotically stable for all \(g(t_{r})\in G\backslash\mathcal{O}_{u}\) and locally exponentially stable \(\forall g(0)\in\mathcal{U}\), according to Definition 3.1(iv).
## 4 Geometric Sliding Mode Control (GSMC)
In this section, we design a control law, called the reaching law, for \(f_{u}\) in the Euler-Poincare equation (5) to drive the trajectory \(h(t)=(g(t),\nu(t))\in TG\) to the sliding subgroup \(H\). Then the tracking control objective will be achieved as a consequence of Lemma 5.
### Reaching Law
The Euler-Lagrange dynamics (4), ignoring disturbance forces \(\Delta_{d}\), is expressed as
\[\nabla_{\nu(t)}\nu(t)=f_{u}, \tag{9}\]
which is defined on \(TG\), being \(h(t)=(g(t),\nu(t))\) the state variable.
The intrinsic acceleration for the sliding variable (8) is calculated, by using (9), through the covariant derivative of \(s(h)\in\mathfrak{g}\) with respect to itself as
\[\nabla_{s(h)}s(h) =\frac{d}{dt}s(h)+\overset{\mathfrak{g}}{\nabla}_{s(h)}s(h)\] \[=\dot{\nu}+\lambda\dot{\nu}_{u}(g)+\overset{\mathfrak{g}}{\nabla}_{s(h)}s(h).\]
Substituting the Euler-Poincare equation (5) yields
\[\nabla_{s(h)}s(h)\] \[=-\overset{\mathfrak{g}}{\nabla}_{\nu(t)}\nu(t)+\lambda\dot{\nu} _{u}(g)+\overset{\mathfrak{g}}{\nabla}_{s(h)}s(h)+f_{u}\] \[= \mathbb{I}^{\sharp}\left(\mathrm{ad}_{\nu(t)}^{*}\mathbb{I}^{ \flat}(\nu(t))\right)-\mathbb{I}^{\sharp}\left(\mathrm{ad}_{s(h)}^{*}\mathbb{ I}^{\flat}(s(h))\right)+\lambda\dot{\nu}_{u}(g)+f_{u}, \tag{10}\]
where the skew-symmetry of the Lie bracket \([\cdot,\cdot]\in\mathfrak{g}\) in (1) is used. The reaching law is then proposed as follows
\[f_{u}=\mathbb{I}^{\sharp}\left(\mathrm{ad}_{\lambda\nu_{u}(g)}^{*}\mathbb{I}^ {\flat}(\nu(t))\right)-\lambda\dot{\nu}_{u}(g)-k_{s}s(h), \tag{11}\]
with \(k_{s}>0\) a design parameter.
**Theorem 6** (Reaching Controller): _The reaching law (11) drives the trajectories of the closed-loop system (10) to the sliding subgroup \(H\) exponentially, \(\forall h(0)\in TG\); i.e., \(s(h(t))\to 0\) globally exponentially._
**PROOF.** Consider the function \(W:TG\to\mathbb{R}\) defined below
\[W(h)=\frac{1}{2}\mathbb{I}(s(h),s(h)). \tag{12}\]
Its time evolution along trajectories of (10) is given by
\[\dot{W}(h) =\mathbb{I}\left(\nabla_{s(h)}s(h),\;s(h)\right)\] \[=\mathbb{I}\left(\mathbb{I}^{\sharp}\left(\mathrm{ad}_{\nu(t)}^{* }\mathbb{I}^{\flat}(\nu(t))\right)-\mathbb{I}^{\sharp}\left(\mathrm{ad}_{s(h )}^{*}\mathbb{I}^{\flat}(s(h))\right)\right.\] \[\qquad+\lambda\dot{\nu}_{u}(g)+f_{u},\;s(h)\right)\!,\]
which in closed loop with the controller (11) yields
\[\dot{W}(h) =\mathbb{I}\left(\mathbb{I}^{\sharp}\left(\mathrm{ad}_{\nu(t)}^{* }\mathbb{I}^{\flat}(\nu(t))\right)-\mathbb{I}^{\sharp}\left(\mathrm{ad}_{s(h )}^{*}\mathbb{I}^{\flat}(s(h))\right)\right.\] \[\qquad+\mathbb{I}^{\sharp}\left(\mathrm{ad}_{s(h)}^{*}\mathbb{ I}^{\flat}(\nu(t))\right)-k_{s}s(h),\;s(h)\right)\!,\] \[=\mathbb{I}\left(\mathbb{I}^{\sharp}\left(\mathrm{ad}_{s(h)}^{* }\mathbb{I}^{\flat}(\nu(t))\right)-\mathbb{I}^{\sharp}\left(\mathrm{ad}_{s(h )}^{*}\mathbb{I}^{\flat}(s(h))\right)\right.\] \[\qquad-k_{s}s(h),\;s(h)\right)\!.\]
By Lemma 12 in Appendix A the term \(\mathbb{I}\left(\mathbb{I}^{\sharp}\left(\mathrm{ad}_{\zeta}^{*}\mathbb{I}^{ \flat}(\eta)\right),\;\zeta\right)=0\), for any \(\zeta,\eta\in\mathfrak{g}\). Therefore,
\[\dot{W}(h)=-k_{s}\mathbb{I}\left(s(h),\;s(h)\right)=-2k_{s}W(h).\]
It follows from Proposition 6.26 of Bullo & Lewis (2005) that \(W(h(t))\to 0\) exponentially.
**Remark 7** (Passivity of the Lagrangian dynamics): _Note that the first two right-hand terms of the control law (11) complete the terms
\(\mathbb{I}^{\sharp}\mathrm{ad}^{*}_{s(h)}\mathbb{I}^{\flat}(s(h))\). By exploiting the intrinsic passivity properties in Lemma 12 in Appendix A, these terms were not canceled in the above stability analysis. This result was first given for the Lie group \(SO(3)\) in Koditschek (1989). Lemma 12 extends this result to coordinate-free Lagrangian dynamics on a general Lie group, which, to the authors' knowledge, has not been exploited in the literature for stability analysis._
**Remark 8** (The reaching controller ): _The reaching law (11) achieves the convergence of \(s(h(t))\to 0\) for the Euler-Lagrange dynamics (9), which implies that \(h(t)\in TG\) reaches the sliding subgroup \(H\) exponentially. Note that the result of Theorem 6 holds when the external constraint forces \(\delta_{d}\) can be compensated for by the controller \(f_{u}\), which was omitted from the control design. Otherwise, in the presence of bounded \(\delta_{d}\), \(h(t)\in TG\) will remain bounded and close to \(H\)._
### Tracking Control
Let \(g_{r}:I\to G\) be a twice differentiable configuration reference, with the corresponding reference body velocity \(\nu_{r}:I\to\mathfrak{g}\) given by \(\nu_{r}(t)\triangleq g_{r}^{-1}(t)\cdot\dot{g}_{r}(t)\). The problem is to design a control law \(f_{u}\) to track the reference. The Lie group structure of the configuration space \(G\) enables to define the following intrinsic configuration error
\[g_{e}(t)\triangleq g_{r}^{-1}(t)g(t).\]
By left invariance the body velocity error is defined as
\[\nu_{e}(t)\triangleq g_{e}^{-1}(t)\cdot\dot{g}_{e}(t)=\nu(t)-\eta_{r}(t), \tag{13}\]
with \(\eta_{r}(t)=\mathrm{Ad}_{g_{e}^{-1}}\nu_{r}(t)\). Then, the error dynamics evolving on \(TG\) is described by
\[\nabla_{\nu_{e}(t)}\nu_{e}(t)=f_{u}, \tag{14}\]
being the state variable \(h_{e}(t)=(g_{e}(t),\nu_{e}(t))\in TG\).
The tracking problem, therefore, boils down to stabilizing the identity \(f=(e,0)\) on \(TG\). By using the sliding-model control strategy, the error state is first driven to the sliding subgroup in the reaching stage, and then on the sliding subgroup, the reduced-order dynamics converges to the identity \(f\) ensured by Lemma 5.
In terms of the error state \(h_{e}\) the sliding variable (8) is given by
\[s(h_{e})=\nu_{e}(t)+\lambda\nu_{u}(g_{e}), \tag{15}\]
and, its covariant derivative, by using (1)-(2), is
\[\nabla_{s(h_{e})}s(h_{e})\] \[=\frac{d}{dt}s(h_{e})+\overset{\mathfrak{g}}{\nabla}_{s(h_{e})}s(h_{e})\] \[=\dot{\nu}_{e}(t)+\lambda\dot{\nu}_{u}(g_{e})-\mathbb{I}^{\sharp}\left(\mathrm{ad}^{*}_{s(h_{e})}\mathbb{I}^{\flat}\left(s(h_{e})\right)\right)\] \[=\dot{\nu}(t)-\dot{\eta}_{r}(t)+\lambda\dot{\nu}_{u}(g_{e})-\mathbb{I}^{\sharp}\left(\mathrm{ad}^{*}_{s(h_{e})}\mathbb{I}^{\flat}\left(s(h_{e})\right)\right).\]
Ignoring disturbance \(\delta_{d}\) it follows from the Euler-Poincare equation (5) that
\[\nabla_{s(h_{e})}s(h_{e})= \mathbb{I}^{\sharp}\left(\mathrm{ad}^{*}_{\nu(t)}\mathbb{I}^{ \flat}\left(\nu(t)\right)\right)+f_{u}-\dot{\eta}_{r}(t) \tag{16}\] \[+\lambda\dot{\nu}_{u}(g_{e})-\mathbb{I}^{\sharp}\left(\mathrm{ad} ^{*}_{s(h_{e})}\mathbb{I}^{\flat}\left(s(h_{e})\right)\right).\]
We propose the following tracking controller
\[f_{u} =\mathbb{I}^{\sharp}\left(\mathrm{ad}^{*}_{\lambda\nu_{u}(g_{e}) -\eta_{r}(t)}\mathbb{I}^{\flat}(\nu(t))\right)-\lambda\dot{\nu}_{u}(g_{e})+ \dot{\eta}_{r}(t)\] \[\quad-k_{s}s(h_{e}), \tag{17}\]
where \(k_{s}>0\) is a design parameter. The following theorem establishes the stability of the equilibrium \(h_{e}=f\) in the closed-loop system (16)-(17).
**Theorem 9** (Tracking Controller): _Consider the error dynamics (16) in closed loop with the controller (17). Then, the equilibrium \(h_{e}(t)=f\) is_
1. _almost-globally asymptotically stable, for all_ \(h_{e}(0)\in\overline{TG}\triangleq G\backslash\mathcal{O}_{u}\times\mathfrak{g}\)_,_
2. _locally exponentially stable for all_ \(h_{e}(0)\in\overline{TU}\triangleq\mathcal{U}\times\mathfrak{g}\)_, where_ \(\mathcal{O}_{u}\) _and_ \(\mathcal{U}\) _are given in Definition_ 3.1_(iii)-(iv)._
**Proof.** Substituting the controller (17) in the error dynamics (16) yields the closed-loop dynamics
\[\nabla_{s(h_{e})}s(h_{e}) =\mathbb{I}^{\sharp}\left(\mathrm{ad}^{*}_{s(h_{e})}\mathbb{I}^{ \flat}\left(\nu(t)\right)\right)-k_{s}s(h_{e})\] \[\quad-\mathbb{I}^{\sharp}\left(\mathrm{ad}^{*}_{s(h_{e})}\mathbb{ I}^{\flat}\left(s(h_{e})\right)\right),\]
which has an equilibrium point at \(s(h_{e})=0\). The results follow as a consequence of Theorem 6 (the reaching stage) and Lemma 5 (the sliding mode).
**Remark 10** (The tracking controller): _Theorem 9 gives a coordinate-free sliding mode control for a mechanical system whose configuration space is a general Lie group. The group structure allows a tracking error to be defined globally, whose dynamics evolves on the tangent bundle. The Lie-subgroup structure of the sliding surface, immersed in the tangent bundle, ensures the existence of
the sliding mode and thus inherits the salient features of the SMC in Euclidean spaces._
_Similarly to the Euclidean case, the design of the sliding subgroup and the reaching law may incorporate other control objectives, such as finite-time convergence and controller saturation, which are, however, beyond the scope of the main purposes of this paper._
## 5 Attitude Tracking of a Rigid Body
In this section, we present the attitude tracking of a rigid body in the 3D space using the proposed GSMC. To illustrate the theoretic development, the problem is addressed using attitude representation by first the rotation matrix on \(SO(3)\) and then by the unit quaternion \(\mathcal{S}^{3}\).
### GSMC for Attitude Tracking on \(So(3)\)
The group of rotations on \(\mathbb{R}^{3}\) is the Lie group \(SO(3)=\big{\{}R\in\mathbb{R}^{3\times 3}\mid RR^{T}=R^{T}R=I_{3},\ \det(R)=+1\big{\}},\) with the usual multiplication of matrices as the group operation. The identity of the group is the identity matrix \(I_{3}\) of \(3\times 3\), and the inverse is the transpose \(R^{T}\in SO(3)\) for any \(R\in SO(3)\). The Lie algebra is given by the set of skew-symmetric matrices \(\mathfrak{so}(3)=\big{\{}S\in\mathbb{R}^{3\times 3}\mid S^{T}=-S\big{\}}\), which is isomorphic to \(\mathbb{R}^{3}\), i.e., \(\mathfrak{so}(3)\simeq\mathbb{R}^{3}\). The Lie bracket in \(\mathbb{R}^{3}\) is defined by the cross product \([\zeta,\eta]=\mathrm{ad}_{\zeta}\eta\triangleq\zeta\times\eta\), \(\forall\zeta,\eta\in\mathbb{R}^{3}\). Denote the isomorphism \(\cdot^{\wedge}:\mathbb{R}^{3}\to\mathfrak{so}(3)\), and respectively the inverse map \(\cdot^{\vee}:\mathfrak{so}(3)\to\mathbb{R}^{3}\). Then for a differentiable curve \(R:I\to SO(3)\) with left-invariant dynamics \(\dot{R}(t)\in T_{R}SO(3)\), the body angular velocity is given by
\[\Omega^{\wedge}(t)=R^{T}(t)\dot{R}(t)=\left[\begin{array}{ccc}0&-\Omega_{3} (t)&\Omega_{2}(t)\\ \Omega_{3}(t)&0&-\Omega_{1}(t)\\ -\Omega_{2}(t)&\Omega_{1}(t)&0\end{array}\right],\]
for all \(t\in I\). The kinetic energy of the rotational motion of a rigid body is calculated as \(\mathrm{KE}(\Omega)=\frac{1}{2}\mathbb{J}(\Omega,\Omega)\triangleq\frac{1}{2}\left\langle\left\langle\mathbb{J}\Omega,\Omega\right\rangle\right\rangle\), where \(\mathbb{J}=\mathbb{J}^{T}\in\mathbb{R}^{3\times 3}\) is the positive-definite inertia tensor. Therefore, \(\mathrm{ad}^{*}_{\zeta}\mathbb{J}^{\flat}(\eta)=\left(\mathbb{J}\eta\right)^{\wedge}\zeta\), and \(\mathbb{J}^{\sharp}(\zeta)=\mathbb{J}^{-1}\zeta\). Hence, the rotational motion described by the Euler-Lagrange equation (4) is
\[\nabla_{\Omega(t)}\Omega(t)=\tau_{u}. \tag{18}\]
The state \((R,\Omega)\) evolves on the tangent bundle \(TSO(3)\simeq SO(3)\times\mathbb{R}^{3}\), and the control torque \(\tau_{u}=\mathbb{J}^{-1}\tau\in\mathbb{R}^{3}\) is expressed in the body frame. Furthermore, (18) is explicitly expressed, by using the Euler-Poincare equation (5) and restriction (1), as
\[\dot{\Omega}(t)-\mathbb{J}^{-1}\left(\mathbb{J}\Omega(t)\right)^{\wedge}\Omega (t)=\tau_{u}. \tag{19}\]
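For concreteness, the following NumPy sketch (our own illustration, not part of the reported toolchain) implements the hat and vee maps used above and numerically checks the coadjoint identity \(\mathrm{ad}^{*}_{\zeta}(\mathbb{J}\eta)=(\mathbb{J}\eta)^{\wedge}\zeta\) through the defining duality \(\langle\mathrm{ad}^{*}_{\zeta}\mu,\xi\rangle=\langle\mu,\zeta\times\xi\rangle\); the inertia matrix and vectors are arbitrary placeholders.

```python
import numpy as np

def hat(v):
    """Hat map R^3 -> so(3): hat(v) @ w equals the cross product v x w."""
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

def vee(S):
    """Vee map so(3) -> R^3, the inverse of hat."""
    return np.array([S[2, 1], S[0, 2], S[1, 0]])

J = np.diag([3.6, 8.7, 9.3])            # placeholder inertia tensor
zeta = np.array([0.1, -0.2, 0.3])
eta = np.array([0.4, 0.5, -0.6])

mu = J @ eta                             # J^flat(eta) in R^3 coordinates
ad_star = hat(mu) @ zeta                 # claimed ad*_zeta (J eta) = (J eta)^ zeta

# Verify the duality <ad*_zeta mu, xi> = <mu, zeta x xi> on a few random xi.
rng = np.random.default_rng(0)
for _ in range(3):
    xi = rng.standard_normal(3)
    assert np.isclose(ad_star @ xi, mu @ np.cross(zeta, xi))
print("coadjoint identity verified")
```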
Let \(R_{r}:I\to SO(3)\) be a twice differentiable attitude reference, and \(\Omega_{r}:I\to\mathbb{R}^{3}\), the reference angular velocity expressed in the body frame, which holds \(\Omega_{r}(t)=\left(R_{r}^{T}(t)\dot{R}_{r}(t)\right)^{\vee}\). Then, the intrinsic attitude error is
\[R_{e}(t)\triangleq R_{r}^{T}(t)R(t).\]
In view of (13) the (left-invariant) velocity error is
\[\Omega_{e}(t) \triangleq\left(R_{e}^{T}(t)\dot{R}_{e}(t)\right)^{\vee}=\Omega (t)-\sigma(t), \tag{20}\] \[\sigma(t) \triangleq\mathrm{Ad}_{R_{e}^{-1}}\Omega_{r}(t)=R_{e}^{T}(t) \Omega_{r}(t).\]
Therefore, the distance between \(R_{e}(t)\) and \(I_{3}\) is properly measured with the Morse function \(V_{1}(R_{e})\triangleq 2-\sqrt{1+\mathrm{tr}(R_{e}(t))}\), proposed by Lee (2012). In fact, \(V_{1}(R_{e})=0\iff R_{e}=I_{3}\) and is positive for all \(R_{e}\in SO(3)\backslash\{I_{3}\}\). Moreover, along the trajectories of (20), it satisfies
\[\frac{\mathrm{d}}{\mathrm{d}t}V_{1}(R_{e}) =\left\langle\left\langle\psi(R_{e})\left(R_{e}(t)-R_{e}^{T}(t) \right)^{\vee},\ \Omega_{e}(t)\right\rangle\right\rangle,\] \[\psi(R_{e}) \triangleq\frac{1}{2\sqrt{1+\mathrm{tr}(R_{e})}},\]
for all \(R_{e}\in SO(3)\backslash\mathcal{O}_{R}\), where \(\mathcal{O}_{R}\triangleq\{R\in SO(3)|\mathrm{tr}(R)\)\(=\)\(-1\}\). Furthermore, given \(\mathcal{U}_{R}\triangleq\{R_{e}\in SO(3)\backslash\mathcal{O}_{R}\mid V_{1}(R_{e})<2-\epsilon\}\), for some \(\epsilon>0\) arbitrarily small, \(V_{1}(R_{e})\) verifies (Lee 2012)
\[\left\|\psi(R_{e})\left(R_{e}-R_{e}^{T}\right)^{\vee}\right\|^{2}\leq V_{1}(R_ {e})\leq 2\left\|\psi(R_{e})\left(R_{e}-R_{e}^{T}\right)^{\vee}\right\|^{2},\]
for all \(R_{e}\in\mathcal{U}_{R}\).
Consider the kinematic control law
\[\Omega_{u}(R_{e}) \equiv\log(R_{e})^{\vee}, \tag{21}\] \[\log(R_{e}) \triangleq\left\{\begin{array}{ll}0_{3\times 3},&R_{e}=I_{3},\\ \frac{\phi(R_{e})}{2\sin(\phi(R_{e}))}\left(R_{e}-R_{e}^{T}\right),\ R_{e}\neq I _{3},\end{array}\right.\]
where \(\phi(R_{e})\triangleq\arccos\left(\frac{1}{2}\left(\mathrm{tr}(R_{e})-1\right)\right)\in(-\pi,\pi)\), and \(0_{n\times m}\) is a matrix of size \(n\times m\) with zero-entries. One can readily verify that (21) satisfies Definition 3.1(i)-(ii). To verify Definition 3.1(iii)-(iv) under the kinematic control \(\Omega_{e}(t)=-\Omega_{u}(R_{e})\), consider the derivative of the Morse function \(V_{1}(R_{e})\) along the error kinematics \(\dot{R}_{e}=R_{e}\Omega_{e}^{\wedge}\):
\[\dot{V}_{1}(R_{e})=\left\langle\left\langle\psi(R_{e})\left(R_{e}-R_{e}^{T} \right)^{\vee},\ -\Omega_{u}(R_{e})\right\rangle\right\rangle<0,\]
for all \(R_{e}\in SO(3)\backslash\mathcal{O}_{R}\) and \(\dot{V}_{1}(R_{e})\leq-y_{1}(R_{e})V_{1}(R_{e})\) for all \(R_{e}\in\mathcal{U}_{R}\), where
\[y_{1}(R_{e})\triangleq\frac{\phi(R_{e})}{4\psi(R_{e})\sin\phi(R_{e})}>0,\ \forall R_{e}\in\mathcal{U}_{R}.\]
This proves that the kinematic control (21) also satisfies Definition 3.1(iii)-(iv).
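For illustration only, the kinematic law (21) can be evaluated numerically as in the minimal sketch below (the function name is ours and not part of the paper); the clipping guards against round-off, and the formula is only meaningful away from the set \(\mathcal{O}_{R}\), where \(\phi\to\pi\) and \(\sin\phi\to 0\).

```python
import numpy as np

def so3_log(Re, eps=1e-9):
    """Vector form of the SO(3) logarithm used in the kinematic law (21)."""
    phi = np.arccos(np.clip(0.5 * (np.trace(Re) - 1.0), -1.0, 1.0))
    if phi < eps:                                   # R_e is (numerically) I_3
        return np.zeros(3)
    W = (phi / (2.0 * np.sin(phi))) * (Re - Re.T)   # log(R_e) as a skew matrix
    return np.array([W[2, 1], W[0, 2], W[1, 0]])    # vee map

# Example: a 0.3 rad rotation about the z-axis; the log recovers 0.3 * e_z.
c, s = np.cos(0.3), np.sin(0.3)
Re = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
print(so3_log(Re))                                  # approximately [0, 0, 0.3]
```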
Therefore, based on the kinematic control law (21) the following group operation is defined
\[r_{1}\star r_{2} \tag{22}\] \[= \bigg{(}R_{1}R_{2},\ \Omega_{1}+\Omega_{2}+\lambda\Omega_{u}(R_{1})+ \lambda\Omega_{u}(R_{2})-\lambda\Omega_{u}(R_{1}R_{2})\Big{)},\]
for any \(r_{1}=(R_{1},\Omega_{1})\), \(r_{2}=(R_{2},\Omega_{2})\in TSO(3)\). Thus, the tangent bundle \(TSO(3)\simeq SO(3)\times\mathbb{R}^{3}\) is endowed with a Lie group structure with identity \((I_{3},0_{3\times 1})\in TSO(3)\) and inverse \(r^{-1}=(R^{T},-\Omega)\in TSO(3)\), \(\forall r=(R,\Omega)\in TSO(3)\). Likewise, given \(r_{e}(t)=(R_{e}(t),\Omega_{e}(t))\in TSO(3)\) and in view of (15), the map \(s:TSO(3)\rightarrow\mathbb{R}^{3}\)
\[s(r_{e})=\Omega_{e}(t)+\lambda\Omega_{u}(R_{e}), \tag{23}\]
for some scalar \(\lambda>0\), defines a Lie subgroup
\[H_{R}=\big{\{}r_{e}(t)=(R_{e}(t),\Omega_{e}(t))\in TSO(3)\mid s(r_{e})=0_{3 \times 1}\big{\}}, \tag{24}\]
under the group operation (22).
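A minimal NumPy sketch of the induced group operation (22) and the sliding variable (23) is given below; `so3_log` restates the kinematic law (21) for self-containment, `lam` stands for \(\lambda\), the function names and numbers are our own illustrative choices, and the printed checks confirm the identity and inverse elements stated in the text.

```python
import numpy as np

def so3_log(Re, eps=1e-9):
    """Vector logarithm on SO(3), i.e., the kinematic law (21)."""
    phi = np.arccos(np.clip(0.5 * (np.trace(Re) - 1.0), -1.0, 1.0))
    if phi < eps:
        return np.zeros(3)
    W = (phi / (2.0 * np.sin(phi))) * (Re - Re.T)
    return np.array([W[2, 1], W[0, 2], W[1, 0]])

def star(r1, r2, lam=0.5):
    """Group operation (22) on TSO(3) ~ SO(3) x R^3."""
    R1, w1 = r1
    R2, w2 = r2
    R = R1 @ R2
    return (R, w1 + w2 + lam * (so3_log(R1) + so3_log(R2) - so3_log(R)))

def s_var(re, lam=0.5):
    """Sliding variable (23): s(r_e) = Omega_e + lambda * Omega_u(R_e)."""
    Re, we = re
    return we + lam * so3_log(Re)

def rot_z(angle):
    c, s = np.cos(angle), np.sin(angle)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

r = (rot_z(0.4), np.array([0.1, -0.2, 0.3]))
e = (np.eye(3), np.zeros(3))                        # identity element of (22)
r_inv = (r[0].T, -r[1])                             # inverse element of (22)
print(np.allclose(star(r, e)[1], r[1]))             # r * e = r
print(np.allclose(star(r, r_inv)[1], np.zeros(3)))  # r * r^{-1} = e
print(s_var(r))                                     # sliding variable of r
```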
Thus, the tracking controller on \(SO(3)\) is obtained from (17) and (23) as
\[\tau_{u} =\mathbb{J}^{-1}\left(\left(\mathbb{J}\Omega(t)\right)^{\wedge} \left(\lambda\Omega_{u}(R_{e})-\sigma(t)\right)\right)-\lambda\dot{\Omega}_{u }(R_{e})+\dot{\sigma}(t)\] \[\quad-k_{s}s(r_{e}), \tag{25}\]
where \(k_{s}>0\) is a controller gain. Theorem 9 proves that controller (25) in closed loop with the system (19) renders the equilibrium point \(r_{e}(t)=(I_{3},0_{3\times 1})\) almost globally asymptotically stable for all \(r_{e}(0)\in SO(3)\backslash\mathcal{O}_{R}\times\mathbb{R}^{3}\), and exponentially stable for all \(r_{e}(0)\in\mathcal{U}_{R}\times\mathbb{R}^{3}\).
**Remark 11**: _Note that, in applying Theorem 9, one should first define a tracking error using the group operation on the configuration manifold, and then treat the error dynamics as a physical system. Otherwise, the sliding surface may not be a Lie subgroup. To see this more clearly, consider \(r=(R,\Omega)\), \(r_{d}^{-1}=\big{(}R_{r}^{T},-\Omega_{r}\big{)}\in TSO(3)\); then the following tracking error may be defined by the group operation (22)_
\[r_{e}^{\prime} =r_{d}^{-1}\star r=\big{(}R_{r}^{T}R,\ -\Omega_{r}+\Omega+ \lambda\Omega_{u}(R_{r}^{T})+\lambda\Omega_{u}(R)\] \[\qquad\qquad\qquad\qquad\qquad\qquad-\lambda\Omega_{u}(R_{r}^{T}R) \big{)}\] \[=\big{(}R_{e},\ -\Omega_{r}+\Omega+\lambda\Omega_{u}(R_{r}^{T})+ \lambda\Omega_{u}(R)-\lambda\Omega_{u}(R_{e})\big{)}\] \[=\big{(}R_{e},\bar{\Omega}_{e}\big{)}\,.\]
_However, \(H_{R}^{\prime}=\{r_{e}^{\prime}(t)\in TSO(3)\mid s(r_{e}^{\prime})=0_{3\times 1 }\}\subset TSO(3)\) is not a sliding subgroup for the proposed Morse function \(V_{1}(R_{e})\)._
### Attitude Tracking on \(\mathcal{S}^{3}\)
The set of \(\mathbb{R}^{4}\)-vectors evolving on the unit sphere \(\mathcal{S}^{3}=\big{\{}q\in\mathbb{R}^{4}\mid q^{T}q=1\big{\}}\), with \(q=\big{[}q_{0},\bar{q}^{T}\big{]}^{T}\in\mathcal{S}^{3}\), \(q_{0}\in[-1,1]\), and \(\vec{q}\in\mathbb{R}^{3}\), is a Lie group with identity \(\imath=\left[1,0_{1\times 3}\right]^{T}\in\mathcal{S}^{3}\), and inverse \(q^{-1}=\big{[}q_{0},-\vec{q}^{T}\big{]}^{T}\in\mathcal{S}^{3}\), under the group operation \((q_{1},q_{2})\mapsto q_{1}\otimes q_{2}\in\mathcal{S}^{3}\) defined as
\[q_{1}\otimes q_{2}\triangleq Q(q_{1})q_{2}=\left[\begin{array}{cc}q_{0,1}&-\vec{q}_{1}^{T}\\ \vec{q}_{1}&q_{0,1}I_{3}+\vec{q}_{1}^{\wedge}\end{array}\right]\left[\begin{array}{c}q_{0,2}\\ \vec{q}_{2}\end{array}\right],\]
for any \(q_{1}=\big{[}q_{0,1},\vec{q}_{1}^{T}\big{]}^{T}\), \(q_{2}=\big{[}q_{0,2},\vec{q}_{2}^{T}\big{]}^{T}\in\mathcal{S}^{3}\). The Lie algebra is \(\mathfrak{s}^{3}=\left\{\omega\in\mathbb{R}^{4}\mid\omega=\big{[}0,\Omega^{T}\big{]}^{T}\,,\Omega\in\mathbb{R}^{3}\right\}\), so that \(\mathfrak{s}^{3}\simeq\mathbb{R}^{3}\). Its Lie bracket operation corresponds to the cross product in \(\mathbb{R}^{3}\). Thus, denote the isomorphism \(\overline{\cdot}:\mathbb{R}^{3}\rightarrow\mathfrak{s}^{3}\), with the corresponding inverse map \(\mathfrak{s}^{3}\rightarrow\mathbb{R}^{3}\).
The Rodrigues formula \(q\mapsto R(q)=I_{3}+2q_{0}\vec{q}^{\wedge}+2\left(\vec{q}^{\wedge}\right)^{2}\in SO(3)\) relates each antipodal pair \(\pm q\) with a physical rotation of a rigid body, i.e., \(\mathcal{S}^{3}\) double covers the group \(SO(3)\). The adjoint action in \(\mathcal{S}^{3}\) is defined as \(\mathrm{Ad}_{q}\zeta\triangleq q\otimes\bar{\zeta}\otimes q^{-1}=\overline{R(q)\zeta}\), for any \(\zeta\in\mathbb{R}^{3}\).
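The quaternion product, the Rodrigues map and the adjoint action can be checked numerically with the short sketch below; this is our own illustration (scalar-first convention \(q=[q_{0},\vec{q}^{\,T}]^{T}\), function names ours), not code from the paper.

```python
import numpy as np

def qmul(q1, q2):
    """Quaternion product q1 (x) q2, scalar-first convention q = [q0, q_vec]."""
    w1, v1 = q1[0], q1[1:]
    w2, v2 = q2[0], q2[1:]
    return np.concatenate(([w1 * w2 - np.dot(v1, v2)],
                           w1 * v2 + w2 * v1 + np.cross(v1, v2)))

def qinv(q):
    """Inverse (conjugate) of a unit quaternion."""
    return np.concatenate(([q[0]], -q[1:]))

def rot_of_q(q):
    """Rodrigues map S^3 -> SO(3): R(q) = I + 2*q0*hat(q_vec) + 2*hat(q_vec)^2."""
    q0, v = q[0], q[1:]
    H = np.array([[0.0, -v[2], v[1]], [v[2], 0.0, -v[0]], [-v[1], v[0], 0.0]])
    return np.eye(3) + 2.0 * q0 * H + 2.0 * H @ H

# Check the adjoint action: vector part of q (x) [0, zeta] (x) q^{-1} equals R(q) zeta.
theta = 0.8
q = np.array([np.cos(theta / 2), 0.0, 0.0, np.sin(theta / 2)])   # rotation about z
zeta = np.array([1.0, 2.0, 3.0])
adj = qmul(qmul(q, np.concatenate(([0.0], zeta))), qinv(q))[1:]
print(np.allclose(adj, rot_of_q(q) @ zeta))                      # True
```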
Given a differentiable curve \(q:I\rightarrow\mathcal{S}^{3}\) with a left-invariant vector field \(\dot{q}(t)\in T_{q}\mathcal{S}^{3}\), and a twice-differentiable reference configuration \(q_{r}:I\rightarrow\mathcal{S}^{3}\), \(\forall t\in I\), the body angular velocity \(\overline{\Omega}(t)\triangleq 2q^{-1}(t)\otimes\dot{q}(t)=2Q^{T}(q(t))\dot{q}(t)\in \mathfrak{s}^{3}\) and the reference angular velocity \(\overline{\Omega}_{r}(t)\triangleq 2q_{r}^{-1}(t)\otimes\dot{q}_{r}(t)\in\mathfrak{s}^{3}\) can be defined. We consider the following intrinsic tracking error \(q_{e}(t)\triangleq q_{r}^{-1}(t)\otimes q(t)\), and its left-invariant velocity error
\[\overline{\Omega}_{e}(t) \triangleq 2q_{e}^{-1}(t)\otimes\dot{q}_{e}(t)=\overline{\Omega}(t)- \overline{\zeta}(t), \tag{26}\] \[\overline{\zeta}(t) =\mathrm{Ad}_{q_{e}^{-1}}\Omega_{r}(t),\]
where \(\dot{q}_{e}(t)\in T_{q_{e}}\mathcal{S}^{3}\) is left invariant. Propose the Morse function \(\mathcal{S}^{3}\ni q\mapsto V_{2}(q)=\frac{1}{\sqrt{2}}\left\|\imath-q\right\| =\sqrt{1-q_{0}}\), which satisfies \(V_{2}(q)=0\iff q=\imath\), and \(V_{2}(q)>0\)\(\forall q\in\mathcal{S}^{3}\backslash\{\imath\}\). That is, function \(V_{2}(q)\) has a unique minimum critical zero at identity \(\imath\in\mathcal{S}^{3}\) and is strictly positive for any other \(q\in\mathcal{S}^{3}\). Moreover, it verifies that
\[\frac{\mathrm{d}}{\mathrm{d}t}V_{2}(q_{e})=\frac{-\dot{q}_{0,e}(t)}{2\sqrt{1-q _{0,e}(t)}}=\frac{1}{4\sqrt{1-q_{0,e}(t)}}\vec{q}_{e}^{T}(t)\Omega_{e}(t),\]
which suggests the following kinematic control law
\[\Omega_{u}(q_{e})\equiv\log(q_{e})\triangleq\left\{\begin{array}{cc}0_{3 \times 1},&q_{e}=\imath,\\ \frac{\arccos(q_{0,e})}{\left\|\vec{q}_{e}\right\|}\vec{q}_{e},\ q_{e}\neq\imath,\end{array}\right. \tag{27}\]
for all \(q_{e}(t)\in\mathcal{S}^{3}\backslash\{-\imath\}.\) Indeed, when \(\Omega_{e}(t)=-\Omega_{u}(q_{e}),\) it leads to
\[\frac{\mathrm{d}}{\mathrm{d}t}V_{2}(q_{e}) =-\frac{\arccos(q_{0,e})}{4\sqrt{1-q_{0,e}}}\|\vec{q}_{e}\|\] \[=-\frac{\arccos(q_{0,e})}{4\sqrt{1-q_{0,e}}}\sqrt{1-q_{0,e}^{2}}\] \[=-\frac{\arccos(q_{0,e})}{4\sqrt{1-q_{0,e}}}\sqrt{(1+q_{0,e}) \left(1-q_{0,e}\right)}\] \[=-\frac{\arccos(q_{0,e})}{4\sqrt{1-q_{0,e}}}\sqrt{1+q_{0,e}}V_{2} (q_{e}),\] \[=-y_{2}(q_{e})V_{2}(q_{e}),\]
where \(y_{2}(q_{e})>0\) for all \(q_{e}\in\mathcal{U}_{q}\triangleq\{q_{e}\in\mathcal{S}^{3}\backslash\{-\imath\}\ |\ V_{2}(q_{e})<2-\epsilon\},\) for some \(\epsilon>0\) arbitrarily small. Consequently, the control law (27) satisfies all properties of Definition 3.1 for the Morse function \(V_{2}(q_{e}).\)
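As a small illustration (ours, not part of the paper's implementation), the kinematic law (27) can be evaluated as follows; the quaternion in the example is an arbitrary rotation of 0.6 rad about the x-axis, and the clipping guards against numerical round-off.

```python
import numpy as np

def s3_log(qe, eps=1e-9):
    """Kinematic law (27): Omega_u(q_e) = log(q_e), scalar-first q_e = [q0, q_vec]."""
    q0, v = qe[0], qe[1:]
    nv = np.linalg.norm(v)
    if nv < eps:                                   # q_e is (numerically) the identity
        return np.zeros(3)
    return (np.arccos(np.clip(q0, -1.0, 1.0)) / nv) * v

# Error quaternion of a 0.6 rad rotation about the x-axis.
qe = np.array([np.cos(0.3), np.sin(0.3), 0.0, 0.0])
print(s3_log(qe))                                  # approximately [0.3, 0, 0]
```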
The kinematic control law (27) enables the definition of the tangent bundle \(T\mathcal{S}^{3}\simeq\mathcal{S}^{3}\times\mathbb{R}^{3}\) as a Lie group under the group operation
\[p_{1}\star p_{2} \tag{28}\] \[= \left(q_{1}\otimes q_{2},\ \Omega_{1}+\Omega_{2}+\lambda\Omega_{u} (q_{1})+\lambda\Omega_{u}(q_{2})-\lambda\Omega_{u}(q_{1}\otimes q_{2})\right),\]
\(\forall p_{1}=(q_{1},\Omega_{1}),\)\(p_{2}=(q_{2},\Omega_{2})\in T\mathcal{S}^{3}.\) Note that the identity is \((\imath,0_{3\times 1})\in T\mathcal{S}^{3},\) and inverse, \(p^{-1}=(q^{-1},-\Omega)\in T\mathcal{S}^{3},\)\(\forall p=(q,\Omega)\in T\mathcal{S}^{3}.\) Therefore, the map
\[s(p_{e})=\Omega_{e}(t)+\lambda\Omega_{u}(q_{e}), \tag{29}\]
where \(p_{e}(t)=(q_{e}(t),\Omega_{e}(t))\in T\mathcal{S}^{3},\) defines the sliding Lie subgroup
\[H_{q}=\left\{p_{e}\in T\mathcal{S}^{3}\ |\ s(p_{e})=0_{3\times 1}\right\}. \tag{30}\]
The attitude tracking controller on \(\mathcal{S}^{3}\) is thus defined as (17) using (26)-(29), which yields
\[\tau_{u} =\mathbb{J}^{-1}\left(\left(\mathbb{J}\Omega(t)\right)^{\wedge} \left(\lambda\Omega_{u}(q_{e})-\zeta(t)\right)\right)-\lambda\dot{\Omega}_{u}( q_{e})+\dot{\zeta}(t)\] \[\quad-k_{s}s(p_{e}). \tag{31}\]
By Theorem 9, controller (31) in closed loop with system (18) achieves the asymptotic convergence of \(p_{e}(t)\rightarrow(\imath,0_{3\times 1})\) for all \(p_{e}(0)\in\mathcal{S}^{3}\backslash\{-\imath\}\times\mathbb{R}^{3},\) and exponential convergence when \(p_{e}(0)\in\mathcal{U}_{q}\times\mathbb{R}^{3}.\)
## 6 Simulations
To illustrate the theoretical results and for comparison, the proposed GSMC (25) was contrasted with two reported controllers: the "linearization"-by-state-feedback-like (LSF) controller Eq. (26) of Maithripala et al. (2006), and the PD+ controller Eq. (23) of Lee (2012). For easy comparison, the applied torque control for each controller is rewritten in terms of
\[s_{i}=\tilde{\omega}_{i}+\gamma_{i}\tilde{\varphi}_{i},\quad\forall i=1,2,3, \tag{32}\]
where \(\tilde{\omega}_{i}\in\mathbb{R}^{3}\) is the angular velocity error, \(\tilde{\varphi}_{i}\in\mathbb{R}^{3}\) is the attitude error, and \(\gamma_{i}>0\) is the control gain.
The proposed GSMC law (25) is expressed as
\[\tau_{1} =-k_{s}\mathbb{J}s_{1}+F_{1}, \tag{33}\] \[s_{1} =\Omega_{e}(t)+\lambda\Omega_{u}(R_{e}),\] (34) \[F_{1} =\mathbb{J}\left(-\lambda\dot{\Omega}_{u}(R_{e})+\dot{\sigma} \right)+\left(\mathbb{J}\Omega\right)^{\wedge}\left(\lambda\Omega_{u}(R_{e})- \sigma\right).\]
Likewise, the LSF controller (26) of Maithripala et al. (2006) is given by
\[\tau_{2} =-k\mathbb{J}s_{2}+F_{2}, \tag{35}\] \[s_{2} =\Omega-\Omega_{r}+\frac{\kappa}{k}R^{T}\left(RR_{r}^{T}-R_{r}R^{ T}\right)^{\vee},\] (36) \[F_{2} =\mathbb{J}\dot{\Omega}_{r}-\left(\mathbb{J}\Omega\right)^{\wedge }\left(\Omega\right)-\Omega^{\wedge}\Omega_{r},\]
where \(\tilde{I}=I_{3}\) and \(K=\kappa I_{3},\) for some \(\kappa>0.\) Finally, the PD+ controller (23) of Lee (2012) is rewritten as
\[\tau_{3} =-k_{\Omega}s_{3}+F_{3}, \tag{37}\] \[s_{3} =\Omega_{e}+\frac{k_{R}}{k_{\Omega}}\psi(R_{e})\left(R_{e}-R_{e}^{ T}\right)^{\vee},\] (38) \[F_{3} =\mathbb{J}R^{T}R_{r}\dot{\Omega}_{r}+\left(R^{T}R_{r}\Omega_{r} \right)^{\wedge}\mathbb{J}R^{T}R_{r}\Omega_{r}.\]
The inertia tensor was given by
\[\mathbb{J}=\left[\begin{array}{ccc}3.6046&-0.0706&0.1491\\ -0.0706&8.6868&0.0449\\ 0.1491&0.0449&9.3484\end{array}\right],\]
while the reference trajectory was calculated as \(\Omega_{r}(t)=\left(R_{r}^{T}(t)\dot{R}_{r}(t)\right)^{\vee}=\left[0,0.1,0 \right]^{T}\) (rad/s). Furthermore, the initial conditions were chosen as \(\Omega(0)=\left(1/\left(2\sqrt{14}\right)\right)\left[1,2,3\right]^{T}(\text{ rad/s}),R_{r}(0)=R_{312}(\pi/4,-\pi,\pi/4),\) where the expression \(R_{312}(\varphi,\vartheta,\psi)\) is a rotation matrix described by the sequence 3-1-2 of Euler angles (Shuster et al., 1993), and the initial attitude was calculated as \(R(0)=R_{r}(0)R_{e}(0).\)
The simulations were carried out under three scenarios according to the distance between \(R_{e}(0)\) and the desired equilibrium \(I_{3},\) and to the undesired equilibrium \(\text{diag}(1,-1,-1)\) measured by the Morse function \(\Psi(R_{e})\triangleq\frac{1}{2}\text{tr}(I_{3}-R_{e})\) used in Maithripala et al. (2006). Therefore, the initial attitudes \(R_{e}(0)=R_{312}(0,-0.428\pi,0),\)\(R_{e}(0)=R_{312}(0,-0.01\pi,0)\approx I_{3},\)
and \(R_{e}(0)=R_{312}(0,-0.99\pi,0)\approx\mbox{diag}(1,-1,-1)\) were assigned.
Finally, the design parameters for each controller were tuned in such a way that the energy-consumption level measured by \(\sqrt{\int_{0}^{t}\tau_{i}^{T}(t)\tau_{i}(t)\mbox{d}t}\) in the first scenario is the same. The resulting controller gains were \(k_{s}=1\), \(\lambda=0.5\) for (33), \(k=1\), \(\kappa=0.5\) for (35), and \(k_{\Omega}=18.5\), \(k_{R}=9.25\) for (37). With these design parameters the control gain of (32) was \(\gamma_{i}=0.5\) for all \(i=1,2,3\).
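For reference, the two comparison metrics can be computed as in the sketch below; this is our own illustration, the function names are ours, and the logged torque trace is a synthetic placeholder rather than data from the reported simulations.

```python
import numpy as np

def attitude_error(Re):
    """Attitude error used in the comparison: Psi(R_e) = 0.5 * tr(I_3 - R_e)."""
    return 0.5 * np.trace(np.eye(3) - Re)

def energy_consumption(tau, t):
    """Energy-level metric sqrt( int_0^t tau^T tau dt ), via the trapezoidal rule.
    `tau` is an (N, 3) array of logged torques and `t` the matching time stamps."""
    return np.sqrt(np.trapz(np.sum(tau * tau, axis=1), t))

# Synthetic placeholder trace (illustrative only).
t = np.linspace(0.0, 30.0, 3001)
tau = np.column_stack([np.exp(-0.2 * t), 0.5 * np.exp(-0.2 * t), np.zeros_like(t)])
print(energy_consumption(tau, t))
print(attitude_error(np.eye(3)))   # zero error at the desired equilibrium
```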
### Scenario 1. Intermediate case.
Figure 1 shows the performance of the controllers (33), (35), and (37) under the initial condition \(R_{e}(0)=R_{312}(0,-0.428\pi,0)\). Fig. 1(a) shows the attitude error \(\Psi(R_{e})\triangleq\frac{1}{2}\mbox{tr}(I_{3}-R_{e})\): the proposed controller (33) and controller (35) achieve the convergence \(R_{e}\to I_{3}\) in 17 (s), while controller (37) achieves it in 30 (s). Fig. 1(b) illustrates the norm \(\|\Omega_{e}(t)\|\) for each controller, where the angular velocity error \(\Omega_{e}(t)\) is calculated as in (20); controller (37) takes 10 (s) longer than the other controllers to reach \(\Omega_{e}(t)\to 0_{3\times 1}\). Furthermore, Figs. 1(c) and (d) show the control effort and the energy consumption, respectively; with the selected controller gains, all controllers consume the same amount of energy. Finally, Fig. 2 shows the behavior of the sliding variables (34), (36), and (38) compared to \(s_{i}=0\) according to (32). The proposed controller (33) drives \(s_{1}\to 0_{3\times 1}\), completing the reaching phase, whereas the LSF control scheme (35) exhibits an oscillatory behavior around the equilibrium point, and the PD+ controller (37) follows \(s_{i}=0\) closely until it reaches the equilibrium.
### Scenario 2. Starting close to the desired equilibrium point \(I_{3}\).
For this scenario, the initial condition was set to \(R_{e}(0)=R_{312}(0,-0.01\pi,0)\), which is close to the desired equilibrium \(I_{3}\). Figs. 3(a) and (b) show that the controllers (33) and (35) reach the desired equilibrium \((R_{e},\Omega_{e})=(I_{3},0_{3\times 1})\) at the same time, 20 (s), while the controller (37) takes 5 (s) longer, consistently with the previous scenario. However, as illustrated in Fig. 3(d), the proposed controller uses less energy than the others to reach the desired equilibrium when the system starts close to it.
### Scenario 3. Starting close to the undesired equilibrium point \(\mbox{diag}(1,-1,-1)\).
Figure 4 displays the performance of the controllers starting close to the undesired equilibrium point \(\mbox{diag}(1,-1,-1)\), i.e., \(R_{e}(0)=R_{312}(0,-0.99\pi,0)\). It is
Figure 1: Scenario 1: Behavior of controllers (33), (35), and (37) when the initial attitude error is \(R_{e}(0)=R_{312}(0,-0.428\pi,0)\).
Figure 2: Scenario 1: Behavior of the sliding variable (34), (36), and (38) when \(R_{e}(0)=R_{312}(0,-0.428\pi,0)\).
observed in Fig. 4(a) that the proposed controller (33) and the PD+ controller (37) present a delay of 1 (s) before the convergence \(R_{e}\to I_{3}\) begins, whereas the LSF controller (35) has the longest delay of 2.5 (s). Notice that the proposed control scheme allows a faster convergence to the desired equilibrium point (Figs. 4(a) and (b)) at the cost of more energy consumption (Figs. 4(c) and (d)).
## 7 Conclusions
This paper presented a geometric sliding mode control for fully actuated mechanical systems evolving on Lie groups, generalizing the conventional sliding mode control in Euclidean spaces. It was shown that the sliding surface (a Lie subgroup) is immersed in the state space (a Lie group) of the system dynamics, and the tracking is achieved by first driving the trajectories of the system to the sliding subgroup and then converging to the group identity of the reduced dynamics restricted to the sliding subgroup, in analogy with sliding mode control designs for systems evolving on Euclidean spaces. An application of the result to attitude control was presented for the rotation group \(SO(3)\) and the unit sphere \(\mathcal{S}^{3}\). The simulation results illustrated the scheme and compared it with similar control designs in the literature.
|
2308.16827 | Using 1-Factorization from Graph Theory for Quantum Speedups on Clique
Problems | The clique problems, including $k$-CLIQUE and Triangle Finding, form an
important class of computational problems; the former is an NP-complete
problem, while the latter directly gives lower bounds for Matrix
Multiplication. A number of previous efforts have approached these problems
with Quantum Computing methods, such as Amplitude Amplification. In this paper,
we provide new Quantum oracle designs based on the 1-factorization of complete
graphs, all of which have depth $O(n)$ instead of the $O(n^2)$ presented in
previous studies. Also, we discuss the usage of one of these oracles in
bringing the Triangle Finding time complexity down to $O(n^{2.25} poly(log
n))$, compared to the $O(n^{2.38})$ classical record. Finally, we benchmark the
number of required Amplitude Amplification iterations for another presented
oracle, for solving $k$-CLIQUE. | Ali Hadizadeh Moghadam, Payman Kazemikhah, Hossein Aghababa | 2023-08-31T15:59:35Z | http://arxiv.org/abs/2308.16827v1 | **Using 1-Factorization from Graph Theory for Quantum Speedups on Clique Problems**
###### Abstract
The clique problems, including \(k\)-CLIQUE and Triangle Finding, form an important class of computational problems; the former is an NP-complete problem, while the latter directly gives lower bounds for Matrix Multiplication. A number of previous efforts have approached these problems with Quantum Computing methods, such as Amplitude Amplification. In this paper, we provide new Quantum oracle designs based on the 1-factorization of complete graphs, all of which have depth \(O(n)\) instead of the \(O(n^{2})\) presented in previous studies. Also, we discuss the usage of one of these oracles in bringing the Triangle Finding time complexity down to \(O(n^{2.25}\,\mbox{poly}(\log n))\), compared to the \(O(n^{2.38})\) classical record. Finally, we benchmark the number of required Amplitude Amplification iterations for another presented oracle, for solving \(k\)-CLIQUE.
**Keywords** Quantum algorithm, k-CLIQUE, Triangle Finding, Matrix Multiplication, 1-factorization
## 1 Introduction
There is an important class of problems in graph theory, which consider gathering information about specific subgraphs. Two of these problems are 1) \(k\)-CLIQUE, which asks for a \(k\)-node clique (fully-connected subgraph) in a given simple undirected graph, and 2) Triangle Finding (TF), which focuses on the special case of \(k=3\). These problems find considerable applications in different fields, including computational biochemistry and genomics, e.g., matching three-dimensional molecular structures, and protein docking [1]. They also have theoretical importance: \(k\)-CLIQUE is one of Karp's 21 NP-complete problems [2], and TF provides time complexity lower bounds on a multitude of computation problems, such as detecting median graphs [3] and Matrix Multiplication (MM), as explained below.
These problems have been also addressed with the tool of Quantum Computing, which uses Quantum phenomena, such as superposition and entanglement, for faster computation [4]. Quantum Computing revolves around the use of Quantum bits, also known as qubits. Qubits together hold different binary strings in superposition, which allows us to perform calculations on them, all at the same time, with the drawback that the strings in the superposition can only be "measured" (i.e., sampled) randomly. One Quantum Computing scheme used for solving a given computational problem, called Amplitude Amplification (AA), is to design an "oracle" which verifies solutions to the problem, and then, pass all solution candidates as a superposition to the oracle, in an iterative fashion. This way, the probability of measuring a correct solution is maximized. The oracle needs to be Quantum, i.e., be able to receive a superposition of inputs and provide a corresponding superposition of outputs. We refer the reader to [5] for a more thorough discussion of AA.
Previous papers which consider clique problems present algorithms that are based on Quantum oracles, which hold information about the graph under question. They then extract the answer to their considered problem by querying the oracle many times. These papers, including [6] and [7], approach the clique problem with AA, and use oracles which inspire their logic from classical computing designs; for example, the oracle in [7] counts the number of nodes and edges in the input subgraph. A downside of their oracle designs is their depth complexities; for example, [7] report a depth complexity of \(O(m+\log k)\) for their oracles, where \(m\) is the number of edges in the given graph. It is worth noting that \(m\) can be as much as \(O(n^{2})\), where \(n\) is the number of nodes.
On the other hand, Le Gall [8] claimed a searching method for TF with query complexity \(O(n^{1.25}\operatorname{poly}(\log n))\), without providing an oracle. Using previously designed Quantum oracles (with a depth of \(O(n^{2})\)), this algorithm does not yield an efficient time complexity. This is because TF is bounded from above by Matrix Multiplication (MM) [9], whose best upper bound already is \(O(n^{2.37286})\), shown by Alman and Williams [10]. To the best of our knowledge, reducing TF to MM has always been the fastest method for solving the problem, in terms of asymptotic speed (as evidence, in their 2018 paper, Williams and Williams [11] briefly mention that TF "is solvable in \(O(n^{2.38})\) time" and simply refer to Itai and Rodeh's 1977 paper [9] on the reduction of TF to MM). As pointed out in Section 7.1, the 2.38 exponent has been the last significant speedup in MM, and consequently TF, in 33 years.
In this paper, we review the \(1\)-factorization theorem for complete graphs in Section 2, and then introduce two new Quantum oracles with \(O(n)\) depth. The first oracle, which we discuss in Section 3, detects edges and has direct use in Le Gall's algorithm, which could result in a new time complexity as low as \(O(n^{2.25}\operatorname{poly}(\log n))\) for TF. The second one, which we introduce in Section 4, is called "Alpha" and helps detect \(k\)-cliques with a fair amount of error. In Section 5 and 6, we provide another design based on Alpha, called "Gamma", which can be used directly in heuristically solving \(k\)-CLIQUE with AA. Section 7 contains results on the proposed oracles, Section 8 concludes the paper, and the Appendix provides the mathematical details involved.
## 2 A discussion on edge partitioning
In a simple undirected graph with \(n\) nodes and \(m\) edges, a \(1\)_-factor_, also known as _perfect matching_, is a selection of the edges of the graph, such that no two edges share a node, and that all nodes are covered by the edges in the selection. In other words, a \(1\)-factor is a set of \(\frac{n}{2}\) disjoint edges.
A \(1\)_-factorization_ is a partition of a graph into disjoint \(1\)-factors. A _complete graph_ is a graph with edges between every two nodes. A complete graph with an even number of nodes admits a \(1\)-factorization (proof in Appendix). Since every \(n\)-node graph \(G\) can be viewed as a complete graph \(G^{\prime}\) with a number of its nodes and edges removed, a \(1\)-factorization of \(G^{\prime}\), minus the same removed nodes and edges, partitions the edges of \(G\) into at most \(n\) sets, each containing at most \(\left\lfloor\frac{n}{2}\right\rfloor\) disjoint edges.
This paper focuses on a number of oracles whose circuits consist of Quantum gates corresponding to the edges of a given graph. Using the mentioned edge partitioning on the given graph results in a natural way of grouping the gates into circuit layers, such that no two gates in a layer share a qubit. In other words, this trick improves the depth complexity of the discussed circuits.
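As a concrete illustration, the round-robin construction proved in the Appendix can be written in a few lines of Python; the snippet below is our own sketch (function name ours), not code from the paper.

```python
def one_factorization(n):
    """Round-robin 1-factorization of the complete graph K_n (n even): returns
    n-1 layers, each containing n/2 pairwise disjoint edges, covering all edges."""
    assert n % 2 == 0
    layers = []
    for i in range(n - 1):
        layer = [(n - 1, i)]
        for j in range(1, n // 2):
            layer.append(((i - j) % (n - 1), (i + j) % (n - 1)))
        layers.append(layer)
    return layers

# Restricting each layer to the edges actually present in a given graph G yields
# the gate layers used by the oracles discussed below.
for layer in one_factorization(6):
    print(layer)
```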
## 3 An oracle for edge detection, and its usage
An edge-detecting oracle has the following definition. Let \(idx\) be an \(n\)-qubit input register and \(out\) be the output qubit. When \(idx\) holds a \(z\)-basis state with two \(1\)'s on qubits \(a\) and \(b\), it denotes a query about the existence of an edge between nodes \(a\) and \(b\). \(out\) is flipped if the oracle detects an edge, and unchanged otherwise. The oracle is allowed to use a number of ancilla qubits initially set to \(|0\rangle\), but must leave them in state \(|0\rangle\) again at the end.
A simple design for this oracle is to put, for each edge between nodes \(a\) and \(b\), a \(CCX\) gate on \(out\), with controls on \(idx_{a}\) and \(idx_{b}\). This design has depth \(m\), since all of the \(CCX\) gates are dependent on \(out\). An example circuit is shown in Fig. 1.
A simple way to remove this dependency is to introduce a \(\left\lfloor\frac{n}{2}\right\rfloor\)-qubit ancilla register named \(anc\). The edges of \(G\) can be partitioned as discussed in Section 2. For each set in the partition, the circuit contains \(CCX\) gates which are placed on different \(anc\) qubits, with the same controls as before. Then, the circuit employs a multi-controlled \(X\) on \(out\), which is triggered with at least one \(|1\rangle\) on \(anc\) (this can be done with a multi-controlled \(X\) with inverted controls, followed by an \(X\) on \(out\)). The circuit ends with the adjoint of the \(CCX\) subcircuit, so that \(anc\) is reset. The previous example graph is used in Fig. 2 for this new design.
Figure 2: The (optimized) edge-detector circuit for \(n=4\) with edges {0,1}, {0,2}, {0,3}, {1,2}, {1,3}, and {2,3}. The dashed boxes show the mentioned partitioning of the graph. In this case, the partition consists of 3 sets.
Since the design assumes the query to be in the aforementioned format, it guarantees that at most one of the \(CCX\) gates in the first half (corresponding to the edge under question) would flip an _anc_ qubit; thus, _out_ will always answer the query correctly.
In each of the \(n\) sets in the partition, all \(CCX\) gates act on completely different qubits; therefore, each of these sets would have depth 1, summing up to a total \(CCX\) subcircuit depth of \(O(n)\). The multi-controlled \(X\) gate needs at most depth \(O(\mbox{size of }anc\mbox{ register})=O\left(\left\lfloor\frac{n}{2}\right\rfloor\right)=O(n)\), using any scheme, such as the V-chain design. Therefore, the oracle has an overall depth of \(O(n)\).
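A possible Qiskit realisation of this design is sketched below. It is our own reading of the construction (function and register names are ours): `layers` is a 1-factorization-based partition of the graph's edges into disjoint sets (Section 2), and the inverted-control multi-controlled \(X\) is implemented by conjugating `mcx` with \(X\) gates, followed by the final \(X\) on \(out\).

```python
from qiskit import QuantumCircuit, QuantumRegister

def edge_detector(n, layers):
    """Sketch of the O(n)-depth edge-detection oracle (n even assumed).
    `layers` partitions the graph's edges into sets of pairwise disjoint edges."""
    idx = QuantumRegister(n, "idx")
    anc = QuantumRegister(n // 2, "anc")
    out = QuantumRegister(1, "out")
    qc = QuantumCircuit(idx, anc, out)

    def mark(circ):
        # One CCX per edge; edges in the same layer use distinct ancilla qubits,
        # so every layer contributes only depth 1.
        for layer in layers:
            for slot, (a, b) in enumerate(layer):
                circ.ccx(idx[a], idx[b], anc[slot])

    mark(qc)
    # Flip `out` iff at least one ancilla is |1>: inverted-control MCX, then X on out.
    qc.x(anc)
    qc.mcx(list(anc), out[0])
    qc.x(anc)
    qc.x(out[0])
    mark(qc)                    # uncompute the ancillas (the CCX layers are self-inverse)
    return qc

# The graph of Fig. 2: n = 4, complete graph, edges layered into 3 disjoint sets.
layers = [[(0, 1), (2, 3)], [(0, 2), (1, 3)], [(0, 3), (1, 2)]]
print(edge_detector(4, layers))
```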
Le Gall's algorithm [8] has query complexity \(O(n^{1.25}\mbox{poly}(\log n))\), and the mentioned oracle answers each query in depth and time complexity \(O(n)\). Therefore, the total Quantum time complexity of TF using this approach can be reduced down to \(O(n^{2.25}\mbox{poly}(\log n))\), assuming the other parts of the algorithm do not impose other bottlenecks.
## 4 Alpha oracle
Consider a graph with nodes \(\{0,1,2\}\) and unknown edges. Also, consider a Quantum circuit on three qubits, each corresponding to a node, designed with the following algorithm:
**Algorithm 1**
1. For every edge between nodes \(x\) and \(y\): 1.1. Place a CZ gate on qubits \(x\) and \(y\).
If the circuit takes the first three qubits of
\[|\psi\rangle=\frac{|011\rangle|s_{0}\rangle+|101\rangle|s_{1}\rangle+|110 \rangle|s_{2}\rangle+|111\rangle|s_{3}\rangle}{2} \tag{1}\]
as its input, it will flip none of the phases in the case of an empty graph (a graph with no edges), all of the phases in the case of a complete graph (a graph with all edges), and exactly half of the phases in the case of a "malformed" graph (a graph which is neither empty nor complete). In other words, the output state is orthogonal to the input state if the graph is malformed and parallel to the input state (up to a phase) otherwise. Note that the \(|s_{j}\rangle\) states do not interfere with the mentioned property.
In the general case, Algorithm 1 can produce an \(n\)-qubit (provably) self-adjoint "Alpha" circuit, for a given graph with nodes \(\{0,...,n-1\}\) (Fig. 3). Then, whenever three qubits, corresponding to three nodes in an induced subgraph \(H\), have the superposition above, some ancilla qubits have the \(|s_{j}\rangle\) states, and all other qubits are \(|0\rangle\), Alpha would act the same as above.
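A minimal Qiskit sketch of Alpha is given below (our own illustration; the function name is ours). Since CZ gates commute and are self-inverse, the circuit is self-adjoint, and the edge layering of Section 2 schedules it in \(O(n)\) depth.

```python
from qiskit import QuantumCircuit, QuantumRegister

def alpha(n, edges):
    """Sketch of Alpha (Algorithm 1): one CZ gate per edge of the given graph."""
    inp = QuantumRegister(n, "inp")
    qc = QuantumCircuit(inp)
    for a, b in edges:
        qc.cz(inp[a], inp[b])
    return qc

# The instance of Fig. 3: n = 4 with edges {0,1}, {0,2}, {1,3}.
print(alpha(4, [(0, 1), (0, 2), (1, 3)]))
```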
Fig. 3: An instance of Alpha for \(n=4\) with edges {0,1}, {0,2}, and {1,3}.
In the general case when \(H\) has \(k\) nodes, the \(k\) corresponding qubits in the input are in the state
\[\frac{1}{\sqrt{K}}\sum_{\begin{subarray}{c}|x|=k\\ hw(x)\equiv 2\text{ or }3\end{subarray}}|x\rangle|s_{x}\rangle \tag{2}\]
and the other qubits are \(|0\rangle\), where \(hw(x)\) denotes the Hamming weight of \(x\), \(|s_{x}\rangle\) is any desired state, and \(K\) is a normalizing constant. When \(k\stackrel{4}{\equiv}3\), the input state can be prepared as
\[\frac{1}{\sqrt{2^{k-1}}}\Bigg{(}\sum_{hw(x)\equiv 2}|x\rangle\frac{|01\rangle+|10 \rangle}{\sqrt{2}}+\sum_{hw(x)\equiv 3}|x\rangle\frac{|00\rangle+|11\rangle}{ \sqrt{2}}\Bigg{)} \tag{4}\]
This state corresponds to the desired input, described in Section 4. The \(rem\) register does not cause any problem in the calculations, as discussed in Section 4 (\(rem\) corresponds to \(|s_{x}\rangle\)).
The circuit uses \(\Theta(n)\) gates, and has \(\Theta(n)\) depth complexity (this bound cannot be reduced, since all of the \(n\) gates in Step 3 act on \(rem_{1}\)).
## 6 Gamma oracle
Algorithm 3 provides the design for Gamma. It uses the same registers as Input Preparator (Section 5), with \(H\) encoded as a z-basis state on \(idx\) (Fig. 5).
**Algorithm 3**
1. Place an input preparator.
2. Place Alpha on \(inp\).
3. Place the adjoint of the input preparator.
4. Place a multi-controlled \(Z\) on \(inp\), with inverted controls.
5. Repeat Steps 1 through 3.
Fig. 4: The input preparator circuit, for \(n=3\). Each step is separated with dashed lines.
The output of Step 2 is parallel to the state after Step 1 for complete or empty \(H\), and almost orthogonal to the state in other cases. Since Input Preparator is a unitary which transforms \(|inp,rem\rangle=|0\rangle^{\otimes(n+2)}\) into the input state, its adjoint would preserve the orthogonality, and output a state parallel or orthogonal to \(|0\rangle^{\otimes(n+2)}\), respectively. Step 4 applies a \(-1\) phase to the system when \(inp\) is \(|0\rangle^{\otimes(n+2)}\) after Step 3, which corresponds to a complete or empty \(H\). Then, Step 5 attempts to revert the ancilla qubits back to \(|0\rangle\) (since Alpha is self-adjoint, Steps 1 to 3 also form a self-adjoint circuit). This reversal is not guaranteed to be exact, due to the "almost orthogonal" property of Alpha (discussed in Section 4), which means that in the case of a malformed \(H\), \(|0\rangle^{\otimes n}\) would be present on \(inp\) with a possibly-nonzero amplitude after Step 3. However, as demonstrated in Section 7, this inexactness still leads to a fairly well heuristic approach.
Gamma, in the ideal case, puts a \(-1\) phase on the system when \(H\) is complete _or empty_. In order to exclude empty subgraphs, one can simply add \(q\geq 1\) nodes to the graph, so that they are connected to every other node. Another issue is the assumption that \(k\stackrel{{\mbox{\scriptsize$4$}}}{{\equiv}}3\), which can be solved by setting \(q\) such that \(k+q\stackrel{{\mbox{\scriptsize$4$}}}{{\equiv}}3\). Then, whenever \(H\) is queried, the \(q\) nodes should also be included in the query to Gamma.
Gamma assumes that its input \(H\) has \(k+q\) nodes. This is possible by setting the search space to the \(n\)-qubit Dicke state with Hamming weight \(k\), plus \(q\) qubits set to \(1\). One way to create the search space superposition, then, is to set \(n_{qubits}=n+q\) (where \(n_{qubits}\) is the size of the \(idx\) and \(inp\) Quantum registers, and \(n\) is the number of nodes of the graph), and apply the unitary subcircuit proposed by Bartschi and Eidenbenz [13] on \(idx_{0}\) to \(idx_{n-1}\), and perform \(X\) gates on \(idx_{n}\) to \(idx_{n_{qubits}-1}\). These circuits together build up the circuit for AA, which outputs on \(idx\) (Fig. 6).
Figure 5: Gamma oracle. IP denotes the Input Preparator circuit. \(S_{0}\) denotes the gate in Step 4.
Figure 6: The circuit for Amplitude Amplification, where each shaded box accounts for one iteration. \(BE\) denotes Bartschi and Eidenbenz’s unitary for creating an \(n\)-qubit Dicke state with \(k\) ones, and \(S_{0}\) is a multi-controlled \(Z\) gate with inverted controls.
Gamma has space complexity \(2n_{qubits}+2=\Theta(n)\). It uses subcircuits with \(O(n)\) or \(\Theta(n)\) depth complexity (discussed in Sections 4 and 5). This includes the multi-controlled \(Z\) gate in Step 4 of Algorithm 3, which has a depth up to \(O(n)\), using designs such as V-chain. Considering Bartschi and Eidenbenz's unitary, which also has depth \(O(n)\), the total depth complexity of each AA iteration sums up to \(\Theta(n)\).
## 7 Results
### Complexity analysis
This paper provides a new time complexity for TF, as stated in Section 3. Table 1 shows a number of earlier results on TF, arising from their proposed MM complexities, as discussed, and compares them with our result on TF. The table uses the notation \(O\big{(}n^{\alpha+o(1)}\big{)}\), whenever the mentioned author(s) claimed the _matrix multiplication exponent_\(\omega\) to be \(\leq\alpha\) instead of directly claiming an \(O(n^{\alpha})\) complexity. The table is summarized in Fig. 7.
Also, in the best-case scenario, the Gamma oracle presented in Section 6 can bring down the time complexity of \(k\)-CLIQUE by a factor of \(O(n)\), assuming the errors from the "almost orthogonal" property (Section 4) would be negligible in that scenario (see Section 7.2). Table 2 compares this new time complexity with the claims of some of the previous works.
| Year | Reference | Time complexity (up to 5 fractional digits) | Problem considered |
| --- | --- | --- | --- |
| 1969 | [14] | \(O(n^{2.81})\) | Matrix Multiplication (classical algorithm), indirectly providing Triangle Finding |
| 1978 | [15] | \(O(n^{2.795})\) | |
| 1979 | [16] | \(O(n^{2.7799})\) | |
| 1981 | [17] | \(O(n^{2.522})\) | |
| 1981 | [18] | \(O\big(n^{2.49537+o(1)}\big)\) | |
| 1986 | [19] | \(O\big(n^{2.4785+o(1)}\big)\) | |
| 1990 | [20] | \(O\big(n^{2.37548+o(1)}\big)\) | |
| 2010 | [21] | \(O\big(n^{2.37369+o(1)}\big)\) | |
| 2012 | [22] | \(O\big(n^{2.37293+o(1)}\big)\) | |
| 2014 | [23] | \(O\big(n^{2.37287+o(1)}\big)\) | |
| 2021 | [10] | \(O\big(n^{2.37286+o(1)}\big)\) | |
| 2023 | Our paper | \(O\big(n^{2.25+o(1)}\big)\) (under assumptions mentioned in Section 3) | Triangle Finding |

Table 1: A comparison of results in Matrix Multiplication and Triangle Finding, sorted by year.
### Benchmarking
This section examines whether Gamma functions properly, and also presents statistics on the number of AA iterations needed when using Gamma instead of the "Checking-based oracle" from Metwalli et al. [7].
In order to do this, we use the "macaque-rhesus-cerebral-cortex-1" and "mouse-retina-1" graphs uploaded to Network Repository [24][25]. The website claims that these two graphs have a density of 0.486 and 0.998, and their maximum clique size is at least 21 and 42, respectively. 100 random \(n\)-node induced subgraphs are extracted from these two graphs, and the ones which have at least one \(k\)-clique (checked classically) are fed into the two compared Quantum algorithms. The Quantum simulations are performed using Qiskit v0.43.0 [26].
| Year | Reference | Number of Amplitude Amplification iterations | Depth of each iteration | Space (qubit) complexity | Gate complexity | Problem considered | Solution type |
| --- | --- | --- | --- | --- | --- | --- | --- |
| 2018 | [6] | \(O\big(\sqrt{2^{n}}\big)\) | \(O(n^{2})\) | \(O(n^{2})\) | \(O\big(\sqrt{2^{n}}\times n^{2}\big)\) | MAX-CLIQUE, which naturally gives upper bounds for \(k\)-CLIQUE | Exact oracle |
| 2020 | [7] | \(O\left(\sqrt{\binom{n}{k}}\right)\) | \(O(n^{2})\) | \(O(n)\) | \(O\left(\sqrt{\binom{n}{k}}\times n^{2}\right)\) | \(k\)-CLIQUE | Exact oracle |
| 2023 | Our paper | Heuristically \(O\left(\sqrt{\binom{n}{k}}\right)\) | \(O(n)\) | \(O(n)\) | Heuristically \(O\left(\sqrt{\binom{n}{k}}\times n^{2}\right)\) | \(k\)-CLIQUE | Heuristic oracle |

Table 2: A comparison of results in \(k\)-CLIQUE, sorted by year.
Figure 7: The best exponents for Matrix Multiplication (MM) and Triangle Finding (TF) in the span of 1960 to 2023, based on Table 1. The dashed red line shows the complexity reduction which becomes possible with the Edge Detection oracle.
We consider each of the Quantum algorithms to "succeed" when:
1. One of their 10 most probable output subgraphs (out of 1000 shots) is a \(k\)-clique, and
2. The \(k\)-clique is found in at most \(\left\lfloor 2\times\frac{\pi}{4}\sqrt{\binom{n}{k}}\right\rfloor\) AA iterations.
We chose this number as an upper bound, since at most \(\left\lfloor\frac{\pi}{4}\sqrt{\binom{n}{k}}\right\rfloor\) iterations are needed to maximize the probability of measuring a \(k\)-clique [5]. If the given graph does not contain any \(k\)-cliques, then none of the algorithms would be able to find a \(k\)-clique; this means, the only possible errors are false negatives (reporting that no \(k\)-cliques exist when they actually do). This justifies not trying the clique-less graphs.
Table 3 contains the examined values of \(n\) and \(k\), the number of needed qubits for our algorithm, the number of extracted graphs for each of the \(n\) and \(k\) values, and the number of times Gamma succeeded to find a \(k\)-clique under the given constraints. For \(n=8\), only the mouse graph is used for subgraph generation, since it is expected to provide more graphs with at least one \(k\)-clique, due to its higher density than the Macaque Rhesus graph. Fig. 8 shows the average ratio of AA iterations between the two algorithms. The average is calculated as the geometric mean of the mentioned ratio for each input graph, since it is more appropriate, compared to the arithmetic mean [27].
Metwalli et al.'s oracle found a \(k\)-clique in all of the experiments, since it is an exact oracle, and not a heuristic one.
## 8 Conclusion
One of the outcomes of this paper is the new complexity which we provided for Triangle Finding. As mentioned in the Introduction, this could be the first time when Triangle Finding is done faster than Matrix Multiplication. To be exact, the exponent of Triangle Finding started to fall from 3.00 in 1969 and has now arrived at 2.38. Our work can decrease the exponent by 0.13, which is equivalent to a 21% decrease in the mentioned scale. This decrease, considering both classical and Quantum computation domains, could either mean that there is a theoretical gap between the best Triangle Finding complexity and the lowest complexity for Matrix Multiplication, or otherwise indicate that Matrix Multiplication can still be improved, both having considerable implications.
We also presented benchmarks for applying the heuristic Gamma oracle, as opposed to exact oracles, in an Amplitude Amplification scheme. A definite comparison between the overall complexity of our \(k\)-CLIQUE algorithm and exact oracles would become possible with precise mathematical analysis, which could be a subject of future studies. Further improvements are also possible: the errors introduced by Alpha might not
Figure 8: The geometric mean of the ratio of needed Amplitude Amplification iterations for Gamma against Metwalli et al.’s oracle. The dashed horizontal line shows 1.000.
be inherent, and future methods may be able to compensate for them, such as by ignoring the \(q\) extra nodes during measurement, or by approaches inspired by error-correction methods.
## 9 Author contributions
The design and experimentation of the proposed algorithms were done by Ali. Ali and Payman were responsible for documenting and preparing the paper at hand. Hossein supervised the project and, together with Payman, provided key insights and advice for conducting this research.
## 10 Data availability
The data that support the findings of this study are available from the corresponding author upon reasonable request.
## 11 Competing interests
The authors declare no competing interests.
## Appendix
**Theorem** A complete graph with nodes \(\{0,...,n-1\}\), where \(n\stackrel{{ 2}}{{\equiv}}0\), admits a \(1\)-factorization.
**Proof** We prove the theorem by providing such a \(1\)-factorization, consisting of \(n-1\)\(1\)-factors \(S_{0}\) to \(S_{n-2}\). \(S_{i}\) is constructed from the edges \(\{n-1,i\}\) and \(\{i-j\ mod\ n-1,i+j\ mod\ n-1\}\) for all \(j\in\left\{1,...,\frac{n}{2}-1\right\}\).
In order to check whether \(S_{0}\) to \(S_{n-2}\) are disjoint, it suffices to see that whenever an edge from \(S_{i}\) coincides with an edge from \(S_{h}\), \(S_{i}\) and \(S_{h}\) must be equal.
\[\{i-j\ mod\ n-1,i+j\ mod\ n-1\}=\{h-l\ mod\ n-1,h+l\ mod\ n-1\}\] \[\Rightarrow(i-j)+(i+j)\stackrel{{ n-1}}{{\equiv}}(h -l)+(h+l)\Rightarrow 2i\stackrel{{ n-1}}{{\equiv}}2h\Rightarrow i \stackrel{{ n-1}}{{\equiv}}h\Rightarrow i=h\] \[\Rightarrow S_{i}=S_{h}\] (a1)
Note that we were allowed to cancel out the factor \(2\), because \(2\) and \(n-1\) (which is odd) are coprime.
Since all \(S_{i}\) are disjoint and have \(\frac{n}{2}\) edges each, they cover all of the \((n-1)\times\frac{n}{2}\) edges, and therefore build up a \(1\)-factorization.
|
2309.12968 | PassViz: A Visualisation System for Analysing Leaked Passwords | Passwords remain the most widely used form of user authentication, despite
advancements in other methods. However, their limitations, such as
susceptibility to attacks, especially weak passwords defined by human users,
are well-documented. The existence of weak human-defined passwords has led to
repeated password leaks from websites, many of which are of large scale. While
such password leaks are unfortunate security incidents, they provide security
researchers and practitioners with good opportunities to learn valuable
insights from such leaked passwords, in order to identify ways to improve
password policies and other security controls on passwords. Researchers have
proposed different data visualisation techniques to help analyse leaked
passwords. However, many approaches rely solely on frequency analysis, with
limited exploration of distance-based graphs. This paper reports PassViz, a
novel method that combines the edit distance with the t-SNE (t-distributed
stochastic neighbour embedding) dimensionality reduction algorithm for
visualising and analysing leaked passwords in a 2-D space. We implemented
PassViz as an easy-to-use command-line tool for visualising large-scale
password databases, and also as a graphical user interface (GUI) to support
interactive visual analytics of small password databases. Using the
"000webhost" leaked database as an example, we show how PassViz can be used to
visually analyse different aspects of leaked passwords and to facilitate the
discovery of previously unknown password patterns. Overall, our approach
empowers researchers and practitioners to gain valuable insights and improve
password security through effective data visualisation and analysis. | Sam Parker, Haiyue Yuan, Shujun Li | 2023-09-22T16:06:26Z | http://arxiv.org/abs/2309.12968v3 | # PassViz: A Visualisation System for Analysing Leaked Passwords
###### Abstract
Passwords remain the most widely used form of user authentication, despite advancements in other methods. However, their limitations, such as susceptibility to attacks, especially weak passwords defined by human users, are well-documented. The existence of weak human-defined passwords has led to repeated password leaks from websites, many of which are of large scale. While such password leaks are unfortunate security incidents, they provide security researchers and practitioners with good opportunities to learn valuable insights from such leaked passwords, in order to identify ways to improve password policies and other security controls on passwords. Researchers have proposed different data visualisation techniques to help analyse leaked passwords. However, many approaches rely solely on frequency analysis, with limited exploration of distance-based graphs. This paper reports PassViz, a novel method that combines the edit distance with the t-SNE (t-distributed stochastic neighbour embedding) dimensionality reduction algorithm for visualising and analysing leaked passwords in a 2-D space. We implemented PassViz as an easy-to-use command-line tool for visualising large-scale password databases, and also as a graphical user interface (GUI) to support interactive visual analytics of small password databases. Using the "000webhost" leaked database as an example, we show how PassViz can be used to visually analyse different aspects of leaked passwords and to facilitate the discovery of previously unknown password patterns. Overall, our approach empowers researchers and practitioners to gain valuable insights and improve password security through effective data visualisation and analysis.
Human-centered computingVisualization--Visualization techniques--Treemaps; Human-centered computingVisualization--Visualization design and evaluation methods
## 1 Introduction
Passwords are still the mostly used form of user authentication, especially for websites. Despite ongoing advancements in other forms of user authentication mechanisms, many researchers suggested that the use of passwords would continue to prevail in the foreseeable future [3, 7]. More recently, passwords are often used as part of a multi-factor authentication (MFA) system, where one or more other factors such as "what you have" (token-based) and "who you are" (biometric-based) authentication methods are used to provide enhanced overall security. Despite its wide use, the shortcomings of passwords such as weak passwords defined by human users are well-studied in the research literature [7]. One source of the weak password problem is the conflict of security and usability of passwords: stronger passwords tend to be harder to remember, and easier-to-remember passwords tend to be easier to crack [18, 16]. Human users tend to have different insecure behaviours around password creation, e.g., the mismatch between human users' misperception of a password's strength and its actual strength can lead to creation of weak passwords [18, 1], and many users choose to reuse the same password across multiple accounts [12]. Such weak passwords have led to repeated leakage of passwords from many websites, including some very large-scale incidents. The unfortunate large-scale password leaks give researchers and practitioners opportunities to study such leaked passwords to gain more knowledge and insights about how human users create passwords, in order to find better ways to refine password security controls, e.g., better password policies, password checkers and password management tools.
Most earlier password analysis work was based on simple statistics [13, 11], but data visualisation has been proposed by some researchers to analyse leaked passwords [4, 20, 24], utilising methods such as heat-maps, bar charts, and word clouds. To the best of our knowledge, most past studies on password visualisation are based on frequencies of passwords or segments of passwords, and only a limited number of studies [6, 26] investigated graph-based methods to explore structural relationships between different passwords. Different from existing solutions, this paper presents PassViz, a new graph-based data visualisation method that leverages edit distances (more precisely Levenshtein distances) and the t-distributed stochastic neighbour embedding (t-SNE) dimensionality reduction algorithm for visualising and analysing leaked passwords in a 2-D
Figure 1: Examples of visualisation of different clusters for 000webhost leaked password database
space. We implemented PassViz as an easy-to-use command-line tool for visualising large-scale password databases, and also an interactive graphical user interface (GUI) to support interactive visual analytics of small password databases. Using the "000webhost" leaked database as an example, we show how PassViz can be used to analyse different aspects of leaked passwords in a visually meaningful manner and also facilitate the discovery of previously unknown password patterns.
The rest of the paper is organised as follows. Section 2 overviews some related work, followed by a detailed description of the proposed methodology given in Section 3. Section 4 demonstrates different ways of using PassViz to conduct a visual analysis of leaked passwords in the database "000webhost", with a discussion on the limitations of PassViz. The last section concludes this paper with future research directions.
## 2 Related Work
The understanding of password structures and patterns can provide useful insights into the password creation processes and help develop better password tools such as password strength meters [8]. An early attempt by Morris and Thompson [11] back in the 1970s analysed 3,289 passwords and revealed some basic statistics about passwords structures, where 492 passwords can be identified in open access information sources such as dictionaries and name lists, 86% of passwords can be categorised as one of the 6 classes (e.g., single ASCI character, four alphanumerics, and all lower cases). Similarly, an early work conducted by Riddle et al. [13] investigated 6,226 passwords for a university time-sharing system, and they discovered that user-chosen passwords are commonly based on personal information such as birthday, names or job/project related.
Jakobsson and Dhiman [8] studied the relationship between the percentage of passwords' components such as words, numbers, and other special characters to establish the differences between strong and weak passwords. Differently, Taiabul Haque et al. [17] proposed a hierarchy of password importance that assume that users would mentally classify passwords into different levels based on the perceived importance of different sites (i.e., news portals and banking websites). By observing how users construct passwords following such a hierarchy, they uncovered that unsafe lower-level passwords can be used to crack higher-level passwords due to the behaviour of password reuse with/without modifications. In a study of empirical analysis of large-scale Chinese web passwords, Wang et al. [23] discovered a number of interesting password structures and semantic patterns, which are somewhat different from findings observed in English passwords. They explored 22 types of semantic information such as English names, Pinyin names, date in the format of YYYY, and date in the format of YYMMDD, which contribute to password-cracking strategies.
Leaks of real-world passwords from many websites (e.g., Yahoo, RockYou, and 12306) have become a common phenomenon these days, and they have attracted many researchers' attention to study such leaked passwords in order to gain useful insights about how human users create passwords. One group of methods for facilitating such analyses of leaked passwords is to utilise data visualisation. For instance, Bonneau et al. [2] collected a subset of leaked passwords from RockYou, which contain only 4-digit sequences, and another password database containing only 4-digit PINs to unlock iPhones to study the composition of 4-digit PINs. By visualising the distribution of such PINs using a heat map, they revealed that it is very likely human users choose 4-digit passwords in a format of MMDD (i.e., month-day). They concluded that birthdays have been heavily used as 4-digit passwords.
In another work, Wang et al. [22] conducted a study to compare 4- and 6-digit PINs for English and Chinese users, where heat maps were adopted to visualise date-related features in such PINs. To further explore how dates are used in the password creation process, Veras et al. [21] developed an interactive visualisation tool that combines different visualisation methods including tile maps, radial plots and word clouds. By using the visualisation tool with the RockYou database of over 32 million passwords, they discussed different patterns in passwords including dates: e.g., around 5% of passwords contain pure dates, and many date-related patterns such as the first days of the month and holidays were observed. In another follow-up work, Veras et al. [20] conducted qualitative analyses of leaked password databases using a semantic grammar to generate graphical models for visualising high-level dependencies between token classes. Their work captures both syntactic and semantic information, allowing for the identification of regular patterns in passwords that resemble natural language.
Moreover, researchers have been looking at more subtle password patterns that are less obvious for visual observations. Yu and Liao [24] developed a light-weight and web-based visualisation tool combining bar charts, heat maps, tables, and word clouds using the D3 data visualisation library [4] to analyse leaked password databases, which led to the identification of various password patterns (e.g., short and long repeat patterns are common in user passwords, shorter repeating sub-strings are used to form longer repeating sub-strings, and reverse order repetitions). In another follow-up work, Yu and Liao [25] developed hierarchical segmentation and optimisation algorithms to visualise and analyse the prefixes and postfixes of human-created passwords.
Apart from date-based patterns in human-created passwords and PINs, keyboard-related patterns have also been investigated by some researchers. Schweitzer et al. [14] discovered that drawing lines connecting the key sequences on a graphical keyboard is not good enough to recognise patterns. Alternatively, they developed a new set of rules using (weighted) arcs and/or loops to help visually recognise keyboard patterns. An analysis based on a large number of human-created passwords revealed that the most common keyboard patterns contained 2-4 continuous keys. Based on this result, Chou et al. [5] used adjacent and parallel keyboard patterns to generate password databases, and subsequently applied them to crack real-world passwords.
To the best of our knowledge, there is limited work that is similar to our work presented in this paper. Shin and Woo [15] attempted to understand password patterns and structures through a data-driven analysis of passwords from four different leaked password databases. They adopted the tensor decomposition method to study password features and identified two dominant features that make a password stronger through similarity distance analysis. Zheng et al. [26] proposed a modification-based approach to explore the spatial structure of passwords in the form of entity-relationship graphs. Similar to our work, they also used the Levenshtein distance for comparing passwords. However, their approach differed in that they utilised the Levenshtein distance to define the edges between vertices in a graph model, while we utilise Levenshtein distances between password pairs to generate distance matrices with subsequent dimensionality reduction, mapping complicated spatial password relationships to a 2-D space for visualisation purposes. Guo et al. [6] also used Levenshtein distances between password pairs to construct a graph showing relationships between passwords, but they used a simple threshold-based approach to define binary connections between passwords, while our work uses a dimensionality reduction method to preserve distances between passwords in a 2-D space.
## 3 Methodology
The main objective of this work is to develop a tool that facilitates the exploration and analysis of large-scale password databases for researchers and practitioners by leveraging effective data visualisation techniques. To achieve this, we aim to
1. construct high-dimensional representations for passwords in a given database, where passwords with similar structures are positioned close together,
2. embed the high-dimensional representations of all passwords in a 2D space, and
3. develop an easy-to-use toolkit for password visualisation and analysis.
### Quantify similarity between a pair of passwords
The edit distance is a method used to quantify the dissimilarity between two textual strings (e.g., two passwords) by calculating the minimum number of operations needed to transform one string into the other. There are different types of edit distance that involve different sets of editing operations. For instance, the Levenshtein distance (hereafter LD) [10] allows three operations: removal, insertion, and substitution of a character in the input strings. The Hamming distance is only defined for passwords of the same length; in other words, it allows neither insertions nor removals. The Jaro-Winkler distance is based on the observation that a common mistake when people type is the transposition of two adjacent characters in a string. It favours strings whose first few characters match, due to the prefix scale factor in its calculation (e.g., 'password1' and 'password2' have a similarity of 95.6% whereas '1password' and '2password' have a similarity of 92.6%), but passwords come in many formats and do not necessarily have matching prefixes. Moreover, Jaccard similarity and cosine similarity do not account for the order of characters, which can be critical in comparing passwords [2]. Cosine similarity also requires a transformation of the strings into a suitable numerical vector representation, which can complicate the process.
Comparing all the different types of edit distances, we selected LD to quantify the similarity between pairs of passwords. The LD between two passwords is formally defined as the minimum number of single-character edits (insertions, deletions, and substitutions) required to change one password into the other. Mathematically, the LD between a pair of passwords \(a\) and \(b\) is given by \(\text{lev}_{a,b}(|a|,|b|)\):
\[\text{lev}_{a,b}(i,j)=\min\left\{\begin{array}{l}\text{lev}_{a,b}(i-1,j)+1 \\ \text{lev}_{a,b}(i,j-1)+1\\ \text{lev}_{a,b}(i-1,j-1)+f\left(a_{i},b_{j}\right)\end{array}\right., \tag{1}\]
where \(|a|\) and \(|b|\) represent the lengths of passwords \(a\) and \(b\), respectively, and \(f(a_{i},b_{j})\) is an indicator function that equals 0 when \(a_{i}=b_{j}\) and 1 otherwise, with base cases \(\text{lev}_{a,b}(i,0)=i\) and \(\text{lev}_{a,b}(0,j)=j\). The calculation of LD involves a dynamic programming algorithm, whose complexity is \(\text{O}(|a|\times|b|)\).
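As a concrete illustration, the recurrence in Equation (1) translates directly into a dynamic-programming routine. The sketch below is a plain-Python version for exposition only; the actual tool uses the faster polyleven implementation described in Section 3.4.

```python
def levenshtein(a: str, b: str) -> int:
    """Levenshtein distance via the recurrence of Equation (1)."""
    prev = list(range(len(b) + 1))            # base case: lev(0, j) = j
    for i in range(1, len(a) + 1):
        curr = [i] + [0] * len(b)             # base case: lev(i, 0) = i
        for j in range(1, len(b) + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1   # f(a_i, b_j)
            curr[j] = min(prev[j] + 1,        # deletion
                          curr[j - 1] + 1,    # insertion
                          prev[j - 1] + cost) # substitution
        prev = curr
    return prev[len(b)]

print(levenshtein("romans56", "blahblah"))    # 8, matching Table 1
```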
### Calculating a distance matrix from all passwords
To facilitate the construction of high-dimensional representations for passwords in a database with respect to other passwords, LDs between all pairs of passwords can be used to create a distance matrix, where each cell represents the similarity between two passwords. In this case, the cell in the \(i\)-th row and the \(j\)-th column holds the LD between the \(i\)-th and \(j\)-th passwords. Table 2 shows an example of a distance matrix of 10 randomly selected passwords from a leaked password database1.
Footnote 1: [https://github.com/danielmiesler/SecLists/blob/master/Passwords/xato-net-10-million-passwords-10006.txt](https://github.com/danielmiesler/SecLists/blob/master/Passwords/xato-net-10-million-passwords-10006.txt)
However, for large leaked password databases, there are too many passwords, so creating a complete distance matrix can incur high time and space complexity. Imagine a best-case scenario for memory usage in which each password is a single character long, so that each matrix entry amounts to one byte. Even then, a data set with 700,000 passwords would require \(700,000\times 700,000\times 8=3.92\) trillion bits, or 490 gigabytes of memory. Given the high complexity, we resorted to an anchor-based method to make the data visualisation tool more lightweight. The method we adopted involves selecting a sufficiently small number of representative anchor passwords from the entire database and constructing a distance matrix of all passwords against the anchor passwords only (rather than all). In this way, for a database of size \(M\), we can extract a set of \(N\ll M\) anchor passwords and generate an \(M\times N\) distance matrix, where each row is an \(N\)-d vector indicating how close (LD-wise) each password is to each of the \(N\) anchor passwords. This reduced matrix can be easily accommodated in memory and also enables faster computation in subsequent steps. Crucially, this approach still maintains the variance of the data, providing us with a reliable sample for further analysis.
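A minimal sketch of this anchor-based construction is shown below. It assumes the passwords are already loaded into a list; the anchor count, variable names, and the use of an 8-bit matrix (one byte per entry, which presumes passwords shorter than 256 characters) are illustrative choices rather than the tool's exact settings.

```python
import random
import numpy as np
from polyleven import levenshtein  # fast LD implementation used by PassViz

def anchor_distance_matrix(passwords, n_anchors=2000, seed=42):
    """Represent each password by its LDs to N anchor passwords,
    yielding an (M x N) matrix instead of the full (M x M) one."""
    rng = random.Random(seed)
    anchors = rng.sample(passwords, n_anchors)
    matrix = np.empty((len(passwords), n_anchors), dtype=np.uint8)
    for i, pw in enumerate(passwords):
        for j, anchor in enumerate(anchors):
            matrix[i, j] = levenshtein(pw, anchor)
    return matrix, anchors
```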
### Dimensionality reduction
In the second step of this process, we used t-SNE [19] to reduce the dimensionality of each password's representation in the \(M\times N\) distance matrix from the previous step, from \(N>2\) dimensions down to just two. The high-dimensional distance matrix is passed as input to the t-SNE method, and the output is a matrix in which each row holds the 2-D coordinates of a password. t-SNE is a machine learning algorithm adept at visualising high-dimensional data: one of its key advantages is that it preserves the local structure of the data, i.e., the relationships and clusters present in the original high-dimensional representation, which makes it particularly suitable for our goal of producing a two-dimensional representation.
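A sketch of this step with the openTSNE package (the implementation we adopted, see Section 3.4) is given below; the perplexity and other parameter values are illustrative rather than the exact settings used.

```python
from openTSNE import TSNE

def embed_2d(distance_matrix, seed=42):
    """Map each password's N-dimensional anchor-distance vector
    to 2-D coordinates suitable for plotting."""
    tsne = TSNE(n_components=2, perplexity=30, random_state=seed)
    return tsne.fit(distance_matrix)   # behaves like an (M x 2) array
```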
### Implementation
#### 3.4.1 Python-based command-line tool for visualising large password datasets
We developed a Python command-line tool to process and visualise large password databases as discussed previously in this section2. We chose Python due to its extensive range of data and machine learning libraries. The polyleven library package3 was used for calculating LDs. For the t-SNE algorithm, we used the implementation
\begin{table}
\begin{tabular}{c c c} \hline \hline
**Password 1** & **Password 2** & **LD** \\ \hline romans56 & blahblah & 8 \\ bahmut2ritter & Bonito12 & 13 \\ rahasia23 & abhilash298471 & 11 \\ \hline \hline \end{tabular}
\end{table}
Table 1: Examples of LDs between three example pairs of passwords
\begin{table}
\begin{tabular}{l c c c c c c c c c c} \hline \hline
 & anfield & cutlass & denire & GEORGE & 21081987 & W2P030WP & viggfhjkm & hallo123 & nathale & November \\ \hline
anfield & 0 & 7 & 6 & 7 & 8 & 8 & 8 & 7 & 7 & 7 \\
cutlass & 7 & 0 & 7 & 7 & 8 & 8 & 9 & 7 & 6 & 8 \\
denire & 6 & 7 & 0 & 6 & 8 & 8 & 9 & 8 & 8 & 7 \\
GEORGE & 7 & 7 & 6 & 0 & 8 & 8 & 9 & 8 & 8 & 8 \\
21081987 & 8 & 8 & 8 & 8 & 0 & 8 & 9 & 8 & 8 & 8 \\
W2P030WP & 8 & 8 & 8 & 8 & 8 & 0 & 9 & 8 & 8 & 8 \\
viggfhjkm & 8 & 9 & 9 & 9 & 9 & 9 & 0 & 9 & 9 & 9 \\
hallo123 & 7 & 7 & 8 & 8 & 8 & 8 & 9 & 0 & 7 & 8 \\
nathale & 7 & 6 & 8 & 8 & 8 & 8 & 9 & 7 & 0 & 7 \\
November & 7 & 8 & 7 & 8 & 8 & 8 & 9 & 8 & 7 & 0 \\ \hline \hline \end{tabular}
\end{table}
Table 2: Examples of a distance matrix of a database with 10 passwords
in the openTSNE package4. The visualisation implementation was done with Matplotlib5 in Python, offering researchers and practitioners various options to interact with the password database to facilitate different follow-up analyses.
Footnote 4: [https://github.com/pavlin-policar/openTSNE](https://github.com/pavlin-policar/openTSNE)
Footnote 5: [https://matplotlib.org/](https://matplotlib.org/)
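For completeness, the following is a minimal sketch (not the tool's actual plotting code) of how the 2-D coordinates can be rendered with Matplotlib; the `embedding` and `passwords` variables are assumed to come from the earlier steps.

```python
import matplotlib.pyplot as plt

def plot_embedding(embedding, color_by=None):
    """Scatter-plot the 2-D password embedding; 'color_by' can be any
    per-password value (e.g. password length) used to colour the points."""
    colours = color_by if color_by is not None else "steelblue"
    plt.figure(figsize=(8, 8))
    plt.scatter(embedding[:, 0], embedding[:, 1], s=1, c=colours, cmap="viridis")
    plt.axis("off")
    plt.show()

# e.g. colour points by password length, as in Section 4.1:
# plot_embedding(embedding, color_by=[len(p) for p in passwords])
```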
#### 3.4.2 Interactive application
In addition, we also developed an interactive web-based application that allows users to explore visualisation of smaller password databases (which can be a smaller subset of a larger password database) in an interactive manner6. Here, we offer a glimpse into the capabilities that this interactive application can offer using a smaller password database as an example.
Footnote 6: [https://github.com/samcparker/passviz-gui](https://github.com/samcparker/passviz-gui)
**Extraction**: Figure 2 shows the extraction functionality of the application, where a set of passwords can be selected and converted into a new graph for closer inspection. It is possible to take this extracted group of passwords and generate a visualisation of them.
**Searching**: Using regular expressions, a user is able to hide passwords that do not match the regular expression provided. In Figure 3, the regular expression ^[0-9]*$ is used to show only passwords that contain only numbers.
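Under the hood this amounts to a standard regular-expression match; a small sketch with an illustrative helper:

```python
import re

def filter_passwords(passwords, pattern=r"^[0-9]*$"):
    """Keep only the passwords matching the given regular expression;
    the default pattern keeps purely numeric passwords."""
    regex = re.compile(pattern)
    return [pw for pw in passwords if regex.match(pw)]

print(filter_passwords(["hallo123", "21081987", "anfield"]))  # ['21081987']
```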
**Clustering algorithms**: This application has support for performing \(k\)-means, OPTICS and DBSCAN clustering methods to get a better understanding of where clusters are formed and to allow for easier visualisation of patterns that may not have emerged before. Figure 4 shows the graph after having the OPTICS clustering method performed on it. By performing this clustering algorithm, the application highlights the centre-most passwords within the clusters. This reveals 'andrea' to be the centre of the cluster containing passwords such as 'andres', 'andrew' and 'andrea'.
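The same clustering step can be reproduced offline; the sketch below uses scikit-learn's OPTICS implementation as a stand-in for the application's built-in clustering, with an illustrative parameter choice.

```python
import numpy as np
from sklearn.cluster import OPTICS

def cluster_embedding(embedding, min_samples=10):
    """Group nearby points of the 2-D embedding into clusters;
    points labelled -1 are treated as noise."""
    return OPTICS(min_samples=min_samples).fit_predict(np.asarray(embedding))
```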
Due to the time and space complexity of processing large password databases from a web browser, this interactive application will be very slow or even unable to load and process very large password databases. Therefore, we recommend using this interactive application as a complementary tool alongside the command-line tool, which is better positioned to process and visualise large-scale password databases. This combination enables us to delve into interesting subsets of a large password database to study more hidden patterns, therefore enhancing insights and findings learned from the results from the large database. The interactive application can also be used to test some hypotheses with a small subset of a large password database, before a more time-consuming process is run using the command-line tool with the full password database. Examples of various password analyses are presented in Section 4 using the leaked database '000webhost'. Nevertheless, one major direction of our future work is to investigate how the time and space complexity of this interactive application can be improved to handle larger password databases, e.g., leveraging parallel processing using multiple cloud servers and GPUs on a single machine.
## 4 Experimental Results
In this section, we present our work of applying PassViz to analyse passwords in the leaked database '000webhost' to showcase its capabilities and evaluate its effectiveness. 000webhost comprises 15,251,074 clear text passwords, including 720,302 unique passwords. This leaked password database was made public in November 2015, following a security breach of the large web hosting service 000webhost.com. According to a study [20], the origin of the user accounts in 000webhost is reported to be diverse. The accounts in this database are distributed across a wide range of countries, where the largest one (United States) only accounts for 8% of the total population. In addition, the distribution indicates that English passwords are not dominant in the database [20]. By using a randomly selected subset of 2,000 anchor passwords, we were able to construct a distance matrix with the size of \(720,302\times 2,000\). After applying t-SNE dimensionality reduction, we were able to plot all passwords in a 2-D graph and show them in different clusters. As illustrated in Figure 5, PassViz could group all passwords into discernible clusters. To further learn and understand more patterns in this leaked database, more analyses were performed and the findings are presented below.
### Analysis based on password length
As shown in Figure 5, different clusters can be visually observed. However, it is not clear what the most defining factor of the clusters is. By looking into subsets of the database through the interactive application introduced in Section 3.4.2, we found that these clusters are primarily differentiated by the length of the passwords.
Figure 4: Using OPTICS to cluster the passwords
Figure 3: Using regular expressions to highlight individual passwords
Figure 2: Extracting a group of passwords and opening them in a new window in PassViz
#### 4.1.1 Clusters based on different password lengths
We conducted a subsequent analysis to encode different password lengths with different colours for visualisation. As shown in Figure 6, the visualised database illustrates each password in a colour corresponding to its length. Additionally, a number displayed over each cluster indicates the majority length of the passwords contained within. The size of a cluster corresponds to the number of passwords that have the same length. It is apparent from this visualisation that password length is a significant factor in the formation of the clusters and therefore plays a significant role in the structure of passwords within the database.
However, there are exceptions to this observation. One instance is the formation of a mixed cluster, predominantly consisting of passwords of lengths 6 and 7. Despite the minor difference in length, these passwords have enough in common to be grouped into the same cluster. Another exception to this observation is for passwords that have 15 or more characters. Rather than forming individual clusters, these longer passwords combine into a single cluster. The group gradually gets smaller as the length of the passwords increases, reflecting fewer instances of longer passwords in the database. Moreover, it was observed that no cluster contained passwords with fewer than 6 characters, suggesting that 000webhost might have enforced a password-composition policy to have a minimum password length of 6 characters. The identification of larger clusters of passwords with lengths of 8, 9, and 10 is somewhat consistent with the findings reported in [9], which revealed that password-composition policies mandating a minimum of 8 characters typically result in mean password lengths ranging from 9 to 10.
#### 4.1.2 Visualising passwords of the same length
From the previous analysis, it is worth noticing that the defining factor separating leaked password databases into clusters is the length of the password. To explore further, we take passwords of length 8 as an example to illustrate how the 000webhost database graph transforms. Around 140,000 passwords are plotted as shown in Figure 7, which gives a better understanding of how the graphs are formed, without the aforementioned length bias inherent in the Levenshtein distance.
By visualising the passwords in this way and after further analysis, a common pattern emerged in that many of the passwords in certain clusters had the same character at the same position in each password. In Figure 8, each password has been given a colour based on a property: blue represents passwords where the second letter is 'a', pink represents passwords where the last letter is '1', and purple represents passwords that abide by both of these properties. It seems that the most defining factor of passwords in our methodology is the position of characters within the passwords. It is striking to see how many users include 'a' as the second letter of their password or '1' as the last letter, and this is a pattern that would be harder to visualise using more common statistical methods.
In addition, looking closely at passwords ending with '1', by isolating the cluster, it appears that many of these passwords contain only a small number of digits, as indicated by the majority of such passwords appearing red in Figure 7. In comparison, other clusters have more of an orange-to-green hue, showing they contain more numbers.
### Analysis based on the composition of digits in a password
Many existing works have looked into the composition of a password [9]. How digits play a role in creating a password is often of interest to researchers and practitioners. Here, we present a number of analyses utilising PassViz to help derive insights from the large-scale password database by looking at the composition of digits within the password.
Figure 5: Illustration of clusters of passwords
Figure 6: Illustration of colour-coded clusters based on different password lengths
Figure 7: Visualisation of passwords of the length of 8 characters
#### 4.2.1 Visualisation based on the percentage of digits in a password
To assess the numerical composition of passwords, we colour-coded them based on the percentage of digits in a password, where passwords in dark green have the highest percentage of digits and passwords in dark red have the lowest percentage of digits. As shown in Figure 9, the visualisation revealed that the numerical composition of a password can separate passwords within their clusters, with one side of a cluster containing passwords having a high percentage of digits, and the other side containing passwords with fewer digits. There appears to be a gradient across all clusters, visualising the change in the number of digits contained within passwords.
To facilitate the analysis, we present Table 3, which displays the distribution of numerical composition in the 000webhost database. The table reveals that 21% of passwords in the database contain 20% digits, while 17% and 16% of passwords have 10% and 30% digits, respectively. Including the 5% of passwords that have no digits, 59% of passwords in the 000webhost database have no more than 30% numerical content, which makes the overall graph lean towards the colour of red.
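The digit percentage underlying both the colour coding and Table 3 can be computed per password as in the following sketch:

```python
def digit_percentage(password: str) -> float:
    """Fraction of characters in the password that are digits (0..1)."""
    if not password:
        return 0.0
    return sum(ch.isdigit() for ch in password) / len(password)

print(round(digit_percentage("hallo123"), 2))  # 0.38
```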
#### 4.2.2 Visualisation based on the dominating position of digits
An interesting property to be examined is the positional distribution of digits within a password. This metric measures the location of digits within a password, where values near 0 signify that numbers are primarily concentrated towards the beginning of the password, and values closer to 1 indicate numerical characters mostly towards the end. A distribution ratio around 0.5 suggests either an even dispersion of digits or a lack of digits altogether. Figure 10 depicts this metric, with the light blue shaded passwords indicating a higher quantity of digits towards the start of the passwords and the dark blue passwords signifying a greater presence of digits towards the end. A noteworthy observation is the comparative scarcity of passwords with a high predominance of digits towards the start as opposed to passwords with a majority of digits towards the end. This implies a bias towards appending numbers at the end of the passwords.
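The exact formula behind this metric is not spelled out above; one natural definition consistent with the description (values near 0 for digits at the start, near 1 for digits at the end, and 0.5 for evenly spread or absent digits) is the mean normalised position of the digit characters, sketched below. The tool's actual computation may differ in detail.

```python
def digit_position_ratio(password: str) -> float:
    """Mean normalised position of digits: ~0 -> digits at the start,
    ~1 -> digits at the end, 0.5 -> evenly spread or no digits."""
    positions = [i for i, ch in enumerate(password) if ch.isdigit()]
    if not positions or len(password) < 2:
        return 0.5
    return sum(p / (len(password) - 1) for p in positions) / len(positions)

print(round(digit_position_ratio("amado2009"), 2))  # digits at the end -> 0.81
```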
### Analysis based on specific requirements
To assist the exploration of a large-scale password database to provide more insights, PassViz has the capability and flexibility to produce visualisation based on more specific requirements. Here we present some examples of utilising PassViz to learn password patterns and structures.
Figure 8: Visualisation of passwords of length 8 that have specific compositions
Figure 10: Visualisation of positional distribution of digits within a password
\begin{table}
\begin{tabular}{c c c c} \hline \hline
**\% digits** & **\% passwords** & **\% digits cont.** & **\% passwords cont.** \\ \hline
0 & 5 & 60 & 7 \\
10 & 17 & 70 & 4 \\
20 & 21 & 80 & 4 \\
30 & 16 & 90 & 0.7 \\
40 & 10 & 100 & 0.06 \\
50 & 13 & - & - \\ \hline \hline \end{tabular}
\end{table}
Table 3: Distribution of the percentage of digits in each password in the 000webhost database
Figure 9: Visualisation of passwords that have different percentages of numbers
#### 4.3.1 Visualisation based on a given string
All instances containing the word 'hello' in the 000webhost database were highlighted as shown in Figure 11. These instances are relatively scarce; however, there are occasional groupings. This suggests that while the strings may appear similar based on their contents, this is not a decisive factor in global cluster formation, only in local formation within clusters.
#### 4.3.2 Visualisation based on passwords containing years
In addition, existing works discovered that numbers that represent dates/years have been frequently used in the process of password/PIN creation [21, 22]. We are interested in visualising the distribution of such information in the 000webhost database using PassViz to see if there are any interesting patterns that can be discovered. As depicted in Figure 12, passwords containing dates from the years 2000-2099 are highlighted in blue, such as 'amado2009', while those containing dates from 1900-1999, like 'small1970sman', are marked in red. These dates were chosen specifically as they have the most relevance to current users.
In this visualisation, it can be seen that a small portion of passwords containing these year-related numbers are scattered across the graph. The larger clusters in blue and red formed primarily consist of passwords where the year forms the end part of the password, like 'amado2009', suggesting that a substantial portion of users with a date in their password append a specific year to a base word, rather than at the start or in the middle of the password, contributing to the formation of these clusters. On the other hand, the individual points are scattered throughout the clusters representing the less common instances where the year appears in the middle of a password such as'small1970sman', rather than at the end of the password.
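The year highlighting described above can be reproduced with two simple regular expressions; a sketch (the ranges match those used for the figure, 2000-2099 and 1900-1999):

```python
import re

RECENT = re.compile(r"20[0-9]{2}")  # years 2000-2099, highlighted in blue
OLDER = re.compile(r"19[0-9]{2}")   # years 1900-1999, highlighted in red

def year_category(password: str) -> str:
    if RECENT.search(password):
        return "2000-2099"
    if OLDER.search(password):
        return "1900-1999"
    return "no year"

print(year_category("amado2009"), year_category("small1970sman"))
# -> 2000-2099 1900-1999
```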
### Comparative analysis
#### 4.4.1 Comparing 000webhost with phpbb based on the percentage of digits
The generation methodology in this research can be extended and applied to different databases. To illustrate this, we present the graph for the leaked database 'phpbb'7 of over 184,000 unique passwords, shown in Figure 13. This shows the percentage of digits in passwords, with red indicating passwords containing no digits, green indicating passwords containing only digits, and colours in between showing the proportion of digits in the password.
Footnote 7: [https://github.com/danielmiessler/SecLists/blob/master/Passwords/Leaked-Databases/phbbb.txt](https://github.com/danielmiessler/SecLists/blob/master/Passwords/Leaked-Databases/phbbb.txt)
Comparing this with Figure 9, an intriguing pattern becomes evident. The proportion of passwords in the phpbb database that consist mostly of digits is notably larger compared to the 000webhost database shown in Figure 9. Additionally, the significant amount of green highlights the prevalence of passwords composed exclusively of digits.
This gives us an insight into the general security of passwords in each database. The graph generated using the 000webhost database does not show the intense red or intense green that the phpbb database shows, indicating that the passwords used within it are more secure. This could be down to security restrictions imposed on users requiring them to use digits in their passwords. On the other hand, the phpbb database generates a predominantly red and green graph, with only a small amount of colour in between. This indicates that the security restrictions imposed on users were not as strict as those on 000webhost.
Figure 11: Visualisation of passwords that contain the word ‘hello’
Figure 12: Visualisation of passwords that contain years
Figure 13: Visualisation of passwords that have different percentages of digits in the phpbb database
#### 4.4.2 Comparing 000webhost with phpbb based on sequences
In this section, we focus on the prevalence of numeric sequences and keyboard patterns in the passwords in each database. For this analysis, a visual representation was generated where the passwords containing numeric sequences (such as '123' and '1234') are marked in red and those containing keyboard patterns (consecutive keyboard entries, such as 'qwerty' and 'zxcvb') are marked in blue.
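A sketch of how such passwords can be flagged is shown below; the two pattern lists are illustrative and not the exact ones used to produce the figures.

```python
NUMERIC_SEQS = ["123", "1234", "234", "345", "456", "567", "678", "789"]
KEYBOARD_SEQS = ["qwerty", "qwert", "asdf", "asdfgh", "zxcvb", "zxcvbn", "1q2w3e"]

def pattern_category(password: str) -> str:
    """Classify a password by the first sequence family it contains."""
    lowered = password.lower()
    if any(seq in lowered for seq in NUMERIC_SEQS):
        return "numeric sequence"   # drawn in red in Figures 14 and 15
    if any(seq in lowered for seq in KEYBOARD_SEQS):
        return "keyboard pattern"   # drawn in blue in Figures 14 and 15
    return "other"
```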
The 000webhost database illustrated a pronounced prevalence of numeric sequence patterns in password creation, as shown in Figure 14. A substantial proportion of the passwords contained easily identifiable sequences which start with '123' and increase incrementally. This '123' pattern, despite being a weak password strategy, is widely used among passwords in the 000webhost database.
On the contrary, the phpbb database demonstrated significantly fewer instances of numeric sequence patterns, as shown in Figure 15. This disparity suggests that phpbb users may have had a better understanding of secure passwords, or that the platform itself may have enforced stricter password policies. However, when comparing it with Figure 13, we can see that few passwords in the phpbb database contain digits, which is likely the reason why the '123' pattern is not common inside of the phpbb database.
#### 4.4.3 Intersection between 000webhost and phpbb
Calculating the intersection between two password databases may indicate the similarities between them and will show common passwords between the two. Figure 16 visualises passwords in the phpbb database, with passwords marked in red that also appear within the 000webhost database. The number of intersections between the two databases is 6,091 - 0.84% of 000webhost and 3.3% of phpbb - showing a small amount of commonality between the two databases. This is an important area to look at, as it shows where passwords are being re-used between the two databases and will highlight instances where users are re-using seemingly unique passwords across multiple platforms.
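Computationally the intersection is a plain set operation; a sketch, where the two password lists are assumed to be loaded already:

```python
def shared_passwords(db_a, db_b):
    """Passwords that occur verbatim in both leaked databases."""
    return set(db_a) & set(db_b)

# len(shared_passwords(webhost_passwords, phpbb_passwords)) == 6091 in our data
```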
In the figure, it can be seen that intersecting passwords are distributed in a non-uniform manner. There are some areas in the graph where marked passwords have a higher concentration, highlighting that there are certain types of passwords that are more likely to be re-used. After further examination of these groups of passwords, it can be seen that these concentrated areas are formed of passwords ending with the characters '123'.
Some notable instances are longer passwords that may appear random. By performing this method of visualisation, we can see that they are re-used across both databases, despite appearing seemingly unique. These passwords typically appear at longer lengths and include 'richmond1969', 'serkan737526' and 'nikita040683'. One could expect that these come from the same user having created accounts on both platforms.
There are other longer passwords that appear in both databases, however, these do not appear to be as random and unique as the previous one. These come in the form '1q2w3e4r5t6y', 'abc123def456' and 'qwe123asd456'. Despite being long, these passwords are not necessarily unique and are formed of common keyboard patterns. Thus, using a long set of characters does not necessarily imply a unique password.
To get a better understanding of the intersection between these two databases, Figure 17 visualises the graph generated by the
Figure 16: Passwords in the phpbb database, with red dots representing passwords shared with the 000webhost database
Figure 14: Sequences highlighted in the 000webhost database
Figure 15: Sequences highlighted in the phpbb database
intersection of the 000webhost and phpbb databases. As shown in Figure 14, passwords that contain a numeric sequence are highlighted in red, and passwords that contain keyboard sequences are highlighted in blue. This graph reinforces how many passwords re-used between the two databases contain numeric sequences, as this is one of the most defining features of this generated graph. Other notable areas are the reuse of passwords containing other keyboard sequences, like 'qwerty', which also appear frequently throughout the graph.
## 5 Further Discussions
### Typical use cases
In this subsection, we list some typical use cases of PassViz in real-world applications, based on the examples explained in the previous section. Note that the list is not exhaustive.
* Use Case 1: As presented in Sections 4.1 and 4.2, PassViz can be used to reinforce the comprehension of password structures and patterns, thereby extracting valuable insights about human users' password creation processes. In turn, this will aid in the refinement and development of password tools like password strength meters and password policies.
* Use Case 2: As demonstrated in Section 4.3, PassViz provides flexible ways to allow researchers and practitioners to interact with a large password database with ease to explore finer structures related to subsets.
* Use Case 3: As illustrated in Section 4.4, PassViz allows the comparison between two password databases, in order to reveal cross-database/website patterns that cannot be revealed by studying the multiple password databases separately.
* Use Case 4: All the analyses supported by PassViz can help unveil different aspects of human behaviours in the password creation process, e.g., how they use numbers and keyboard patterns, how they apply character transformation rules to make a password more complicated, and how they reuse or change behaviours across different websites. Such insights related to human behaviours can be useful for a wide range of applications, including the development of better tools and better ways to educate users about password security.
### Limitations
Despite their utility, the generated visualisations do show certain limitations. We discuss some of such limitations below.
One limitation is that some passwords with a small LD could be mis-clustered. For instance, passwords such as 'hello123' and 'hello12', while notably similar with an LD of 1, are segregated as shown in Figure 18. This is likely due to a bias towards strings of identical length within the distance matrix. Unavoidable errors introduced by the dimensionality reduction algorithm may be another source.
A second limitation is that the LD we used is unable to capture all aspects of the semantic similarity of two passwords. For instance, 'hello123' and '123hello', despite their perceptible semantic similarity, appear substantially distanced from each other as illustrated in Figure 19. Considering their LD is indeed relatively large (6), the separation can be conceptually explained by the limitation of LD as an edit distance that does not treat moving a whole block of characters as a single editing step.
A third limitation is that the use of a dimensionality reduction algorithm will unavoidably lead to loss of information for some password pairs (so their distances can be more distorted than others). How to address this is non-trivial since we have to visualise passwords in a low-dimensional (2-D or 3-D) space.
### Future Work
We have identified a range of future work directions as described below.
**More comprehensive testing and password analyses:** The password analyses we conducted and reported are relatively ad hoc, and we only tested PassViz with a number of leaked password databases and some patterns. It will be useful to conduct a more
Figure 19: An example of a second observed limitation of PassViz for ‘hello123’ and ‘123hello’
Figure 17: Intersection between the 000webhost and phpbb databases
Figure 18: An example of one observed limitation of PassViz for ‘hello123’ and ‘hello12’
comprehensive analysis with more leaked password databases and a more comprehensive set of patterns. It will also be helpful to design ways to reveal more unknown patterns about passwords. Such further analyses can also involve recruitment of human participants to more confidently confirm the usefulness of PassViz.
**Refining our methodology:** The current study uses a distance matrix based on LD for password position generation. While this approach has its merits, it may not necessarily provide the most comprehensive or meaningful results. Future work could focus on exploring alternative distances, dimensionality reduction and clustering methods, such as term frequency \(n\)-grams and other semantic-analysis-based vectorisation methods, which could potentially show patterns in passwords that might be overlooked or mis-handled by LD. One interesting approach is to use a large language model (LLM) to define a more semantically aware distance and to use the LLM to select more representative anchor passwords of the whole password space. Incorporating a password strength meter in the distance metric may also be useful. We also plan to enhance the reconfigurability of PassViz to support different distance metrics and different dimensionality reduction methods.
**Further development of visualisation tool:** An additional practical implication of this research is the further development of a more comprehensive application that implements more features required to visualise and analyse leaked passwords, especially allowing interactive visual analytics of large password databases.
**Integration of chatbots:** One prospect for the future would be the integration of a chatbot capable of interpreting both graphical input and creating command-line actions. This tool could generate, visualise and analyse graphs autonomously, streamlining the process and reducing the manual workload. With advancements in natural language processing, especially LLMs, the development and implementation of such chatbots has become increasingly feasible.
## 6 Conclusion
In conclusion, the work we have produced has provided visual insights into the underlying password patterns and structures within large-scale password databases. It has shown a multitude of patterns and correlations that might have remained hidden in traditional statistical analyses. The value of these visualisations lies not only in their ability to summarise complex databases, but also in their potential to inform password security policies and user education efforts.
|
2309.15961 | Negative Immersions and Finite Height Mappings | Given a monomorphism $\Psi:\mathcal{H}\rightarrow \mathcal{F}$ where
$\mathcal{H}$ is a proper free factor of the free group $\mathcal{F}$, we show
the associated mapping torus $X$ of $\Psi$ has negative immersions iff
$\mathcal{H}$ has finite height in $\pi_1X$ iff $\Psi$ is fully irreducible. We
survey related properties and discuss possible directions to pursue further. | Brahim Abdenbi, Daniel T. Wise | 2023-09-27T19:19:08Z | http://arxiv.org/abs/2309.15961v2 | # Negative immersions and finite height mappings
###### Abstract.
Given a monomorphism \(\Psi:\mathcal{H}\to\mathcal{F}\) where \(\mathcal{H}\) is a proper free factor of the free group \(\mathcal{F}\), we show the associated mapping torus \(X\) of \(\Psi\) has negative immersions iff \(\mathcal{H}\) has finite height in \(\pi_{1}X\) iff \(\Psi\) is fully irreducible. We survey related properties and discuss possible directions to pursue further.
Key words and phrases:Negative Immersions, Height, Coherence, Fully Irreducible Maps 2020 Mathematics Subject Classification: 20F65, 20F67 Research supported by NSERC
## 1. Introduction
Free-by-cyclic groups have been of continual interest in combinatorial and geometric group theory especially because of works of G. Baumslag [1, 1], Bestvina-Feighn-Handel [2, 1], and have an increasing interest because of virtual algebraic fibering [10]. Among their most notable properties is that they are _coherent_ which means that every finitely generated subgroup is finitely presented [1]. A free-by-cyclic group \(\mathcal{G}=\mathcal{F}\rtimes_{\Psi}\mathbb{Z}\) is often studied by thinking of \(\mathcal{F}\) as \(\pi_{1}F\) for some based graph \(F\), and representing \(\Psi\) by a basepoint preserving map \(F\to F\), and regarding \(\mathcal{G}\) as \(\pi_{1}X\) where \(X\) is the mapping torus corresponding to \(\Psi\).
The study of coherent groups led to the following notion: a \(2\)-complex \(X\) has _nonpositive immersions_ if for any combinatorial immersion \(Y\to X\), with \(Y\) compact, connected, and collapsed (meaning \(Y\) has no free faces), either \(\pi_{1}Y\) is trivial or \(\chi\left(Y\right)\leq 0\). A straightforward proof was given in [22] showing that when \(X\) is the mapping torus of a \(\pi_{1}\)-injective map \(\psi:F\to F\) of a graph, then \(X\) has nonpositive immersions. Recent progress has been made on the conjecture that nonpositive immersions implies coherence of \(\pi_{1}X\)[11].
The following stricter form of nonpositive immersions was suggested in [22]: A \(2\)-complex \(X\) has _negative immersions_ if there exists \(c>0\) such that for any combinatorial immersion \(Y\to X\), where \(Y\) is compact, connected, collapsed, and with no isolated edges, either \(\pi_{1}Y\) is trivial or \(\chi\left(Y\right)\leq-c|Y|_{2}\) where \(|Y|_{2}\) is the number of \(2\)-cells in \(Y\). This stronger form permits a geometric proof of coherence, and also provides a proof of local-quasiconvexity when \(X\) is a negatively curved \(2\)-complex, (and conjecturally in general). We refer to [22, 23, 24] for more on negative immersions.
In view of the nonpositive immersion property for the mapping torus \(X=M(\psi)\) of a \(\pi_{1}\)-injective endomorphism \(\psi:F\to F\), we were motivated to explore the negative immersion property for a "partial mapping torus" \(X=M(\psi)\) where \(F\) is a finite graph and \(\psi:H\to F\) is a cellular immersion defined on a subgraph \(H\subset F\). For such a partial mapping torus, \(X\) has negative immersions if and only if \(\psi^{-n}\left(H\right)\) is a forest for some \(n\geq 0\).
The usual definition of a fully irreducible endomorphism focuses on the case where \(\mathcal{H}=\mathcal{F}\)[1]. Using this language, we show the following statement, proved in the text as Theorem 4.12:
**Theorem 1.4**.: _Let \(\mathcal{H}\) be a proper free factor of a finitely generated free group \(\mathcal{F}\), and let \(\Psi:\mathcal{H}\rightarrow\mathcal{F}\) be a monomorphism. Let \(X\) be the standard \(2\)-complex of the HNN extension of \(\mathcal{F}\) with respect to \(\Psi\). Then \(X\) has negative immersions if and only if \(\Psi\) is fully irreducible._
**Problem 1.5**.: Let \(\mathcal{H}\) be a proper free factor of a free group \(\mathcal{F}\), and let \(\Psi:\mathcal{F}\rightarrow\mathcal{F}\) be a fully irreducible endomorphism. Is \(\mathcal{H}\) quasiconvex in \(\mathcal{G}=\mathcal{F}\rtimes_{\Psi}\mathbb{Z}\)? Is every non-quasiconvex finitely generated subgroup of \(\mathcal{G}\) a virtual (generalized) fiber? Here "generalized" allows for a free subgroup that is conjugated properly into itself.
An affirmative answer to Problem 1.5 would show that our groups are locally quasiconvex.
It is expected that when \(X\) has negative immersions, \(\pi_{1}X\) is a locally quasiconvex hyperbolic group. There is now some evidence for the following conjecture which we hope will inform the next entries into this topic:
**Conjecture 1.6**.: _Let \(\mathcal{G}\) be a locally quasiconvex hyperbolic group. Then \(\mathcal{G}\) has a finite index subgroup \(\mathcal{G}^{\prime}\) such that \(\mathcal{G}^{\prime}=\pi_{1}X\) where \(X\) is a mapping torus of a fully irreducible partial endomorphism of a free group._
One direction of Conjecture 1.6 could be approached by proving virtual specialness of locally quasiconvex (locally indicable) hyperbolic groups - itself an interesting conjecture. And combining it with some of the current trend of using vanishing \(L^{2}\)-Betti number to obtain virtual fibering [10]. Another direction is equivalent to showing that the subgroup \(\mathcal{H}\) has the _finitely generated intersection property_ in the sense that \(\mathcal{H}\cap\mathcal{K}\) is finitely generated whenever \(\mathcal{K}\) is finitely generated.
In Section 2, we give background. In Section 3 we prove a special case of the main theorem relating malnormality and negative immersions. In Section 4, we prove the main theorems, and in Section 5 we generalize our results to \(\pi_{1}\)-injective maps. Finally, in Section 6 we discuss related properties.
## 2. Background
We work in the category of \(CW\)-complexes. Let \(Y\) be a \(CW\)-complex. We denote by \(Y^{k}\) the \(k\)-skeleton of \(Y\) and by \(|Y|_{k}\) the number of \(k\)-cells in \(Y\). Given complexes \(X\) and \(Y\), a map \(Y\to X\) is _cellular_ if it maps \(Y^{k}\) into \(X^{k}\) for all \(k\). It is _combinatorial_ if it maps open cells of \(Y\) homeomorphically onto open cells of \(X\). It is an _immersion_ if it is locally injective. A complex is collapsed if it has no free faces. A \(1\)-cell (edge) is _isolated_ if it is not a face of a \(2\)-cell. A \(2\)-complex \(X\) has _negative immersions_ if there is \(c>0\) such that for any combinatorial immersion \(Y\to X\) with \(Y\) compact, connected, collapsed (containing no free faces), and containing no isolated edges, either \(\pi_{1}Y\) is trivial or \(\chi\left(Y\right)\leq-c|Y|_{2}\) where \(|Y|_{2}\) is the number of \(2\)-cells in \(Y\) and \(\chi\left(Y\right)\) is the Euler characteristic of \(Y\).
A group \(\mathcal{G}\) is _coherent_ if every finitely generated subgroup of \(\mathcal{G}\) is finitely presented. The proof of Theorem 2.1 can be found in [20].
**Theorem 2.1**.: _Let \(X\) be a compact \(2\)-complex with negative immersions. Then \(\pi_{1}X\) is coherent._
A _graph_\(F\) is a \(1\)-dimensional \(CW\)-complex whose _vertices_ and _edges_ are the \(0\)-cells and \(1\)-cells, respectively. There exist two _incidence_ maps \(\tau_{1},\tau_{2}:F^{1}\to F^{0}\) mapping each edge \(e\in F^{1}\) to its _boundary vertices_, \(\tau_{1}\left(e\right),\ \tau_{2}\left(e\right)\) called initial and terminal vertex, respectively. Each edge is _oriented_ from its initial vertex to its terminal vertex. The _degree_ of a vertex \(v\) relative to the graph \(F\), denoted by \(\deg_{F}\left(v\right)\), is the number of edges in \(F^{1}\) containing \(v\) as an initial or terminal vertex. An edge whose initial and terminal vertices coincide with \(v\) counts twice in \(\deg_{F}\left(v\right)\). A _leaf_ is a vertex of degree \(1\) and a _spur_ is an edge containing a leaf. A graph is _trivial_ if it is a union of vertices. A _tree_ is a non-empty graph with no embedded circles and a _forest_ is a disjoint union of trees. The _empty graph_ is the graph with no edges and no vertices. We consider the empty graph as a forest.
A _graph of graphs_\(X\) with underlying graph \(\Gamma_{X}\), _vertex-spaces_\(\left\{X_{v}\right\}_{v\in\Gamma_{X}^{0}}\), and _edge-spaces_\(\left\{X_{e}\right\}_{e\in\Gamma_{X}^{1}}\) is a topological space \(X\) obtained as a quotient of graphs \(\left\{X_{v}\right\}_{v\in\Gamma_{X}^{0}}\) and \(\left\{X_{e}\times I\right\}_{e\in\Gamma_{X}^{1}}\) in the following manner: for each edge \(e\in\Gamma_{X}^{1}\) with boundary vertices \(v_{1}=\tau_{1}\left(e\right),v_{2}=\tau_{2}\left(e\right)\), the edge-space \(X_{e}\times I\) is attached to the vertex-spaces \(X_{v_{1}},X_{v_{2}}\) via an _outgoing_ attaching map \(X_{e}\times\left\{0\right\}\to X_{v_{1}}\) and an _incoming_ attaching map \(X_{e}\times\left\{1\right\}\to X_{v_{2}}\). The Euler characteristic of the resulting space is given by
\[\chi\left(X\right)=\sum_{v\in\Gamma_{X}^{0}}\chi\left(X_{v}\right)-\sum_{e\in \Gamma_{X}^{1}}\chi\left(X_{e}\right)\]
A subgroup \(\mathcal{H}\subset\mathcal{G}\) is _malnormal_ if \(g\mathcal{H}g^{-1}\cap\mathcal{H}=1_{\mathcal{G}}\) whenever \(g\notin\mathcal{H}\). The pair \(\mathcal{H},\mathcal{K}\subset\mathcal{G}\) is malnormal if \(g\mathcal{H}g^{-1}\cap\mathcal{K}=1_{\mathcal{G}}\) for all \(g\in\mathcal{G}\).
## 3. Malnormality and Negative Immersions
**Definition 3.1**.: Let \(H\) be a subgraph of a graph \(F\). The _boundary_ of \(H\) in \(F\) is
\[\partial H=\left\{v\in H^{0}\ :\ \deg_{F}\left(v\right)>\deg_{H}\left(v\right) \right\}.\]
**Lemma 3.2**.: _Let \(H\subset F\) be a subgraph of a finite leafless graph \(F\) with no trivial components. Then:_
\[\chi\left(F\right)-\chi\left(H\right)\ \leq\ \frac{-1}{2}|\partial H|_{0}\]
Proof.: A graph \(J\) satisfies \(\chi\left(J\right)=\sum_{v\in J^{0}}\left(1-\frac{\deg\left(v\right)}{2}\right)\). We temporarily use \(\chi\) to denote the number of vertices minus the number of open edges.
Let \(J=\left(F-H\right)\bigcup_{v\in\partial H^{0}}S_{v}^{1}\) be obtained by removing \(H\) and adding a circle at each vertex of \(\partial H\). Each circle contributes zero to \(\chi\), so \(\chi\left(J\right)=\chi\left(F-H\right)\); moreover, since \(F\) is leafless with no trivial components, every vertex of \(J\) has degree at least \(2\), and each vertex of \(\partial H\) has degree at least \(3\) in \(J\) (its circle together with at least one edge of \(F-H\)). Then
\[\chi\left(F\right)-\chi\left(H\right)\ =\ \chi\left(F-H\right)\ =\ \chi\left(J\right)\ \leq\ -\frac{1}{2}|\partial H|_{0}.\qed\]
**Lemma 3.3**.: _Let \(H\) be a subgraph of a finite graph \(F\). Let \(\psi:H\to F\) be a cellular immersion with \(H\subset\psi\left(H\right)\). Suppose \(H\) has no tree component and \(\psi^{-1}\left(H\right)\) is homeomorphic to a forest. Then \(\psi\left(T\right)\ \cap\ \partial H\neq\emptyset\) for each component \(T\subset\psi^{-1}\left(H\right)\). Consequently, there exists \(M{=}M\left(F,H,\psi\right)>0\) such that \(|H|_{1}\ \leq\ M\ |\partial H|_{0}\)._
Proof.: Note that \(\psi^{-1}\left(H\right)\) is not necessarily a subgraph of \(F\). Each tree \(T\subset\psi^{-1}\left(H\right)\) can be subdivided into a tree \(\bar{T}\) so that \(\psi|_{\bar{T}}\) is combinatorial. Let
\[d=\max\left\{\operatorname{Diam}\left(\bar{T}\right)\ :\ T\subset\psi^{-1}\left(H \right),\text{ where }\bar{T}\text{ is the subdivision of }T\right\}\]
Since \(H\) has no tree components, each component \(T\subset\psi^{-1}\left(H\right)\) has a leaf that maps to \(\partial H\). So \({H\subset\bigcup_{v\in\partial H}\mathcal{N}_{d}\left(v\right)}\) where \(\mathcal{N}_{d}\left(v\right)\) is a ball of radius \(d\) centered at \(v\). Let \(M=\max\left\{\left|\mathcal{N}_{d}\left(v\right)|_{1}\ :\ v\in F^{0}\right\}\). Then \(|H|_{1}\ \leq\ M|\partial H|_{0}\).
**Definition 3.4**.: Let \(F\) be a graph and let \(H\subset F\) be a subgraph. The _mapping torus_ of a map \(\psi:H\to F\) is the \(2\)-complex \(X\) obtained as follows:
\[X=\left(F\sqcup\left(H\times[0,1]\right)\right)\,/\left\{\left(x,0\right) \sim x,\ (x,1)\sim\psi\left(x\right)\ :\ x\in H\right\}\]
The \(2\)-complex \(X\) decomposes as a graph of spaces \(X\to\Gamma_{X}\), where \(\Gamma_{X}\) is a circle with one vertex \(v\) and one edge \(e\). Let \(X_{v}=F\) and \(X_{e}=H\times[0,1]\) be the vertex-space and edge-space, respectively, where \(X_{e}\) is attached to \(X_{v}\) via the maps \(H\times\left\{0\right\}\to X_{v}\) and \(H\times\left\{1\right\}\to X_{v}\). We refer to the images of \(H\times\left\{0\right\}\) and \(H\times\left\{1\right\}\) in \(X_{v}\) as the _outgoing_ and _incoming_ edge-spaces, respectively. An edge \(e\) of \(X\) is _vertical_ if \(e\subset F\), and _horizontal_ otherwise. Note that each vertex of \(H\) gives rise to a horizontal edge of \(X\), and each edge of \(H\) gives rise to a \(2\)-cell of \(X\). Moreover, each horizontal edge and each \(2\)-cell of \(X\) arises in this manner.
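In particular, since \(\Gamma_{X}\) has a single vertex with vertex-space \(F\) and a single edge with edge-space \(H\), the Euler characteristic formula for a graph of graphs gives
\[\chi\left(X\right)=\chi\left(F\right)-\chi\left(H\right),\]
which is the form of the computation used in the arguments below.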
**Remark 3.5**.: Let \(X\) be the mapping torus of a cellular immersion \(\psi:H\to F\), where \(H\) is a subgraph of a finite graph \(F\). Let \(Y\to X\) be a combinatorial immersion where \(Y\) is a nontrivial compact, connected, and collapsed \(2\)-complex with no isolated edges. The decomposition \(X\to\Gamma_{X}\) induces a decomposition \(Y\to\Gamma_{Y}\) whose vertex-spaces are the components of the preimage of \(F\) and whose open edge-spaces are the components of the preimage of \(H\times(0,1)\). Let \(Y_{v}\) be the disjoint union of the vertex-spaces, and let \(Y_{e}\subset Y_{v}\) be the disjoint union of the outgoing edge-spaces. Then there is a cellular immersion \(\Psi:Y_{e}\to Y_{v}\) whose mapping torus is \(Y\) and the following diagram commutes:
Define the _boundary_ of \(Y\), denoted by \(\partial Y\), as the union of the boundary vertices of \(Y_{e}\) in \(Y_{v}\). We make the following remarks:
1. The \(2\)-cells of \(Y\) are in correspondence with the edges of \(Y_{e}\).
2. Distinct outgoing edge-spaces in a vertex-space are disjoint. This holds since \(Y\to X\) is an immersion and the outgoing edge-space in \(X\) is an embedding. In particular, each edge of \(Y_{v}\) is in at most one outgoing edge-space.
3. Since \(Y\) is collapsed and has no isolated edges, each edge in \(Y_{v}\) lies in \(\operatorname{image}\left(\Psi\right)\). Indeed, if there is a non-isolated edge \(e\not\subset\operatorname{image}\left(\Psi\right)\), then by Remark (2), \(e\) lies in a unique outgoing edge-space. However, outgoing edge-spaces are embedded and so \(e\) is a free face, contradicting that \(Y\) is collapsed.
4. No edge-space of \(Y\) has a leaf, since a leaf would give rise to a free face.
5. No edge-space (vertex-space) is a single vertex since otherwise \(Y\) would have an isolated edge, a free face, or be trivial.
6. Outgoing edge-spaces are embeddings and \(\Psi\) is an immersion since these mappings pull back from the combinatorial immersion \(Y\to X\).
7. No vertex-space in \(Y_{v}\) has a leaf. Indeed, by Remark (3), each edge of \(Y_{v}\) lies in an incoming edge-space. By Remark (4), no edge-space has a leaf. By Remark (6), the attaching maps of edge-spaces are immersions. Since the image of an immersed leafless graph contains no leaves, the claim holds. Furthermore, by Remark (5), no vertex-space of \(Y\) is a tree, and so \(\chi\left(Y_{v_{i}}\right)\leq 0\) for all vertex-spaces \(Y_{v_{i}}\) of \(Y\).
**Theorem 3.6**.: _Let \(H\) be a subgraph of a finite graph \(F\). Let \(X\) be the mapping torus of a cellular immersion \(\psi:H\to F\). Suppose \(\psi^{-1}\left(H\right)\) is homeomorphic to a forest. Then \(X\) has negative immersions._
Proof.: Let \(Y\to X\) be a combinatorial immersion where \(Y\) is a nontrivial compact, connected, and collapsed 2-complex with no isolated edges. As in Remark 3.5, let \(Y\rightarrow\Gamma_{Y}\) be the induced graph-of-spaces decomposition, and let \(\Psi:Y_{e}\to Y_{v}\) be the map whose mapping torus is \(Y\). By Remark 3.5.(3), we have \(Y_{e}\subset\operatorname{image}\left(\Psi\right)\). By Remark 3.5.(6), the map \(\Psi\) projects to \(\psi\) and so \(\Psi^{-1}\left(Y_{e}\right)\) is homeomorphic to a forest. Each component \(T^{\prime}\subset\Psi^{-1}\left(Y_{e}\right)\) can be subdivided to form a tree \(\bar{T}^{\prime}\) so that \(\Psi|_{\bar{T}^{\prime}}\) is combinatorial. Since \(Y\to X\) is a combinatorial immersion, the subdivided trees of \(\Psi^{-1}\left(Y_{e}\right)\) embed into the subdivided trees of \(\psi^{-1}\left(H\right)\) (as in Lemma 3.3), and so for each component \(T^{\prime}\subset\Psi^{-1}\left(Y_{e}\right)\), we have \(\operatorname{Diam}\left(\bar{T}^{\prime}\right)\ \leq\ d\), where \(d=\max\left\{\operatorname{Diam}\left(\bar{T}\right)\ :\ T\text{ is a component in }\psi^{-1}\left(H\right)\right\}\). Moreover, since \(X\) is compact, there is an upper bound \(M=M\left(d\right)\) on the number of edges in any \(d\)-ball in \(Y_{v}\). By Remarks 3.5.(4)-(5), \(Y_{e}\) has no tree component. By Lemma 3.3, we have \(|Y_{e}|_{1}\ \leq\ M|\partial Y_{e}|_{0}\). By Lemma 3.2, and Remark 3.5.(1), we have:
\[\chi\left(Y\right)\ =\ \chi\left(Y_{v}\right)-\chi\left(Y_{e}\right)\ \leq\ \frac{-1}{2}|\partial Y_{e}|_{0}\ \leq\ \frac{-1}{2M}|Y_{e}|_{1}\ =\ \frac{-1}{2M}|Y|_{2}.\qed\]
## 4. Finite Height Mappings
**Definition 4.1**.: The _generalized composition_ of the functions \(\alpha:A\to B\) and \(\beta:C\to D\), where \(C\subseteq B\), denoted by \(\beta\bullet\alpha\), is \(\beta\bullet\alpha=\beta\circ\alpha|_{\alpha^{-1}\left(C\right)}\).
**Definition 4.2**.: Let \(F\) be a connected graph and let \(H\subset F\) be a subgraph. Let \(\psi:H\to F\) be a cellular immersion. For each \(i\geq 0\), let \(\psi^{i}\) denote the generalized composition of \(\psi\) with itself \(i\) times, where \(\psi^{0}=id_{F}:F\to F\). Let \(\psi^{-i}\left(H\right)=\left(\psi^{i}\right)^{-1}\left(H\right)\).
Let \(Z_{i}\) denote the domain of \(\psi^{i}\). Then \(Z_{i+1}=\left\{x\in Z_{i}\ :\ \psi^{i}\left(x\right)\in H\right\}=\psi^{-i} \left(H\right)\), for each \(i\geq 0\). The _combinatorial domain_\(D_{i}\) of \(\psi^{i}\) is the largest subgraph in \(Z_{i}\). Note that \(Z_{i}\) is not necessarily a subgraph of \(F\), \(Z_{i+1}\subseteq Z_{i}\), and \(D_{i+1}\subseteq D_{i}\) for all \(i\geq 0\). Moreover, \(Z_{i}\) has a part that deformation retracts to \(D_{i}\) and a part that is a disjoint union of closed intervals and singletons. Thus, when \(Z_{i}\) is not homeomorphic to a forest, at least one component of \(D_{i}\) is not a tree. Let \(D_{\infty}\subset H\) be the subgraph whose edges and vertices map into \(H\) under all powers of \(\psi\). Note that \(\emptyset\subseteq D_{\infty}\subseteq D_{i+1}\subseteq D_{i}\).
The _directed height_ of \(\psi\) is:
\[\overrightarrow{\operatorname{Height}}(\psi)=\inf\left\{i:\ \psi^{-i}\left(H \right)\text{ is a forest}\right\}\]
Note that \(\overrightarrow{\operatorname{Height}}(\psi)=0\) if and only if \(H\) is a forest. We use the following notation:
\[\left\|\psi\right\|=\max\left\{|\psi\left(e\right)|_{1}\ :\ e\subset H^{1}\right\}\]
**Remark 4.3**.: \(\overrightarrow{\operatorname{Height}}(\psi)=\ell<\infty\) if and only if the length of embedded directed paths in the Bass-Serre tree with infinite stabilizers is bounded by \(\ell\). Note that the Bass-Serre tree is directed because of the map to the underlying graph of the HNN extension which is a directed loop.
**Definition 4.4**.: The _height_ of a subgroup \(\mathcal{H}\) in \(\mathcal{G}\), denoted by \(\operatorname{Height}\left(\mathcal{H}\right)\), is the supremal number of distinct cosets \(\left\{\mathcal{H}g_{i}\right\}_{i\in I}\) such that \(\bigcap\limits_{i\in I}\mathcal{H}^{g_{i}}\) is infinite.
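For example, an infinite malnormal subgroup \(\mathcal{H}\subset\mathcal{G}\) has \(\operatorname{Height}\left(\mathcal{H}\right)=1\): for two distinct cosets \(\mathcal{H}g_{1}\neq\mathcal{H}g_{2}\), malnormality forces \(\mathcal{H}^{g_{1}}\cap\mathcal{H}^{g_{2}}\) to be trivial, so no collection of two or more distinct cosets has infinite intersection of conjugates.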
**Lemma 4.5**.: _Let \(H\) be a subgraph of a finite graph \(F\) and let \(\psi:H\to F\) be a cellular immersion. Let \(X\) be the mapping torus of \(\psi\). Then \(\pi_{1}H\) has finite height in \(\pi_{1}X\) if and only if \(\psi\) has finite directed height._
Proof.: Let \(\mathcal{H}=\pi_{1}H\) and \(\mathcal{F}=\pi_{1}F\). Suppose \(\mathcal{H}\) has finite height in \(\pi_{1}X\). Then \(\operatorname{Height}\left(\mathcal{H}\right)\) bounds the number of distinct cosets \(\left\{\mathcal{H}g_{i}\right\}\) such that \(\left|\mathcal{H}^{g_{1}}\cap\cdots\cap\mathcal{H}^{g_{n}}\right|\) is infinite. So the number of edges in the Bass-Serre tree \(T\) with a common infinite stabilizer is likewise bounded. Hence \(\operatorname{Height}\left(\mathcal{H}\right)\) bounds the length of embedded paths in \(T\) with infinite stabilizer. Thus \(\overrightarrow{\operatorname{Height}}(\psi)<\infty\).
Suppose \(\overrightarrow{\operatorname{Height}}(\psi)<\infty\). Then \(\overrightarrow{\operatorname{Height}}(\psi)\) bounds the length of embedded paths in \(T\) with infinite stabilizers. There is a uniform upper bound on the degree of vertices of any subtree \(T^{\prime}\subset T\) whose point-wise stabilizer is infinite. Indeed, the number of incoming edges of each vertex in \(T^{\prime}\) is bounded by \(r=\operatorname{Height}\left(\psi_{*}\left(\pi_{1}H\right)\right)\) in \(\mathcal{F}\), since every finitely generated subgroup of a free group has finite height [12]. Thus \(T^{\prime}\) is a rooted tree of length \(\leq\overrightarrow{\operatorname{Height}}(\psi)\) and incoming degree \(\leq r\). So the number of edges in \(T^{\prime}\) is \(\leq r^{\overrightarrow{\operatorname{Height}}(\psi)}\). Any set of cosets of the edge group corresponds to a set of edges in \(T\). The intersection of the corresponding conjugates point-wise stabilizes those edges, and thus point-wise stabilizes the smallest tree \(T^{\prime}\) containing them. Hence the number of cosets is bounded by \(r^{\overrightarrow{\operatorname{Height}}(\psi)}\).
**Lemma 4.6**.: _Let \(H\) be a subgraph of \(F\). Let \(\psi:H\to F\) be a cellular immersion with \(\overrightarrow{\operatorname{Height}}(\psi)=\ell<\infty\). Then \(D_{\infty}\) is a (possibly empty) forest._
Proof.: We have \(D_{\infty}\subseteq D_{\ell+1}\). So, it suffices to show that \(D_{\ell+1}\) is a forest. Suppose \(C\subset D_{\ell+1}\) is an embedded circle. Then \(\psi^{\ell}\left(C\right)\subset H\) and so \(\psi^{-\ell}\left(H\right)\) is not a forest, contradicting the assumption.
**Lemma 4.7**.: _Let \(F\) be a connected graph and let \(H\subset F\) be a finite subgraph. Let \(X\) be the mapping torus of a cellular immersion \(\psi:H\to F\). If \(\psi\) has infinite directed height, then \(H\) contains a connected subgraph \(D\subset H\) with \(\chi\left(D\right)\ \leq 0\) such that \(\psi\left(D\right)=D\). Consequently, \(X\) contains a subcomplex \(Y\hookrightarrow X\), where \(Y\) is a connected, compact, and collapsed \(2\)-complex with no isolated edges and \(\chi\left(Y\right)=0\)._
Proof.: Since \(\psi\) has infinite directed height, for each \(i\geq 0\), we have \(\psi^{-i}\left(H\right)\) is not a forest. So each \(D_{i}\) contains an embedded circle. Since \(H\) is finite and \(D_{i+1}\subseteq D_{i}\), there is an integer \(p\) such that for all \(j>p\) we have \(D_{j+1}=D_{j}\). Then \(D_{j}\) contains a component \(D\) with \(\chi\left(D\right)\leq 0\) and \(\psi\left(D\right)=D\). In particular, since \(\psi\) is an immersion, \(\psi\left(D_{\text{core}}\right)=D_{\text{core}}\), where \(D_{\text{core}}\) is the core of \(D\). The mapping torus of \(\psi\) restricted to \(D_{\text{core}}\) provides \(Y\).
**Definition 4.8**.: Let \(Q\rightarrow\Gamma_{Q}\) be a graph of spaces where \(\Gamma_{Q}\) is equal to a subdivided interval \([0,k]\) directed from \(0\) to \(k\), where \(1\leq k\leq\infty\). Suppose each vertex-space \(Q_{v_{i}}\) is a tree where \(Q_{v_{0}}\) has exactly one edge \(f_{0}\). For each edge-space \(Q_{e_{i}}\times I\) there is an _outgoing_ attaching map \(Q_{e_{i}}\times\{0\}\to Q_{v_{i-1}}\) and an _incoming_ attaching map \(Q_{e_{i}}\times\{1\}\to Q_{v_{i}}\). When each outgoing attaching map is an embedding onto a single edge \(f\) of the vertex-space, then \(Q\) is a _ladder_ and \(f\) is a _connecting edge_. When each attaching map is bijective, then \(Q\) is a _fan_. The _rim_ of a fan \(Q\), denoted by \(\operatorname{Rim}\left(Q\right)\), is \(Q_{v_{k}}\). The _length_ of \(Q\) is \(\operatorname{Length}\left(Q\right)=k\). We allow the case \(k=\infty\) and say that \(Q\) is an _infinite ladder/fan_. We say \(Q\) _arises_ from \(f_{0}\), and \(f_{0}\) _gives rise_ to \(Q\).
The space \(Q\) is a cell complex as follows: we have already declared each \(Q_{v_{i}}\) is a tree and so it remains to describe the additional \(1\)-cells and \(2\)-cells of \(Q\). Each open edge-space \(Q_{e_{i}}\times(0,1)\) has a product structure induced by the graph \(Q_{e_{i}}\). See Figure 1. The edges in the vertex-spaces are _vertical_ and the remaining ones are _horizontal_. Each vertex in the image of \(Q_{e_{i}}\to Q_{v_{i-1}}\)_gives rise_ to a horizontal edge in \(Q\). Each edge \(f\) in the image of \(Q_{e_{i}}\to Q_{v_{i-1}}\)_gives rise_ to a \(2\)-cell \(S\subset Q\). We say \(S\)_arises_ from \(f\).
Let \(X\) be a \(2\)-complex with a graph-of-spaces structure whose \(1\)-skeleton is partitioned into horizontal and vertical edges, where the vertical edges are the edges of vertex-spaces, and the horizontal edges are the remaining ones. An _immersed ladder_ of \(X\) is a combinatorial immersion \(\lambda:L\to X\) that maps vertical/horizontal edges of a ladder \(L\) to vertical/horizontal edges of \(X\). An _immersed fan_\(\phi:Q\to X\) is defined analogously. An edge \(e\subset X\) has a \(k\)-ladder (resp. \(k\)-fan), if there is an immersed ladder \(\lambda:L\to X\) (resp. immersed fan \(\phi:Q\to X\)) of length \(k\) emerging from \(e^{\prime}\) such that \(\lambda\left(e^{\prime}\right)=e\) (resp. \(\phi\left(e^{\prime}\right)=e\)). When \(X\) is the mapping torus of \(\psi:H\to F\), we require that immersions preserve the orientation of horizontal edges.
Let \(X\) be the mapping torus of \(\psi:H\to F\). Let \(H_{i}=\overline{D_{i}-D_{i+1}}\) be the subgraph whose edges give rise to \(i\)-fans but not \((i+1)\)-fans. When \(H_{i}=\emptyset\), we have \(D_{i}=D_{i+1}=D_{\infty}\), the subgraph whose edges give rise to infinite fans. Then \(D_{\infty}\) is \(\psi\)-invariant. Let \(m=m(\psi)\) denote the supremum of lengths of maximal finite fans in \(X\). Note that when \(H\) is finite we have \(m<\infty\) since any maximal finite fan is determined by the edge it arises from.
Figure 1. Left: A ladder of length \(4\) emerging from \(e_{1}\). Right: A fan of length \(3\) emerging from \(e\).
**Lemma 4.9**.: _Let \(H\) be a subgraph of a finite connected graph \(F\). Let \(X\) be the mapping torus of a cellular immersion \(\psi:H\to F\) with \(\overrightarrow{\operatorname{Height}}(\psi)<\infty\). Let \(m=m\left(\psi\right)\) be the maximal length of immersed finite fans in \(X\). Let \(Y\to X\) be a combinatorial immersion, where \(Y\) is a nontrivial, compact, connected, and collapsed \(2\)-complex with no isolated edges. Let \(Y\to\Gamma_{Y}\) be the induced graph-of-spaces decomposition and let \(\partial Y\) be the associated boundary. Then there exists \(M{=}\ M\left(H,F,\psi\right)>0\) such that each \(2\)-cell \(S\) of \(Y\) lies in the image of an immersed ladder \(\lambda:L\to Y\) with \(\operatorname{Length}\left(L\right)\leq m+1\) emerging from \(e\) where \(\operatorname{Dist}\left(\lambda\left(e\right),\partial Y\right)\leq M\)._
Proof.: Let \(S\) be a \(2\)-cell of \(Y\). Since \(Y\) is collapsed, there is an immersed ladder \(\lambda:L\to Y\) with \(\operatorname{Length}\left(L\right)=m+1\) and whose \(\left(m+1\right)\)-th \(2\)-cell maps to \(S\). Let \(\left\{e_{1},\ldots,e_{m+1}\right\}\) be the connecting edges of \(L\). For \(1\leq i\leq m+1\), let \(O_{v_{i}}\) be the outgoing edge-space containing \(\lambda\left(e_{i}\right)\) and let \(Y_{v_{i}}\) be the vertex-space containing \(O_{v_{i}}\). Let \(\phi:Q\to Y\) be the maximal immersed fan emerging from \(\lambda\left(e_{1}\right)\). See Figure 2.
**Case 1**: If \(\operatorname{Length}\left(Q\right)=k\leq m\), then \(\lambda\left(e_{k+1}\right)\ \subset\ \phi\left(\operatorname{ Rim}\left(Q\right)\right)\cap O_{v_{k+1}}\). Since \(Q\) is maximal, \(\operatorname{image}\left(\operatorname{ Rim}\left(Q\right)\to Y_{v_{k+1}}\right)\ \not\subset\ O_{v_{k+1}}\), and so \(\phi\left(\operatorname{ Rim}\left(Q\right)\right)\ \cap\ \partial O_{v_{k+1}}\neq\emptyset\). Since fans in \(Y\) project to fans in \(X\), we have \(\left|\operatorname{ Rim}\left(Q\right)\right|_{1}\leq\left\|\psi\right\|^{m}\). Thus, \(\operatorname{Dist}\left(\lambda\left(e_{k+1}\right),\partial Y\right)\leq \left\|\psi\right\|^{m}\).
**Case 2**: If \(\operatorname{Length}\left(Q\right)>m\), then the image of \(Q\to Y\to X\) is an infinite fan of \(X\). Let \(T\subset O_{v_{1}}\) be the maximal connected subgraph containing \(\lambda\left(e_{1}\right)\) and whose edges give rise to \(\left(m+1\right)\)-fans in \(Y\). Hence \(T\) immerses in \(D_{\infty}\). Since \(\overrightarrow{\operatorname{Height}}(\psi)<\infty\), it follows from Lemma 4.6 that \(D_{\infty}\) is a forest. So \(T\) is a tree with \(\operatorname{Diam}\left(T\right)\leq\operatorname{Diam}\left(D_{\infty}\right)\). Let \(u\in T\) be a leaf. Since \(Y\) is collapsed, outgoing edge-spaces have no leaves. So there is an edge \(f\subset O_{v_{1}}\) containing \(u\) with \(f\not\subset T\). By maximality of \(T\), the maximal fan \(\phi^{\prime}\left(Q^{\prime}\right)\) emerging from \(f\) has length \(k\leq m\). So \(\phi^{\prime}\left(\operatorname{ Rim}\left(Q^{\prime}\right)\right)\ \cap\ \partial O_{v_{k+1}}\neq\emptyset\). Hence, \(\operatorname{Dist}\left(\lambda\left(e_{k+1}\right),\partial Y\right)\leq \operatorname{Diam}\left(D_{\infty}\right)+\left\|\psi\right\|^{m}\).
The claim follows with \(M=\operatorname{Diam}\left(D_{\infty}\right)+\left\|\psi\right\|^{m}\).
**Theorem 4.10**.: _Let \(F\) be a finite connected graph and let \(H\subset F\) be a subgraph. Let \(X\) be the mapping torus of a cellular immersion \(\psi:H\to F\). Then \(X\) has negative immersions if and only if \(\psi\) has finite directed height._
Proof.: The "only if" direction holds by Lemma 4.7.
Suppose \(\overrightarrow{\operatorname{Height}}(\psi)<\infty\). Let \(Y\to X\) be a combinatorial immersion where \(Y\) is a nontrivial compact, connected, and collapsed \(2\)-complex with no isolated edges.
Figure 2. On the left: Case 1; on the right: Case 2.
Let \(Y\to\Gamma_{Y}\) be the induced graph-of-spaces decomposition. For each \(v\in\Gamma_{Y}^{0}\), let \(Y_{v}\) be the corresponding vertex-space and let \(O_{v}\) be the disjoint union of outgoing edge-spaces in \(Y_{v}\). Let \(m=m\left(\psi\right)\) be the supremal length of maximal finite fans in \(X\). By Lemma 4.9, there exists \(M>0\) such that each \(2\)-cell of \(Y\) lies in a ladder of length \(\leq m+1\) emerging from a vertical edge \(e\) with \(\operatorname{Dist}\left(e,\partial Y\right)\leq M\). Let \(\partial^{\prime}Y\) be the set of boundary points of \(Y\) that are at a distance \(\leq M\) from such edges \(e\). So \(\partial^{\prime}Y\subseteq\partial Y=\bigsqcup_{v\in\Gamma_{Y}^{0}}\partial O _{v}\). Since \(Y\to X\) is a combinatorial immersion, there is an upper bound \(N\) on the number of edges in an \(M\)-ball in the vertex-spaces of \(Y\). Note that \(N=N\left(F,M\right)\) is a function of \(F\) and \(M\). Consider the \(M\)-balls centered at vertices of \(\partial^{\prime}Y\). In each such ball, there are at most \(N\) edges and each edge gives rise to at most \(\|\psi\|^{m}\) ladders of length \(\leq\ (m+1)\). The number of \(2\)-cells in each ladder is \(\leq\ (m+1)\). Then:
\[|Y|_{2}\ \leq\ \sum_{v\in\partial^{\prime}Y}(m+1)\|\psi\|^{m}N\ =\ (m+1)\|\psi\|^{m}N| \partial^{\prime}Y|_{0}\ \leq\ (m+1)\|\psi\|^{m}N|\partial Y|_{0}\]
and so
\[\frac{|Y|_{2}}{(m+1)\|\psi\|^{m}N}\ \leq\ |\partial Y|_{0}\]
By Remark 3.5.(7), the vertex-spaces of \(Y\) have no leaves. Then the conclusion holds by the following double inequality. Its first equality is straightforward. Its last inequality follows from above, and its middle inequality holds by Lemma 3.2.
\[\chi\left(Y\right)\ =\ \sum_{v\in\Gamma_{Y}^{0}}\left(\chi\left(Y_{v}\right)-\chi\left(O_{v}\right)\right)\ \leq\ \frac{-1}{2}|\partial Y|_{0}\ \leq\ \frac{-1}{2(m+1)\|\psi\|^{m}N}|Y|_{2}.\qed\]
**Remark 4.14**.: In the proof of Theorem 4.10, we assume that \(Y\) has no isolated edges, as required by the definition of Negative Immersions. However, the claim that \(\chi\left(Y\right)\leq-c|Y|_{2}\) holds even if we allow \(Y\) to have isolated edges. This follows from a simple induction on the number of isolated edges in \(Y\). Indeed, the base case holds by Theorem 4.10. Now, let \(e\) be an isolated edge of \(Y\). Then either \(e\) is not separating and \(Y=Y_{1}\cup e\), or \(e\) is separating and \(Y=Y_{1}\cup e\cup Y_{2}\). In the former case, we have
\[\chi\left(Y\right)<\chi\left(Y-e\right)=\chi\left(Y_{1}\right)\leq-c|Y_{1}|_{2 }=-c|Y|_{2}\]
where the last inequality holds by induction. In the latter case, we have
\[\chi\left(Y\right)=\chi\left(Y_{1}\right)+\chi\left(Y_{2}\right)-1<\chi\left(Y_{1}\right)+\chi\left(Y_{2}\right)\leq-c_{1}|Y_{1}|_{2}-c_{2}|Y_{2}|_{2}\ \leq\ -c|Y|_{2}\]
where the last inequality holds by induction, and \(c=\min\left\{c_{1},c_{2}\right\}\).
Motivated by our desire to verify Property 11 of the next section, we note the following consequence of the preceding statements. This does not prove Property 11 since it does not assert that the edge groups in the splitting of \(\mathcal{K}\) equal the intersections of \(\mathcal{K}\cap\mathcal{H}^{g}\), for \(g\in\pi_{1}X\).
**Theorem 4.15**.: _Let \(F\) be a finite connected graph and let \(H\subset F\) be a subgraph. Let \(X\) be the mapping torus of a cellular immersion \(\psi:H\to F\) with \(\overrightarrow{\operatorname{Height}}\left(\psi\right)<\infty\). Let \(\mathcal{K}\subset\pi_{1}X\) be a finitely generated subgroup. Then \(\mathcal{K}\) splits over edge groups with uniformly bounded Euler characteristic._
Proof.: By Theorem 4.10, \(X\) has negative immersions. By Theorem 2.1, \(\pi_{1}X\) is coherent. So there is a combinatorial immersion \(Y\to X\), with \(\pi_{1}Y\xrightarrow{\simeq}\mathcal{K}\) where \(Y\) is compact and connected. We can assume that \(Y\) is collapsed since collapsing is a homotopy equivalence. Let \(Y\to\Gamma_{Y}\) be the graph-of-spaces decomposition induced by the decomposition \(X\to\Gamma_{X}\). Let \(V_{Y}\) and \(O_{Y}\) be the disjoint union of vertex-spaces and outgoing edge-spaces of \(Y\), respectively. We show that \(\chi\left(O_{Y}\right)\) is uniformly bounded by a function of \(\operatorname{rank}\left(\pi_{1}Y\right)\). In fact, we show that \(\chi\left(O_{Y}\right)\ \geq\ \frac{\chi\left(Y\right)}{c}\). By Theorem 4.10, Remark 4.13 and Remark 4.14, there is a constant \(c\in\left(0,1\right)\) such that \(\chi\left(V_{Y}\right)-\chi\left(O_{Y}\right)\ \leq\ -c|Y|_{2}\). So \(\chi\left(O_{Y}\right)\ \geq\ \chi\left(V_{Y}\right)+c|Y|_{2}\). We have \(\chi\left(Y\right)\ =\ \chi\left(V_{Y}\right)-E+|Y|_{2}\), where \(E\) are the number of the horizontal edges in \(Y\). Since \(c-1<0\), we have
\[\chi\left(O_{Y}\right)\ \geq\ \chi\left(Y\right)+E-|Y|_{2}+c|Y|_{2}\ \geq\ \chi\left(Y\right)+(c-1)|Y|_{2}\ \geq\ \frac{\chi\left(Y\right)}{c}\]
where the last inequality follows by Theorem 4.10.
## 5. Generalization to \(\pi_{1}\)-injective Mappings
Theorem 4.10 generalizes to \(\pi_{1}\)-injective mappings. For this, we require the image of \(\psi\) to be a subgraph. Moreover, by slightly changing the definition of directed height, we are able to use the proof for the immersion case in the \(\pi_{1}\)-injective case. To this end, we use Theorem 5.1 to factorize \(\psi^{i}\) into a \(\pi_{1}\)-isomorphism followed by an immersion. Then generalized versions of Lemma 4.6 and Lemma 4.7 can be proved with respect to the immersive factor of \(\psi^{i}\), whereas Lemma 4.5 and Lemma 4.9 remain true in both cases. We note that fans behave differently in this
case. Indeed, in the immersion case, infinite directed height implies the existence of infinite fans. This is not true in the non-immersion case. See Figure 3. However, when the directed height is finite, fans have the expected behaviour, namely, infinite fans arise from tree components of \(H\) and maximal finite fans have uniformly bounded length \(m\).
**Theorem 5.1** (Stallings Factorization [10]).: _Let \(H\) and \(G\) be graphs. Every cellular map \(\psi:H\to G\) factors as \(\psi=\theta\ \circ\ \rho\) where \(\rho\) is a composition of edge collapses and foldings, and \(\theta\) is an immersion. Moreover, \(\rho\) is a homotopy equivalence if and only if \(\psi\) is \(\pi_{1}\)-injective._
Given a \(\pi_{1}\)-injective map \(\psi:H\to F\), where \(\psi\left(H\right)\) is a subgraph of \(F\), define \(\psi^{i}\) as in Section 4. Note that \(\psi^{i}\) is \(\pi_{1}\)-injective for all \(i\geq 0\). Then by Theorem 5.1, we have \(\psi^{i}=\theta_{i}\ \circ\ \rho_{i}\) where \(\rho_{i}\) is a \(\pi_{1}\)-isomorphism and \(\theta_{i}\) is an immersion:
Define the _directed height_ of \(\psi\) as: \(\overrightarrow{\operatorname{Height}}\left(\psi\right)=\inf\big{\{}i:\ \theta_{i}^{-1}\left(H\right) \text{ is a forest}\big{\}}\).
**Lemma 5.2**.: _Let \(H\) be a subgraph of \(F\). Let \(\psi:H\to F\) be a \(\pi_{1}\)-injective cellular map with \(\overrightarrow{\operatorname{Height}}\left(\psi\right)=\ell<\infty\). Then \(D_{\infty}\) is a (possibly empty) forest._
Proof.: We have \(D_{\infty}\subseteq D_{\ell+1}\). So, it suffices to show that \(D_{\ell+1}\) is a forest. Suppose \(C\subset D_{\ell+1}\) is an embedded circle. Then \(\psi^{\ell}\left(C\right)\subset H\). Since \(\rho_{\ell}\) is \(\pi_{1}\)-injective, \(\rho_{\ell}\left(C\right)\) is not a tree. But \(\rho_{\ell}\left(C\right)\subset\theta_{\ell}^{-1}\left(H\right)\), and so \(\theta_{\ell}^{-1}\left(H\right)\) is not a forest, which is a contradiction.
**Lemma 5.3**.: _Let \(F\) be a graph and let \(H\subset F\) be a finite subgraph. Let \(X\) be the mapping torus of a \(\pi_{1}\)-injective cellular map \(\ \psi:H\to F\) where \(\operatorname{image}\left(\psi\right)\) is a subgraph of \(F\). If \(\overrightarrow{\operatorname{Height}}\left(\psi\right)=\infty\), then there is a subcomplex \(Y\subset X\) with \(\chi\left(Y\right)=0\) and \(Y\) is connected, compact, collapsed, and has no isolated edges._
Proof.: Let \(A_{i}=\operatorname{image}\left(\psi^{i}\right)\cap H\) for \(i\geq 0\). Then by definition, \(A_{i+1}\subseteq A_{i}\) and \(\psi\left(A_{i}\right)=A_{i+1}\). Moreover, \(\operatorname{rank}\left(A_{i}\right)>0\). Indeed, since \(\theta_{i}\) is an immersion, \(\theta_{i}^{-1}\left(H\right)\) is a forest whenever \(A_{i}\) is a forest. By assumption, \(\overrightarrow{\operatorname{Height}}\left(\psi\right)=\infty\), and so \(\theta_{i}^{-1}\left(H\right)\) is not a forest for all \(i\geq 0\). Since \(H\) is a finite graph, there exist \(j<k\) such that \(A_{j}=A_{k}\). Then \(A_{j}=A_{k}\subseteq A_{k-1}\subseteq\cdots\subseteq A_{j+1}\subseteq A_{j}\) and so \(A_{j}=A_{j+1}\). Since \(\psi\) is \(\pi_{1}\)-injective and \(\operatorname{rank}\left(A_{j}\right)>0\), a component of the mapping torus of \(\psi\) restricted to the core of the non-tree components of \(A_{j}\) provides \(Y\).

Figure 3. \(\psi\) has infinite directed height, but the mapping torus of \(\psi\) has one fan of maximal length \(1\).
**Remark 5.4**.: The requirement that \(\psi\) be \(\pi_{1}\)-injective in Lemma 5.3 ensures that finite directed height implies negative immersions. Indeed, let \(X\) be the mapping torus of \(\psi:H\twoheadrightarrow H\) where \(H\) is a connected leafless graph of positive rank. Suppose \(\psi_{*}\left(\pi_{1}H\right)\) is trivial. Then \(\overrightarrow{\operatorname{Height}}\left(\psi\right)=1\) but \(X\) is collapsed with \(\chi\left(X\right)=0\), and so the negative immersions property fails. See Figure 4.
**Theorem 5.5**.: _Let \(F\) be a finite graph and let \(H\subset F\) be a subgraph. Let \(X\) be the mapping torus of a \(\pi_{1}\)-injective cellular map \(\psi:H\to F\). Then \(X\) has negative immersions if and only if \(\psi\) has finite directed height._
## 6. Discussion of Related Properties
Let \(H\) be a subgraph of a finite graph \(F\) and let \(\mathcal{H}=\pi_{1}H\) and \(\mathcal{F}=\pi_{1}F\). Let \(X\) be the mapping torus of a cellular immersion \(\psi:H\to F\). Consider the following properties:
1. \(\pi_{1}X\) is locally quasiconvex.
2. \(\mathcal{F}\) and \(\mathcal{H}\) are quasiconvex.
3. \(\mathcal{F}\) and \(\mathcal{H}\) have finite height.
4. \(\pi_{1}X\) has the finitely generated intersection property.
5. \(X\) has negative immersions.
6. \(\pi_{1}X\) contains no subgroup isomorphic to an ascending HNN extension of a finitely generated free group.
7. \(\pi_{1}X\) is hyperbolic.
8. \(\pi_{1}X\) contains no \(\operatorname{BS}\left(1,m\right)\) for \(m>0\).
9. \(\pi_{1}X\) has a quasiconvex hierarchy.
10. \(\pi_{1}X\) is virtually special.
11. \(\mathcal{K}\cap\mathcal{H}\) is finitely generated whenever \(\mathcal{K}\subset\pi_{1}X\) is finitely generated.
12. Each finitely generated subgroup of \(\pi_{1}X\) is tamely generated.
(1)\(\Rightarrow\) (2) is immediate. When \(\pi_{1}X\) is hyperbolic, we have (2) \(\iff\) (3), where (\(\Rightarrow\)) holds by [10] and (\(\Leftarrow\)) holds by [10]. A group has the _finitely generated intersection property_ (FGIP) if the intersection of any two finitely generated subgroups is also finitely generated. For instance, free groups have the FGIP [11]. (4)\(\Rightarrow\)(6) holds by [12] and (1) \(\Rightarrow\) (4) holds by [13]. (5)\(\Rightarrow\)(6) since ascending HNN extensions of free groups have Euler characteristic zero, and (6)\(\Rightarrow\)(3) by Lemma 4.5 and Lemma 4.7. (5) \(\Longleftrightarrow\) (3) holds by Theorem 4.10, and (5)\(\Rightarrow\)(8) is a special case of (5)\(\Rightarrow\)(6). It is well known that (7)\(\Rightarrow\)(8), e.g., [1]. (7) + (9)\(\Rightarrow\)(10) by [20]. (11)\(\Rightarrow\)(12) since if \(\mathcal{K}\cap\mathcal{H}\) is finitely generated for each finitely generated \(\mathcal{K}\), then \(\mathcal{K}^{g}\cap\mathcal{H}\) is finitely generated for each \(g\), and so \(\mathcal{K}\cap\mathcal{H}^{g}\) is finitely generated for each \(g\). See [13] for the definition of "tamely generated". (12)\(\Rightarrow\) (1) holds by [13]. (5) \(\Rightarrow\) (9) holds by the following argument: (5) \(\Rightarrow\) (6) and by [10], we have (6)\(\Rightarrow\pi_{1}X\subset\pi_{1}X^{\prime}\) where \(X^{\prime}\) is the mapping torus of a fully irreducible nonsurjective map of a graph and \(X^{\prime}\) is hyperbolic relative to \(X\). By [12], this implies \(\pi_{1}X^{\prime}\) is hyperbolic since it contains no \(\operatorname{BS}\left(1,m\right)\) and so (7) holds. Since (5)\(\Rightarrow\)(2), we have (2)\(+\)(7) \(\Rightarrow\) (9) since \(\pi_{1}X\) splits along \(\mathcal{H}\) and the vertex-group is free.

Figure 4. The mapping torus of \(\psi\) has zero Euler characteristic and \(\overrightarrow{\operatorname{Height}}\left(\psi\right)=1\).
We end this section by stating the following conjectures:
**Conjecture 6.1**.: (5) \(\Rightarrow\) (1) _and hence (5) \(\Rightarrow\) (11)._
**Conjecture 6.2**.: _If \(X\) is a compact \(2\)-complex with negative immersions, then \(\pi_{1}X\) has a finite index subgroup that is isomorphic to the fundamental group of a mapping torus of an immersion \(\psi:H\to F\) of finite directed height._
**Conjecture 6.3**.: _If \(\mathcal{G}\) is a locally quasiconvex hyperbolic group, then \(\mathcal{G}\) has a finite index subgroup that is isomorphic to the fundamental group of a mapping torus of an immersion \(\psi:H\to F\) of finite directed height._
The motivation for Conjecture 6.1 is the fact that the negative immersions property is a strengthening of _negative sectional curvature_ which was shown to imply local quasiconvexity in [20]. Conjecture 6.2 is motivated by the fact that negative immersions implies a quasiconvex hierarchy which gives virtual specialness. The hope is to show that \(X\) is virtually \(F_{\infty}\)-by-cyclic. Using a theorem of Feighn-Handel [11], we can then show that \(\pi_{1}X\) is isomorphic to a partial mapping torus of a graph immersion. Local quasiconvexity and our theorem show that it must have finite directed height. Conjecture 6.3 is motivated by the lack of counterexamples. That is, there are currently no examples of locally quasiconvex hyperbolic groups that do not have negative immersions. It is exciting to believe that there is a characterization using the partial HNN framework.
|
2307.16708 | Deep Learning Meets Adaptive Filtering: A Stein's Unbiased Risk
Estimator Approach | This paper revisits two prominent adaptive filtering algorithms, namely
recursive least squares (RLS) and equivariant adaptive source separation
(EASI), through the lens of algorithm unrolling. Building upon the unrolling
methodology, we introduce novel task-based deep learning frameworks, denoted as
Deep RLS and Deep EASI. These architectures transform the iterations of the
original algorithms into layers of a deep neural network, enabling efficient
source signal estimation by leveraging a training process. To further enhance
performance, we propose training these deep unrolled networks utilizing a
surrogate loss function grounded on Stein's unbiased risk estimator (SURE). Our
empirical evaluations demonstrate that the Deep RLS and Deep EASI networks
outperform their underlying algorithms. Moreover, the efficacy of SURE-based
training in comparison to conventional mean squared error loss is highlighted
by numerical experiments. The unleashed potential of SURE-based training in
this paper sets a benchmark for future employment of SURE either for training
purposes or as an evaluation metric for generalization performance of neural
networks. | Zahra Esmaeilbeig, Mojtaba Soltanalian | 2023-07-31T14:26:41Z | http://arxiv.org/abs/2307.16708v4 | # Deep Learning Meets Adaptive Filtering: A Stein's Unbiased Risk Estimator Approach
###### Abstract
This paper revisits two prominent adaptive filtering algorithms, namely recursive least squares (RLS) and equivariant adaptive source separation (EASI), through the lens of algorithm unrolling. Building upon the unrolling methodology, we introduce novel task-based deep learning frameworks, denoted as _Deep RLS_ and _Deep EASI_. These architectures transform the iterations of the original algorithms into layers of a deep neural network, enabling efficient source signal estimation by leveraging a training process. To further enhance performance, we propose training these deep unrolled networks utilizing a surrogate loss function grounded on Stein's unbiased risk estimator (SURE). Our empirical evaluations demonstrate that the _Deep RLS_ and _Deep EASI_ networks outperform their underlying algorithms. Moreover, the efficacy of SURE-based training in comparison to conventional mean squared error loss is highlighted by numerical experiments. The unleashed potential of SURE-based training in this paper sets a benchmark for future employment of SURE either for training purposes or as an evaluation metric for generalization performance of neural networks.
Adaptive filtering, Stein's unbiased risk estimator, deep unfolding, principal component analysis, blind source separation.
## I Introduction
_Deep unfolding_, or _unrolling_[1], enables constructing interpretable deep neural networks (DNN) that require less training data and considerably less computational resources than generic DNNs. Specifically, in deep unfolding, each layer of the DNN is designed to resemble one iteration of the original algorithm of interest. Passing the signals through such a deep network is in essence similar to executing the iterative algorithm a finite number of times, determined by the number of layers. The model parameters will be reflected in weights of the constructed DNN. The data-driven nature of the emerging deep network thus enables improvements over the original algorithm. Note that the constructed network may be trained using back-propagation, resulting in model parameters that are learned from the real-world training datasets. In this way, the trained network can be naturally interpreted as a parameter optimized algorithm, effectively overcoming the lack of interpretability in most conventional neural networks [2]. In comparison with a generic DNN, the unfolded network has many fewer parameters and therefore requires a more modest size of training data and computational resources.
The deep unrolling technique has been effectively applied to various signal processing problems, yielding significant improvements in the convergence rates of state-of-the-art iterative algorithms; see [1, 2] for a detailed explanation of deep unrolling, as well as [3, 4, 5] for examples of deploying deep unrolling in different application areas.
Our goal in this paper is to develop a set of algorithms able to learn the nonlinearity and step sizes of two classical adaptive filtering algorithms, namely, recursive least squares (RLS) and equivariant adaptive source separation (EASI). We leverage Stein's unbiased risk estimator (SURE) in training, which serves as a surrogate for mean squared error (MSE), even when the ground truth is unknown [6]. Studies such as [7, 8] have reported improved image denoising results when networks were trained with SURE loss, outperforming traditional MSE loss training. Similarly, SURE has been effectively used to train deep convolutional neural networks without requiring denoised ground truth, as highlighted in [7, 9]. The SURE based training and the recurrent training procedure of our proposed methodology makes it a great candidate for unsupervised real-time signal processing.
The rest of the paper is organized in the following manner. In section II, we introduce the problem formulation for adaptive filtering-based signal estimation. In Section III, we propose the two deep unrolling frameworks _Deep EASI_ and _Deep RLS_ for adaptive filtering, alongside the SURE-based surrogate loss function employed for their training. Section IV details the numerical experiments used to evaluate our proposed methods, and Section V presents our concluding remarks.
_Notation:_ Throughout this paper, we use bold lowercase and bold uppercase letters for vectors and matrices, respectively. \(\mathbb{R}\) represents the set of real numbers. \((\cdot)^{\top}\) denotes the vector/matrix transpose. The identity matrix is denoted by \(\mathbf{I}\) and the trace of a matrix is denoted by \(\mathrm{Tr}(.)\).
## II Problem formulation
We begin by the long-standing linear inference problem formulation in which \(m\) statistically independent signals are linearly mixed to yield \(l\) possibly noisy combinations,
\[\mathbf{x}(t)=\mathbf{A}\mathbf{s}(t)+\mathbf{n}(t). \tag{1}\]
Let \(\mathbf{x}(t)=[\mathbf{x}_{1}(t),\ldots,\mathbf{x}_{l}(t)]^{\top}\) denote the \(l-\)dimensional data vector made up of the mixture at time \(t\) that is exposed to an additive measurement noise \(\mathbf{n}(t)\). Given no knowledge of the mixing matrix \(\mathbf{A}\in\mathbb{R}^{l\times m}\), the goal is to recover the original source signal vector \(\mathbf{s}(t)=[\mathbf{s}_{1}(t),\ldots,\mathbf{s}_{m}(t)]^{\top}\) from the mixture. This problem is referred to as blind source separation (BSS) in the literature. A seminal work in this context
is [10] which suggests tuning and updating a separating matrix \(\mathbf{W}\in\mathds{R}^{l\times m}\) until the output
\[\mathbf{y}(t)=\mathbf{W}^{\top}\mathbf{x}(t), \tag{2}\]
where \(\mathbf{y}(t)=[\mathbf{y}_{1}(t),\ldots,\mathbf{y}_{m}(t)]^{\top}\), is as close as possible to the source signal vector of interest \(\mathbf{s}(t)\).
### _Nonlinear Recursive Least Squares for Blind Source Separation_
Assuming there exists a larger number of sensors than the source signals, i.e. \(l\geq m\), we can draw an analogy between the BSS problem and the task of principal component analysis (PCA). In a sense, we are aiming to represent the random vector \(\mathbf{x}(t)\) in a lower dimensional orthonormal subspace, represented by the columns of \(\mathbf{W}\), as the orthonormal basis vectors. By this analogy, both BSS and PCA problems can be reduced to minimizing the objective function
\[\mathcal{L}(\mathbf{W})=\mathbb{E}\left\{\|\mathbf{x}(t)-\mathbf{W}(\mathbf{ W}^{\top}\mathbf{x}(t))\|_{2}^{2}\right\}. \tag{3}\]
Assuming that \(\mathbf{x}(t)\) is a zero-mean vector, it can be shown that the solution to the above optimization problem is a matrix \(\mathbf{W}\) whose columns are the \(m\) dominant eigenvectors of the data covariance matrix \(\mathbf{C}_{\mathbf{x}}(t)=\mathbb{E}\left\{\mathbf{x}(t)\mathbf{x}(t)^{\top}\right\}\)[11]. Therefore, the principal components, which are the recovered source signals, are mutually uncorrelated. As discussed in [10], having uncorrelated data is not a sufficient condition to achieve separation. In other words, the solutions to PCA and BSS do not coincide unless we address the higher order statistics of the output signal \(\mathbf{y}(t)\). By introducing nonlinearity into (3), we will implicitly target higher order statistics of the signal [11]. This _nonlinear PCA_, which is an extension of the conventional PCA, is made possible by considering the signal recovery objective:
\[\mathcal{L}(\mathbf{W})=\mathbb{E}\left\{\|\mathbf{x}(t)-\mathbf{W}\mathbf{g}(\mathbf{W}^{\top}\mathbf{x}(t))\|_{2}^{2}\right\}, \tag{4}\]
where \(\mathbf{g}(\cdot)\) denotes an odd non-linear function applied element-wise on the vector argument. The proof of the connection between (4) and higher order statistics of the source signals \(\mathbf{s}(t)\) is discussed in [12]. While PCA is a fairly standardized technique, nonlinear or robust PCA formulations based on (4) tend to be multi-modal with several local optima--so they can be run from various initial points and possibly lead to different "solutions" [13]. In [14], a recursive least squares algorithm for subspace estimation is proposed, which is further extended to the nonlinear PCA in [15] for solving the BSS problem. The algorithm in [15] is useful as a baseline for developing our deep unfolded framework for nonlinear PCA.
We consider a real-time and adaptive scenario in which, upon arrival of new data \(\mathbf{x}(t)\), the subspace of signal at time instant \(t\) is recursively updated from the subspace at time \(t-1\) and the new sample \(\mathbf{x}(t)\)[14]. The separating matrix \(\mathbf{W}\) introduced in (4) is therefore replaced by \(\mathbf{W}(t)\) and updated at each time instant \(t\). The adaptive algorithm chosen for this task is the well-known recursive least squares (RLS) [16]. In the linear case, by replacing the expectation in (3) with a weighted sum, we can attenuate the impact of the older samples, which is reasonable for a time-varying environment. In this way, one can make sure the distant past will be forgotten and the resulting algorithm for minimizing (3) can effectively track the statistical variations of the observed data. By replacing \(\mathbf{y}(t)=\mathbf{W}(t)^{\top}\mathbf{x}(t)\) and using an exponential weighting (governed by a _forgetting factor_), the loss function in (3) boils down to:
\[\mathcal{L}(\mathbf{W})=\sum_{i=1}^{t}\beta^{t-i}\|\mathbf{x}(i)-\mathbf{W}(t )\mathbf{y}(i)\|^{2}, \tag{5}\]
with the forgetting factor \(\beta\) satisfying \(0\ll\beta\leq 1\). Note that \(\beta=1\) yields the ordinary method of least squares in which all samples are weighted equally, while choosing a relatively small \(\beta\) makes our estimation rather instantaneous, thus neglecting the past. Therefore, \(\beta\) is usually chosen to be less than one, but also rather close to one for smooth tracking and filtering.
We can write the gradient of the loss function in (5) in its compact form as
\[\nabla_{\mathbf{W}(t)}\mathcal{L}(\mathbf{W})=-2\mathbf{C}_{\mathbf{x} \mathbf{y}}(t)+2\mathbf{C}_{\mathbf{y}}(t)\mathbf{W}(t), \tag{6}\]
where \(\mathbf{C}_{\mathbf{y}}(t)\) and \(\mathbf{C}_{\mathbf{x}\mathbf{y}}(t)\) are the auto-correlation matrix of \(\mathbf{y}(t)\),
\[\mathbf{C}_{\mathbf{y}}(t)=\sum_{i=1}^{t}\beta^{t-i}\mathbf{y}(i)\mathbf{y}(i )^{\top}=\beta\mathbf{C}_{\mathbf{y}}(t-1)+\mathbf{y}(t)\mathbf{y}(t)^{\top}, \tag{7}\]
and the cross-correlation matrix of \(\mathbf{x}(t)\) and \(\mathbf{y}(t)\),
\[\mathbf{C}_{\mathbf{x}\mathbf{y}}(t)=\sum_{i=1}^{t}\beta^{t-i}\mathbf{x}(i) \mathbf{y}(i)^{\top}=\beta\mathbf{C}_{\mathbf{x}\mathbf{y}}(t-1)+\mathbf{x}(t )\mathbf{y}(t)^{\top}, \tag{8}\]
at the time instance \(t\), respectively. Setting the gradient (6) to zero will result in the close-form separating matrix
\[\mathbf{W}(t)=\mathbf{C}_{\mathbf{y}}{}^{-1}(t)\mathbf{C}_{\mathbf{x} \mathbf{y}}(t). \tag{9}\]
A recursive computation of \(\mathbf{W}(t)\) can be achieved using the RLS algorithm [17]. In RLS, the matrix inversion lemma enables a recursive computation of \(\mathbf{G}(t)=\mathbf{C}_{\mathbf{y}}{}^{-1}(t)\); see the derivations in the Appendix. At each iteration of the RLS algorithm, \(\mathbf{G}(t)\) is computed recursively as
\[\mathbf{G}(t)=\beta^{-1}\mathbf{G}(t-1)-\frac{\beta^{-2}\mathbf{G}(t-1) \mathbf{y}(t)\mathbf{y}(t)^{\top}\mathbf{G}(t-1)}{1+\beta^{-1}\mathbf{y}(t)^ {\top}\mathbf{G}(t-1)\mathbf{y}(t)}. \tag{10}\]
Consequently, the RLS algorithm provides the estimate \(\mathbf{y}(t)\) of the source signals. The steps of the RLS algorithm are summarized in Algorithm 1. It appears that extending the application of RLS to the nonlinear PCA loss function in (4) is rather straightforward. To accomplish this task, solely step 3 of Algorithm 1 should be modified to \(\mathbf{y}(t)=g(\mathbf{W}^{\top}(t-1)\mathbf{x}(t))\) in order to meet the nonlinear PCA criterion [14]. In order for the RLS algorithm to optimize the nonlinear PCA loss function and converge to a separating matrix the choice of nonlinearity \(\mathbf{g}(.)\) matters. We refer to the analytical study presented in [13] in which some conditions beyond the oddity
and differentiability of the function g(.) must be satisfied. For instance, \(g(s)=s^{3}\) leads to an asymptotically stable separation only if the source signals are positively kurtotic or super-Gaussian. Whereas if we choose a sigmoidal nonlinearity \(g(s)=tanh(\beta s)\) with \(\beta>0\), then a sub-Gaussian density is required for the source signals to be separated using RLS algorithm.
In Section III-A, we _unroll_ the iterations of Algorithm 1, modified for nonlinear PCA, onto the layers of a deep neural network, where each layer resembles one iteration of the RLS algorithm.
```
1:Initialize \(\mathbf{W}(0)\) and \(\mathbf{G}(0)\)
2:for\(t=0,1,\ldots,T\)do
3:\(\mathbf{y}(t)=\mathbf{W}^{\top}(t-1)\mathbf{x}(t)\)
4:\(\mathbf{h}(t)=\mathbf{G}(t-1)\mathbf{y}(t)\)
5:\(\mathbf{f}(t)=\frac{\mathbf{h}(t)}{\beta+\mathbf{y}(t)^{\top}\mathbf{h}(t)}\)
6:\(\mathbf{G}(t)=\beta^{-1}[\mathbf{G}(t-1)-\mathbf{f}(t)\mathbf{h}(t)^{\top}]\)
7:\(\mathbf{e}(t)=\mathbf{x}(t)-\mathbf{W}(t-1)\mathbf{y}(t)\)
8:\(\mathbf{W}(t)=\mathbf{W}(t-1)+\mathbf{e}(t)\mathbf{f}(t)^{\top}\)
```
**Algorithm 1** RLS Algorithm for Performing PCA
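To make the recursion concrete, a minimal NumPy sketch of one pass of Algorithm 1, with step 3 replaced by its nonlinear variant, is given below. The initialization of \(\mathbf{W}(0)\) and \(\mathbf{G}(0)\) and the default choice \(g=\tanh\) are illustrative assumptions rather than prescriptions of the algorithm.

```python
import numpy as np

def rls_nonlinear_pca(X, m, beta=0.99, g=np.tanh):
    """One pass of Algorithm 1 with the nonlinear step 3, y(t) = g(W(t-1)^T x(t)).

    X    : (l, T) array whose columns are the observed mixtures x(1), ..., x(T)
    m    : number of sources to extract
    beta : forgetting factor, 0 << beta <= 1
    g    : elementwise odd nonlinearity (the identity recovers linear PCA)
    """
    l, T = X.shape
    W = 0.1 * np.random.randn(l, m)     # W(0), arbitrary small initialization
    G = np.eye(m)                       # G(0), an estimate of C_y^{-1}(0)
    Y = np.zeros((m, T))
    for t in range(T):
        x = X[:, t]
        y = g(W.T @ x)                       # step 3 (nonlinear variant)
        h = G @ y                            # step 4
        f = h / (beta + y @ h)               # step 5
        G = (G - np.outer(f, h)) / beta      # step 6
        e = x - W @ y                        # step 7
        W = W + np.outer(e, f)               # step 8
        Y[:, t] = y
    return W, Y
```

The columns of `Y` collect the source estimates \(\mathbf{y}(t)\); passing the identity as `g` recovers the linear subspace recursion of Algorithm 1.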
### _Equivariant Adaptive Source Separation (EASI)_
In [10], the EASI algorithm is developed by recurrent updates of the separating matrix as
\[\mathbf{W}(t+1)=\mathbf{W}(t)-\lambda_{t}\mathbf{H}(\mathbf{y}(t))\mathbf{W}(t). \tag{11}\]
where \(\lambda_{t}\) is a sequence of positive step sizes and \(\mathbf{H}(.)\) is a matrix-valued function used to update the separating matrix. In [10], this function is calculated as the relative gradient with respect to the objective function for blind source separation. The cross-cumulants of the source signals in \(\mathbf{y}(t)\) are proposed as the objective function to be minimized as a measure of independence. \(\mathbf{H}(.)\) is derived in [10] as
\[\mathbf{H}(\mathbf{y}(t))=\mathbf{y}(t)\mathbf{y}(t)^{\top}-\mathbf{I}+g( \mathbf{y}(t))\mathbf{y}(t)^{\top}-\mathbf{y}(t)g(\mathbf{y}(t))^{\top} \tag{12}\]
where \(l\) arbitrary nonlinear functions, \(g_{1}(.),g_{2}(.),\ldots,g_{l}(.)\), are used to define
\[g(\mathbf{y}(t))=[g_{1}(\mathbf{y}_{1}(t)),\ldots,g_{l}(\mathbf{y}_{l}(t))]^{ \top}. \tag{13}\]
The choice of this nonlinear function is crucial to the performance of the algorithm and depends on the distribution of the sources. For instance, for sources with identical distributions, \(g_{i}(.)=g(.)\) will be sufficient to perform separation. In [10], it is illustrated that a cubic nonlinearity \(g(s)=s^{3}\) leads to stability of separation in the EASI algorithm only under the constraint that the sum of the kurtoses of any two source signals \(s_{i}\) and \(s_{j}\), \(1\leq i,j\leq m\), is negative. \(g(s)=\tanh(s)\) is reported in [15] to work satisfactorily for two sub-Gaussian sources using \(\lambda_{t}>0\). The nonlinear PCA algorithm requires that the original source signals have a kurtosis with the same sign. Although this condition is somewhat relaxed in the EASI algorithm, in that only the sum of the kurtoses for any pair of sources must be negative, some knowledge of the probability distribution of the source signals is still required to choose the nonlinearity. In the following sections, we propose to learn the nonlinearity in (13) along with the step size in (11) using deep unfolding networks.
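For reference, a minimal NumPy sketch of one EASI update, eqs. (11)-(13), is shown below. To keep the update dimensionally consistent, the sketch stores the separating matrix as \(\mathbf{B}=\mathbf{W}^{\top}\in\mathbb{R}^{m\times l}\), so that \(\mathbf{y}(t)=\mathbf{B}\mathbf{x}(t)\); the fixed step size and the \(\tanh\) nonlinearity are illustrative choices.

```python
import numpy as np

def easi_step(B, x, lam=0.01, g=np.tanh):
    """One EASI update, eqs. (11)-(13), written on B = W^T of shape (m, l)."""
    y = B @ x                                    # current source estimate
    gy = g(y)                                    # elementwise nonlinearity, eq. (13)
    H = (np.outer(y, y) - np.eye(y.size)
         + np.outer(gy, y) - np.outer(y, gy))    # relative-gradient matrix, eq. (12)
    return B - lam * H @ B                       # serial update, eq. (11)
```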
## III The Proposed Framework
The estimation performance of the algorithms discussed above depends on a number of factors, such as the conditioning of the mixing matrix \(\mathbf{A}\), the source signals, the step size parameter(s), and the nonlinearity \(g(.)\). We propose to overparameterize the algorithms so that we can determine the optimal step size and proper nonlinearities, as opposed to using fixed parameters. In this section, we present the proposed deep architectures _Deep RLS_ in III-A and _Deep EASI_ in III-B. The training procedure for these two architectures and the SURE-based loss function will be introduced in III-C and III-D, respectively.
### _Deep RLS_
As shown in [15], when applied to a linear mixture of source signals (i.e., the BSS problem), the RLS algorithm usually approximates the true source signals well and successfully separates them. However, the number of iterations needed to converge may vary greatly depending on the initial values \(\mathbf{W}(0),\mathbf{G}(0)\) and the forgetting parameter \(\beta\). We introduce _Deep RLS_, our deep unrolling-based framework, which is designed based on the modified iterations of Algorithm 1. More precisely, the dynamics of the \(k\)-th layer of _Deep RLS_ are given as:
\[\mathbf{y}(k) =\mathbf{g}_{\nu_{k}}(\mathbf{W}^{\top}(k-1)\mathbf{x}(k)), \tag{14}\] \[\mathbf{h}(k) =\mathbf{G}(k-1)\mathbf{y}(k),\] (15) \[\mathbf{f}(k) =\frac{\mathbf{h}(k)}{\omega_{k}+\mathbf{y}(k)^{\top}\mathbf{h}( k)},\] (16) \[\mathbf{G}(k) =\omega_{k}^{-1}[\mathbf{G}(k-1)-\mathbf{f}(k)\mathbf{h}(k)^{ \top}],\] (17) \[\mathbf{e}(k) =\mathbf{x}(k)-\mathbf{W}(k-1)\mathbf{y}(k),\] (18) \[\mathbf{W}(k) =\mathbf{W}(k-1)+\mathbf{e}(k)\mathbf{f}(k)^{\top}, \tag{19}\]
where \(\mathbf{x}(k)\) is the data vector at time instance \(k\). The nonlinearity \(\mathbf{g}(\cdot)\) in the original RLS algorithm, which was chosen according to the distribution of the source signals, is overparameterized to \(\mathbf{g}_{\nu_{k}}(\cdot)\). Considering that neural networks with at least one hidden layer are universal approximators and they can be trained to approximate any mapping, we use a set of fully connected layers as \(\mathbf{g}_{\nu_{k}}(\cdot)\). Weights and biases of these layers are represented by the learnable parameter \(\nu_{k}\) and \(\omega_{k}\in\mathds{R}\) represents the trainable forgetting parameter.
Given \(T\) samples of the data vector \(\mathbf{x}(t)\), our goal is to optimize the parameters \(\mathbf{\Gamma}\) of the network, where
\[\mathbf{\Gamma}=\{\nu_{k},\omega_{k}\}_{k=1}^{T}. \tag{20}\]
The output of the \(k\)-th layer, i.e. \(\mathbf{y}(k)\), in (14) is an approximation of the source signals at the time instance \(k\).
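A minimal PyTorch sketch of one such layer is given below; the two-layer fully connected realization of \(\mathbf{g}_{\nu_{k}}(\cdot)\), its hidden width, and the initial value of \(\omega_{k}\) are illustrative assumptions, while the recursion itself follows (14)-(19). The tensors are assumed to have shapes \(\mathbf{x}\in\mathbb{R}^{l}\), \(\mathbf{W}\in\mathbb{R}^{l\times m}\), and \(\mathbf{G}\in\mathbb{R}^{m\times m}\).

```python
import torch
import torch.nn as nn

class DeepRLSLayer(nn.Module):
    """One unrolled layer implementing the recursion (14)-(19)."""

    def __init__(self, m, hidden=16):
        super().__init__()
        # g_nu: a small fully connected network acting on R^m (a design choice)
        self.g_nu = nn.Sequential(nn.Linear(m, hidden), nn.Tanh(),
                                  nn.Linear(hidden, m))
        # omega: trainable per-layer forgetting parameter
        self.omega = nn.Parameter(torch.tensor(0.99))

    def forward(self, x, W, G):
        y = self.g_nu(W.T @ x)                      # (14)
        h = G @ y                                   # (15)
        f = h / (self.omega + y @ h)                # (16)
        G = (G - torch.outer(f, h)) / self.omega    # (17)
        e = x - W @ y                               # (18)
        W = W + torch.outer(e, f)                   # (19)
        return y, W, G
```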
### _Deep EASI_
We consider the EASI iterations defined in (11) as our baseline to construct the unrolled network. We overparameterize the iterations by introducing a learnable step size \(\lambda_{t}\) and \(\mathbf{H}_{\phi_{t}}\) for each layer \(t\) as
\[\mathbf{W}(t+1)=\mathbf{W}(t)-\lambda_{t}\mathbf{H}_{\phi_{t}}(\mathbf{y}(t)) \mathbf{W}(t) \tag{21}\]
and
\[\mathbf{H}_{\phi_{t}}(\mathbf{y}(t))=\mathbf{y}(t)\mathbf{y}(t)^{\top}- \mathbf{I}+g_{\phi_{t}}(\mathbf{y}(t))\mathbf{y}(t)^{\top}-\mathbf{y}(t)g_{\phi _{t}}(\mathbf{y}(t))^{\top}, \tag{22}\]
where \(g_{\phi_{t}}(.)\) is realized using a few fully connected layers, with weights \(\phi_{t}\), deployed to approximate the best nonlinearity for separation. The trainable parameters of the network will be
\[\boldsymbol{\theta}=\{\phi_{t},\lambda_{t}\}_{t=1}^{T} \tag{23}\]
The output of the \(t\)-th layer, i.e., \(\mathbf{y}(t)\), is
\[\mathbf{y}(t)=\mathbf{W}(t)^{\top}\mathbf{x}(t). \tag{24}\]
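A corresponding PyTorch sketch of one _Deep EASI_ layer is given below. As in the earlier EASI sketch, the separating matrix is stored as \(\mathbf{B}=\mathbf{W}^{\top}\in\mathbb{R}^{m\times l}\) so that the relative-gradient update stays dimensionally consistent; the fully connected realization of \(g_{\phi_{t}}(\cdot)\) and the initial step size are illustrative assumptions.

```python
import torch
import torch.nn as nn

class DeepEASILayer(nn.Module):
    """One unrolled layer implementing (21)-(24), written on B = W^T of shape (m, l)."""

    def __init__(self, m, hidden=16):
        super().__init__()
        # g_phi: learnable elementwise-style nonlinearity realized by an MLP
        self.g_phi = nn.Sequential(nn.Linear(m, hidden), nn.Tanh(),
                                   nn.Linear(hidden, m))
        self.lam = nn.Parameter(torch.tensor(0.01))     # learnable step size

    def forward(self, x, B):
        y = B @ x                                       # (24)
        gy = self.g_phi(y)
        H = (torch.outer(y, y) - torch.eye(y.numel())
             + torch.outer(gy, y) - torch.outer(y, gy)) # (22)
        B = B - self.lam * H @ B                        # (21)
        return B @ x, B
```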
### _Training Procedure_
An earlier version of this work was proposed in [19] and was successful in recovering the source signals. However, it could only use a small number of inputs \(\mathbf{x}(t)\), because deeper networks with a huge number of parameters were not feasible to train. Parameter sharing is a technique in deep learning which regularizes the network to avoid this problem. Parameter sharing makes it possible to extend and apply the model to signals of different lengths and generalize across them. In the _Deep RLS_ architecture proposed in [19], we designed a multi-layer feedforward neural network in which we had separate learnable parameters for each time index, and we could not generalize to sequence lengths not seen during training. Recurrent neural networks (RNN) were introduced to overcome this limitation. Inspired by the architecture of these neural networks and _back-propagation through time (BPTT)_ as their training process, we propose the following loss function for training the proposed unrolled algorithms. An RNN maps an input sequence to an output sequence of the same length. The total loss for a given sequence of \(\mathbf{x}(t)\) values paired with a sequence of \(\mathbf{y}(t)\) values is the sum of the losses over all the time steps. For example, if \(L(t)\) is the mean squared error (MSE) of reconstructing \(\mathbf{s}(t)\) given \(\mathbf{y}(t)\), then
\[\mathcal{L}(\mathbf{s},\mathbf{y})=\sum_{t=1}^{T}L(t)=\sum_{t=1}^{T}\|\mathbf{ s}(t)-\mathbf{y}(t)\|_{2}^{2}, \tag{25}\]
where \(\mathbf{y}=\left[\mathbf{y}(1),\ldots,\mathbf{y}(T)\right]^{\top}\) and \(\mathbf{s}=\left[\mathbf{s}(1),\ldots,\mathbf{s}(T)\right]^{\top}\). In order to apply BPTT, the gradient of the loss function \(L(t)\) with respect to the trainable parameters is required. This is challenging to do by hand, but made easy by the auto-differentiation capabilities of PyTorch [20], which are used throughout our experiments in Section IV.
While training _Deep RLS_, one needs to consider the constraint that the forgetting parameter must satisfy \(0<\beta\leq 1\). Hence, in order to impose such a constraint, one can regularize the loss function ensuring that the network chooses proper weights \(\{\omega_{k}\}_{k=1}^{T}\) corresponding to a feasible forgetting parameter at each layer. Accordingly, we define the loss function used for training the proposed architecture as
\[\mathcal{L}(\mathbf{s},\mathbf{y},\Gamma) =\sum_{t=1}^{T}\|\mathbf{s}(t)-\mathbf{y}(t)\|_{2}^{2}+ \tag{26}\] \[\underbrace{\lambda\sum_{t=1}^{T}\mathrm{ReLU}(-\omega_{t})+ \lambda\sum_{t=1}^{T}\mathrm{ReLU}(\omega_{t}-1)}_{\text{regularization term for the forgetting parameter}},\]
where \(\mathrm{ReLU}(\cdot)\) is the well-known Rectified Linear Unit function extensively used in the deep learning literature. For training the _Deep RLS_ (or _Deep EASI_) network we employ the training process in Algorithm 2.
```
1:Initialize \(\mathbf{W}(0)\) and \(\mathbf{G}(0)\)
2:for\(\text{epoch}=1,\ldots,N\)do
3:for\(t=1,\ldots,T\)do
4: Feed \(\mathbf{x}(t)\) to the network
5: Apply the recursion in (14)-(19) (or (21))
6: Compute the loss function in (26) (or (25))
7: Use BPTT to update \(\mathbf{\Gamma}\) (or \(\boldsymbol{\theta}\))
```
**Algorithm 2** Training Procedure for _Deep RLS_ (or _Deep EASI_).
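A compact PyTorch sketch of this procedure for _Deep RLS_, using the regularized loss (26) and backpropagation through time, is given below. The optimizer, learning rate, initialization, and tensor shapes (\(\mathbf{X}\in\mathbb{R}^{l\times T}\) for the mixtures and \(\mathbf{S}\in\mathbb{R}^{m\times T}\) for the sources) are illustrative assumptions, and `DeepRLSLayer` refers to the layer sketch given in Section III-A.

```python
import torch

def train_deep_rls(layers, X, S, epochs=50, lmbda=1.0, lr=1e-3):
    """Sketch of Algorithm 2 for Deep RLS with the regularized loss (26).

    layers : list of T DeepRLSLayer modules (one unrolled layer per time step)
    X      : (l, T) tensor of observed mixtures
    S      : (m, T) tensor of ground-truth sources used for training
    """
    params = [p for layer in layers for p in layer.parameters()]
    opt = torch.optim.Adam(params, lr=lr)
    l, T = X.shape
    m = S.shape[0]
    for _ in range(epochs):
        W = 0.1 * torch.randn(l, m)      # W(0)
        G = torch.eye(m)                 # G(0)
        loss = torch.tensor(0.0)
        for t, layer in enumerate(layers):
            y, W, G = layer(X[:, t], W, G)
            loss = loss + torch.sum((S[:, t] - y) ** 2)              # MSE term of (26)
            loss = loss + lmbda * (torch.relu(-layer.omega)          # enforce omega_t > 0
                                   + torch.relu(layer.omega - 1.0))  # enforce omega_t <= 1
        opt.zero_grad()
        loss.backward()                  # backpropagation through time
        opt.step()
```

Training _Deep EASI_ follows the same pattern, with the layer recursion replaced by (21) and the loss by (25).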
### _Stein's Unbiased Risk Estimator (SURE)_
In 1981 Charles Stein in [21] showed that for \(\mathbf{x}=\boldsymbol{\mu}+\boldsymbol{n}\) and \(\mathbf{n}\sim\mathcal{N}(\mathbf{0},\sigma^{2}\mathbf{I})\), the SURE statistic defined as
\[\text{SURE}(\hat{\boldsymbol{\mu}},\mathbf{x})=-l\sigma^{2}+\|\hat{\boldsymbol{ \mu}}(\mathbf{x})-\mathbf{x}\|_{2}^{2}+2\sigma^{2}\nabla\hat{\boldsymbol{\mu}}( \mathbf{x}) \tag{27}\]
is an unbiased estimate of the \(l_{2}\) risk of the estimator \(\hat{\boldsymbol{\mu}}\) in the sense that \(\mathrm{E}\left\{\text{SURE}\right\}=\mathrm{E}\left\{\|\hat{\boldsymbol{\mu}}(\mathbf{x})-\boldsymbol{\mu}\|^{2}\right\}\). The bottleneck in evaluating SURE is the computation of the divergence \(\nabla\hat{\boldsymbol{\mu}}(\mathbf{x})=\sum_{i}\frac{\partial\hat{\boldsymbol{\mu}}_{i}(\mathbf{x})}{\partial x_{i}}\). A generalization of this technique known as generalized SURE was proposed in [22] to estimate the MSE associated with estimates of \(\mathbf{s}\) from linear measurements \(\mathbf{x}=\mathbf{A}\mathbf{s}+\mathbf{n}\), where \(\mathbf{A}\neq\mathbf{I}\), and \(\mathbf{n}\) has known covariance and follows any distribution from the exponential family. For estimators \(f_{\boldsymbol{\theta}}(\cdot)\) parameterized over \(\boldsymbol{\theta}\) which receive the noisy observation \(\mathbf{x}\) and provide an estimate of the sources \(\mathbf{s}\), the expectation of the generalized SURE is [7]
\[\mathbb{E}\{\|\hat{\mathbf{s}}-\mathbf{s}\|_{2}^{2}\}\] \[=\mathbb{E}\{\|\mathbf{P}\mathbf{s}-\mathbf{P}f_{\boldsymbol{ \theta}}(\mathbf{x})\|_{2}^{2}\}=\mathbb{E}\{\|\mathbf{P}\mathbf{s}\|_{2}^{2}+\| \mathbf{P}f_{\theta}(\mathbf{x})\|_{2}^{2}\] \[+2\sigma_{n}^{2}\nabla(f_{\boldsymbol{\theta}}(\mathbf{x}))-2f_{ \boldsymbol{\theta}}(\mathbf{x})^{\top}\mathbf{A}^{\dagger}\mathbf{x}\}, \tag{28}\]
where the orthonormal projection onto the range space of \(\mathbf{A}\) is represented by \(\mathbf{P}=\mathbf{A}(\mathbf{A}^{\top}\mathbf{A})^{-1}\mathbf{A}^{\top}\) and \(\mathbf{A}^{\dagger}\) is the pseudoinverse of \(\mathbf{A}\).
The last three terms in (28) are dependent on the parameters of the estimator, i.e., \(\boldsymbol{\theta}\). Considering layer \(t\) of _Deep EASI_ as an estimator of the source signal given \(\mathbf{x}(t)\), we propose to train
the network's parameters \(\mathbf{\theta}\) by incorporating the SURE loss at time \(t\) as
\[L(t)=\|\mathbf{P}f_{\mathbf{\theta}}(\mathbf{x}(t))\|^{2}+2\sigma^{2}\nabla(f_{\mathbf{ \theta}}(\mathbf{x}(t)))-2f_{\mathbf{\theta}}(\mathbf{x}(t))^{\top}\mathbf{A}^{ \dagger}\mathbf{x}(t). \tag{29}\]
In this equation, the _Deep EASI_ network, as an estimator for source signal at time instance \(t\), is denoted by \(f_{\mathbf{\theta}}(\mathbf{x}(t))\). Consequently, the divergence is
\[\nabla(f_{\mathbf{\theta}}(\mathbf{x}(t))) =\sum_{i=1}^{l}\frac{\partial f_{\mathbf{\theta}}(\mathbf{x}(t))}{ \partial\mathbf{x}_{i}}\] \[=\sum\frac{\partial}{\partial x_{i}}(\mathbf{W}(t)^{\top}\mathbf{ x}(t))\] \[=\operatorname{Tr}(\mathbf{W}(t)). \tag{30}\]
By substituting (30) into (29), the SURE loss function for training the _Deep EASI_ network is
\[\mathcal{L}(\mathbf{A},\mathbf{x})=\sum_{t=1}^{T}L(t) \tag{31}\] \[=\sum_{t=1}^{T}\|\mathbf{P}f_{\mathbf{\theta}}(\mathbf{x}(t))\|^{2}+ 2\sigma^{2}\operatorname{Tr}(\mathbf{W}(t))-2f_{\mathbf{\theta}}(\mathbf{x}(t))^{ \top}\mathbf{A}^{\dagger}\mathbf{x}(t).\]
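Since (30) gives the divergence in closed form for the linear estimator \(f_{\boldsymbol{\theta}}(\mathbf{x}(t))=\mathbf{W}(t)^{\top}\mathbf{x}(t)\), the loss (31) can be evaluated directly. A minimal PyTorch sketch is shown below; the names `x_seq` and `W_list` and the assumption \(m=l\) (so that the shapes conform, as in our experiments) are illustrative choices rather than part of the algorithm.

```
import torch

def sure_loss(x_seq, W_list, A, sigma2):
    # Eq. (31): sum_t ||P f(x_t)||^2 + 2*sigma^2*Tr(W_t) - 2*f(x_t)^T A^+ x_t,
    # with f(x_t) = W_t^T x_t and P the projection onto the range space of A.
    P = A @ torch.linalg.inv(A.T @ A) @ A.T
    A_pinv = torch.linalg.pinv(A)
    loss = x_seq.new_zeros(())
    for x_t, W_t in zip(x_seq, W_list):
        f_t = W_t.T @ x_t
        loss = loss + (P @ f_t).pow(2).sum() \
                    + 2.0 * sigma2 * torch.trace(W_t) \
                    - 2.0 * f_t @ (A_pinv @ x_t)
    return loss
```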
It is worth mentioning that, as discussed in [23], the SURE loss function cannot be analytically derived for all estimators, but it can be tractably evaluated using the methods introduced in [23]. We leave the derivation of this loss for _Deep RLS_ to a future extension of this study.
While training _Deep EASI_ and _Deep RLS_, we synthetically produce the observations \(\mathbf{x}(t)\) for \(t=1,\dots,T\) to use as the training set. Therefore, we have access to the sources \(\mathbf{s}(t)\), the mixing matrix \(\mathbf{A}\), and the noise variance \(\sigma^{2}\), and evaluating the SURE loss in (29) is possible while training the network. Backpropagation on the SURE loss function is feasible by means of PyTorch's auto-differentiation capabilities, which are used throughout our experiments below.
The blindness of our method refers to the test phase, in which we only observe the mixtures. In our numerical experiments, we will deploy the SURE loss function in (29) for training the _Deep EASI_ network and illustrate the performance improvement over the initially proposed loss function in (25). Evaluating the SURE loss in (29) does not require the ground truth \(\mathbf{s}(t)\) and therefore the learning procedure will be unsupervised.
## IV Numerical Study
In this section, we demonstrate the performance of the proposed _Deep RLS_ and _Deep EASI_ algorithms. The proposed framework was implemented using the PyTorch library [20] and the Adam stochastic optimizer [24] with a constant learning rate of \(10^{-4}\) for training purposes. The training was performed based on the data generated via the following model. For each time interval \(t=0,1,...,T\), elements of the vector \(\mathbf{s}(t)\) are generated from a sub-Gaussian distribution. For data generation purposes, we have assumed the source signals to be i.i.d. and uniformly distributed, i.e., \(\mathbf{s}(t)\sim\mathcal{U}(0,\mathbf{1})\). The mixing matrix \(\mathbf{A}\) is assumed to be generated according to \(\mathbf{A}\sim\mathcal{N}(\mathbf{0},\mathbf{I})\). For each data sample, a new mixing matrix \(\mathbf{A}\) is generated, i.e., no two samples in the training and test sets share the same mixing matrix.
We performed the training of the proposed _Deep RLS_ and _Deep EASI_ networks using the batch learning process with a batch size of 40 and trained the network for \(N=100\) epochs. A training set of size \(10^{3}\) and a test set of size \(10^{2}\) were chosen. In this section, we study the performance of our proposed source separation algorithms in test simulations where the mixing matrix, source signals, and noise variance are known. Therefore, the network is trained in a supervised manner, and the mixing matrix and noise variance of the training samples are used to train the network regularized with the SURE loss function of Section III-D.
The quantitative measure to evaluate the performance of the networks is the average of the mean squared error (MSE) defined as \(\operatorname{MSE}=(1/T)\sum_{t=1}^{T}\|\mathbf{s}(t)-\mathbf{y}(t)\|_{2}^{2}\).
In Fig. 1, we demonstrate the performance of the proposed _Deep RLS_ algorithm, _Deep EASI_ algorithm and the baseline RLS [15] and EASI [10] algorithms (where no parameter is learned) in terms of \(\operatorname{MSE}\) versus the number of time samples \(T\). Observing Fig. 1, one can deduce that owing to its hybrid nature, the proposed _Deep EASI_ methodology significantly outperforms its counterparts in terms of average MSE and converges with far fewer iterations. In particular, we can observe that _Deep EASI_ and _Deep RLS_ achieve a very low average MSE with as few as \(50\) iterations, while the EASI and RLS algorithms require at least \(100\) and \(250\) iterations to converge, respectively. Accordingly, these results demonstrate the effectiveness of the learned parameters.
Fig. 2 illustrates the average \(\operatorname{MSE}\) on test set for _Deep EASI_ network trained using two different loss functions. The efficacy of SURE-based training in comparison with MSE loss is evident in every epoch of the training.
Figure 1: The average MSE of recovering \(m=3\) source signals from \(l=3\) observations using the _Deep-RLS_ network, RLS [15] with \(\beta=0.99\), EASI [10] and _Deep-EASI_ vs. the number of layers/iterations \(T\), when trained by MSE loss for \(N=100\) epochs with a learning rate of \(10^{-4}\).
## V Conclusion
In this paper, we introduced two deep unrolling-based frameworks for adaptive filtering and demonstrated that the unrolled networks, trained as recurrent neural networks, outperform their baseline counterparts. Moreover, we incorporated Stein's unbiased risk estimator as a surrogate loss function for training the deep architectures, which yielded further improvement in estimating the source signals.
## VI Appendix: The RLS Recursive Formula
Let \(\mathbf{A}\), \(\mathbf{B}\), and \(\mathbf{D}\) be positive definite matrices such that \(\mathbf{A}=\mathbf{B}^{-1}+\mathbf{c}\mathbf{D}^{-1}\mathbf{c}^{\top}\). Using the matrix inversion lemma, the inverse of \(\mathbf{A}\) can be expressed as
\[\mathbf{A}^{-1}=\mathbf{B}-\mathbf{B}\mathbf{c}(\mathbf{D}+\mathbf{c}^{T} \mathbf{B}\mathbf{c})^{-1}\mathbf{c}^{T}\mathbf{B}. \tag{32}\]
Now, assuming that the auto-correlation matrix \(\mathbf{C}_{\mathbf{y}}(t)\) is positive definite (and thus non-singular), by choosing \(\mathbf{A}=\mathbf{C}_{\mathbf{y}}(t)\), \(\mathbf{B}^{-1}=\beta\mathbf{C}_{\mathbf{y}}(t-1)\), \(\mathbf{c}=\mathbf{y}(t),\mathbf{D}^{-1}=\mathbf{I}\), one can compute \(\mathbf{G}(t)=\mathbf{C}_{\mathbf{y}}^{-1}(t)\) as proposed in (10).
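As a quick numerical sanity check of (32) (a sketch assuming numpy; the random matrices are only for illustration):

```
import numpy as np

rng = np.random.default_rng(0)
n = 4

def random_pd(k):
    M = rng.standard_normal((k, k))
    return M @ M.T + k * np.eye(k)        # positive definite by construction

B, D = random_pd(n), random_pd(n)
c = rng.standard_normal((n, n))
A = np.linalg.inv(B) + c @ np.linalg.inv(D) @ c.T
lhs = np.linalg.inv(A)
rhs = B - B @ c @ np.linalg.inv(D + c.T @ B @ c) @ c.T @ B
print(np.allclose(lhs, rhs))              # True
```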
## VII Acknowledgement
The authors would like to express gratitude to Dr. Shahin Khobahi for his help with developing the ideas in a preliminary version of this manuscript, currently available on arXiv [19].
|
2306.00243 | A Generalization of the Graham-Pollak Tree Theorem to Steiner Distance | Graham and Pollak showed that the determinant of the distance matrix of a
tree $T$ depends only on the number of vertices of $T$. Graphical distance, a
function of pairs of vertices, can be generalized to ``Steiner distance'' of
sets $S$ of vertices of arbitrary size, by defining it to be the fewest edges
in any connected subgraph containing all of $S$. Here, we show that the same is
true for trees' {\em Steiner distance hypermatrix} of all odd orders, whereas
the theorem of Graham-Pollak concerns order $2$. We conjecture that the
statement holds for all even orders as well. | Joshua Cooper, Gabrielle Tauscheck | 2023-05-31T23:30:58Z | http://arxiv.org/abs/2306.00243v1 | # A Generalization of the Graham-Pollak Tree Theorem to Steiner Distance
###### Abstract
Graham and Pollak ([4]) showed that the determinant of the distance matrix of a tree \(T\) depends only on the number of vertices of \(T\). Graphical distance, a function of pairs of vertices, can be generalized to "Steiner distance" of sets \(S\) of vertices of arbitrary size, by defining it to be the fewest edges in any connected subgraph containing all of \(S\). Here, we show that the same is true for trees' _Steiner distance hypermatrix_ of all odd orders, whereas the theorem of Graham-Pollak concerns order \(2\). We conjecture that the statement holds for all even orders as well.
## 1 Introduction
Graham and Pollak showed that the determinant of the distance matrix of a tree \(T\) on \(n\) vertices - the \(n\times n\) matrix whose \((v,w)\) entry, for \((v,w)\in V(T)\times V(T)\), is the ordinary graph distance between \(v\) and \(w\) - depends only on \(n\). In fact, they gave a formula: \(-(n-1)(-2)^{n-2}\). Y. Mao asks1 whether this result can be extended to "Steiner distance", a generalization of distance introduced by Hakimi [5] and popularized by [1]. The Steiner distance \(d_{G}(S)\) of a set \(S\subseteq V(G)\) of vertices is the fewest number of edges in a connected subgraph of \(G\) containing all of \(S\). Note that, if \(S=\{v,w\}\), this reduces to the classical definition of the distance from \(v\) to \(w\), since a connected graph of smallest size containing \(v\) and \(w\) is a path of length \(d_{G}(v,w):=d_{G}(\{v,w\})\). (See [6] for an extensive survey on Steiner distance.) Here, we show that the result of Graham-Pollak extends to _Steiner distance hypermatrices_, at least for odd orders. Furthermore, we describe the structure of the set of nullvectors for order \(3\), a projective variety of codimension \(2\), showing along the way that the sum of the coordinates of any nullvector is zero.
Footnote 1: Personal communication.
Just as all pairwise distances in a graph can be represented by a symmetric matrix, we can write the Steiner distances of all \(k\)-tuples of vertices as an order-\(k\)_hypermatrix_ (sometimes referred to as a _tensor_): the \(\overbrace{[n]\times\cdots\times[n]}^{k}\) (super-)symmetric integer array whose \((v_{1},\ldots,v_{k})\) entry is the Steiner distance of \(\{v_{1},\ldots,v_{k}\}\). We sometimes refer to such hypermatrices as "cubical" since all the index sets are identical. There is a notion of hyperdeterminant that generalizes determinant, and shares many of its properties, though in general is much harder to compute. See, for example, [7] for discussion of the symmetric hyperdeterminant. For our purposes, what will matter about the hyperdeterminant is that it detects nontrivial simultaneous vanishing of a system of degree-\((k-1)\) homogeneous polynomials (aka \((k-1)\)-forms) in \(n\) variables, as the following result makes precise:
**Theorem 1.1** ([2] Theorem 1.3).: _The hyperdeterminant \(\det(M)\) of the order-\(k\), dimension-\(n\) hypermatrix \(M=(M_{i_{1},\ldots,i_{k}})_{i_{1},\ldots,i_{k}=1}^{n}\) is a monic irreducible polynomial which evaluates to zero iff there is a nonzero simultaneous solution to \(\nabla f_{M}=\vec{0}\), where_
\[f_{M}(x_{1},\ldots,x_{n})=\sum_{i_{1},\ldots,i_{k}}M_{i_{1},\ldots,i_{k}}\prod _{j=1}^{k}x_{i_{j}}.\]
Note that there is a choice to be made in generalizing distance matrices: instead of \(d_{G}(S)\), we could also simply set the entries corresponding to vertex sets \(S\) of cardinality less than \(k\) to zero. However, doing so yields a hyperdeterminant of zero _irrespective of the non-degenerate entries_, as we now show. For a hypermatrix \(M\in\mathbb{C}^{S\times\cdots\times S}\), call an entry \(M(i_{1},\ldots,i_{k})\) "degenerate" if \(|\{i_{1},\ldots,i_{k}\}|<k\).
**Theorem 1.2**.: _Let \(M\) be any cubical hypermatrix with all degenerate entries set equal to \(0\). Then the hyperdeterminant of \(M\) is \(0\)._
Proof.: To prove the hyperdeterminant is \(0\), we exhibit a nontrivial simultaneous zero of the partial derivatives of the \(k\)-form
\[f_{M}(x)=\sum_{i_{1},i_{2},\ldots,i_{k}=1}^{n}a_{i_{1}i_{2}\cdots i_{k}}x_{i_{ 1}}x_{i_{2}}\cdots x_{i_{k}}.\]
Since \(M\) has degenerate entries set equal to zero, any term that has \(i_{p}=i_{q}\) for some \(p,q\in[k]\) will have a matrix entry of zero and thus will not appear in the polynomial. Therefore, the only terms that will appear are \(x_{i_{1}}x_{i_{2}}\cdots x_{i_{k}}\) with each \(i_{p}\) distinct. The gradient vector of these polynomials will consist of terms of degree \(k-1\) where once again each \(i_{p}\) is distinct. Therefore, choose \(x_{i_{1}}=x_{i_{2}}=\cdots=x_{i_{k-1}}=0\) and let \(x_{i_{k}}\) be any nonzero value; this is a nontrivial point where all partial derivatives vanish, so that the hyperdeterminant is \(0\).
So, instead, we use Steiner distance to populate all entries of the hypermatrix. This is made precise as follows.
**Definition 1.3**.: _Given a graph \(G\) and a subset \(S\) of the vertices, the Steiner distance of \(S\), written \(d_{G}(S)\) or \(d_{G}(v_{1},\ldots,v_{k})\) where \(S=\{v_{1},\ldots,v_{k}\}\), is the number of edges in the smallest connected subgraph of \(G\) containing \(S=\{v_{1},\ldots,v_{k}\}\). Since such a connected subgraph of \(G\) witnessing \(d_{G}(S)\) is necessarily a tree, it is called a Steiner tree of \(S\)._
**Definition 1.4**.: _Given a graph \(G\), the Steiner polynomial of \(G\) is the \(k\)-form_
\[p_{G}^{(k)}(\mathbf{x})=\sum_{v_{1},\ldots,v_{k}\in V(G)}d_{G}(v_{1},\ldots,v_{k})x_{v_{1}}\cdots x_{v_{k}}\]
_where we often suppress the subscript and/or superscript on \(p_{G}^{(k)}\) if it is clear from context._
Equivalently, we could define the Steiner \(k\)-form to be the \(k\)-form associated with the Steiner hypermatrix:
**Definition 1.5**.: _Given a graph \(G\), the Steiner \(k\)-matrix (or just "Steiner hypermatrix" if \(k\) is understood) of \(G\) is the order-\(k\), cubical hypermatrix \(\mathcal{S}_{G}\) of dimension \(n\) whose \((v_{1},\ldots,v_{k})\) entry is \(d_{G}(v_{1},\ldots,v_{k})\)._
Throughout the sequel, we write \(D_{r}\) for the operator \(\partial/\partial x_{r}\), and we always assume that \(T\) is a tree.
**Definition 1.6**.: _Given a graph \(G\) on \(n\) vertices, the Steiner \(k\)-ideal - or just "Steiner ideal" if \(k\) is clear - of \(G\) is the ideal in \(\mathbb{C}[x_{1},\ldots,x_{n}]\) generated by the polynomials \(\{D_{j}p_{G}\}_{j=1}^{n}\)._
Thus, the Steiner ideal is the _Jacobian ideal_ of the Steiner polynomial of \(G\).
**Definition 1.7**.: _A Steiner nullvector is a point where all the polynomials within the Steiner ideal vanish. The set of all Steiner nullvectors - a projective variety - is the Steiner nullvariety._
Although all the results contained herein concern odd order Steiner hypermatrices, extensive computation suggests that they extend to even order.
**Conjecture 1**.: _The order-\(k\) Steiner distance hypermatrix of a tree \(T\) on \(n\geq 3\) vertices has a hyperdeterminant that only depends on \(T\) through \(n\), and is \(0\) iff \(k\) is odd._
Below, we show that this conjecture holds for all odd \(k\), when the hyperdeterminant is \(0\) irrespective of the choice of \(T\). We then go on to describe the Steiner nullvariety for \(k=3\).
## 2 Main Results
**Theorem 2.1**.: _For \(k\) odd, the Steiner distance \(k\)-matrix of a tree \(T\) with at least \(3\) vertices has a hyperdeterminant equal to zero._
Proof.: Since \(T\) has at least \(3\) vertices, let \(u\) be a leaf, \(w\) a neighbor of \(u\), and \(v\neq u\) a neighbor of \(w\). Let \(\mathbf{y}\) denote the vector whose \(z\) coordinate \(y_{z}\) is given by
\[y_{z}=\left\{\begin{array}{ll}1&\text{if }z=u\\ \zeta&\text{if }z=v\\ -1-\zeta&\text{if }z=w\\ 0&\text{otherwise},\end{array}\right.\]
where \(\zeta=\exp(\pi i/(k-1))\), a \((2k-2)\)-root of unity. By Theorem 1.1, it suffices to show that \(D_{z}p_{T}(\mathbf{y})=0\) for each \(z\in V(T)\). First, suppose \(v\) is not on the \(u-z\) path in \(T\) and \(z\neq u\) (which includes the case \(z=w\)). Let \(\alpha=d_{T}(z,u,v,w)\), so that
\[\frac{1}{k}D_{z}p_{T}(\mathbf{y}) =\sum_{a+b+c=k-1}x_{u}^{a}x_{v}^{b}x_{w}^{c}\binom{k-1}{a,b,c}d_{ T}(z,u,v,w)\] \[+\sum_{a+c=k-1}x_{u}^{a}x_{w}^{c}\binom{k-1}{a,c}(d_{T}(z,u,w)-d_ {T}(z,u,v,w))\] \[+\sum_{b+c=k-1}x_{v}^{b}x_{w}^{c}\binom{k-1}{b,c}(d_{T}(z,v,w)-d_ {T}(z,u,v,w))\] \[+x_{w}^{k-1}(d_{T}(z,u,v,w)-d_{T}(z,u,w)-d_{T}(z,v,w)+d_{T}(z,w))\] \[=\alpha(x_{u}+x_{v}+x_{w})^{k-1}-(x_{u}+x_{w})^{k-1}-(x_{v}+x_{w })^{k-1}\] \[=0-(-\zeta)^{k-1}-(-1)^{k-1}=0.\]
Next, if \(v\) is on the \(u-z\) path in \(T\) and \(z\notin\{u,w\}\), we obtain
\[\frac{1}{k}D_{z}p_{T}(\mathbf{y}) =\sum_{a+b+c=k-1}x_{u}^{a}x_{v}^{b}x_{w}^{c}\binom{k-1}{a,b,c}d_{ T}(z,u,v,w)\] \[+\sum_{b+c=k-1}x_{v}^{b}x_{w}^{c}\binom{k-1}{b,c}(d_{T}(z,v,w)-d_ {T}(z,u,v,w))\] \[+x_{v}^{k-1}(d_{T}(z,v)-d_{T}(z,v,w))\] \[=\alpha(x_{u}+x_{v}+x_{w})^{k-1}-(x_{v}+x_{w})^{k-1}-x_{v}^{k-1}\] \[=0-(-1)^{k-1}-(-\zeta)^{k-1}=0.\]
Finally, if \(z=u\), then
\[\frac{1}{k}D_{z}p_{T}(\mathbf{y}) =\sum_{a+b+c=k-1}x_{u}^{a}x_{v}^{b}x_{w}^{c}\binom{k-1}{a,b,c}d_{ T}(u,v,w)\] \[+\sum_{a+c=k-1}x_{u}^{a}x_{w}^{c}\binom{k-1}{a,c}(d_{T}(u,w)-d_{T }(u,v,w))\] \[+x_{u}^{k-1}(d_{T}(u)-d_{T}(u,w))\] \[=2(x_{u}+x_{v}+x_{w})^{k-1}-(x_{u}+x_{w})^{k-1}-x_{u}^{k-1}\] \[=0-(-\zeta)^{k-1}-1^{k-1}=0.\]
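As an illustration, the vanishing of the gradient at \(\mathbf{y}\) can be checked numerically for a small tree and, say, \(k=3\). The sketch below assumes the Python packages networkx and numpy; the helper functions are illustrative and not part of the proof.

```
import itertools, cmath
import networkx as nx
import numpy as np

def steiner_distance(T, S):
    # Edges of the minimal subtree containing S: the union of the pairwise paths.
    edges = set()
    for a, b in itertools.combinations(set(S), 2):
        path = nx.shortest_path(T, a, b)
        edges.update(frozenset(e) for e in zip(path, path[1:]))
    return len(edges)

def steiner_gradient(T, k, y):
    # D_z p = k * sum over (i_2, ..., i_k) of d(z, i_2, ..., i_k) * y_{i_2} ... y_{i_k}.
    V = list(T.nodes)
    return {z: k * sum(steiner_distance(T, (z,) + idx) * np.prod([y[i] for i in idx])
                       for idx in itertools.product(V, repeat=k - 1))
            for z in V}

k = 3
T = nx.path_graph(5)                        # any tree with at least 3 vertices
u = 0                                       # a leaf
w = next(iter(T[u]))                        # its neighbor
v = next(z for z in T[w] if z != u)         # another neighbor of w
zeta = cmath.exp(1j * cmath.pi / (k - 1))
y = {z: 0.0 for z in T.nodes}
y.update({u: 1.0, v: zeta, w: -1.0 - zeta})
print(max(abs(g) for g in steiner_gradient(T, k, y).values()))   # ~0 up to rounding error
```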
Note that the hyperdeterminant of a tree on one vertex is also zero. This is because the Steiner \(k\)-form, \(p_{T}^{(k)}\), only contains one monomial: \(d_{T}(1,\ldots,1)x_{1}^{k}\). Since \(d_{T}(1,\ldots,1)=0\), the Steiner \(k\)-form as well as the partial derivative is automatically \(0\), and so the Steiner nullvector \(\mathbf{v}=(x_{1})\) can be set to anything.
For the tree on two vertices, the hyperdeterminant is not zero. It is straightforward to write the Steiner \(k\)-form as \(p_{T}^{(k)}=(x_{1}+x_{2})^{k}-x_{1}^{k}-x_{2}^{k}\). The partial derivatives are therefore \(D_{j}p_{T}^{(k)}=k(x_{1}+x_{2})^{k-1}-kx_{j}^{k-1}\) for \(j=1,2\), so \(D_{1}p_{T}^{(k)}=D_{2}p_{T}^{(k)}=0\) implies \(x_{1}^{k-1}=(x_{1}+x_{2})^{k-1}=x_{2}^{k-1}\). Then \(x_{2}=\zeta x_{1}\), where \(\zeta^{k-1}=1\), but if \(x_{1}\neq 0\) this implies
\[1=(x_{1}+x_{2})^{k-1}/x_{1}^{k-1}=(1+\zeta)^{k-1},\]
a contradiction. Therefore, there is no nontrivial nullvector and the Steiner hyperdeterminant of the tree on two vertices is nonzero.
Now that we have established that all Steiner hyperdeterminants of odd order with \(n\geq 3\) are zero, we describe in more detail the corresponding Steiner nullvariety, at least for order \(k=3\).
**Lemma 2.2**.: _For any distinct vertices \(i,j,k\) of a tree \(T\), we have_
\[2d_{T}(i,j,k)=d_{T}(i,j)+d_{T}(i,k)+d_{T}(j,k).\]
Proof.: It is easy to check the formula for each of the two cases: either the Steiner tree of \(\{i,j,k\}\) is a path or a tree with three leaves.
The following result shows that \(p_{T}^{(3)}\) is divisible by the elementary symmetric polynomial of degree 1, which we refer to as \(s\).
**Proposition 2.3**.: _The Steiner 3-form \(p_{T}^{(3)}\) is divisible by \(s=\sum_{r}x_{r}\)._
Proof.: Let \(p=p_{T}^{(3)}\). We claim that \(p=sg\) with \(g=3\sum_{i<j}d_{T}(i,j)x_{i}x_{j}\). We show that
\[p=sg=\sum_{r}x_{r}\left(3\sum_{i<j}d_{T}(i,j)x_{i}x_{j}\right)=\sum_{r,i<j}3d _{T}(i,j)x_{i}x_{j}x_{r}.\]
holds by classifying summands according to the triple \((r,i,j)\).
* If \(r=i\), the contribution is \(3\sum_{i<j}d_{T}(i,j)x_{i}^{2}x_{j}\).
* If \(r=j\), the contribution is \(3\sum_{i<j}d_{T}(i,j)x_{i}x_{j}^{2}\).
* If \(r\neq i,j\), then the contribution becomes \[3\sum_{\begin{subarray}{c}i<j\\ r\neq i,j\end{subarray}}d_{T}(i,j)x_{i}x_{j}x_{r} =3\sum_{i<j<k}[d_{T}(i,j)+d_{T}(i,k)+d_{T}(j,k)]x_{i}x_{j}x_{k}\] \[=6\sum_{i<j<k}d_{T}(i,j,k)x_{i}x_{j}x_{k}\] \[=\sum_{i,j,k\text{ distinct}}d_{T}(i,j,k)x_{i}x_{j}x_{k}\] where the second equality follows from Lemma 2.2. On the other hand, \[p=3\sum_{i\neq j}d_{T}(i,j)x_{i}^{2}x_{j}+\sum_{i,j,k\text{ distinct}}d_{T}(i,j,k)x_{i}x_{j}x_{k},\] which agrees with the sum of the three types of terms in \(sg\)
**Theorem 2.4**.: _If \(\mathbf{v}=(x_{1},\cdots,x_{n})\) is a Steiner nullvector of order \(3\) and \(s=\sum_{i=1}^{n}x_{i}\), then \(s^{3}\) lies within the Steiner ideal \(J\)._
Proof.: We can write \(p=gs\), where \(p=p_{T}^{(3)}\), \(s=\sum_{i}x_{i}\), and \(g=3\sum_{i<j}d_{T}(i,j)x_{i}x_{j}\). Thus, writing \(D_{r}\) for differentiation with respect to \(x_{r}\), we obtain
\[D_{r}p=g+sD_{r}g\]
Then
\[\sum_{r}x_{r}D_{r}p =\sum_{r}x_{r}(g+sD_{r}g)\] \[=g\sum_{r}x_{r}+s\sum_{r}x_{r}D_{r}g\] \[=s(g+\sum_{r}x_{r}D_{r}g).\]
Now,
\[\sum_{r}x_{r}D_{r}g =3\sum_{r}x_{r}D_{r}\left(\sum_{i<j}d_{T}(i,j)x_{i}x_{j}\right)\] \[=3\sum_{r}x_{r}\sum_{j}d_{T}(r,j)x_{j}\] \[=6\sum_{r<j}d_{T}(r,j)x_{r}x_{j}=2g.\]
Putting these together gives that \(\sum_{r}x_{r}D_{r}p=s(g+2g)=3sg\). So \(sg\) is in the Steiner ideal \(J=\langle\{D_{r}p\}_{r}\rangle\). Since \(D_{r}p=g+sD_{r}g\in J\), we also have \(s(g+sD_{r}g)=sg+s^{2}D_{r}g\in J\), and so also \(sg+s^{2}D_{r}g-sg=s^{2}D_{r}g\in J\).
Now, \(D_{r}g=3\sum_{j}d_{T}(j,r)x_{j}\). In other words, \(\nabla g=3Mx\), where \(M\) denotes the (symmetric) distance matrix of the tree and \(x\) is the vector of all variables. By the Graham-Pollak Theorem, \(M\) is invertible for trees, so \(yM=\vec{1}\) has a solution (where \(\vec{1}\) is the all-ones row vector). Let the solution be \(y=(c_{1},\ldots,c_{n})\). Then
\[y\nabla g=3yMx=3\vec{1}x=3s,\]
i.e., \(\sum_{r}c_{r}D_{r}g=3s\). Thus, \(\sum_{r}c_{r}s^{2}D_{r}g=3s^{3}\), and hence \(s^{3}\in J\).
In fact, tracing back through the computation gives \(s^{3}=\sum_{r}f_{r}D_{r}p\) where
\[f_{r}=\frac{1}{3}\left(c_{r}s-\frac{x_{r}}{3}\sum_{j}c_{j}\right).\]
It is not hard to deduce from Proposition 2.9 below that \(s^{2}\not\in J\).
**Corollary 2.5**.: _If \(\mathbf{v}\) is a Steiner nullvector, then the sum of the coordinates of \(\mathbf{v}\) is \(0\)._
Proof.: Since \(s^{3}\in J\), we have \(s\in\sqrt{J}\). Therefore, if \(\mathbf{v}\) is in the Steiner nullvariety, then \(s(\mathbf{v})=0\), i.e., the coordinates of \(\mathbf{v}\) sum to \(0\).
**Theorem 2.6** ([3] Lemma 1).: _Let \(T\) be a tree with vertex set \([n]\), let \(d_{j}\) be the degree of vertex \(j\), and let \(a_{ij}\) be the indicator function that \(ij\in E(T)\). If \(D\) is the distance matrix of \(T\) and the \(ij\)-entry of \(D^{-1}\) is \(d_{ij}^{*}\), then_
\[d_{ij}^{*}=\frac{(2-d_{i})(2-d_{j})}{2(n-1)}+\left\{\begin{array}{ll}-d_{i}/2 &\text{ if }i=j\\ a_{ij}/2&\text{ if }i\neq j\end{array}\right.\]
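A quick numerical check of this formula for a small tree (a sketch assuming numpy and networkx; the tree below is arbitrary):

```
import numpy as np
import networkx as nx

T = nx.Graph([(0, 1), (1, 2), (1, 3), (3, 4), (3, 5)])   # a tree on 6 vertices
n = T.number_of_nodes()
D = nx.floyd_warshall_numpy(T)                 # distance matrix
A = nx.to_numpy_array(T)                       # adjacency matrix (a_ij)
deg = A.sum(axis=1)                            # degrees d_i
delta = 2.0 - deg
D_inv = np.outer(delta, delta) / (2 * (n - 1)) - 0.5 * np.diag(deg) + 0.5 * A
print(np.allclose(D_inv, np.linalg.inv(D)))    # True
```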
**Proposition 2.7**.: \(c_{r}=(2-d_{r})/(n-1)\) _and \(\sum_{r}c_{r}=2/(n-1)\)._
Proof.: Let \(M\) be the distance matrix of \(T\). Since \(yM=\vec{1}\) and \(M\) is invertible,
\[y=\vec{1}M^{-1}.\]
Therefore, applying Theorem 2.6, we can write
\[c_{r} =\sum_{j}\left(\frac{(2-d_{r})(2-d_{j})}{2(n-1)}+\left\{\begin{array} []{ll}-d_{r}/2&\text{ if }r=j\\ a_{rj}/2&\text{ if }r\neq j\end{array}\right.\right)\] \[=\frac{2-d_{r}}{2(n-1)}\sum_{j}(2-d_{j})-\frac{d_{r}}{2}+\frac{d_ {r}}{2}\] \[=\frac{2-d_{r}}{2(n-1)}(2n-2(n-1))=\frac{2-d_{r}}{n-1}.\]
Thus,
\[\sum_{r}c_{r}=\sum_{r}\frac{2-d_{r}}{n-1}=\frac{1}{n-1}(2n-2(n-1))=\frac{2}{n -1}.\]
**Corollary 2.8**.: \(s^{3}=\sum_{r}f_{r}D_{r}p\) _where_
\[f_{r}=\frac{1}{3(n-1)}\left((2-d_{r})s-\frac{2}{3}x_{r}\right)\]
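This identity can be verified symbolically on small examples; the following sketch (assuming sympy and networkx, with the path on four vertices chosen arbitrarily) expands \(\sum_{r}f_{r}D_{r}p-s^{3}\) and checks that it is identically zero.

```
import itertools
import sympy as sp
import networkx as nx

T = nx.path_graph(4)
n = T.number_of_nodes()
x = sp.symbols(f"x0:{n}")
s = sum(x)

def steiner(S):
    # Steiner distance in a tree: edges in the union of the pairwise paths.
    edges = set()
    for a, b in itertools.combinations(set(S), 2):
        path = nx.shortest_path(T, a, b)
        edges.update(frozenset(e) for e in zip(path, path[1:]))
    return len(edges)

p = sum(steiner(t) * x[t[0]] * x[t[1]] * x[t[2]]
        for t in itertools.product(range(n), repeat=3))
f = [((2 - T.degree(r)) * s - sp.Rational(2, 3) * x[r]) / (3 * (n - 1)) for r in range(n)]
print(sp.expand(sum(f[r] * sp.diff(p, x[r]) for r in range(n)) - s**3) == 0)   # True
```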
So, \(s\) is in the radical \(\sqrt{J}\) of \(J\), and we can write \(s^{3}\) (but not \(s^{2}\)) in terms of the generators of \(J\). In particular, the codimension of the Steiner nullvariety is at least one. The next few results show that the codimension is in fact, \(2\).
**Proposition 2.9**.: _The polynomials \(D_{r}p\) are not divisible by \(s\)._
Proof.: Suppose \(s|D_{r}p\). Then, since \(p=gs\), we have \(D_{r}p=g+sD_{r}g\), so \(s|g\) as well. But, \(g\) is quadratic, so there exist \(a_{1},\dots,a_{n}\in\mathbb{C}\) so that \(g=s\sum_{r}a_{r}x_{r}\), i.e.,
\[g=\sum_{i,j}a_{i}x_{i}x_{j}.\]
The \(x_{i}^{2}\) term on the right-hand side is \(a_{i}x_{i}^{2}\), but the corresponding coefficient on the left-hand side is \(0\), so \(a_{i}=0\) for each \(i\). Then \(g=0\), a contradiction.
**Theorem 2.10**.: _The codimension of an order-\(3\) Steiner nullvariety of a tree is \(2\)._
Proof.: If \(J\) is the Steiner ideal, then, by the previous result, \(\langle s\rangle\subsetneq\langle s,g\rangle\subseteq\sqrt{J}\). On the other hand, \(D_{r}p=g+sD_{r}g\in\langle g,s\rangle\), so \(\sqrt{J}=\langle s,g\rangle\).
In fact, we can go even further: for every assignment of values to \(n-2\) vertices, there is an assignment to the last two vertices that yields a Steiner nullvector:
**Corollary 2.11**.: _For any tree \(T\) on \(n\) vertices and \(n-2\) values \(a_{3},\dots,a_{n}\in\mathbb{C}\), there exist \(a_{1},a_{2}\) so that \((a_{1},\dots,a_{n})\) is a Steiner nullvector._
Proof.: We need only show that the Steiner nullvariety is not contained in any hyperplane of the form \(x_{r}=c\), i.e., no polynomial of the form \(x_{r}-c\) is an element of \(\sqrt{J}\). However, \(\sqrt{J}=\langle s,g\rangle\) is a homogeneous ideal of degree \(2\), so it does not contain any linear polynomials.
**Proposition 2.12**.: _For any tree \(T\) on \(n\) vertices and \(n-2\) values \(a_{3},\ldots,a_{n}\in\mathbb{C}\), there exist \(a_{1},a_{2}\) so that \((a_{1},\ldots,a_{n})\) is a Steiner nullvector: \(a_{1}\) is any solution to_
\[Aa_{1}^{2}+Ba_{1}+C=0,\]
_where \(A=d_{T}(1,2)\), \(B=\sum_{j\geq 3}(d_{T}(1,2)-d_{T}(1,j)+d_{T}(2,j))a_{j}\), and \(C=\sum_{j,k\geq 3}(d_{T}(2,j)-2d_{T}(j,k))a_{j}a_{k}\); and \(a_{2}=-a_{1}-\sum_{j=3}^{n}a_{j}\)._
Proof.: Assume \(v=(a_{1},\cdots,a_{n})\) is a nullvector where \(a_{3},\cdots,a_{n}\) are arbitrary complex numbers. Since \(v\) is a nullvector, Corollary 2.5 states that \(\sum_{j=1}^{n}a_{j}=0\). Therefore, \(a_{2}=-a_{1}-\sum_{j=3}^{n}a_{j}\).
Also, since \(v\) is a nullvector, by definition all partial derivatives of the Steiner 3-form must vanish. Notice that, by Proposition 2.3, \(D_{r}p=D_{r}(sg)=sD_{r}g+g\) where \(g=3\sum_{j<k}d_{T}(j,k)a_{j}a_{k}\). Since \(s=\sum_{j=1}^{n}a_{j}=0\), this means that we only need to show that \(g=3\sum_{j<k}d_{T}(j,k)a_{j}a_{k}=0\). Rewriting \(g\) to pull out any terms involving \(a_{1}\) or \(a_{2}\), we see that
\[3d_{T}(1,2)a_{1}a_{2}+3a_{1}\sum_{j=3}^{n}d_{T}(1,j)a_{j}+3a_{2}\sum_{j=3}^{n} d_{T}(2,j)a_{j}+3\sum_{\begin{subarray}{c}j<k\\ j\geq 3\end{subarray}}^{n}d_{T}(j,k)a_{j}a_{k}=0.\]
Plugging in \(a_{2}=-a_{1}-\sum_{j=3}^{n}a_{j}\) and simplifying yields
\[a_{1}^{2}d_{T}(1,2)+a_{1} \left[\sum_{j\geq 3}(d_{T}(1,2)-d_{T}(1,j)+d_{T}(2,j))a_{j}\right]\] \[+\sum_{j,k\geq 3}(d_{T}(2,j)-2d_{T}(j,k))a_{j}a_{k}=0,\]
which has a solution for every choice of \(a_{3},\ldots,a_{n}\).
|
2310.20113 | Planets Across Space and Time (PAST) IV: The Occurrence and Architecture
of Kepler Planetary Systems as a Function of Kinematic Age Revealed by the
LAMOST-Gaia-Kepler Sample | One of the fundamental questions in astronomy is how planetary systems form
and evolve. Measuring the planetary occurrence and architecture as a function
of time directly addresses this question. In the fourth paper of the Planets
Across Space and Time (PAST) series, we investigate the occurrence and
architecture of Kepler planetary systems as a function of kinematic age by
using the LAMOST-Gaia-Kepler sample. To isolate the age effect, other stellar
properties (e.g., metallicity) have been controlled. We find the following
results. (1) The fraction of stars with Kepler-like planets ($F_{\text{Kep}}$)
is about 50% for all stars; no significant trend is found between
$F_{\text{Kep}}$ and age. (2) The average planet multiplicity ($\bar{N}_p$)
exhibits a decreasing trend (~2$\sigma$ significance) with age. It decreases
from $\bar{N}_p$~3 for stars younger than 1 Gyr to $\bar{N}_p$~1.8 for stars
about 8 Gyr. (3) The number of planets per star
($\eta=F_{\text{Kep}}\times\bar{N}_p$) also shows a decreasing trend
(~2-3$\sigma$ significance). It decreases from $\eta$~1.6-1.7 for young stars
to $\eta$~1.0 for old stars. (4) The mutual orbital inclination of the planets
($\sigma_{i,k}$) increases from $1.2^{+1.4}_{-0.5}$ to $3.5^{+8.1}_{-2.3}$ as
stars aging from 0.5 to 8 Gyr with a best fit of
$\log{\sigma_{i,k}}=0.2+0.4\times\log{\frac{\text{Age}}{\text{1Gyr}}}$.
Interestingly, the Solar System also fits such a trend. The nearly independence
of $F_{\text{Kep}}$~50% on age implies that planet formation is robust and
stable across the Galaxy history. The age dependence of $\bar{N}_p$ and
$\sigma_{i,k}$ demonstrates planetary architecture is evolving, and planetary
systems generally become dynamically hotter with fewer planets as they age. | Jia-Yi Yang, Di-Chang Chen, Ji-Wei Xie, Ji-Lin Zhou, Subo Dong, Zi Zhu, Zheng Zheng, Chao Liu, Weikai Zong, Ali Luo | 2023-10-31T01:12:11Z | http://arxiv.org/abs/2310.20113v1 | Planets Across Space and Time (PAST) IV: The Occurrence and Architecture of Kepler Planetary Systems as a Function of Kinematic Age Revealed by the LAMOST-Gaia-Kepler Sample
###### Abstract
One of the fundamental questions in astronomy is how planetary systems form and evolve. Measuring the planetary occurrence and architecture as a function of time directly addresses this question. In the fourth paper of the Planets Across Space and Time (PAST) series, we investigate the occurrence and architecture of Kepler planetary systems as a function of kinematic age by using the LAMOST-Gaia-Kepler sample. To isolate the age effect, other stellar properties (e.g., metallicity) have been controlled. We find the following results. (1) The fraction of stars with Kepler-like planets (\(F_{\rm Kep}\)) is about 50% for all stars; no significant trend is found between \(F_{\rm Kep}\) and age. (2) The average planet multiplicity (\(\bar{N}_{p}\)) exhibits a decreasing trend ( \(\sim\)2\(\sigma\) significance) with age. It decreases from \(\bar{N}_{p}\sim 3\) for stars younger than 1 Gyr to \(\bar{N}_{p}\sim\)1.8 for stars about 8 Gyr. (3) The number of planets per star (\(\eta\)=\(F_{\rm Kep}\)\(\times\bar{N}_{p}\)) also shows a decreasing trend (\(\sim\)2-3\(\sigma\) significance). It decreases from \(\eta\sim\) 1.6-1.7 for young stars to \(\eta\sim\) 1.0 for old stars. (4) The mutual orbital inclination of the planets (\(\sigma_{i,k}\)) increases from 1\(\fdg\)2\(\fdg\)4 to 3\(\fdg\)5\(\fdg\)8 as stars aging from 0.5 to 8 Gyr with a best fit of \(\log\sigma_{i,k}=0.2+0.4\times\log\frac{\rm Age}{\rm 1Gyr}\). Interestingly, the Solar System also fits such a trend. The nearly independence of \(F_{\rm Kep}\sim 50\%\) on age implies that planet formation is robust and stable across the Galaxy history. The age dependence of \(\bar{N}_{p}\) and \(\sigma_{i,k}\) demonstrates planetary architecture is evolving, and planetary systems generally become dynamically hotter with fewer planets as they age.
methods: statistical - planetary systems - planet-star interactions 0000-0002-0002-4810-7886]Jia-Yi Yang
## 1 Introduction
Thanks to various surveys from the ground (e.g., Mayor et al., 2011) and space (e.g., Borucki et al., 2010), the number of known planets has reached a milestone (5000, NASA Exoplanet Archive1, Akeson et al., 2013). Such a rich planetary database has enabled substantial statistical studies of the occurrence rate and architecture of planetary systems (see the reviews by Winn & Fabrycky, 2015; Zhu & Dong, 2021), deepening our understanding of planet formation and evolution.
Footnote 1: [https://exoplanetarchive.ipac.caltech.edu/](https://exoplanetarchive.ipac.caltech.edu/)
Stellar properties (e.g., mass, effective temperature, and metallicity) play crucial roles in determining planetary occurrence rate and architecture. Although the occurrence of giant planets (Jupiter-like gas giants) has been found to increase with stellar mass (Johnson et al., 2010; Ghezzi et al., 2018), the trend is opposite for small planets. For the bulk of planets detected by the Kepler mission (so called super-Earths and sub-Neptunes, with radii between the Earth and Neptune, hereafter dubbed as Kepler planets for short), their occurrence rate in terms of number of planets per star (\(\eta\)) has an inverse relationship with stellar temperature and mass (Howard et al., 2012; Mulders et al., 2015; Kunimoto & Matthews, 2020). In fact, \(\eta\) can be further decomposed into two factors: the fraction of stars that have plane
tary systems (\(F\)) and the average number of planets in a planetary system (planetary multiplicity \(\bar{N}_{p}\)), and they are linked by the following equation:
\[\eta=F\times\bar{N}_{p}. \tag{1}\]
Further studies have shown that both \(F\) and \(\bar{N}_{p}\) tend to decrease as stellar temperature and mass increase (Yang et al., 2020; He et al., 2021).
Metallicity also plays a differential role in shaping planetary systems of giant planets and small planets. On one hand, a correlation between giant planets and metallicity has been well established (Santos et al., 2001; Fischer and Valenti, 2005), which provides the key evidence for the core-accretion model of planet formation (e.g., Pollack et al., 1996; Ida and Lin, 2004). On the other hand, such a planet-metallicity correlation is generally weaker and more complicated for small planets (Buchhave et al., 2012; Wang and Fischer, 2015; Dong et al., 2018; Petigura et al., 2018; Zhu, 2019)
Stellar environments (e.g., stellar companions, clusters and memberships of the Galactic thin/thick disks) also affect planetary occurrence and architecture. There has been substantial evidence showing that planetary occurrence is reduced and planetary architecture is modified when stellar companions are close, with separations \(\lesssim\) 100 AU (Wang et al., 2014; Kraus et al., 2016; Moe and Kratter, 2021; Fontanive et al., 2019; Su et al., 2021). Recently, it has been reported that both the period and radius distributions of exoplanets exhibit dependencies on stellar clustering (Winter et al., 2020; Kruijssen et al., 2020; Chevance et al., 2021; Longmore et al., 2021). Dai. et al. (2021) found that stellar groups with high relative velocities tend to have a lower occurrence rate of super-Earths and sub-Neptunes but a higher occurrence rate of sub-Earths. The Galactic membership and total velocity of stars are also linked with the planet occurrence rate. It has been found that stars in the thick disk (higher total velocity) generally have fewer planets than those in the thin disk (lower total velocity, Bashi and Zucker, 2019, 2022; Chen et al., 2021).
The occurrence and architecture of planets in our Galaxy could also evolve with time. Therefore, measuring planet occurrence and architecture as a function of time can provide crucial insights into planet formation and evolution. For example, recent studies (e.g., Berger et al., 2020; Sandoval et al., 2021; David et al., 2021; Chen et al., 2022) have revealed that the relative occurrence (ratio) of super-Earths and sub-Neptunes evolves on a time scale of Giga years, providing crucial constraints on the formation of the radius valley (a deficit of planets with radii of \(\sim\)1.7-2.1 \(R_{\oplus}\), Fulton et al., 2017). More recently, Bashi and Zucker (2022) found tentative evidence that suggests the occurrence rate of close in super-Earths detected by Kepler is anti-correlated with stellar age. However, such an anti-correlation is still inconclusive, probably because they adopted the isochrone ages which suffer from large uncertainties (56% for Kepler stars, Berger et al., 2020). Furthermore, they didn't isolate the effect of age from other stellar properties (e.g., metallicity), so it is still unclear whether the anti-correlation is intrinsic or just a projection of other correlations.
To investigate how planet occurrence and architecture evolve with time, we have started a series of work named Planets Across Space and Time (PAST, Chen et al., 2021). The first challenge of this work is to determine the age of main-sequence stars, which make up the bulk of planet hosts. In PAST I (Chen et al., 2021), we revisited the kinematic method to classify Galactic components and the Age-Velocity dispersion Relationship (AVR, Stromberg, 1946; Wielen, 1977; Holmberg et al., 2009), extending the viable range to 1.5 kpc to cover the majority of the known exoplanet hosts. The deduced kinematic age for an ensemble of stars has a typical internal uncertainty of 10%-20%. The second challenge is to isolate the effect of stellar age, because age is generally correlated with other properties, such as stellar mass and metallicity. Applying the revised kinematic method of PAST I, we constructed a catalog of kinematic and other basic properties for 35,835 Kepler stars in PAST II (Chen et al., 2021). Such a large and homogeneous sample enables us to further set control samples to isolate the effect of age from other stellar properties. In PAST III, we investigated how the radius distribution of small planets evolves with time (Chen et al., 2022). In this work, the fourth paper of the PAST series (PAST IV), we study the occurrence and architecture of Kepler planets as a function of stellar age.
This paper is organized as follows: in Section 2, we present the star and planet data used in this work. In Section 3, we describe the parameter control method to isolate the age effect, and present the apparent occurrence of Kepler planets. In Section 4, we adopt a forward modeling method to derive the intrinsic occurrence rate and architecture of Kepler planets. We make the discussions and summarize the main conclusions in Section 5 and 6.
## 2 Data Sample
### Star Sample
The LAMOST-Gaia-Kepler catalog constructed in PAST II is based on LAMOST DR4/DR5 and Gaia DR2. Since LAMOST has been updated to DR8 (Yan et al., 2022) and Gaia has released EDR3/DR3 (Gaia Col
laboration et al., 2022), we have therefore updated the LAMOST-Gaia-Kepler catalog accordingly. We start from the stellar properties catalogue from Berger et al. (2020), which provides a homogeneous calibration of effective temperature, mass, and radius for most of the Kepler stars. The Kepler team calculated Combined Differential Photometric Precision (CDPP, Christiansen et al., 2012; Kepler Mission, 2019; Kepler Project, 2020) for each target, which defines the completeness of transit searching. We restrict our sample to targets with available \(\sigma_{\rm CDPP}\) values 2. Then we cross-match the sample with the recently released LAMOST DR8 3 low-resolution catalogue (Yan et al., 2022), which contains more spectral observations of Kepler stars reprocessed with the latest pipeline version. We select stars with LAMOST metallicity and radial velocity measurements, and remove stars with [Fe/H] less than \(-1.0\) due to the lack of a training set (Xiang et al., 2019). Next, we cross-match with the Gaia DR3 (Gaia Collaboration et al., 2022; IRSA, 2022) catalog, which includes more accurate measurements of positions, proper motions, and parallaxes of stars compared to DR2. Gaia DR3 also provides the renormalised unit weight error (RUWE, Lindegren, 2018) for identifying possible binary stars. Stars with RUWE values greater than 1.2 are excluded from our sample (Berger et al., 2020; Bryson et al., 2020). We obtain 70,239 stars in total; the size of our sample after each selection step is summarized in Table 1.
Footnote 2: [https://exoplanetarchive.ipac.caltech.edu/docs/Kepler_completeness_reliability.html](https://exoplanetarchive.ipac.caltech.edu/docs/Kepler_completeness_reliability.html)
Footnote 3: [http://www.lamost.org/dr8/](http://www.lamost.org/dr8/)
Utilizing Gaia DR3, we update the kinematic method and AVR of PAST I; the details can be seen in Appendix A. In PAST I, the calibrations of the kinematic method and AVR extended from the solar neighborhood to a larger range of stars with \(|Z|<1.5\) kpc, \(7.5<R<10\) kpc, and distance\(<\)1.5 kpc, where \(Z\) and \(R\) are the vertical and radial components of Galactocentric cylindrical coordinates, respectively. Here, we adopt a similar range of stars, but further extend the distance to 2 kpc, thanks to the improvement of the astrometric measurements from Gaia DR2 to Gaia DR3. With the updated kinematic method (see Appendix A), we calculate the probabilities of stars belonging to each Galactic component, i.e., thin disk, thick disk, halo, and Hercules stream (dubbed as \(D\), \(TD\), \(H\), and \(Her\)). We classify stars into different components following the commonly used method introduced by Bensby et al. (2003, 2014), and show the results in the Toomre diagram (Figure 1). Since the AVR can only be applied to disk stars, we limit our sample to stars within the Galactic disk, i.e., stars with \(D/Her\geqslant 2\), \(TD/H\geqslant 1\), and \(TD/Her\geqslant 2\).
In Figure 2, we show stars from the updated LAMOST-Gaia-Kepler sample in the Hertzsprung-Russell diagram. The effective temperature and radius data are obtained from Berger et al. (2020), and the evolutionary stage is calculated with the same method as Bryson et al. (2020) using the python package evolstate4. We further limit our star sample to main-sequence solar-type stars, with effective temperatures between 4700 and 6500 K. So far, the number of stars in our sample is 19,537.
Footnote 4: [http://ascl.net/1905.003](http://ascl.net/1905.003)
### Planet Sample
Our Kepler planet sample is based on Kepler DR25 (Thompson et al., 2018; NASA Exoplanet Archive, 2020). We select planets/candidates within our star sample, and exclude those flagged as 'false positives'. We only consider planets with periods less than 100 days, as the observed planet numbers and detection efficiencies both drop significantly beyond this period (Burke and Catanzarite, 2017). We also exclude planetary systems with Ultra Short Period planets (USPs, period \(<1\) day) from our Fiducial analysis (see Section 5.1 for more discussions on USPs). This exclusion is made because the standard Kepler pipeline is not well conditioned to search for USPs (Sanchis-Ojeda et al., 2014), and because USPs are relatively rare (with an occurrence rate \(\sim 1\%\)) and may
Figure 1: Toomre diagram of stars in the updated LAMOST-Gaia-Kepler sample. The blue, orange, grey, green, red, and grey dots present stars in the thin disk, thick disk, in between thin and thick disk, halo, and Hercules stream, respectively. The grey dot lines represent the total Galactic velocity at 100, 200, and 300 km s\({}^{-1}\).
have undergone different formation and evolution processes (Dai et al., 2018). The planet radii are derived from stellar radii (Berger et al., 2020) and the planet-to-star radius ratio (\(R_{p}/R_{s}\), Thompson et al., 2018). Planets with radii smaller than \(0.5R_{\oplus}\) are excluded due to their relatively low detection efficiency (Burke and Catanzarite, 2017). Since we focus on the occurrence rate and architecture of small planets, we exclude planet systems with planets larger than \(6R_{\oplus}\) (see Section 5.1 for more discussions on giant planets). The selection process of the planet sample is summarized in Table 1. The period-radius distribution of our planet sample is shown in Figure 3.
### Kinematic Age of Planet Host and non-Host
In PAST II, we have shown that the kinematic age of stars drops as a function of the number of transiting planets. Here, we also bin stars according to their transiting planet number into four groups; in each group, stars have zero, one, two, and three or more transiting planets, respectively. We derive the kinematic age for each group using the updated AVR (see details in Appendix A). Since kinematic age is calculated from the dispersion of total Galactic velocity, it can be skewed by velocity outliers. To reduce the effect caused by outliers, we calculate the median value and the Mean Absolute Deviation (MAD) of the total Galactic velocity for each group. Then we remove stars whose total velocities lie outside the range \(\mathrm{Median}\pm 5\times\mathrm{MAD}\) within this group. The kinematic age results for each group are presented in Figure 4, and we compare them to the results of PAST II (Chen et al., 2021). As can be seen, the kinematic ages derived in this work are generally consistent with those of PAST II within the \(1\sim 2\sigma\) range. They both show a declining trend in kinematic age with increasing planet multiplicity. Nevertheless, the ages obtained in this work are systematically lower by about 0.2-0.5 Gyr compared to those in PAST II. This difference is expected, because we remove outlier stars in this work, which usually have high velocities. The removal of outliers causes a decrease in velocity dispersion and, consequently, a lower value of kinematic age.
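For reference, this clipping step amounts to the following (a sketch assuming numpy; `v_tot` is a hypothetical array of total Galactic velocities for one group):

```
import numpy as np

def clip_velocity_outliers(v_tot, n_mad=5.0):
    med = np.median(v_tot)
    mad = np.mean(np.abs(v_tot - med))            # mean absolute deviation about the median
    return v_tot[np.abs(v_tot - med) <= n_mad * mad]
```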
In total, we obtain 19,358 stars in the star sample, and 663 planets in 467 systems. The size of our sample after each step of selection can be seen in Table 1.
\begin{table}
\begin{tabular}{l c c} \hline \hline & Star & Planet \\ \hline Berger et al. (2020) & 186,301 & 3,826 \\ With \(\sigma_{\rm CDPP}\) & 185,161 & 3,826 \\ Match with LAMOST DR8 & 70,251 & 1,562 \\ With RV data and [Fe/H] \(\geq-1\) & 68,567 & 1,549 \\ Match with _Gaia_ DR3 & 67,922 & 1,535 \\ RUWE\(\leq\)1.2 (Remove potential binary) & 55,332 & 1,320 \\ \hline \(|Z|<\)1.5 kpc, 7.5\(<R<\)10 kpc, & & \\ and distance\(<\)2 kpc & 45,914 & 1,279 \\ \(TD/H\geq\)1, \(TD/Her\geq\)2, \(D/Her\geq\)2 & 40,347 & 1,109 \\ (In the thin, thick disk, or in between) & & \\ \hline Main sequence & 27,213 & 940 \\
4700K \(\leq\)\(T_{\mathrm{eff}}\leq 6500\)K & 19,537 & 784 \\ \hline \end{tabular}
\end{table}
Table 1: Data selection
Figure 3: Planet sample in the period-radius diagram. The grey dots show the whole planet sample before data selection. The blue, orange, and green dots present planets in single, double, and three or more planet systems after we apply all the selection criteria.
Figure 2: Hertzsprung–Russell diagram of stars in the updated LAMOST-Gaia-Kepler sample. The blue, orange, and red dots show stars in the main-sequence stage, subgiant, and red giant, respectively.
## 3 Apparent Trend Analysis from Parameter Control
In this section, we derive the apparent planet occurrence rate as a function of stellar kinematic age. The apparent planet occurrence rate is defined as the _observed_ planet multiplicity function (number of stars that have one, two, and three or more transit planets, dubbed as \(N_{1}\), \(N_{2}\), and \(N_{3+}\)) divided by the number of stars (\(N_{star}\)) in each bin.
### Parameter Control
To isolate the effect caused by stellar age, we use the parameter control method to reduce the influences induced by other stellar properties. In this work, we control five properties: effective temperature, mass, metallicity, stellar radius, and \(\sigma_{\rm CDPP}\). The former three parameters need to be controlled because they are found to affect the intrinsic planet occurrence rate (e.g., Buchhave et al., 2012; Yang et al., 2020; He et al., 2021). The latter two also need to be controlled because they directly affect the detection efficiency of transiting planets.
The basic idea of the parameter control is to let stars in different age bins have similar distributions in the controlled stellar properties. To achieve this goal, we apply a 'finding star neighbors' method, similar to the one described in Chen et al. (2022), which involves the following steps:
1. Grouping stars. For the whole star sample with a size of \(N\), we sort the stars in ascending order of _TD/D_, which is an effective indicator of stellar age (Chen et al., 2021). Then we group the stars into an odd number of bins. To implement parameter control, the middle bin contains fewer stars, while the other bins have more stars; the farther away from the middle bin, the more stars there are. In this study, we first consider a case of three bins, with the bins containing 40%, 20%, and 40% of the stars, respectively, to have a qualitative view of the age trend. To further quantify the age-occurrence rate trend, we consider a case of five bins, with the bins containing 25%, 20%, 10%, 20%, and 25% of the stars, respectively. Due to the limited sample size, we do not consider cases with more bins.
2. Choosing a standard sample. We dub the stars in the middle bin as the'standard sample', and the number of stars in the central bin is denoted as \(N_{st}\).
3. Finding the nearest neighbor stars. In each bin (except the middle one), we select \(N_{st}\) stars that are the closest neighbors of the standard sample in the space of the controlled parameters. This is done by adopting the nearest-neighbor method from the scikit-learn (Pedregosa et al., 2012) package; a minimal sketch is given after this list.
4. Checking the parameter control result. We calculate the differences of the 25th, 50th, and 75th percentiles of each controlled parameter between every two bins. If all the differences are less than the typical errors, we consider these parameters to have been controlled. The typical errors of temperature, mass, and radius are 112 K, 7%, and 4%, respectively (Berger et al., 2020). For metallicity, we choose 0.05 dex as the typical error, which is the median value of the internal measurement uncertainties in our star sample. The Kepler team has reported \(\sigma_{\rm CDPP}\) for different timescales. We choose the \(\sigma_{\rm CDPP}\) of 3.5 hours because it is the closest to the median transit duration of our
\begin{table}
\begin{tabular}{l c c} \hline \hline & Star & Planet \\ \hline Orbit Period\(\leqslant\)100 days &... & 720 \\ Remove ultra short period system &... & 703 \\ (No planet with period \(<\)1 day) & & \\ Planet radii\(\geqslant\)0.5 RE &... & 698 \\ Remove giant planet system &... & 663 \\ (No planet with radii \(>\)6 RE) & & \\ \hline Median\(-5\times\)MAD\(\leqslant V_{tot}\) & & \\ and \(V_{tot}\leqslant\)Median\(+5\times\)MAD & 19,358 & 641 \\ \hline \end{tabular}
\end{table}
Table 1: _(continued)_
Figure 4: The blue dots and errorbars show the kinematic age and \(\pm 1\sigma\) ranges for stars with zero, one, two, and three or more planets for this work, and the orange dots and errorbars are values from PAST II (Chen et al., 2021), the x-axis is offset by 0.1 for clearance.
planet sample (3.35 hours). Since the SNR of a Kepler planet is proportional to \((R_{p}/R_{\rm s})^{2}/\sigma_{\rm CDPP}\), we choose a typical error of 10% for \(\sigma_{\rm CDPP}\), to match the uncertainty induced by the stellar radius and the planet-to-star radius ratio.
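A minimal sketch of the nearest-neighbor selection in step 3 above (the pandas column names are hypothetical, and the controlled columns are assumed to have been rescaled by their typical uncertainties so that Euclidean distances are comparable):

```
import pandas as pd
from sklearn.neighbors import NearestNeighbors

CONTROL_COLS = ["teff", "mass", "feh", "radius", "cdpp3p5"]

def closest_to_standard(standard, other_bin, n_st):
    # Distance of each star in `other_bin` to its nearest star in the standard sample,
    # then keep the n_st stars of `other_bin` that lie closest to the standard sample.
    nn = NearestNeighbors(n_neighbors=1).fit(standard[CONTROL_COLS].to_numpy())
    dist, _ = nn.kneighbors(other_bin[CONTROL_COLS].to_numpy())
    return other_bin.iloc[dist.ravel().argsort()[:n_st]]
```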
To get an intuitive view of how well the parameters have been controlled, we plot Figure 5 and Figure 6 in which we perform parameter control for the cases of three age bins and five age bins. In the first row of Figure 5 and Figure 6, we show the Cumulative Distribution Function (CDF) diagrams of temperature, mass, metallicity, radius, and \(\sigma_{\rm CDPP}\) 3.5 hours for the observation data. Using the above method, we control all parameters and show the CDF of controlled star samples in the bottom row. By applying the parameter control method, we have achieved the goal to let stars in different age bins have similar distribution in stellar temperature, mass, metallicity, radius, and \(\sigma_{\rm CDPP}\) (Figure 5 and 6).
### Apparent Planetary Occurrence as a Function of Age
We first consider a three bins case and calculate the kinematic age of each bin using AVR (Appendix A). We adopt the above controlled sample and calculate the numbers of stars that have one, two, three or more planets, i.e., the planet multiplicity function (\(N_{1}\), \(N_{2}\), and \(N_{3+}\)). The apparent occurrence rate of one, two, and three or more planet systems is derived by dividing \(N_{1}\), \(N_{2}\), and \(N_{3+}\) by the number of stars (\(N_{star}\)) in each bin. In Figure 7, from the left column to the right column, we present the apparent occurrence rate for one, two, and three or more planet systems. As can be seen, the young stars generally have a higher apparent occurrence than the old stars. For the original data without parameter control (first row of Figure 7), \(N_{1}/N_{star}\), \(N_{2}/N_{star}\), and \(N_{3+}/N_{star}\) are \(2.12^{+0.18}_{-0.17}\)%, \(0.61^{+0.10}_{-0.09}\)%, and \(0.35^{+0.08}_{-0.07}\)% for stars less than 1 Gyr, which are about \(4.8\sigma\), \(4.4\sigma\), and \(4.8\sigma\) higher than those (\(1.37^{+0.15}_{-0.13}\)%, \(0.26^{+0.07}_{-0.06}\)%, and \(0.08^{+0.05}_{-0.03}\)%) for stars about 8 Gyr, respectively. For the data after all parameter control (bottom row of Figure 7), \(N_{1}/N_{star}\), \(N_{2}/N_{star}\), and \(N_{3+}/N_{star}\) are \(1.86^{+0.25}_{-0.22}\)%, \(0.46^{+0.14}_{-0.11}\)%, and \(0.31^{+0.12}_{-0.09}\)% for the youngest group, which are about \(1.6\sigma\), \(1.4\sigma\), and \(3.3\sigma\) higher than those (\(1.50^{+0.22}_{-0.20}\)%, \(0.31^{+0.12}_{-0.09}\)%, and \(0.05^{+0.07}_{-0.03}\)%) for the oldest group, respectively. The differences in apparent rate between young and old groups become smaller when taking into account of parameter control. Such a change is more prominent for low multiplicity systems (\(N_{1}/N_{star}\) and \(N_{2}/N_{star}\)) than for high multiplicity systems (\(N_{3+}/N_{star}\)). We also calculate the Pearson correlation coefficients and \(p\)-values for the correlations between age and apparent planet occurrence, which are printed in each panel of Figure 7. The \(p\)-values are derived using the following steps.
1. We calculate the Pearson correlation coefficients for the observation data as \(\rho_{obs}\).
2. We generate simulated apparent occurrence rates for each bin assuming Poisson error, and randomly scramble their order. Then we calculate the Pearson correlation coefficient between age and the scrambled data (\(\rho_{sim}\)).
Figure 5: Cumulative Distribution Function (CDF) diagrams of effective temperature, mass, metallicity, radius, and \(\sigma_{\rm CDPP}\) 3.5 hours for the three bins method before and after parameter control. The errorbar in the upper left corner shows the typical error of each stellar property, and the number in the lower right corner presents the size of the standard sample.
Figure 6: Similar to Figure 5, here we show CDF diagrams for the five bins case. The blue, orange, green, red, and purple lines represent the star samples with _TD/D_ in the ranges of (0, 0.0427), [0.0427, 0.0618), [0.0618, 0.0808), [0.0808, 0.184), and \([0.184,+\infty)\), respectively.
3. We repeat Step 2 for 10,000 times and calculate the fraction of the simulated data that produce a stronger anti-correlation, i.e., \(\rho_{sim}<\rho_{obs}\). This fraction gives the \(p\)-value of the Pearson correlation for the observed data.
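These three steps amount to the following Monte Carlo estimate (a sketch assuming numpy and scipy; `age`, `rate`, and `rate_err` are hypothetical arrays of bin ages, apparent occurrence rates, and their uncertainties, and a Gaussian perturbation stands in for the Poisson resampling):

```
import numpy as np
from scipy.stats import pearsonr

def occurrence_age_p_value(age, rate, rate_err, n_sim=10_000, seed=0):
    rng = np.random.default_rng(seed)
    rho_obs, _ = pearsonr(age, rate)            # step 1
    stronger = 0
    for _ in range(n_sim):                      # steps 2 and 3
        sim = rng.normal(rate, rate_err)        # perturb within the quoted errors
        rng.shuffle(sim)                        # scramble the order
        rho_sim, _ = pearsonr(age, sim)
        stronger += rho_sim < rho_obs
    return stronger / n_sim
```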
As we can see, the anti-correlations between age and \(N_{1}/N_{star}\), \(N_{2}/N_{star}\) become weaker after parameter control, with the \(p\)-values rising from 0.0978 to 0.145 and from 0.0237 to 0.181, respectively. Nevertheless, the anti-correlation between age and \(N_{3+}/N_{star}\) becomes stronger, with the \(p\)-value decreasing from 0.056 to 0.0117.
To further trace the apparent planet occurrence trend with age, we group the stars into five age bins, and calculate the corresponding kinematic age. In Figure 8, we present the apparent planet occurrence rate as a function of kinematic age for systems with one, two, and three or more planets. In line with the results of the three bins case, we find that the apparent occurrence rate generally has a declining trend with kinematic age. For the original data (before parameter control), \(N_{1}/N_{star}\), \(N_{2}/N_{star}\), and \(N_{3+}/N_{star}\) are \(2.17^{+0.23}_{-0.21}\%\), \(0.76^{+0.15}_{-0.13}\%\), and \(0.35^{+0.11}_{-0.08}\%\) for the youngest stars, which are about \(4.6\sigma\), \(4.2\sigma\), and \(3.6\sigma\) higher than those (\(1.26^{+0.18}_{-0.16}\%\), \(0.29^{+0.10}_{-0.08}\%\), and \(0.08^{+0.07}_{-0.04}\%\)) for the oldest stars, respectively (top row of Figure 8). For the data after all parameter control, \(N_{1}/N_{star}\), \(N_{2}/N_{star}\), and \(N_{3+}/N_{star}\) are \(1.65^{+0.35}_{-0.29}\%\), \(0.57^{+0.23}_{-0.17}\%\), and \(0.41^{+0.20}_{-0.14}\%\) for stars less than 1 Gyr, which are about \(0.5\sigma\), \(1.1\sigma\), and \(2.8\sigma\) higher than those (\(1.50^{+0.33}_{-0.28}\%\), \(0.36^{+0.19}_{-0.13}\%\), and \(0.05^{+0.12}_{-0.04}\%\)) for stars about 8 Gyr, respectively (bottom row of Figure 8). Again, being consistent with the results in the three bins case, the differences in the apparent occurrence rate of stars with different ages become smaller after parameter control. Nevertheless, the differences are still significant (\(\sim 3\sigma\)) for systems with high transiting multiplicities (\(N_{3+}/N_{star}\), right column of Figure 8). We also calculate the Pearson correlation coefficients and \(p\)-values, as in the three bins case, and print them in each panel of Figure 8. Similar to the three bins case, the anti-correlation between age and \(N_{1}/N_{star}\) becomes weaker after parameter control, with the \(p\)-value rising from 0.0075 to 0.191. Nevertheless, the anti-correlations between age and multiple planet systems (\(N_{2}/N_{star}\) and \(N_{3+}/N_{star}\)) become a little stronger, with the \(p\)-values dropping from 0.212 to 0.0108 and from 0.0134 to 0.0011, respectively.
We have also employed Canonical Correlation Analysis (CCA) to investigate the relationship between stellar properties and the apparent planet occurrence. The CCA method derives similar results as shown above, indicating that stellar age is anti-correlated with planet occurrence without the need for performing parameter control. However, the CCA method is unable to identify the star and planet samples required for the forward modeling method (see Section 4) in order to derive the intrinsic planet occurrence. For more detailed information, please refer to Appendix B.
## 4 Intrinsic Trend Analysis from Forward Modeling
### Forward Modeling Method
The above apparent occurrence rates only reflect the observed planet population. In order to derive the intrinsic planet occurrence rates of the underlying planet population, we adopt a forward modeling method that takes into account the _transit_ observation bias and the detection/vetting efficiencies. The framework of the method has been described in detail in Zhu et al. (2018) and Yang et al. (2020). In this section, we summarize the general procedure and emphasize the modifications considered in this work.
#### 4.1.1 General procedure of the modeling
We derive the observed planet multiplicity function (\(N_{1}\), \(N_{2}\), and \(N_{3+}\)) from the star sample. Then we generate simulated planet systems, and calculate the modeled multiplicity function (\(\bar{N}_{1}\), \(\bar{N}_{2}\), and \(\bar{N}_{3+}\)). For our
Figure 7: Apparent planet occurrence rate for the three bins case with one (left), two (middle), and three or more (right) transiting planets. The top and bottom rows correspond to the results before and after parameter control, as shown in Figure 5. The dots and errorbars present the median value and \(\pm 1\sigma\) range, assuming Poisson error. The numbers at the top of each panel show the corresponding planet system numbers (\(N_{1}\), \(N_{2}\), and \(N_{3+}\)). In the corner of each panel, we also print the Pearson correlation coefficient (\(\rho\)) and the corresponding \(p\)-value.
star sample with a size of \(\sim 20,000\), we need to generate \(\sim 10,000\) planet systems, assuming that on average 50% of the stars host planet systems (the true value of \(F_{\rm Kep}\) differs for each group and is automatically adjusted in the MCMC process). The total number of generated planet systems is about 400,000,000 by the time the simulation has converged. We assume the multiplicity function follows a Poisson distribution, and optimize the likelihood function
\[\mathcal{L}=\prod_{k=1}^{3+}\frac{\bar{N}_{k}^{N_{k}}\exp(-\bar{N}_{k})}{N_{k}!} \tag{2}\]
with the Python package emcee (Foreman-Mackey et al., 2013), applying the Markov Chain Monte Carlo (MCMC) method. Three free parameters are constrained in our model, which are the fraction of stars with Kepler-like planets (\(F_{\rm Kep}\)), the average planet number for stars that have Kepler-like planets (\(\bar{N}_{p}\)), and the inclination slope index (\(\alpha\), see below). Details of generating the modelled multiplicity function (\(\bar{N}_{1}\), \(\bar{N}_{2}\), and \(\bar{N}_{3+}\)) can be found in Yang et al. (2020). We briefly summarize the general procedure as follows (a schematic code sketch is given after the list):
1. Assuming the intrinsic planet occurrence. For each group of stars, we assume that a fraction of \(F_{\rm Kep}\) percent of stars have Kepler-like planets, and on average, each planet system has \(\bar{N}_{p}\) planets. For each host star, we generate \(k\) planets following a zero-truncated Poisson distribution of \(\bar{N}_{p}\)(Fang and Margot, 2012).
2. Assuming transit parameters and radii for planets. We generate the debiased distributions of transit parameter (\(\epsilon\), \(\epsilon=R_{s}/a_{p}\), \(R_{s}\) is the stellar radius and \(a_{p}\) is the semi-major axis of the planet) and planet radii (\(R_{p}\)) considering three kinds of bias (Mulders et al., 2018), namely, the transit geometry bias (\(f_{\rm tra}\)), detection efficiency bias (\(f_{\rm S/N}\)), and vetting efficiency bias (\(f_{\rm vet}\)). For each planet, we assign values of \(\epsilon\) and \(R_{p}\) that are randomly drawn from the debiased distributions.
3. Adjusting period ratios and radius ratios. Planets within the same system tend to have similar period ratios (Fabrycky et al., 2014; Brakensiek and Ragozzine, 2016; Jiang et al., 2020) and radius ratios (Ciardi et al., 2013; Weiss et al., 2018). To account for this correlation, we adjust the period ratios and radius ratios for planets within the same system. These adjustments are based on debiased distributions calculated by CORBITS(Brakensiek and Ragozzine, 2016) for multiple planet systems.
4. Checking Orbital stability. To ensure that the simulated planetary systems are physically plausible, we assess their orbital stability. We apply the criterion provided by Deck et al. (2013) to prevent planets within the same system from being located too close to each other.
5. Assigning orbital inclination to generate transits. We calculate the inclination (\(I_{p}\)) of the planets with respect to the observer by \[\cos I_{p}=\cos I\cos i~{}-~{}\sin I\sin i\cos\phi,\] (3) where \(I\) represents the inclination of the system invariable plane, \(\phi\) is the phase angle, and \(i\) is the inclination of the planet with respect to the invariable plane. Both \(I\) and \(\phi\) are assumed to be isotropic. Following Zhu et al. (2018), for a planet system with \(k\) planets, the inclination dispersion of the planets follows a power-law function, \[\sigma_{i,k}\equiv\sqrt{\left\langle\sin^{2}i\right\rangle}=\sigma_{i,5}\left( \frac{k}{5}\right)^{\alpha},\] (4) where we adopt \(\sigma_{i,5}\) as a Gaussian distribution with a mean of \(0\fdg 8\) and a standard deviation of \(0\fdg 15\) from Zhu et al. (2018). We fit the inclination slope index \(\alpha\) as the third parameter. A planet is considered to be transiting if its impact parameter is less than 1 (\(|\cos I_{p}/\epsilon|<1\)).
Figure 8: Apparent planet occurrence as a function of kinematic age for systems with one (left), two (middle), and three or more (right) transiting planets. The top and bottom rows correspond to the results before and after parameter control as shown in Figure 6. The dots and errorbars present the median value and \(\pm 1\sigma\) range assuming Poisson error. The numbers on the top of each panel show the corresponding planet system numbers (\(N_{1}\), \(N_{2}\), and \(N_{3+}\)). In the corner of each panel, we also print the Pearson correlation coefficient (\(\rho\)) and the corresponding \(p\)-value.
6. Checking detection and vetting efficiencies. Due to detection and vetting efficiencies, not all the transiting planets can be detected. For each transiting planet, we generate a random number from a uniform distribution ranging from 0 to 1. We consider the planet to be detected if the generated random number is less than the product of the detection efficiency (\(f_{\rm S/N}\), see Appendix C) and the vetting efficiency (\(f_{\rm vet}\)).
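To make the procedure concrete, the sketch below illustrates the core of a single forward-model draw (Steps 1, 5, and 6) together with the log of the Poisson likelihood in Equation (2). It is a simplification under stated assumptions: the debiased distributions of \(\epsilon\) and \(R_{p}\), the period/radius-ratio adjustments, and the stability check of Steps 2-4 are replaced by placeholder scalar inputs, and the mutual inclinations are drawn from a simple half-Gaussian of width \(\sigma_{i,k}\).

```python
import numpy as np
from scipy.special import gammaln

rng = np.random.default_rng(0)

def zero_truncated_poisson(lam):
    """Step 1: number of planets per system, k >= 1 (zero-truncated Poisson)."""
    k = 0
    while k == 0:
        k = rng.poisson(lam)
    return k

def simulate_system(nbar_p, alpha, eps, p_detect):
    """One forward-model draw: return the number of *detected* transiting planets.

    eps      : transit parameter R_s/a_p (placeholder scalar; drawn per planet
               from debiased distributions in the full model).
    p_detect : product f_S/N * f_vet (placeholder scalar; per planet in the full model).
    """
    k = zero_truncated_poisson(nbar_p)
    # Eq. (4): inclination dispersion grows with intrinsic multiplicity k.
    sigma_i5 = rng.normal(0.8, 0.15)                        # degrees, from Zhu et al. (2018)
    sigma_ik = sigma_i5 * (k / 5.0) ** alpha
    i = np.abs(rng.normal(0.0, np.deg2rad(sigma_ik), k))    # simplified mutual inclinations
    I = np.arccos(rng.uniform(-1.0, 1.0))                   # isotropic invariable plane
    phi = rng.uniform(0.0, 2.0 * np.pi, k)                  # isotropic phase angles
    # Eq. (3): inclination of each planet with respect to the observer.
    cos_Ip = np.cos(I) * np.cos(i) - np.sin(I) * np.sin(i) * np.cos(phi)
    transiting = np.abs(cos_Ip / eps) < 1.0                 # impact parameter < 1
    detected = transiting & (rng.uniform(size=k) < p_detect)  # Step 6
    return int(detected.sum())

def log_likelihood(N_obs, N_model):
    """Log of Equation (2): Poisson likelihood of the multiplicity function."""
    N_obs = np.asarray(N_obs, float)        # observed (N_1, N_2, N_3+)
    N_model = np.asarray(N_model, float)    # modeled counterparts
    return np.sum(N_obs * np.log(N_model) - N_model - gammaln(N_obs + 1.0))
```

Repeating `simulate_system` over the star sample and histogramming the returned multiplicities yields the modeled multiplicity function entering `log_likelihood`, which is then sampled over (\(F_{\rm Kep}\), \(\bar{N}_{p}\), \(\alpha\)) with emcee.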
#### 4.1.2 Differences from Yang et al. (2020)
Compared to our previous work, we do not consider the TTV multiplicity function, namely, the number of systems that show a TTV signal. The TTV function is omitted for two reasons. First, the number of stars in our sample in this work is less than 20,000, and the number of systems that show a TTV signal is only 31, much smaller than the 100,000 star sample and the 127 TTV systems in Yang et al. (2020). The smaller number of TTV systems leads to larger uncertainty. Second, as shown in the Appendix of Yang et al. (2020), although removing the TTV multiplicity function from the likelihood leads to a weaker constraint on the parameter \(\alpha\), it has little impact on the results of \(F_{\rm Kep}\) and \(\bar{N}_{p}\), which are the core parameters of this work.
### Intrinsic Planetary Occurrence and Architecture as a Function of Age
We show the forward modeling results for the case of three bins in Figure 9. From the left panels to the right panels, we present the posterior distributions of \(F_{\rm Kep}\), \(\bar{N}_{p}\), \(\alpha\), and \(\eta\) (which is the product of \(F_{\rm Kep}\) and \(\bar{N}_{p}\), see Equation 1). For data without parameter control, the youngest group generally has higher intrinsic planet occurrence rates than the oldest group. \(F_{\rm Kep}\), \(\bar{N}_{p}\), and \(\eta\) are \(63.2^{+11.4}_{-9.4}\%\), \(2.71^{+0.43}_{-0.40}\), and \(1.71^{+0.16}_{-0.16}\) for stars less than 1 Gyr, which are \(1.7\sigma\), \(1.8\sigma\), and \(5.4\sigma\) higher than those (\(47.5^{+8.9}_{-8.4}\%\), \(1.97^{+0.41}_{-0.32}\), and \(0.93^{+0.13}_{-0.12}\)) for stars about 8 Gyr, respectively (top row of Figure 9). For the data after all parameter control, \(F_{\rm Kep}\), \(\bar{N}_{p}\), and \(\eta\) are \(56.9^{+13.3}_{-11.6}\%\), \(2.74^{+0.64}_{-0.49}\), and \(1.57^{+0.23}_{-0.22}\) for the youngest group, which are \(0.2\sigma\), \(2.1\sigma\), and \(3.0\sigma\) higher than those (\(54.3^{+12.3}_{-11.4}\%\), \(1.75^{+0.47}_{-0.32}\), and \(0.96^{+0.19}_{-0.17}\)) for the oldest group, respectively (bottom row of Figure 9). All the three groups have nearly the same \(F_{\rm Kep}\) when parameter control is taken into account.
This result is expected because \(F_{\rm Kep}\) is mainly determined by the apparent occurrence rate of single planet systems (\(N_{1}/N_{star}\)). The difference in apparent occurrence rate for single planet systems between the youngest and oldest groups drops from \(4.8\sigma\) to \(1.6\sigma\) after parameter control (Figure 7), and consequently, the difference of \(F_{\rm Kep}\) drops from \(1.7\sigma\) to \(0.2\sigma\).
\(\bar{N}_{p}\) is largely determined by the apparent occurrence rate of high multiplicity systems (\(N_{3+}/N_{star}\)). The difference in \(N_{3+}/N_{star}\) between the youngest and oldest groups drops mildly from \(4.8\sigma\) to \(3.3\sigma\) after parameter control (Figure 7). Interestingly, the difference in \(\bar{N}_{p}\) increases slightly from \(1.8\sigma\) to \(2.1\sigma\). Due to the limited number of planetary systems with three or more planets in the last bin after parameter control (2 systems in the three bins case), the Poisson error is relatively high (\(2^{+1.8}_{-1.3}\)). We may have underestimated the declining trend of \(N_{3+}/N_{star}\), and the results of the forward modeling show that the decrease in \(\bar{N}_{p}\) becomes slightly more prominent. As \(\eta\) is the product of \(F_{\rm Kep}\) and \(\bar{N}_{p}\), the decrease of the difference in \(\eta\) from \(5.4\sigma\) to \(3.0\sigma\) is mainly due to the decrease of the difference in \(F_{\rm Kep}\). We calculate the Pearson correlation coefficients and \(p\)-values for the correlations between age and the intrinsic planet occurrence (\(F_{\rm Kep}\), \(\bar{N}_{p}\), and \(\eta\)), and print them in each panel in Figure 9. The \(p\)-values are derived using a similar method as shown in Section 3.2. For \(F_{\rm Kep}\) and \(\bar{N}_{p}\), the anti-correlations with age are statistically insignificant before and after parameter control, with \(p\)-values larger than 0.05. The anti-correlation between age and \(\eta\) is maintained, with a \(p\)-value smaller than 0.05.
The parameter \(\alpha\) is not well constrained in all the cases before and after parameter control, because it is mainly constrained by TTV multiplicity function (Zhu et al., 2018), which is ignored in this work.
Figure 9: Posterior distributions of \(F_{\rm Kep}\), \(\bar{N}_{p}\), \(\alpha\), and \(\eta\) for the three bins case are presented. The top and bottom rows show the forward modeling results corresponding to samples before and after parameter control as in Figure 5 and Figure 7. The dots and errorbars show the 50% and \(\pm 1\)–\(\sigma\) range. In the upper right of the panels in the first, second, and fourth columns, we also print the Pearson correlation coefficient (\(\rho\)) and the corresponding \(p\)-value.
To further investigate the intrinsic planet occurrence as a function of age, we group the stars into five bins as mentioned before in Section 3.2, and adopt the forward modeling method to derive \(F_{\rm{Kep}}\), \(\bar{N}_{p}\), \(\alpha\), and \(\eta\). The results are shown in Figure 10. For data without parameter control (top row of Figure 10), the values of \(F_{\rm{Kep}}\), \(\bar{N}_{p}\), and \(\eta\) are \(68.0^{+12.8}_{-11.9}\%\), \(2.68^{+0.52}_{-0.41}\), and \(1.82^{+0.22}_{-0.20}\), respectively, for stars in the youngest group. These values are \(2.1\sigma\), \(1.2\sigma\), and \(4.8\sigma\) higher than those (\(43.7^{+10.9}_{-8.9}\%\), \(2.09^{+0.54}_{-0.39}\), and \(0.92^{+0.17}_{-0.15}\)) for stars in the oldest group. After parameter control (bottom row of Figure 10), \(F_{\rm{Kep}}\) is \(47.1^{+17.9}_{-16.7}\%\) for stars less than 1 Gyr, which is \(0.6\sigma\) lower than stars around 8 Gyr (\(56.6^{+17.4}_{-14.9}\%\)). As to \(\bar{N}_{p}\) and \(\eta\), they are \(3.69^{+1.58}_{-0.96}\) and \(1.71^{+0.35}_{-0.31}\) for stars in the first bin, which are \(2.3\sigma\) and \(2.2\sigma\) higher than the values (\(1.80^{+0.67}_{-0.39}\) and \(1.04^{+0.29}_{-0.24}\)) in the last bin.
The forward modeling results for the five bins are consistent with those for the three bins (Figure 9). After parameter control, the difference in \(N_{1}/N_{star}\) between young and old stars shows a significant decrease from \(4.6\ \sigma\) to \(0.5\ \sigma\) (Figure 8). The difference in \(F_{\rm{Kep}}\) decreases from \(2.1\sigma\) to \(0.6\sigma\). The first bin has a higher \(N_{1}/N_{star}\) compared to the last bin, however, it shows a slightly lower value of \(F_{\rm{Kep}}\). That is because the first bin has more intrinsic multi-planet systems, which can also contribute to the apparent occurrence of \(N_{1}/N_{star}\). The difference in \(N_{3+}/N_{star}\) between young and old stars drops moderately from \(3.6\sigma\) to \(2.8\sigma\). Forward modeling indicates that the difference in \(\bar{N}_{p}\) increases slightly from \(1.2\sigma\) to \(2.3\sigma\). Similar to the three bins case, we have only one planet system with three or more planets in the last bin, resulting in a high Poisson error (\(1^{+2.3}_{-0.8}\)). As a consequence, we might underestimate the declining trend of \(N_{3+}/N_{star}\). As for \(\eta\) (the product of \(F_{\rm{Kep}}\) and \(\bar{N}_{p}\)) the difference drops from \(4.8\sigma\) to \(2.2\sigma\), which is mainly due to the decrease of the difference in \(F_{\rm{Kep}}\).
The Pearson correlation coefficients and \(p\)-values for the correlations between age and the intrinsic planet occurrence (\(F_{\rm{Kep}}\), \(\bar{N}_{p}\), and \(\eta\)) are also printed in the corner of each panel in Figure 10. The anti-correlation between age and \(F_{\rm{Kep}}\) becomes weaker after parameter control, with the \(p\)-value rising from \(0.0503\) to \(0.729\). The anti-correlation between age and \(\bar{N}_{p}\) becomes statistically significant, with the \(p\)-value dropping from \(0.0839\) to \(0.0052\). The anti-correlation between age and \(\eta\) is maintained, with the \(p\)-value changing from \(0.0021\) to \(0.0096\).
Similar to the three bins case, \(\alpha\) is not well constrained before and after parameter control.
## 5 Discussions
In this paper, we first investigate the apparent planet occurrence in terms of \(N_{1}/N_{star}\), \(N_{2}/N_{star}\), and \(N_{3+}/N_{star}\), and then derive from it the intrinsic planet occurrence in terms of \(F_{\rm{Kep}}\), \(\bar{N}_{p}\), and \(\eta\) as a function of stellar age using a forward modeling method. We have applied a parameter control method in our analyses to remove the effects caused by other stellar properties. We find that after parameter control, younger stars generally have higher apparent occurrence rates than older stars. Specifically, the intrinsic planet occurrence in terms of the number of planets per star (\(\eta\)) decreases with stellar age at a confidence level of about \(2\sim 3\sigma\). Such a declining trend is mainly driven by the decrease in the average multiplicity (\(\bar{N}_{p}\), by about \(2\sigma\)), and partially by the change in the fraction of stars with planet systems (\(F_{\rm{Kep}}\), by less than \(1\sigma\)). In what follows, we compare our results to those from the literature and discuss the implications of these findings for our understanding of planet formation and evolution.
### Giant and Ultra Short Period Planets
We exclude the giant and Ultra Short Period (USP) planets from our planet sample in Section 2.2. We dub this planet sample as the 'Fiducial' sample. To investigate the influence of giants and USPs on planet occurrence, in this section, we re-run our simulation including the giants, the USPs, and both of them. In Figure 11, from the top to the bottom, we show the results for the Fiducial sample, the sample including giant planets, the sample including USPs, and the sample including both
Figure 10: Similar to Figure 9, posterior distributions of \(F_{\rm{Kep}}\), \(\bar{N}_{p}\), \(\alpha\), and \(\eta\) for the five bins case are presented. The top and bottom rows show the forward modeling results corresponding to samples before and after parameter control, as shown in Figure 6 and Figure 8. The dots and errorbars show the 50% and \(\pm\)1–\(\sigma\) range. In the upper right of the panels in the first, second, and fourth columns, we also print the Pearson correlation coefficient (\(\rho\)) and the corresponding \(p\)-value.
of giant planets and USPs for the three bins case. As we can see, after applying the parameter control method, all results show similar trends. For \(F_{\rm Kep}\), the differences between the youngest and oldest groups are less than \(1\sigma\). The youngest groups generally have \(\bar{N}_{p}\) about \(2\sigma\) higher than the oldest groups, and \(\eta\) about \(2\sim 3\sigma\) higher. Including giant planets and USPs adds more planets into our sample, leading to a slightly higher value of \(F_{\rm Kep}\). At the same time, since giant planets are more likely to be detected in single planet systems, including giant planets causes a very small decrease in \(\bar{N}_{p}\).
Similar to the three bins case, the results for the five bins case are basically unchanged after including giants and USPs. As we can see in Figure 12, the anti-correlations between age and \(F_{\rm Kep}\) are statistically insignificant, with \(p\)-values higher than 0.5. For \(\bar{N}_{p}\) and \(\eta\), the \(p\)-values are less than 0.05, showing significant anti-correlations between them and age.
### Comparison with Previous Studies
McTier and Kipping (2019) have studied the dependence of planet occurrence rate on Galactocentric velocity. After correcting for selection biases, they found that Kepler planet hosts have a similar velocity distribution to the non-host Kepler stars. Based on such a similarity, they inferred that the planet occurrence rate is independent of Galactocentric velocity. Their inference is at odds with our results, which show that planet occurrences in terms of \(\bar{N}_{p}\) and \(\eta\) are anti-correlated with kinematic age and thus with Galactocentric velocity based on the Age-Velocity dispersion Relation (AVR). In fact, we
Figure 11: Posterior distributions of \(F_{\rm Kep}\), \(\bar{N}_{p}\), \(\alpha\), and \(\eta\) for the three bins case are presented. From the top to the bottom, each row shows the forward modeling results after parameter control for our Fiducial planet sample, the sample including the giants, the sample including the USPs, and the sample including both of the giants and the USPs, respectively. The dots and errorbars show the 50% and \(\pm 1\sigma\) range of the posterior distributions. In the upper right corner of the panels in the first, second, and fourth columns, we also print the Pearson correlation coefficient (\(\rho\)) along with the corresponding \(p\)-value.
Figure 12: Similar to Figure 11, posterior distributions of \(F_{\rm Kep}\), \(\bar{N}_{p}\), \(\alpha\), and \(\eta\) for five bins case are shown. From the top to the bottom rows, each row shows the forward modeling results corresponding to the Fiducial sample, the sample including the giants, the sample including the USPs, and the sample including both of the giants and the USPs, respectively. The dots and errorbars show the 50% and \(\pm 1\sigma\) range of the posterior distributions. In the upper right corner of the panels in the first, second, and fourth columns, we also print the Pearson correlation coefficient (\(\rho\)) and the corresponding \(p\)-value.
argue that their inference may not necessarily be valid for the following reasons.
First, since the occurrence rates of Kepler planets are generally found to be high (\(\sim\)50%, Mulders et al., 2018; Yang et al., 2020; He et al., 2021), a large fraction of the _apparent_ 'non-host' stars are actually hosts of planets that are not detected by Kepler. Therefore, it is not surprising that Kepler planet hosts have a velocity distribution similar to that of the non-host Kepler stars as found by McTier and Kipping (2019).
Second, a multiple transiting system is counted multiple times when counting planets, but only once when counting host stars. Therefore, multiples, which play a critical role in deriving the planet occurrence (e.g., \(\bar{N}_{p}\)), have little effect on determining the velocity distribution of host stars. In fact, as shown in Figure 8 of PAST II (Chen et al., 2021), Kepler planet hosts are dominated by single transiting systems, which have a velocity distribution similar to that of non-host stars.
Our results in PAST II have shown that multiple transiting systems have significantly different Galactocentric velocity distributions compared to single transiting systems. Such differences lead to the occurrence trends with Galactocentric velocity, and thus with kinematic age, seen in this work (Figure 9 and Figure 10), while still maintaining the similarity in Galactocentric velocity distribution between the Kepler planet hosts and the non-host Kepler stars seen in McTier and Kipping (2019). In short, the similarity in Galactocentric velocity between Kepler planet hosts and non-host Kepler stars does not necessarily imply that the intrinsic occurrence of Kepler planets is independent of Galactocentric velocity.
In a series of papers, Bashi and Zucker (2019, 2022) have studied the planet occurrence rate in the Galactic context. Bashi and Zucker (2019) found that for stars with metallicity higher than \(-0.25\), the planet occurrence rate generally decreases with the Galactocentric velocity of host stars. Bashi and Zucker (2022) found that the planet occurrence, in terms of the fraction of stars with planets and the number of planets per star, is higher for Galactic thin disk stars than for thick disk stars. In addition, Bashi and Zucker (2022) also showed an apparent anti-correlation between planet occurrence and the stellar isochrone age. Generally speaking, their results are consistent with ours. In this paper, we find that the planet occurrence rate decreases with _TD_/_D_ and age. Nevertheless, we emphasize some differences in our work as compared to theirs. First, we use the kinematic age, which has a relatively small internal uncertainty of \(\sim\)10-20% (Chen et al., 2021) compared to the isochrone age with a typical uncertainty of up to 56% (Berger et al., 2018). Second, we use the parameter control method to isolate the effect of age from other stellar properties. After removing these effects, we find that the anti-correlations between planet occurrence and age are weaker, especially for \(F_{\rm{Kep}}\), though they remain significant for \(\eta\) (Figure 9 and 10).
### Implications to Planet Formation and Evolution
In this study, we have revealed observational evidence that the occurrence and architecture (in terms of \(F_{\rm{Kep}}\) and \(\bar{N}_{p}\)) of Kepler planetary systems evolve over time. To gain deeper insights into planet formation and evolution, one would compare our observational results to theoretical models. Unfortunately, we did not find any models that predict \(F_{\rm{Kep}}\) or \(\bar{N}_{p}\) as a function of time, which would allow for a quantitative comparison with our findings. Nevertheless, there are still theoretical and numerical works in the literature that allow us to make qualitative comparisons.
Systems with more than two bodies are generally chaotic, and essentially unstable. Planetary systems are usually formed with more than one planet, and their architecture will be further shaped during the long-term dynamical evolution afterwards, e.g., triggered by orbital instability. The timescale of the orbital instability depends on many factors, such as the mass, number, eccentricity, and orbital spacing of the planets within the system. For a planetary system born with a large number of planets and tight orbital spacing, the orbital instability occurs quickly, which causes planet ejections and collisions, leading to a decrease in the planet number and an increase in orbital spacing (Zhou et al., 2007). This in turn increases the timescale of subsequent instability, which means the system needs to evolve for a longer time to trigger the next instability. As the system evolves, the instability timescale can grow to as long as billions of years. Our Solar System may have undergone such an evolutionary process (Tsiganis et al., 2005; Liu et al., 2022). In some models of our Solar System (e.g., Nesvorny and Morbidelli, 2012), it is thought that there were initially five or even more giant planets formed in a tightly packed orbital configuration. The current architecture of the Solar System was mainly shaped by an orbital instability event that ejected at least one giant planet and scattered the other planets into a more loosely packed configuration. For exoplanet systems, Pu and Wu (2015) found that the orbital spacings of Kepler multi-planet systems are clustered around the threshold of orbital instability. Based on this observation they hypothesized that most of the Kepler systems were formed with tighter spacing configurations, and most of them have undergone orbital instability, leaving fewer planets with larger orbital spacings. Using \(N\)-body simulations, Izidoro et al. (2017)
proposed an evolutionary scenario for the bulk of Kepler-planet (super-Earth or mini-Neptune) systems. In this scenario, planets were formed in compact resonant chains through migration in the protoplanetary gas disk at an early stage. As the gas dissipated, the chains became dynamically unstable, which led to planet mergers, ejections, and scattering into a more spread-out configuration.
In this work, using a forward modeling method and after applying parameter control, we find that the planet occurrence rate in terms of the planet number per star (\(\eta\)) decreases by about 2-3\(\sigma\) as a function of time. For the three bins case, as shown in Figure 9, \(\eta\) drops from 1.57 to 0.96, and for the five bins case in Figure 10, it decreases from 1.71 to 1.04.
The first major contribution to the \(\eta\) decreasing trend comes from the average planet number per planetary system, i.e., \(\bar{N}_{p}\), since \(\eta\) is the product of \(\bar{N}_{p}\) and \(F_{\rm{Kep}}\). \(\bar{N}_{p}\) shows a moderate decline of about \(2\sigma\) in our fitting results. In the bottom row of Figure 9, \(\bar{N}_{p}\) drops from 2.74 for stars less than 1 Gyr to 1.75 for stars about 8 Gyr, and in Figure 10, \(\bar{N}_{p}\) drops from 3.69 for the first age group to 1.80 for the last group. This is qualitatively consistent with the above theories, in which the dynamical evolution of planetary systems causes the merging and ejection of planets. Furthermore, from our results we infer that the evolution of \(\bar{N}_{p}\) can continue for several gigayears, which implies that planetary systems keep evolving through the whole stellar lifetime.
The second potential contribution to the \(\eta\) decreasing trend comes from the fraction of stars that have planets, i.e., \(F_{\rm{Kep}}\). In Figure 9, \(F_{\rm{Kep}}\) changes from 56.9% to 54.3%, and in Figure 10, it changes from 47.1% to 56.6%; both changes are less than 1\(\sigma\). Due to the limited star and planet sample, we cannot conclude that the change in \(F_{\rm{Kep}}\) is statistically significant. Future studies with larger samples of planetary systems may help us to further constrain \(F_{\rm{Kep}}\) as a function of time and unveil the planet formation rate in the history of the Milky Way, by combining it with the information on the star formation rate as a function of age (Binney et al., 2000).
Not only does the number of planets in a system evolve with time, but the orbital properties also undergo changes. Since \(\bar{N}_{p}\) is related to the orbital inclination (Equation 4), we can investigate the mutual orbital inclination as a function of time. We calculate the posterior distributions of \(\sigma_{i,k}\) for the five bins after parameter control, and show the distributions as well as the median value and 1\(\sigma\) range of \(\sigma_{i,k}\) in Figure 13. As we can see, \(\sigma_{i,k}\) gradually evolves with time. From less than 1 Gyr to about 8 Gyr, the median value of \(\sigma_{i,k}\) grows from about 1\(\fdg\)2 to 3\(\fdg\)5, and the 1\(\sigma\) range expands from 0\(\fdg\)7-2\(\fdg\)6 to 1\(\fdg\)3-11\(\fdg\)7. To further quantify the age-\(\sigma_{i,k}\) trend, we fit \(\sigma_{i,k}\) with age. Although bearing large uncertainty (as seen from the orange shaded region), the best fit is
\[\log\sigma_{i,k}=0.2+0.4\times\log\frac{\rm{Age}}{\rm{Gyr}}. \tag{5}\]
For comparison, we also plot the data points for our Solar System and Kepler multiple transiting systems in Figure 13. They all generally fit such an age-\(\sigma_{i,k}\) trend. This result indicates that as planetary systems get older, they become dynamically hotter, which is consistent with the theoretical expectation (Zhou et al., 2007). The Kepler multiples show smaller mutual inclinations compared to our Solar System, which can be explained by their younger ages according to the age-inclination trend shown in Figure 13. In other words, Figure 13 may hint that the planets in our Solar System were in a flatter architecture at early times, and then gradually evolved to the current state.
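As a quick numerical check, assuming only the best-fit coefficients quoted in Equation (5), the relation can be evaluated as follows; it approximately reproduces the values quoted above (about \(1\fdg 2\) at \(\sim 0.5\) Gyr and \(3\fdg 6\) at 8 Gyr) and gives \(\sim 2\fdg 9\) at the Solar System age of 4.57 Gyr.

```python
import numpy as np

def sigma_ik_from_age(age_gyr):
    """Equation (5): best-fit inclination dispersion (degrees) versus age (Gyr)."""
    return 10.0 ** (0.2 + 0.4 * np.log10(age_gyr))

print(sigma_ik_from_age(0.5))    # ~1.2 deg, youngest bin
print(sigma_ik_from_age(8.0))    # ~3.6 deg, oldest bin
print(sigma_ik_from_age(4.57))   # ~2.9 deg, Solar System age (Bouvier & Wadhwa 2010)
```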
## 6 Summary and Conclusion
Figure 13: Inclination dispersion (\(\sigma_{i,k}\)) as a function of age. We show the distribution of \(\sigma_{i,k}\) for five age bins after parameter control with blue bars. The median value and the \(\pm 1\sigma\) percentile range of \(\sigma_{i,k}\) are indicated by orange dots and errorbars, with the corresponding values displayed in the lower part of the plot. The orange dashed line represents the best fit, and the orange shaded region denotes the corresponding uncertainty of the fit, obtained by resampling \(\sigma_{i,k}\) 10,000 times. The \(\sigma_{i,k}\) value of the Solar System terrestrial planets is shown with the red star marker, and the \(\sigma_{i,k}\) values of Kepler multiple transiting systems measured by Fabrycky et al. (2014) and by Xie et al. (2016) are shown with the green and purple star markers, respectively. The age of the Solar System is adopted as 4.57 Gyr (Bouvier and Wadhwa, 2010), and the age of Kepler multiples is adopted as the kinematic age of stars that host two or more planets in our star sample.
In this work, which is the fourth paper of the PAST series, we update the LAMOST-Gaia-Kepler catalog utilizing the recently released LAMOST DR8 and Gaia DR3. Based on this catalog, we study the occurrence rate and architecture of Kepler-like planets as a function of stellar kinematic age. We find the following results.
1. Younger stars generally show higher apparent planet occurrence rates (more than 3\(\sigma\)) for one, two, and three or more planets (\(N_{1}/N_{star}\), \(N_{2}/N_{star}\), and \(N_{3+}/N_{star}\)) than older stars (top rows of Figure 7 and Figure 8).
2. Applying a parameter control method can effectively reduce the effects caused by other stellar properties, such as effective temperature, mass, metallicity, radius, and \(\sigma_{\rm CDPP}\). After parameter control, the differences in \(N_{1}/N_{star}\) and \(N_{2}/N_{star}\) between younger and older stars decrease to less than 2\(\sigma\), while the difference in \(N_{3+}/N_{star}\) remains at a confidence level of about 3\(\sigma\) (bottom rows of Figure 7 and Figure 8).
3. Adopting a forward modeling method can help us to investigate the intrinsic planet occurrence in terms of the fraction of stars with planetary systems (\(F_{\rm Kep}\)), the average planet multiplicity (\(\bar{N}_{p}\)), and the number of planets per star (\(\eta\)). For stars without parameter control, we find that the younger stars have higher \(F_{\rm Kep}\) and \(\bar{N}_{p}\) by about 2\(\sigma\) than older stars. The difference in \(\eta\) between younger and older stars is more obvious, at about 5\(\sigma\) level (top rows of Figure 9 and Figure 10).
4. After parameter control, the difference in \(F_{\rm Kep}\) drops to less than 1\(\sigma\), hinting that the occurrence of planetary systems has remained at a similar rate throughout the history of the Milky Way. The difference in \(\bar{N}_{p}\) is about 2\(\sigma\) between the younger and older stars. This result is consistent with theories that planet systems keep evolving as a result of the merging and ejection of planets. Younger stars have a higher \(\eta\) (the product of \(F_{\rm Kep}\) and \(\bar{N}_{p}\)) by about 2 \(\sim\) 3\(\sigma\) than older stars, which is the combined effect of the evolution of \(F_{\rm Kep}\) and \(\bar{N}_{p}\) (bottom rows of Figure 9 and Figure 10).
5. The orbital properties of planet systems also evolve with time. We find that as stars age from less than 1 Gyr to about 8 Gyr, the mutual orbital inclination (\(\sigma_{i,k}\)) of their planets increases from 1\(\fdg\)2 to 3\(\fdg\)5, and the \(\pm\)1\(\sigma\) range of \(\sigma_{i,k}\) expands from 0\(\fdg\)7-2\(\fdg\)6 to 1\(\fdg\)3-11\(\fdg\)7 (Figure 13), hinting that planet systems become dynamically hotter with time. Both our Solar System and Kepler multiple transiting systems fit such a trend.
Our work qualitatively agrees with theoretical expectations that planet occurrence decreases, and planetary systems become dynamically hotter, with age. Future dedicated theoretical and numerical modeling of the occurrence and architecture of Kepler planets as a function of age is needed to allow quantitative comparisons to the results of this work and to place key constraints on planet formation and evolution.
The current and upcoming missions also aid in exploring exoplanets in the dimension of time. The TESS mission has found thousands of candidates (Guerrero et al., 2021), covering a wide range of ages (e.g., Newton et al., 2019; Gan et al., 2020; Weiss et al., 2021). In this paper, our studies only rely on a portion of planets from the Kepler sample. Both the sample size and the number of bins are still limited, leading to relatively large uncertainties in our fitting results. In the near future, missions such as Gaia, TESS (Ricker et al., 2015), and PLATO (Rauer et al., 2014) will detect many more exoplanets, leading to the expansion of planet sample by one order of magnitude or even more. With the help of more data, future studies will further refine our measurements and test our results.
This work is supported by the National Key R&D Program of China (No. 2019YFA0405100) and the National Natural Science Foundation of China (NSFC; grant Nos. 12150009, 11933001, 12273011). J.-W. X. also acknowledges the support from the National Youth Talent Support Program and the Distinguish Youth Foundation of Jiangsu Scientific Committee (BK20190005). D.-C.C. also acknowledges the Cultivation project for LAMOST Scientific Payoff and Research Achievement of CAMS-CAS. This work has included data from Guoshoujing Telescope (the Large Sky Area Multi-Object Fiber Spectroscopic Telescope, LAMOST), which is a National Major Scientific Project built by the Chinese Academy of Sciences. Funding for LAMOST (www.lamost.org) has been provided by the Chinese NDRC. LAMOST is operated and managed by the National Astronomical Observatories, CAS. This work presents results from the European Space Agency (ESA) space mission Gaia. Gaia data are being processed by the Gaia Data Processing and Analysis Consortium (DPAC). Funding for the DPAC is provided by national institutions, in particular the institutions participating in the Gaia MultiLateral Agreement (MLA). emcee (Foreman-Mackey et al., 2013), scikit-learn (Pedregosa et al., 2012), KeplerPORTs (Burke
& Catanzarite, 2017), numpy (Harris et al., 2020), matplotlib (Hunter, 2007), astropy (Astropy Collaboration et al., 2013, 2018, 2022), evolstate (Huber, 2019), RGCCA (Girka et al., 2023)
Appendix A Updating Kinematic Characteristics of Galactic Components and Age-Velocity Dispersion Relation with Gaia DR3 Astrometry Data
In the first paper of the PAST series (PAST I, Chen et al., 2021), we revisited the kinematic method to classify the Galactic components and the Age-Velocity dispersion Relation (AVR) with Gaia DR2 (Gaia Collaboration et al., 2018, 2018) and the LAMOST main-sequence turn-off and subgiant (MSTO-SG) star sample (Xiang et al., 2017). On June 13th, 2022, Gaia Data Release 3 (DR3; Gaia Collaboration et al., 2022) was released, providing five astrometric parameters (positions, parallaxes, and proper motions) for 1.468 billion sources. Compared to Gaia DR2, the standard uncertainties have been reduced for the positions, parallaxes, and proper motions, which makes the astrometric results considerably more robust and reduces the systematic errors. Therefore, here we revisit the kinematic methods and AVR with Gaia DR3 by adopting the same procedures shown in Sections 2 and 3 of PAST I.
### Updated Calibration Sample
To construct the calibration sample, we first cross-match the above LAMOST MSTO-SG catalog with the Gaia DR3 catalog using the X-match service provided by the Centre de Donnees astronomiques de Strasbourg (CDS, [http://cdsxmatch.u-strasbg.fr](http://cdsxmatch.u-strasbg.fr)). Second, we carry out an angular distance cut of 1.25 arcseconds and a Gaia G-band magnitude difference cut of 2.5 mag. For stars with multiple matches, we keep those with the smallest angular separation.
Then we calculate the stellar kinematic properties (i.e., Galactocentric cylindrical coordinates \((R,\theta,Z)\), and Galactic rectangular velocities \((U_{\rm LSR},V_{\rm LSR},W_{\rm LSR})\)) relative to the local standard of rest (LSR), with the procedure detailed in Section 2.1 of PAST I. We adopt the location of the Sun of \(R_{\odot}\)= 8.18 kpc (Gravity Collaboration et al., 2019; Gaia Collaboration et al., 2021) and \(Z_{\odot}\) = 25 pc (Chen et al., 2001). The solar peculiar motions are taken as \([U_{\odot},\,V_{\odot},\,W_{\odot}]\) = [9.58, 10.52, 7.01] km s\({}^{-1}\)(Tian et al., 2015).
After that, we apply the following filters to further clean the calibration sample.
(1) Binary filter. We remove binary systems because their kinematics contain additional motions (Dehnen & Binney, 1998). This is done by choosing stars flagged as 'Normal star' (i.e., single stars with spectral types of AFGKM) in the LAMOST MSTO-SG catalog (Xiang et al., 2017). We also remove potential binaries by eliminating stars with Gaia DR3 re-normalized unit-weight error (RUWE) \(>1.4\)(Lindegren, 2018).
(2) Parallax precision filter. Following Dehnen & Binney (1998), we remove stars with relative parallax errors larger than 10 percent as reported in the Gaia DR3.
(3) Age precision filter. We remove stars with ages older than 14 Gyr, errors of age exceeding 25%, or blue straggler stars (\(|Z|>1.5\) kpc and ages younger than 2 Gyr) in the LAMOST MSTO-SG catalog.
(4) Distance filter (similar to Binney et al., 1997). The majority of the remaining stars are brighter than G mag=16, where the median parallax error is 0.0494 mas. Recalling the above 10 percent parallax precision requirement, this translates to a distance limit \(\sim 1/(0.0494/0.1)\sim 2.0\) kpc. We therefore remove stars with distances exceeding this limit.
After applying the above filters, we are left with 134,244 stars, which are mainly (129,089/134,244, 96.2%) located at \(7.5<R<10.0\) kpc, \(|\theta|<15\) deg, and \(|Z|<1.5\) kpc.
### Revisiting the Kinematic Method To Classify the Galactic Components
Using the stellar kinematics and ages, and following the same criteria as in Section 2.3.3 of PAST I, we then classify the calibration sample into different Galactic components, i.e., thin disk (\(D\)), thick disk (\(TD\)), halo (\(H\)), and Hercules stream (\(Herc\)). In order to calculate the characteristic kinematic parameters for each Galactic component as a function of \((R,\ Z)\), we bin the calibration sample with the same intervals as in PAST I. For \(|Z|\), we set 8 bins with boundaries at
\(|Z|=\) 0, 0.1, 0.2, 0.3, 0.4, 0.55, 0.75, 1.0, and 1.5 kpc. For \(R\), we set 5 bins with boundaries at \(R=7.5\), 8.0, 8.5, 9.0, 9.5, and 10 kpc. In total, there are \(5\times 8=40\) grids in the \(R\)-\(Z\) space, and all bins have enough (\(>400\)) stars.
We then revise the normalized fraction \(X\) (Equations 9, 10, 11, 12 in PAST I) and the velocity ellipsoid (i.e., \(\sigma_{U}\), \(\sigma_{V}\), \(\sigma_{W}\), and \(V_{\rm asym}\); Equation 3 in PAST I) of each Galactic component for each grid in the \(R\)-\(Z\) plane, following the same procedure as Sections 2.3.4 and 2.3.5 of PAST I. The calculated values of \(X\), \(\sigma_{U}\), \(\sigma_{V}\), \(\sigma_{W}\), and \(V_{\rm asym}\) are tabulated in Table 2 and visualized in Figure 14, Figure 15, and Figure 16.
Figure 14 shows the \(X\) values of various Galactic components as functions of Galactic radius \(R\) and absolute value of height, \(|Z|\). As expected, \(X_{\rm D}\) (\(X_{\rm TD},\ X_{\rm H}\)) generally decreases (increases) with \(|Z|\) in all the \(R\) bins.
With the same procedure as described in Section 2.3.4 of PAST I, we also fit the velocity dispersions \(\sigma_{U}\), \(\sigma_{V}\), and \(\sigma_{W}\) with the following formula, according to Williams et al. (2013):
\[\sigma=b_{1}+b_{2}\times\frac{R}{\rm kpc}+b_{3}\times\big{(}\frac{Z}{\rm kpc }\big{)}^{2}\rm km\ s^{-1}.\] (A1)
We then use the following formula to calculate \(V_{\rm asym}\) according to Robin et al. (2003); Binney & Tremaine (2008)
\[V_{\rm asym}=\sigma_{U}^{2}/C_{0}.\] (A2)
The values of fitting parameters and their 1\(\sigma\) uncertainties are summarized in Table 3.
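For reference, Equations (A1) and (A2) can be transcribed directly, with the coefficients \(b_{1}\), \(b_{2}\), \(b_{3}\), and \(C_{0}\) to be taken from Table 3 for each Galactic component (they are not hard-coded here):

```python
def velocity_dispersion(R_kpc, Z_kpc, b1, b2, b3):
    """Equation (A1): velocity dispersion (km/s) at Galactic position (R, Z)."""
    return b1 + b2 * R_kpc + b3 * Z_kpc ** 2

def v_asym(sigma_U, C0):
    """Equation (A2): asymmetric drift velocity (km/s) from sigma_U."""
    return sigma_U ** 2 / C0
```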
Compared to the results obtained from the Gaia DR2 and LAMOST MSTO-SG sample in PAST I, we find that for the normalized fraction \(X\), the typical (median) relative differences are only 0.6%, -3.5%, 4.6%, and 5.2% for the thin disk, thick disk, halo, and Hercules stream, respectively. For the velocity ellipsoid (i.e., \(\sigma_{U}\), \(\sigma_{V}\), \(\sigma_{W}\), and \(V_{\rm asym}\)) obtained with the calibration samples using astrometry data from Gaia DR2 and DR3, as can be seen in Figures 15 and 16, the median values and 1\(\sigma\) errorbars are very similar, and the best fits are nearly identical. It can also be seen from Table 3 that the fitting parameters (i.e., \(b_{1}\), \(b_{2}\), \(b_{3}\), and \(C_{0}\)) are consistent with those of PAST I within their 1\(\sigma\) errorbars. Therefore, we conclude that the \(X\) factors and velocity ellipsoid obtained with the updated calibration sample are broadly unchanged from those of PAST I.
### Revisiting the Age-Velocity dispersion relation (AVR)
Following PAST I, we divide the calibration sample into 30 bins of approximately equal size (\(\sim\) 4,475 stars in each bin) according to their ages. Then we fit the AVRs, following Holmberg et al. (2009b) and Aumer et al. (2016), using a simple power law formula, i.e.,
\[\sigma=k\times\left(\frac{t}{\rm Gyr}\right)^{\beta}\ \rm km\ s^{-1},\] (A3)
where \(t\) represents stellar age, \(\sigma\) is the velocity dispersion, and \(k\) and \(\beta\) are two fitting parameters. The best fits and uncertainties (1\(\sigma\) interval) of the fitting parameters (\(k,\ \beta\)) are calculated with the same procedure described in Section 3.1 of PAST I and summarized in Table 4.
Figure 17 shows the velocity dispersion as a function of the median age of each bin. As can be seen, the best fits (black lines) for the relationship between age and the dispersion of the velocity components (\(U_{\rm LSR},V_{\rm LSR},W_{\rm LSR}\)) and the total velocity (\(V_{\rm tot}\)) are all indistinguishable from those of PAST I (red lines). In Table 4, we compare the fitting parameters (\(k,\ \beta\)) of the AVRs obtained from the updated calibration sample to those from PAST I. As can be seen, the median values are nearly identical, and the 1\(\sigma\) uncertainties of \(k\) decrease by \(\sim 10\%\) due to the improvement in the precision of the stellar parallax and proper motion measurements. Thus, the AVRs derived with the calibration samples using astrometry data from Gaia DR3 and DR2 are nearly identical.
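For illustration, the sketch below evaluates the AVR of Equation (A3) with the Gaia DR3 total-velocity coefficients from Table 4 and inverts it to obtain a kinematic age from a measured total velocity dispersion; this is a simplified sketch of how a kinematic age follows from the AVR, not the full procedure of PAST I.

```python
K_VTOT, BETA_VTOT = 27.74, 0.39   # Gaia DR3 best fit for V_tot (Table 4)

def avr_sigma(age_gyr, k=K_VTOT, beta=BETA_VTOT):
    """Equation (A3): total velocity dispersion (km/s) at a given age (Gyr)."""
    return k * age_gyr ** beta

def kinematic_age(sigma_kms, k=K_VTOT, beta=BETA_VTOT):
    """Invert the AVR to estimate a kinematic age (Gyr) from a velocity dispersion."""
    return (sigma_kms / k) ** (1.0 / beta)

print(kinematic_age(avr_sigma(4.57)))   # recovers 4.57 Gyr
```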
## Appendix B Deriving the Age-Planet Occurrence Relationship Through Canonical Correlation Analysis
Canonical Correlation Analysis (CCA) is an effective method to discover the correlations between different sets of variables. It was first introduced by Hotelling (1936). The basic idea is to identify linear combinations of two sets of variables such that the resulting combined variables exhibit the highest possible correlation. Kettenring (1971) summarized various methods for establishing connections among multiple sets of variables. Here, we apply the CCA method to investigate the relationship between planet and star properties. Specifically, we use the SABSCOR method to maximize the sum of the absolute values of the correlations among the different sets.
We group the planet and star properties into three sets of variables: the planet system property (occurrence rate), the interesting stellar property (kinematic age), and the stellar properties we want to eliminate (effective temperature, mass, metallicity, radius, and \(\sigma_{\rm CDPP}\)), respectively.
The planet system property we are interested in is the apparent occurrence rate. Following Zhu (2019), we use two tracers to represent the occurrence rate, which are defined by the following equations:
\[{\rm Tracer}(\eta) =\frac{1}{N_{star}}\sum_{j=1}^{K}jN_{j} \tag{10}\] \[{\rm Tracer}(F_{p}) =\frac{1}{N_{star}}\left(N_{1}+\sum_{j=1}^{K}N_{j}\right). \tag{11}\]
\({\rm Tracer}(\eta)\) is related to the average number of planets per star, and \({\rm Tracer}(F_{p})\) is correlated with the fraction of stars possessing planet systems. In these equations, \(N_{star}\) represents the number of stars in each bin, \(N_{j}\) (\(j=1,2...K\)) is the number of systems with \(j\) planets, and \(K\) is the maximum number of planets we observed in the planetary system. Given that the transit method can only detect a very small fraction of planets, in our sample of 19,358 stars, we have observed only 641 planets in 467 planetary systems. Therefore, to calculate these two tracers, we need to group stars into bins. Additionally, the distribution of planet systems is not uniform, especially for systems with three or more planets, with fewer systems found around older stars. To reduce Poisson errors, it is necessary to limit the number of bins. Consequently, we group the stars into 40 bins with an equal number of stars (using 30 or 50 bins yields similar results).
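In code, the two tracer definitions above amount to the following per-bin computation (a minimal sketch; bin construction and error estimation are omitted):

```python
import numpy as np

def occurrence_tracers(multiplicity_counts, n_star):
    """Compute Tracer(eta) and Tracer(F_p) for one bin of stars.

    multiplicity_counts : [N_1, N_2, ..., N_K], numbers of systems with
                          1, 2, ..., K observed transiting planets.
    n_star              : number of stars in the bin.
    """
    N = np.asarray(multiplicity_counts, dtype=float)
    j = np.arange(1, len(N) + 1)
    tracer_eta = np.sum(j * N) / n_star        # traces the number of planets per star
    tracer_fp = (N[0] + np.sum(N)) / n_star    # traces the fraction of stars with planets
    return tracer_eta, tracer_fp
```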
The interesting stellar property, kinematic age, is determined using the Age-Velocity dispersion Relationship (AVR, see Appendix A). This relationship requires the calculation of velocity dispersion and, as a result, is applicable only to a group of stars. Therefore, we once again need to divide our star sample into bins.
For the uninteresting stellar properties, we use the median value of each bin as the representative value. Given the limited number of bins (40), to prevent overfitting, we select only three properties (mass, [Fe/H], and \(\sigma_{\rm CDPP}\)) instead of all five. Previous studies (e.g., Yang et al.2020; He et al.2021) have shown that the mass and effective temperature of stars have a similar influence on planet occurrence. From late type to early type stars, the increase in mass and temperature leads to a decrease in planet occurrence. We choose mass to represent the influence of stellar type. Furthermore, increases in radius and \(\sigma_{\rm CDPP}\) both result in a reduction in detection efficiency, which lowers the probability of planet detection. Here we choose \(\sigma_{\rm CDPP}\) to represent the effect of detection efficiency (choosing temperature or radius leads to a similar result).
In summary, we categorize all the planet/star properties into three groups. The first group, \(\mathbf{X}_{1}\)={\({\rm Tracer}(\eta)\), \({\rm Tracer}(F_{p})\)}, represents planet occurrence. The second group, \(\mathbf{X}_{2}\)={Age}, describes the interesting stellar properties, specifically, stellar kinematic age. The third group, \(\mathbf{X}_{3}\)={Mass, [Fe/H], \(\sigma_{\rm CDPP}\)}, is related to uninteresting stellar properties.
We use the R package RGCCA (Girka et al., 2023) to maximize the sum of the correlations between planet occurrence and stellar age, as well as between planet occurrence and uninteresting star properties. This can be formulated as solving the following optimization problem:
\[{\rm Maximize}\ (|{\rm Cor}(\mathbf{y}_{1},\mathbf{y}_{2})|+|{\rm Cor}(\mathbf{ y}_{1},\mathbf{y}_{3})|). \tag{12}\]
where \(\mathbf{y}_{1}=\mathbf{X}_{1}\mathbf{a}_{1}\), \(\mathbf{y}_{2}=\mathbf{X}_{2}\mathbf{a}_{2}\), and \(\mathbf{y}_{3}=\mathbf{X}_{3}\mathbf{a}_{3}\), and \(\mathbf{a}_{1}\), \(\mathbf{a}_{2}\), and \(\mathbf{a}_{3}\) are the weight vectors for each variable set. All the variables have been standardized. The results can be seen in the following figure.
In Figure 18, each ellipse represents a group, and each box represents a star/planet property. We also print the weight and correlation on each line. As we can see, both \({\rm Tracer}(\eta)\) and \({\rm Tracer}(F_{p})\) have positive contributions to the planet occurrence (\(\mathbf{a}>0\)), showing that as the number of planets per star and the fraction of stars with planets increase, the planet occurrence rises. The kinematic age is anti-correlated with planet occurrence (Cor = -0.802), which is consistent with our result before parameter control (see top rows of Figure 7 and Figure 8). As stars age from less than 1 Gyr to about 8 Gyr, both the fraction of stars with planetary systems and the number of planets per star decrease. The uninteresting star properties are correlated with planet occurrence. Among these properties, the stellar mass shows a negative weight, which is in agreement with the literature (e.g. Yang et al.2020; He et al.2021). An increase in mass results in a decrease in Kepler-like planet occurrence. Metallicity shows a positive weight, which
is generally consistent with previous studies (e.g., Zhu, 2019; Wang and Fischer, 2015), indicating that an increase in metallicity can stimulate the formation of planets. \(\sigma_{\rm CDPP}\) demonstrates a negative weight because higher \(\sigma_{\rm CDPP}\) leads to lower detection efficiency, which hinders the detection of planets.
The CCA method shows a similar result to ours before parameter control, indicating that the increase in stellar age results in a decrease in the planet occurrence rate. This decrease is reflected in both the fraction of stars with planetary systems and the number of planets per star.
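For readers who prefer Python over R, a simplified two-block canonical correlation between the occurrence tracers and the stellar-property block can be computed with scikit-learn as sketched below. Note that this is an ordinary two-set CCA, not the three-block RGCCA/SABSCOR scheme used above, and the input file names and array shapes are placeholders.

```python
import numpy as np
from sklearn.cross_decomposition import CCA

# Placeholder per-bin arrays: columns of X are Tracer(eta) and Tracer(F_p);
# columns of Y are the stellar properties (e.g., age, mass, [Fe/H], sigma_CDPP).
X = np.loadtxt("occurrence_tracers.txt")   # shape (n_bins, 2), hypothetical file
Y = np.loadtxt("stellar_properties.txt")   # shape (n_bins, 4), hypothetical file

# Standardize both blocks, as in the analysis above.
X = (X - X.mean(axis=0)) / X.std(axis=0)
Y = (Y - Y.mean(axis=0)) / Y.std(axis=0)

cca = CCA(n_components=1)
X_c, Y_c = cca.fit_transform(X, Y)
rho = np.corrcoef(X_c[:, 0], Y_c[:, 0])[0, 1]   # first canonical correlation
print(rho)
```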
## Appendix C Detection Efficiency
We show the average detection efficiencies and planet samples for both the three and five bins cases in Figure 19 and 20. The detection efficiency metrics are calculated by the Package KeplerPORTs(Burke and Catanzarite, 2017), and the associated data are downloaded from NASA exoplanet archive5.
Footnote 5: [https://exoplanetarchive.ipac.caltech.edu/docs/](https://exoplanetarchive.ipac.caltech.edu/docs/)
As we can see in the top rows of Figure 19 and 20, young stars have slightly higher detection efficiencies than old stars. This is because first, young stars generally have smaller stellar radii, which lead to deeper transit depths, and second, young stars have lower noise levels (\(\sigma_{\rm CDPP}\)) that increase the signal to noise ratio (top rows of Figure 5 and Figure 6). After we apply parameter control to remove the effects caused by stellar properties, the young and old stars have similar distributions of stellar radii and \(\sigma_{\rm CDPP}\) (bottom rows of Figure 5 and Figure 6). As a result, the average detection efficiencies in each bin (red lines) are similar to the mean value of the whole sample (black lines, bottom rows of Figure 19 and 20), showing that the influence caused by detection efficiencies on planet occurrence is effectively removed.
| | \(k\) (km s\(^{-1}\)) value | \(k\) 1\(\sigma\) interval | \(\beta\) value | \(\beta\) 1\(\sigma\) interval |
| --- | --- | --- | --- | --- |
| **Gaia DR3** | | | | |
| \(U\) | 23.74 | (23.47, 24.65) | 0.34 | (0.32, 0.35) |
| \(V\) | 12.87 | (12.47, 13.37) | 0.42 | (0.40, 0.44) |
| \(W\) | 8.29 | (7.92, 8.63) | 0.56 | (0.54, 0.58) |
| \(V_{\rm tot}\) | 27.74 | (27.32, 28.67) | 0.39 | (0.37, 0.41) |
| **Gaia DR2, PAST I** | | | | |
| \(U\) | 23.66 | (23.07, 24.32) | 0.34 | (0.33, 0.36) |
| \(V\) | 12.49 | (12.05, 12.98) | 0.43 | (0.41, 0.45) |
| \(W\) | 8.50 | (8.09, 8.97) | 0.54 | (0.52, 0.56) |
| \(V_{\rm tot}\) | 27.55 | (26.84, 28.37) | 0.40 | (0.38, 0.42) |

Table 4: Fitting parameters of the Age-Velocity dispersion Relationship with the calibration samples using astrometry data from Gaia DR3 and DR2.
Figure 16: The asymmetric velocity \(V_{\rm asym}\) as a function of \(\sigma_{U}^{2}\) for the thin disk (left panel) and thick disk (right panel). For the updated calibration sample constructed from the LAMOST MSTO-SG and Gaia DR3 catalogs, the data are plotted as blue/red points and the blue/red line segments represent \(1\sigma\) errors. The blue/red solid lines denote the results of the best fit using Equation A2. For the calibration sample constructed from the LAMOST MSTO-SG and Gaia DR2 catalogs in PAST I, the data and \(1\sigma\) errors are plotted in light blue/red colors. The light blue/red dashed lines denote the results of the best fit using Equation A2.
Figure 15: The velocity dispersions as functions of position (\(R\), \(|Z|\)) in the Galaxy. For the updated calibration sample constructed from the LAMOST MSTO-SG and Gaia DR3 catalogs, the velocity dispersions are plotted as solid points, and the line segments represent \(1\sigma\) errors in two colors: blue for the thin disk and red for the thick disk. The solid line in each panel denotes the result of the best fit of Equation A1 using the coefficients in Table 3. For the calibration sample constructed from the LAMOST MSTO-SG and Gaia DR2 catalogs in PAST I, the velocity dispersions are plotted as solid points, and the line segments represent \(1\sigma\) errors in two colors: light blue for the thin disk and light red for the thick disk. The dashed black line in each panel denotes the result of the best fit of Equation A1 using the coefficients in Table 3.
Figure 17: The velocity dispersions for \(U_{\rm LSR},V_{\rm LSR},W_{\rm LSR}\), and \(V_{\rm tot}\) vs. age for the selected calibration star sample. The solid black lines denote the respective best fits of the refitted AVR (Equation A3) using the coefficients in Table 4.
Figure 19: Detection efficiencies and planet samples in the period-radius diagram for the three bins case. From top to bottom, each row corresponds to the control samples in Figure 5 and Figure 7. The red lines present the average 90%, 50%, and 10% detection efficiencies for the star sample in each bin, and the grey lines show the mean 90%, 50%, and 10% detection efficiencies for the whole star sample. The blue, orange, and green dots show planets in one, two, and three or more planet systems, respectively.
Figure 18: Results of Canonical Correlation Analysis for planet and stellar properties are provided, including the weights assigned to each property and the correlations between properties and groups.
Figure 20: Similar to Figure 19, 90%, 50%, and 10% detection efficiencies and planet samples for the five bins case. From top to bottom, each row corresponds to control samples in Figure 6 and Figure 8. |
2308.00116 | A Modular Ontology for MODS -- Metadata Object Description Schema | The Metadata Object Description Schema (MODS) was developed to describe
bibliographic concepts and metadata and is maintained by the Library of
Congress. Its authoritative version is given as an XML schema based on an XML
mindset which means that it has significant limitations for use in a knowledge
graphs context. We have therefore developed the Modular MODS Ontology (MMODS-O)
which incorporates all elements and attributes of the MODS XML schema. In
designing the ontology, we adopt the recent Modular Ontology Design Methodology
(MOMo) with the intention to strike a balance between modularity and quality
ontology design on the one hand, and conservative backward compatibility with
MODS on the other. | Rushrukh Rayan, Cogan Shimizu, Heidi Sieverding, Pascal Hitzler | 2023-07-31T19:36:07Z | http://arxiv.org/abs/2308.00116v1 | # A Modular Ontology for MODS - Metadata Object Description Schema
###### Abstract
The Metadata Object Description Schema (MODS) was developed to describe bibliographic concepts and metadata and is maintained by the Library of Congress. Its authoritative version is given as an XML schema based on an XML mindset which means that it has significant limitations for use in a knowledge graphs context. We have therefore developed the Modular MODS Ontology (MMODS-O) which incorporates all elements and attributes of the MODS XML schema. In designing the ontology, we adopt the recent Modular Ontology Design Methodology (MOMo) with the intention to strike a balance between modularity and quality ontology design on the one hand, and conservative backward compatibility with MODS on the other.
## 1 Introduction
XML - a markup language - is designed to organize information [6]. The main design goal is to store and share information while maintaining human and machine readability. The purpose of an XML Schema, in turn, is to serve as a description of an XML document, detailing the constraints on its structure, syntax, and content types. The schema outlines rules and constraints for elements, attributes, data types and relationships between them. It also helps ensure that the XML document conforms to the expected structure, serving as a means of validation. It is important to note that XML structures information in a hierarchical form, essentially representing a tree structure.
The Metadata Object Description Schema (MODS) [7] is an XML schema developed by the Library of Congress' Network Development in 2002 to be used to describe a set of bibliographic elements. MODS contains a wide range of elements and attributes using which a well-rounded description can be provided about bibliographic elements. For instance, it has _elements_ to describe Title Information, Type of Resource, Genre of Resource, Origin Information, Target Audience, Access Restrictions of the material, etc. Furthermore, MODS also has _attributes_ to outline additional important information, to name a few: Display Label (describes how the resource under description is being displayed), Lang (points to the language that is used for the content of an element: imagine a book title that is French), Authority (specifies the organization that has established the
usage of, for instance, an acronym), etc. General example use-cases of MODS lie within the realm of describing metadata of Journal Publications (one or more), Research Projects, Experiments, Books, etc.
While XML schema does a decent job in imposing structure on XML data, it lacks some desirable features. In the age of data, where cleaning, pre-processing, and managing data takes up a large chunk of resources in data operation, it is desirable to have the ability to organize data in such a way that allows semantic expressiveness of the data and conveys information on relationships between various _concepts_ by means of a _graph structure_[5] as opposed to the XML tree structure, in the sense of modern knowledge graphs [3], e.g. based on RDF [2] and OWL [8]. An XML schema
* lacks semantic expressiveness to convey relationship among concepts, context of data;
* lacks native support for automated reasoning and inference;
* lacks a common framework that allows integration of data from various sources;
* possesses a hierarchical nature with a rigid structure which makes it rather less flexible with respect to incorporation of different perspectives;
* and lacks native support for querying.
Ontologies as knowledge graph schemas, on the other hand, provide a structured and graph-based way to represent knowledge in an application domain. By defining the necessary vocabulary, concepts, entities, and relationship between concepts, ontologies allow a meaningful interpretation of the data.
The reason we have developed the Modular MODS Ontology (MMODS-O) is to address some of the challenges which the MODS XML schema exhibits. Indeed MMODS-O is designed to strike a balance between conservative backward-compatibility with the MODS XML schema and quality modular ontology design principles following the MOMo methodology [10, 11]. The modular structure in particular is supportive of simplified extending, modifying or removing parts of the ontology.
We have created 34 modules and patterns to capture the entire MODS XML schema. To provide semantic robustness, we have re-engineered some of the modules from their XML schema definition. The schema is expressed in the form of an OWL Ontology and extensive documentation is available on Github4.
Footnote 4: [https://github.com/rushrukh/mods_metadata_schema/tree/main/documentation](https://github.com/rushrukh/mods_metadata_schema/tree/main/documentation)
One of our target use-cases for MMODS-O is to provide a metadata structure to a large-scale collaborative research project, where the knowledge graph would contain information such as different research groups, experiments performed, geo-location information, associated publications, presentations, book-chapters, collaborators etc.
We would like to point out that this is not the first attempt towards developing a MODS Ontology. However, our version is an improvement over the existing ontology across multiple aspects, including modular structure, adherence to
MOMo quality control principles, rich axiomatization, extensive documentation. We will outline some of the key improvements over previous work in Section 3. In general, our contributions are:
1. Development of the modular ontology, where some of the modules differ significantly from the original MODS XML schema in order to reflect good ontology design principles.
2. Carefully considered and rich axiomatization to scope intended usage and to provide automated reasoning capabilities.
3. Complete documentation of the graph schema outlining each of the modules, associated axioms, competency questions.
The rest of the paper is organized as follows. Section 2 contains the description of key modules from our ontology. In Section 3, we describe related work and highlight some of the key differences of our modeling with previous efforts. We conclude in Section 4. The ontology is available as serialized in the Web Ontology Language OWL from [https://github.com/rushrukh/mods_metadata_schema/tree/main/modules](https://github.com/rushrukh/mods_metadata_schema/tree/main/modules).
## 2 Description of the MODS Ontology
The general usage of the MMODS-O (and MODS) lies in the realm of expressing bibliographic metadata. Indeed, the details in the XML schema reflect the association with bibliographic applications. From the top level elements and their attributes in the MODS XML schema, we have identified 34 modules to be part of MMODS-O. Some of the key modules are briefly described below. The primary goal of using formal axiomatization5 in MOMo is to limit unintended use and to disambiguate the modules, but axioms can also be used for logical inferences [4]. The axioms are expressed using the OWL 2 DL profile [8]. Note that for all the modules outlined here, the list of axioms is not complete as we only highlight some of the most important axioms for brevity. The complete list of axioms and modules can be found in the documentation pointed to earlier.
Footnote 5: A primer on description logic and the notation can be found in [1, 5]
The modules that we selected for presentation in this paper include some that deviate most from the underlying MODS XML schema. We touch upon the differences throughout and will discuss them further in Section 3.
We make extensive use of schema diagrams when discussing modules following the suggested visual coding from the MOMo methodology [10] where further explanations can be found: orange (rectangular) boxes indicated classes; teal (dashed) boxes indicate other modules (and usually also the core class of that module); purple (dashed) boxes with _.txt_ indicate controlled vocabularies (i.e., formally, classes with pre-defined individuals as members, which have meaning that is defined outside the ontology); yellow (ovals) indicate datatype values; white-headed arrows are rdfs:subClassOf relationships, all other arrows are object or data properties, depending on type of node pointed to.
### Overview of the Modules in the Ontology
Figure 1 represents a brief overview of all the modules that are part of the ontology. Each of the modules has its separate schema. MODS Item is a reference to the MODS resource under description. The ontology has 34 modules, while we highlight some of the key modules later in the paper, details about the other modules are available in the documentation. Figure 1 suggests almost a tree structure, which is actually not the case but this is not quite apparent from this high-level perspective.
### Role-Dependent Names
Role-Dependent Names is an ontology design pattern [4, 10] that is useful when there is an Agent Role that is performed by an Agent. Naturally, the Agent will have a Name. There are instances when an Agent assumes a Role under a particular Name, but the same Agent will assume a different Role under a different Name. An example of such a scenario would be a writer publishing different books under different pseudonyms. For example, Iain Banks publishes science fiction as "Iain M. Banks" and mainstream fiction as "Iain Banks". Another example use case within the application scope we are primarily interested in could be as follows: if the resource under description refers to a journal publication, there would be Agent Roles for authors, which would be assumed by Agents under some name. Note that names associated with an author may differ between different publications for a variety of reasons, including different transcriptions from other languages, inclusion or not of middle names, name changes, etc., and the MODS XML schema reflects this. While we do not discuss the ontology design pattern at length here, details can be found in [9].
#### Selected Axioms
\[\top\sqsubseteq\leq 1\text{providesAgentRole}^{-}.\top \tag{1}\]
\[\text{AgentRole}\sqsubseteq\geq 0\text{hasRoleUnderName}.\text{Name} \tag{2}\]
\[\exists\text{assumesAgentRole.Agent}\sqsubseteq\text{AgentRole} \tag{3}\]
\[\text{AgentRole}\sqsubseteq\leq 1\text{assumesAgentRole}^{-}.\text{Agent} \tag{4}\]
\[\text{Agent}\sqsubseteq\geq 0\text{assumesAgentRole}.\text{AgentRole} \tag{5}\]
\[\text{Agent}\sqsubseteq\exists\text{hasName}.\text{Name} \tag{6}\]
\[\text{assumesAgentRole}\circ\text{hasRoleUnderName}\sqsubseteq\text{hasName} \tag{7}\]
\[\text{hasName}\circ\text{hasRoleUnderName}^{-}\sqsubseteq\text{assumesAgentRole} \tag{8}\]
If an Agent Role is provided, we argue that there must be at most 1 entity that provides the role which is expressed using an _inverse functionality_ in (1). Furthermore, we claim that if an Agent Role is assumed, there can be at most 1 Agent who assumes the role, expressed through an _inverse qualified scoped functionality_ in (4). Axioms (1) and (4) essentially state that an AgentRole is unique
to both the Agent and the entity providing the role, i.e., these axioms give guidance as to the graph structure for the underlying data graph. It is not necessary for an Agent Role to be assumed under a Name which is why we use a _structural tautology_ in (2).6 We also argue that, naturally, an Agent must have a name. Hence we use an _existential_ to convey that in (6).
Footnote 6: Structural tautologies are logically inert, however they provide structural guidance on use for the human using an ontology; see [10].
Figure 1: An Overview of all the Modules
The Role-Dependent Names module exemplifies very well why an RDF graph structure is much more natural than an XML tree structure for expressing relevant relationships. In particular, the _triangular_ relationships indicated by the role chain axioms (7) and (8) cannot be naturally captured in a tree structure, but really demand a directed graph.
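To make the triangle concrete, the following sketch builds a small instance graph for the Iain Banks example with rdflib. The namespace and the instance IRIs are illustrative assumptions made for this example, not the published MMODS-O identifiers.

```python
from rdflib import Graph, Namespace
from rdflib.namespace import RDF

MODS = Namespace("https://example.org/mmods-o#")   # assumed namespace for illustration
EX = Namespace("https://example.org/data#")

g = Graph()
g.bind("mods", MODS)

# One Agent with two Names; each AgentRole is assumed under a specific Name.
g.add((EX.banks, RDF.type, MODS.Agent))
g.add((EX.name_iain_banks, RDF.type, MODS.Name))
g.add((EX.name_iain_m_banks, RDF.type, MODS.Name))
g.add((EX.banks, MODS.hasName, EX.name_iain_banks))
g.add((EX.banks, MODS.hasName, EX.name_iain_m_banks))

# Science fiction is published under "Iain M. Banks" ...
g.add((EX.sf_author_role, RDF.type, MODS.AgentRole))
g.add((EX.banks, MODS.assumesAgentRole, EX.sf_author_role))
g.add((EX.sf_author_role, MODS.hasRoleUnderName, EX.name_iain_m_banks))

# ... while mainstream fiction is published under "Iain Banks".
g.add((EX.mainstream_author_role, RDF.type, MODS.AgentRole))
g.add((EX.banks, MODS.assumesAgentRole, EX.mainstream_author_role))
g.add((EX.mainstream_author_role, MODS.hasRoleUnderName, EX.name_iain_banks))

print(g.serialize(format="turtle"))
```

Given the role chain in (7), a reasoner could derive the two hasName triples even if they were not asserted explicitly; they are included here only to make the triangle visible in the serialization.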
### Element Information
There are many elements within the MODS XML schema which may have a display label, a combination of attributes that provide external links, and a set of attributes to describe the language for the resource under description. The Element Information module is created such that the aforementioned connections can be expressed conveniently. Concretely, whenever in a module it needs to be said that the module may have a Display Label, Link Attributes, and Language Attributes, we use the module to be a sub-class of the module Element Information which is expressed using a _sub-class of_ relationship in (9).
#### Selected Axioms
\[\top\sqsubseteq\text{ElementInfo} \tag{9}\]
\[\top\sqsubseteq\leq 1\text{hasLinkAttributes}.\top \tag{10}\]
\[\top\sqsubseteq\geq 0\text{hasLinkAttributes}.\text{LinkAttributes} \tag{11}\]
\[\top\sqsubseteq\geq 0\text{hasLanguageAttributes}.\text{LanguageAttributes} \tag{12}\]
\[\top\sqsubseteq\leq 1\text{hasLanguageAttributes}.\top \tag{13}\]
\[\top\sqsubseteq\geq 0\text{hasLanguageAttributes}.\text{LanguageAttributes} \tag{14}\]
A module which is a sub-class of Element Information can have at most 1 set of Link Attributes and 1 set of Language Attributes which in axioms have been
Figure 2: Schema Diagram for the Role-Dependent Names Pattern
conveyed using _functionalities_ in (10) and (13). Additionally, it is not mandatory for a module to have a set of Link Attributes and Language Attributes, therefore we make use of _structural tautologies_ in (11) and (14).
### Organization
The Organization module works in conjunction with the Role-Dependent Names and Name module. It is important to note that the MODS XML schema does not have an element named Organization. In order to instill natural semantics into the ontology, we introduce the Organization module to replace the attribute "Affiliation" and element "Alternative Names". The concrete differences are outlined in Section 3. Organization is used as the main entity which provides an Agent Role. Naturally, it makes sense for an organization to have a Name. In the case where an organization is referred to using different names, we denote the primary name with _hasStandardizedName_ and the rest of the names using _hasName_.
#### Selected Axioms
\[\text{Organization}\sqsubseteq\geq 0\text{providesAgentRole}.\text{AgentRole} \tag{15}\]
\[\text{Organization}\sqsubseteq\exists\text{hasName}.\text{Name} \tag{16}\]
\[\text{Organization}\sqsubseteq\geq 0\text{hasStandardizedName}.\text{Name} \tag{17}\]
\[\top\sqsubseteq\leq 1\text{hasLinkAttributes}.\top \tag{18}\]
\[\text{Organization}\sqsubseteq\geq 0\text{hasLinkAttributes}.\text{LinkAttributes} \tag{19}\]
It is not necessary that the Organization under description must provide an Agent Role. It can be referred in any general context, as such we say in (15) that an Organization _may_ provide an Agent Role by using a _structural tautology_.
Figure 3: Schema Diagram for the Element Information Module
Furthermore, we argue that an Organization, naturally, must have a name and express that using an _existential_ in (16). To distinguish between different names and the standardized name, we use (17) to say that the Organization _may_ have a Standardized Name. Also an Organization _may_ have a set of Link Attributes to provide additional information (19).
### Name
The Name module is intended to be used for describing entities associated with the resource under description which may have one or more names. A necessary element of the Name module is Name Part. All the parts of a name (one or more) are described through Name Parts. In some cases, a name can refer to an acronym which is dictated by some Authority where the information regarding authority is expressed using Authority Information module. It is not uncommon for a name to have a specific form to display (e.g. Last name, First name), which is specified using Display Form. Furthermore, if a name has an associated identifier (e.g. ISBN, DOI), it is expressed using Name Identifier which is a sub-class of the module Identifier.
In the Name module, there are a few controlled vocabulary nodes (purple nodes in Figure 5). To begin with, a Name can be assigned with a Name Type. MODS XML schema allows 4 name types: Personal, Corporate, Conference, Family. To let the user select a value from the available options, we make use of controlled vocabulary. Similarly, if among multiple instances of names, one particular name is to be regarded as the primary instance, the controlled vocabulary Usage is used to identify that. Another example of controlled vocabulary's usage can be seen in Name Part Type. To identify a part of name to be first name, middle name, or last name the Name Part Type controlled vocabulary can be used.
Figure 4: Schema Diagram for the Organization Module
#### Selected Axioms
\[\text{Name}\sqsubseteq\exists\text{hasNamePart}.\text{NamePart} \tag{20}\]
\[\text{NamePart}\sqsubseteq\exists\text{hasNamePart}^{-}.\text{Name} \tag{21}\]
\[\top\sqsubseteq\leq 1\text{hasNamePart}^{-}.\top \tag{22}\]
\[\text{Name}\sqsubseteq\geq 0\text{hasNamePart}.\text{NamePart} \tag{23}\]
\[\top\sqsubseteq\forall\text{hasNamePartType}.\text{NamePartType.txt} \tag{24}\]
\[\text{Name}\sqsubseteq\geq 0\text{hasDescription}.\text{Description} \tag{25}\]
\[\text{Name}\sqsubseteq\geq 0\text{hasNameType}.\text{NameType.txt} \tag{26}\]
\[\text{Name}\sqsubseteq\geq 0\text{isPrimaryInstance}.\text{Usage.txt} \tag{27}\]
\[\top\sqsubseteq\leq 1\text{hasAuthorityInfo}.\top \tag{28}\]
\[\text{Name}\sqsubseteq\geq 0\text{hasAuthorityInfo}.\text{AuthorityInfo} \tag{29}\]
\[\text{NamePart}\sqsubseteq\text{ElementInfo} \tag{30}\]
\[\text{NamePart}\sqsubseteq\neg(\exists\text{hasLinkAttributes}.\exists\text{hasID}.\top) \tag{31}\]
\[\text{NameIdentifier}\sqsubseteq\text{Identifier} \tag{32}\]
As described in the beginning of this module, a Name must have at least one NamePart. Otherwise, having a Name which does not have any string value as part of it would not be natural. We express this using an _existential_ in (20). On the other hand, to restrict the usage of NamePart outside of Name, we use an _inverse existential_ to convey that if there is a hasNamePart property, its domain must be a Name. A Name can also have any number of NameParts, to allow which we use _structural tautology_ in (23). Axioms (20) and (23) together mean that there can be one or more NameParts.
The Name module is a _sub-class_ of Element Information (30) which says that a Name instance may have a set of Link Attributes and/or Language Attributes. One axiom to note here is (31) which essentially says that, an instance of a Name cannot have an ID which is a part of Link Attributes. The Link Attributes module has not been discussed here, we refer to the documentation for further details.
### Date Information and Date Attributes
Date Information is a key module that has numerous usage within MMODS-O. A Bibliographic resource may have associated date information to express the timeline of creation, last updated, physical and/or digital origin information, etc. Throughout the MODS XML schema, all the date information under different names follow more or less a similar structure. That is why, we realized the necessity of having a Date Information module which conforms with our general intention of having a modular, reusable design. Primarily, a DateInfo instance may have a set of Language Attributes (e.g. date mentioned in multiple languages), some essential Date Attributes. We have created a Date Attributes
module to further aid reusability and compact design. Another important aspect of the DateInfo module is that it must have a type of DateInfoType. Note, that there is no DateInfoType available in MODS XML schema. We outline the differences in detail in Section 3.
Different types of dates across the MODS XML schema generally offer a similar set of attributes, as such we make use of the DateAttributes module. The Qualifier identifies the date under description to be either _approximate_, _inferred_, or _questionable_ which is why this is a controlled vocabulary in Figure 7. The DateEncoding controlled vocabulary identifies the encoding type of the date (e.g. _w3cdtf_, _iso8601_). It is also possible to identify one DateInfo instance to be the Key Date among different instances of DateInfo using the DateAttributes with the property isKeyDate which provides a boolean value.
#### Selected Axioms
\[\top\sqsubseteq\leq 1\text{hasDateInfo}^{-}.\top \tag{33}\]
\[\text{Thing}\sqsubseteq\geq 0\text{hasDateInfo}.\text{DateInfo} \tag{34}\]
\[\text{DateInfo}\sqsubseteq\exists\text{hasDateAttributes}.\text{DateAttributes} \tag{35}\]
\[\top\sqsubseteq\leq 1\text{hasDateAttributes}.\top \tag{36}\]
\[\text{DateInfo}\sqsubseteq\geq 0\text{hasDateAttributes}.\text{DateAttributes} \tag{37}\]
\[\text{DateInfo}\sqsubseteq\exists\text{isOfType}.\text{DateInfoType.txt} \tag{38}\]
\[\text{DateInfo}\sqsubseteq\exists\text{hasValue}.\text{xsd:string} \tag{39}\]
\[\text{DateAttributes}\sqsubseteq\geq 0\text{hasDateEncodingType}.\text{DateEncoding.txt} \tag{40}\]
Figure 6: Schema Diagram for the Date Info Module
\[\text{DateAttributes}\sqsubseteq\geq 0\text{isKeyDate}.\text{xsd:boolean} \tag{41}\]
\[\text{DateAttributes}\sqsubseteq\geq 0\text{isStartOrEndPoint}.\text{Point.txt} \tag{42}\]
\[\text{DateAttributes}\sqsubseteq\geq 0\text{hasAlternativeCalendar}.\text{Calendar.txt} \tag{43}\]
In order to formalize the intended use, the property hasDateInfo can only be associated with at most one instance of _Thing_, expressed using an _inverse functionality_ (33), wherein a _Thing_ can have 0 or more instances of DateInfo, expressed using a _structural tautology_ (34). An instance of DateInfo must have exactly one set of DateAttributes, which is conveyed by using a combination of an _existential_ (35) and a _functionality_ (36). Furthermore, a DateInfo must have a DateInfo type (38). The DateInfo type is a controlled vocabulary that contains a list of Date elements available in the MODS XML schema, for example: dateIssued, dateCreated, dateCaptured, dateModified, dateValid, etc.
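A minimal instance sketch of the module follows; the namespace and the individuals dateIssued and w3cdtf standing in for the controlled vocabulary entries are assumptions made for illustration, not the published MMODS-O IRIs.

```python
from rdflib import Graph, Namespace, Literal
from rdflib.namespace import RDF, XSD

MODS = Namespace("https://example.org/mmods-o#")   # assumed namespace for illustration
EX = Namespace("https://example.org/data#")

g = Graph()

# A single reusable DateInfo typed as "date issued", instead of a dedicated dateIssued element.
g.add((EX.item1, MODS.hasDateInfo, EX.date1))
g.add((EX.date1, RDF.type, MODS.DateInfo))
g.add((EX.date1, MODS.isOfType, MODS.dateIssued))            # a value of DateInfoType.txt
g.add((EX.date1, MODS.hasValue, Literal("2023-07-31", datatype=XSD.string)))

# Exactly one set of DateAttributes, per axioms (35)-(36).
g.add((EX.date1, MODS.hasDateAttributes, EX.attrs1))
g.add((EX.attrs1, RDF.type, MODS.DateAttributes))
g.add((EX.attrs1, MODS.hasDateEncodingType, MODS.w3cdtf))    # a value of DateEncoding.txt
g.add((EX.attrs1, MODS.isKeyDate, Literal(True)))
```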
We have outlined 7 out of the 34 modules we have created as part of the MMODS-O ontology. In those 7 modules, we have only discussed the formal axioms which we considered the most interesting. The documentation contains a detailed description of all the modules including a comprehensive formalization.
## 3 Related Work and Comparison with Previous Work
To the best of our knowledge, there is very little published work available regarding ontologies based on MODS. The closest effort appears to be the MODS RDF Ontology7 available from Library of Congress pages. It appears to be a
Figure 7: Schema Diagram for the Date Attributes Module
mostly straightforward transcription of the XML schema without significant effort to make modifications to adjust to the ontology paradigm. We will use this for comparison; as it is very close to the MODS XML schema, we make only reference to the XML schema in the discussion.8 Our ontology design in many cases accounts for the natural relationships between entities which creates distinctions between our modeling and the MODS RDF Ontology and the XML schema.
Footnote 8: We also found [http://arco.istc.cnr.it:8081/ontologies/MODS](http://arco.istc.cnr.it:8081/ontologies/MODS) which appears to be abandoned work-in progress without meaningful documentation.
The Name entity in the XML schema raises a few issues when it comes to assessing the inherent meaning. For instance, the Name entity is treated to be both the name of a person and the person itself. There is no distinction between an individual and the individual having a name. This poses a lot of modeling issues and complications that can be overcome with an appropriate ontology-based approach. Questions arise such as: if an Agent is to be defined by its Name, what happens when the same Agent has multiple Names? Do we create separate instances of Name that in essence speak about the same Agent? How do we bind together the different names of the same Agent? In our case, we separate the notion of Agent and its Name which resolves the questions naturally. An Agent may have more than one name which is completely fine as is reflected in its axiomatization.
Another issue we see with the Name entity is that, in XML schema a Name entity has an affiliation which is again another Name-like entity. Much like above, if we associate the name, the agent, and the affiliation all together with the name and agent, one may ask: if the agent has multiple names, do we create separate instances of names and write the same affiliations in all name instances? Perhaps more importantly, does it make more sense semantically to have an Organization entity that provides an affiliation? We argue that, an Agent, much less a Name, should not have an affiliation which is a Name, rather an Agent has an affiliation with an Organization, and that Organization will have a Name.
Furthermore, the XML schema states that the Name entity has a Role. We argue that it is more natural for an Agent to have a Name and for that same Agent to assume a particular Role. There are cases where it is possible for the same Agent to assume multiple roles under different pseudonyms. The XML schema and the existing RDF Ontology do not account for such intricate scenarios. The XML schema also allows for Names to have Alternative Names. It can be easily seen that it is not the Name which has Alternative Names, rather it is an Agent or an Organization which may have Alternative Names.
Another instance where we argue that our approach is more modular and reusable concerns DateInfo. Both the XML schema and the MODS RDF Ontology use separate date elements to convey different use-cases of dates, namely dateIssued, dateCreated, dateCaptured, dateModified, dateValid, etc. Instead, we have created a common module for DateInfo, where each of the use-cases of dates can simply be defined as a type of date through the use of controlled vocabularies. This module also
recognizes the fact that all date-related elements within MODS share the same set of attributes, which gives rise to the DateAttributes model.
In our opinion, it is important to define and limit the applicability of modules within an ontology which we achieve through our carefully thought-out axiomatizations. It is imperative to leverage the different types of axioms available such as Scoped Domain, Scoped Range, Existential, Inverse Existential, Functionalities, Inverse Functionalities in order to formalize the scopes and boundaries. The existing RDF Ontology only uses Domain, Range, and Subproperties as formalization of the ontology, which in our opinion does often not suffice [4].
## 4 Conclusion
We have presented the MMODS-O Ontology which has been developed from the MODS XML schema that has general use-cases in dealing with bibliographic metadata. We have developed the ontology in a way such that it is modularized, the distinct modules are reusable, and it paves the way for future improvement and module additions to the ontology. It incorporates modules that are concerned with Title information, Origin information, Geographic location, Target audience, Name, Subject, etc., of the resource under description. The ontology is serialized in OWL and has been formalized by extensive axiomatization.
_Acknowledgement._ The authors acknowledge funding under the National Science Foundation grants 2119753 "RII Track-2 FEC: BioWRAP (Bioplastics With Regenerative Agricultural Properties): Spray-on bioplastics with growth synchronous decomposition and water, nutrient, and agrochemical management" and 2033521: "A1: KnowWhereGraph: Enriching and Linking Cross-Domain Knowledge Graphs using Spatially-Explicit AI Technologies."
|
2309.06896 | Domain-Aware Augmentations for Unsupervised Online General Continual
Learning | Continual Learning has been challenging, especially when dealing with
unsupervised scenarios such as Unsupervised Online General Continual Learning
(UOGCL), where the learning agent has no prior knowledge of class boundaries or
task change information. While previous research has focused on reducing
forgetting in supervised setups, recent studies have shown that self-supervised
learners are more resilient to forgetting. This paper proposes a novel approach
that enhances memory usage for contrastive learning in UOGCL by defining and
using stream-dependent data augmentations together with some implementation
tricks. Our proposed method is simple yet effective, achieves state-of-the-art
results compared to other unsupervised approaches in all considered setups, and
reduces the gap between supervised and unsupervised continual learning. Our
domain-aware augmentation procedure can be adapted to other replay-based
methods, making it a promising strategy for continual learning. | Nicolas Michel, Romain Negrel, Giovanni Chierchia, Jean-François Bercher | 2023-09-13T11:45:21Z | http://arxiv.org/abs/2309.06896v1 | # Domain-Aware Augmentations for Unsupervised Online General Continual Learning
###### Abstract
Continual Learning has been challenging, especially when dealing with unsupervised scenarios such as Unsupervised Online General Continual Learning (UOGCL), where the learning agent has no prior knowledge of class boundaries or task change information. While previous research has focused on reducing forgetting in supervised setups, recent studies have shown that self-supervised learners are more resilient to forgetting. This paper proposes a novel approach that enhances memory usage for contrastive learning in UOGCL by defining and using stream-dependent data augmentations together with some implementation tricks. Our proposed method is simple yet effective, achieves state-of-the-art results compared to other unsupervised approaches in all considered setups, and reduces the gap between supervised and unsupervised continual learning. Our domain-aware augmentation procedure can be adapted to other replay-based methods, making it a promising strategy for continual learning.
This work has received support from Agence Nationale de la Recherche (ANR) for the project APY, with reference ANR-20-CE38-0011-02. This work was granted access to the HPC resources of IDRIS under the allocation 2022-AD011012603 made by GENCI.
## 1 Introduction
Continual Learning (CL) is the ability to learn from a continuously evolving stream of data while accommodating shifts in distribution over time. Recent years have witnessed numerous attempts to simulate such an environment for image classification, including domain and class-incremental learning scenarios [1]. While much of the prior research has been focused on a fully supervised scenario that assumes specific prior knowledge, unsupervised CL methods operate under more challenging circumstances where there is no task boundary or the total number of classes available. This work focuses on a more realistic learning scenario where only one pass over non-iid, unlabeled data is allowed without prior task knowledge,
task change information, or known number of classes during training. This setup is known as Unsupervised Online General Continual Learning (UOGCL) [1] and only a handful of approaches have been designed to address it. STAM [1] employs a patch-based online clustering with novelty detection and expandable memory. SCALE [3] leverages a pseudo-labeled contrastive loss and knowledge distillation with a fixed memory to learn data representation. By design, both STAM and SCALE strongly focus on reducing forgetting.
Although forgetting is widely recognized as the main issue in CL environments, self-supervised learners have been found to be exceptionally resilient to forgetting compared to cross-entropy trained models [1, 2, 3]. Additionally, several studies demonstrate that replay-based methods can take advantage of memory data more efficiently. One way is to use implementation tricks for reviewing memory data [3, 4], and another is to train for multiple iterations for each batch [1, 2]. Similarly, some methods have obtained state-of-the-art results while training using memory data only [3, 4]. These observations indicate that replay-based self-supervised learners might not need anti-forgetting mechanisms to cope with UOGCL. Rather, a promising strategy would be to learn more efficiently from memory data.
This paper focuses on replay-based methods showing the best performances in online CL. We introduce a novel replay-based method that improves memory utilization with contrastive loss by combining stream-dependent data augmentations with implementation tricks for UOGCL. Despite its simplicity, our method performs better than other unsupervised methods in all evaluation setups. Additionally, the proposed Domain-Aware Augmentation procedure could easily be integrated into other replay-based approaches with minor adaptations to improve their performance as well.
The paper is structured as follows: Section 2 presents related work. Section 3 describes the training procedure, the strategy used to improve memory usage, and our new Domain-Aware Augmentation framework for replay-based methods. Section 4 presents our experiments and, finally, Section 5 concludes the paper.
## 2 Related work
This section defines learning strategies related to the work presented here.
Figure 1: Overview of the Domain-Aware Augmentation procedure. From left to right, unlabeled images are sampled from stream \(\mathcal{S}\) and memory \(\mathcal{M}\) to create the incoming batch \(\mathcal{B}\). This batch is augmented to obtain a many-view batch \(\mathcal{B}_{I}\). Here \(\mathcal{B}_{I}\) is composed of 2 standard augmentations and 3 DAA. Images from \(\mathcal{S}\) are used to create DAA for every image in \(\mathcal{B}\). The model then learns image representation by minimizing the contrastive loss defined in eq. 1. Best viewed in color.
### Online General Continual Learning
In the following, we define the considered CL setups.
**Online Continual Learning (OCL)** addresses the problem of learning from a continuous stream of data. Formally, we consider a sequential learning setup with a sequence \(\{\mathcal{T}_{1},\cdots,\mathcal{T}_{K}\}\) of \(K\) tasks, and \(\mathcal{D}_{k}=(X_{k},Y_{k})\) the corresponding data-label pairs. In CL, we often assume that for any value \(k_{1},k_{2}\in\{1,\cdots,K\}\), if \(k_{1}\neq k_{2}\) then we have \(Y_{k_{1}}\cap Y_{k_{2}}=\emptyset\) and the number of classes in each task is the same. Contrary to standard CL, in OCL only one pass over the data is allowed. This setup has been studied mostly in a fully supervised scenario.
**Online General Continual Learning (OGCL)** further removes the usual prior knowledge: the learner has no access to task boundaries, task change information, or the total number of classes, and the non-iid data can be seen only once. When, in addition, no labels are available during training, the setting is referred to as Unsupervised Online General Continual Learning (UOGCL), which is the scenario considered in this work.
### Training procedure
In the following, we define the training procedure of our method.
**Many-view batch**. We propose an extension to the multi-view batch concept as described by Khosla et al. [10] that involves using more than two augmentations. Specifically, we define the many-view batch as the union of \(p\) augmentations for a batch \(\mathcal{B}\) such that \(\mathcal{B}_{I}=\mathcal{B}\bigcup_{i=1}^{p}\text{Aug}(\mathcal{B})\), where \(p\) is the number of augmentations and \(I\) represents the indices over \(\mathcal{B}_{I}\). To train the model on many-view batches, we adapt the SupCon loss for unsupervised scenarios by treating every augmentation of the same input image as having the same label. We formulate this approach as minimizing the Multi-View Contrastive (MVCont) loss, defined as follows:
\[\mathcal{L}_{MVCont}(\mathcal{B}_{I},\theta)=-\sum_{i\in I}\frac{1}{|P(i)|} \sum_{p\in P(i)}\log\frac{e^{z_{i}z_{p}/\tau}}{\sum_{a\in I \backslash\{i\}}e^{z_{i}z_{a}/\tau}} \tag{1}\]
Here, \(P(i)=\{j\in I\backslash\{i\}\mid y_{j}=y_{i}\}\) represents the set of images having the same input source as input \(i\), \(Z_{I}=\{z_{i}\}_{i\in I}\), \(f_{\theta}\) denotes the learnable model with parameters \(\theta\), and \(z_{i}=f_{\theta}(x_{i})\) represents the feature vector of the input image \(x_{i}\).
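A minimal PyTorch sketch of Equation 1 is given below; the L2 normalization of the projections and the tensor layout are implementation assumptions, and the positive set \(P(i)\) is derived from the index of the source image of each view.

```python
import torch
import torch.nn.functional as F

def mvcont_loss(z, src_ids, tau=0.07):
    """Multi-View Contrastive loss of Eq. 1 (sketch).

    z:       (N, d) projections of the many-view batch B_I
    src_ids: (N,) index of the source image each view was derived from
    """
    z = F.normalize(z, dim=1)                        # assumed: cosine similarities
    sim = z @ z.t() / tau                            # z_i . z_a / tau for all pairs
    self_mask = torch.eye(len(z), dtype=torch.bool, device=z.device)
    sim = sim.masked_fill(self_mask, float("-inf"))  # exclude a = i from the denominator
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)
    pos_mask = (src_ids.unsqueeze(0) == src_ids.unsqueeze(1)) & ~self_mask
    log_prob = log_prob.masked_fill(~pos_mask, 0.0)  # keep only the positives P(i)
    loss_per_anchor = -log_prob.sum(dim=1) / pos_mask.sum(dim=1).clamp(min=1)
    return loss_per_anchor.sum()                     # sum over anchors i in I, as in Eq. 1
```

Here `src_ids` plays the role of the pseudo-label \(y_{i}\): two views are positives exactly when they were augmented from the same source image.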
**Experience Replay with Contrastive Learning**. We propose to combine Experience Replay (ER) [10] with unsupervised contrastive learning on a many-view batch by minimizing \(\mathcal{L}_{MVCont}\) defined in equation 1. Similar to ER, we mitigate forgetting by using a fixed sized memory that is filled following a reservoir sampling strategy [10] and a random retrieval. The overall training procedure is detailed in Algorithm 1.
```
Input: Data stream \(\mathcal{S}\); Augmentation procedure Aug\((.)\); Model \(f_{\theta}(.)\)
Output: Model \(f_{\theta}\); Memory \(\mathcal{M}\)
\(\mathcal{M}\leftarrow\{\}\)  \(\triangleright\) Initialize memory
for \(\mathcal{B}_{s}\in\mathcal{S}\) do  \(\triangleright\) Data stream
    for \(q\) iterations do  \(\triangleright\) Memory iterations
        \(\mathcal{B}_{m}\leftarrow\text{Retrieve}(\mathcal{M})\)  \(\triangleright\) Retrieve data from memory
        \(\mathcal{B}\leftarrow\mathcal{B}_{s}\cup\mathcal{B}_{m}\)  \(\triangleright\) Combined batch
        \(\mathcal{B}_{I}\leftarrow\mathcal{B}\bigcup_{i=1}^{p}\text{Aug}(\mathcal{B})\)  \(\triangleright\) Many-view batch
        \(\theta\leftarrow SGD(\mathcal{L}_{MVCont}(f_{\theta}(\mathcal{B}_{I}),\theta))\)  \(\triangleright\) Loss defined in Eq. 1
    \(\mathcal{M}\leftarrow\text{MemoryUpdate}(\mathcal{B}_{s},\mathcal{M})\)
return: \(\theta\); \(\mathcal{M}\)
```
**Algorithm 1** Proposed Training Procedure
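The memory component of Algorithm 1 (reservoir update and random retrieval) can be sketched as follows; this is an illustrative implementation, not necessarily the authors' code.

```python
import random

class ReservoirMemory:
    """Fixed-size replay buffer with reservoir-sampling update and random retrieval."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.data = []
        self.n_seen = 0                              # number of stream samples seen so far

    def update(self, stream_batch):
        for x in stream_batch:
            if len(self.data) < self.capacity:
                self.data.append(x)                  # fill phase
            else:
                j = random.randint(0, self.n_seen)   # uniform over all samples seen so far
                if j < self.capacity:
                    self.data[j] = x                 # replace a random slot
            self.n_seen += 1

    def retrieve(self, batch_size):
        k = min(batch_size, len(self.data))
        return random.sample(self.data, k)           # random retrieval of B_m
```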
### Improving memory usage
In the following, we discuss several strategies to improve memory usage in the training procedure defined in Algorithm 1. Experimental results regarding such tricks are presented in Table 1.
**Larger Memory batch size \(|\mathcal{B}_{m}|\)**. One common hyper-parameter impacting the performance of replay-based methods is the memory batch size \(|\mathcal{B}_{m}|\), the amount of data retrieved from memory when encountering a new stream batch. As the size of \(|\mathcal{B}_{m}|\) increases, the
model will be exposed to memory data more frequently, which can lead to overfitting. However, in UOGCL, we found that increasing \(|\mathcal{B}_{m}|\) results in steadily increasing performances.
**More Memory Iterations \(q\)**. In Algorithm 1, \(q\) represents the number of memory iterations, indicating how often the model will be exposed to memory data during training. As \(q\) increases, the model will have more opportunities to learn from the memory data and potentially improve its performance on the task at hand. This technique has been applied in previous works [] with supervised methods with the risk of overfitting to the current task. In UOGCL, we observe little overfitting.
**More augmentations**. Using more data augmentation can improve the learning process in online continual learning scenarios. It helps the model learn better by enabling it to see the same data from different perspectives, recognize patterns, and generalize. Augmentation also generates new training samples from existing ones, making the model adaptable to evolving data distributions. In that sense, increasing the value of \(p\), the number of views in the many-view batch can similarly increase performances. However, standard augmentations like random crop and color-jitter are limited as they do not use external information. For example, a random crop augmentation only has a limited number of crops and throughout training, the model is likely to be trained on every variation of augmented memory data, encouraging overfitting. This phenomenon is exacerbated when using multiple memory iterations. Therefore, more sophisticated augmentations are presented in section 3.3.
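As an illustration, the many-view batch \(\mathcal{B}_{I}\) with \(p\) standard augmented copies can be built as in the sketch below. The torchvision pipeline mirrors the augmentations used in the paper (random crop, color jitter, random flip, grayscale); the exact parameter values are assumptions made for the example.

```python
import torch
from torchvision import transforms

# Standard augmentation pipeline; parameter values are illustrative assumptions.
aug = transforms.Compose([
    transforms.RandomResizedCrop(32, scale=(0.2, 1.0)),
    transforms.ColorJitter(0.4, 0.4, 0.4, 0.1),
    transforms.RandomHorizontalFlip(),
    transforms.RandomGrayscale(p=0.2),
])

def many_view_batch(batch, p):
    """Build B_I = B u Aug(B) u ... u Aug(B) (p augmented copies) from a (B, C, H, W) batch."""
    views = [batch] + [torch.stack([aug(x) for x in batch]) for _ in range(p)]
    src_ids = torch.arange(len(batch)).repeat(p + 1)   # source-image index of every view
    return torch.cat(views), src_ids
```

The returned `src_ids` can be passed directly to the contrastive loss sketched earlier so that all views of the same source image are treated as positives.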
### Domain Aware Augmentations (DAA)
As introduced in section 3.2, traditional data augmentation can be limited for replay methods. This section proposes a framework for stronger domain-aware augmentations that leverages stream information. This allows the model to view memory data through an unlimited amount of perspectives along training.
**DAA framework** We define a DAA as an augmentation that combines an input image \(x_{i}\) with a domain-related image \(x_{d}\), resulting in an augmented version of \(x_{i}\) denoted as \(x_{a}=\text{DAA}(x_{i},x_{d})\) via the DAA procedure. In replay-based approaches, \(x_{i}\) comes from the current batch \(\mathcal{B}\), while \(x_{d}\) comes from the stream.
**Domain-Aware Mixup (DAM)**. Mixup has been introduced in 2018 [] in the supervised scenario as a new augmentation technique that linearly interpolates between two data-label pairs. Recently, mixup has been adapted to the CL setting []. Notably, in LUMP [], Madaan et al. introduced mixup strategies between memory and stream images to create new images for replay-based unsupervised CL. For \(x_{i}\in\mathcal{M}\) from memory and \(x_{d}\in\mathcal{S}\) from stream the author trained a model on \(x_{a}=\lambda\cdot x_{i}+(1-\lambda)\cdot x_{d}\). Notably, the obtained images are considered as entirely new images. In this work, we define DAM by constructing augmented images \(x_{a}=\lambda\cdot x_{i}+(1-\lambda)\cdot x_{d}\), however, mixup-generated images are used as views of the original image. Additionally, we use \(\lambda\sim\mathcal{U}(0.5,1)\), \(x_{i}\in\mathcal{B}\), \(x_{d}\in\mathcal{S}\) and \(x_{a}=\text{DAM}(x_{i},x_{d})\). The interpolation factor is set such that the augmented image \(x_{a}\) has at least half of its information coming from the input image \(x_{i}\). This strategy is inspired by the SMOTE [] oversampling strategy.
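A possible implementation of DAM is sketched below; the batch layout and the per-image sampling of \(\lambda\) are assumptions of this example.

```python
import torch

def domain_aware_mixup(x_in, x_stream):
    """DAM sketch: mix each image of the incoming batch with a randomly drawn stream image,
    keeping at least half of the input image (lambda ~ U(0.5, 1))."""
    lam = torch.empty(x_in.size(0), 1, 1, 1, device=x_in.device).uniform_(0.5, 1.0)
    idx = torch.randint(0, x_stream.size(0), (x_in.size(0),), device=x_in.device)
    return lam * x_in + (1.0 - lam) * x_stream[idx]   # treated as one more view of x_in
```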
**Domain-Aware CutMix (DAC)**. CutMix is another augmentation technique [], which bears similarities with mixup. Likewise to the DAM adaptation we consider \(x_{i}\in\mathcal{B}\) and \(x_{d}\in\mathcal{S}\) to create \(x_{a}\), a new view of \(x_{i}\) such that \(x_{a}=M\odot x_{i}+(1-M)\odot x_{d}\) with \(M\in\{0,1\}^{W\times H}\) a binary mask where \(W\) and \(H\) are the width and the height of the image. \(1\) is a binary mask filled with ones, \(\odot\) is the Hadamard product and \(\lambda\sim\mathcal{U}(0.5,1)\). The binary mask is constructed according to the bounding box coordinates \(B=(r_{x},r_{y},r_{w},r_{h})\) which correspond
to the region to crop from \(x_{d}\) and integrate into \(x_{i}\). Following the work proposed by [2] we sample the bounding box for a given \(\lambda\) value according to:
\[\begin{split} r_{x}&\sim\mathcal{U}(0,W),\ \ r_{w}=W\sqrt{1-\lambda}\\ r_{y}&\sim\mathcal{U}(0,H),\ \ r_{h}=H\sqrt{1-\lambda}\end{split} \tag{2}\]
As with DAM we use \(\lambda\geq 0.5\) to ensure that a significant part of the original image is present in the augmented version.
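A corresponding sketch for DAC, following Equation 2 for the box size, is given below; the per-image loop and the clipping of the box to the image borders are implementation assumptions.

```python
import torch

def domain_aware_cutmix(x_in, x_stream):
    """DAC sketch: paste a crop of a stream image into each input image; the box size
    follows Eq. 2 with lambda ~ U(0.5, 1), so most of the input image is preserved."""
    B, _, H, W = x_in.shape
    out = x_in.clone()
    for b in range(B):
        lam = float(torch.empty(1).uniform_(0.5, 1.0))
        rw, rh = int(W * (1 - lam) ** 0.5), int(H * (1 - lam) ** 0.5)
        rx, ry = int(torch.randint(0, W, (1,))), int(torch.randint(0, H, (1,)))
        x1, x2 = max(rx - rw // 2, 0), min(rx + rw // 2, W)   # clip the box to the image
        y1, y2 = max(ry - rh // 2, 0), min(ry + rh // 2, H)
        d = int(torch.randint(0, x_stream.size(0), (1,)))
        out[b, :, y1:y2, x1:x2] = x_stream[d, :, y1:y2, x1:x2]
    return out   # each result is another view of the corresponding input image
```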
**Domain-Aware Style (DAS)**. Style transfer is the transfer of non-semantic visual information from one image \(x_{d}\) to another image \(x_{i}\) to create the resulting image \(x_{a}\), with content from \(x_{i}\) and style from \(x_{d}\). The original style transfer method proposed by [2] relies on a slow optimization process which cannot reasonably be applied as a data augmentation procedure. [2] proposed a method based on instance normalization that can compute and transfer any style from any image efficiently, but has to be pre-trained beforehand. A model pre-trained on MS-COCO [2] is used to transfer the style from \(x_{d}\in\mathcal{S}\) to \(x_{i}\in\mathcal{B}\). The obtained image is considered as another view of \(x_{i}\) such that \(x_{a}=\text{DAS}(x_{i},x_{d})\).
## 4 Experiments
In this section, we first describe our setup: evaluation protocol, datasets used, baseline methods considered for comparisons, and implementation details; before presenting our experimental results.
### Evaluation Protocol
Since we focus on UOGCL, the training procedure defined in Algorithm 1 outputs a trained encoder \(f_{\theta}(.)\) and a subset of images \(\mathcal{M}\). An extra transfer-learning step is required for classification. For a fair comparison, we use only the images stored in memory \(\mathcal{M}\) at the end of training for transfer learning. This is equivalent to adding an extra step for labeling memory after training. As is common in representation learning [2, 2, 2] we consider the trained model \(f_{\theta}(.)\) as being the succession of a feature extractor \(h_{\theta_{r}}(.)\) and a projection head \(g_{\theta_{p}}(.)\) such that \(f_{\theta}(.)=g_{\theta_{p}}(h_{\theta_{r}}(.))\). For the transfer learning step, the representations obtained from \(h_{\theta_{r}}(.)\) are used, as described in Algorithm 2.
```
Input: Data stream \(\mathcal{S}\); Memory \(\mathcal{M}\); Augmentation procedure Aug\((.)\); Feature extractor \(h_{\theta_{r}}(.)\); Projection head \(g_{\theta_{p}}(.)\); Nearest Class Mean classifier \(\phi_{\omega}(.)\)
Output: End-to-end classifier \(\phi_{\omega}(h_{\theta_{r}}(.))\)
Training Phase:
    \(\theta_{r}\), \(\mathcal{M}\leftarrow\text{Train}(\mathcal{B}_{s},Aug(.),f_{\theta}(.))\)  \(\triangleright\) Train as in Algorithm 1 with \(f_{\theta}(.)=g_{\theta_{p}}(h_{\theta_{r}}(.))\)
Testing Phase:
    \(R\leftarrow h_{\theta_{r}}(\mathcal{M})\)
    \(\omega\leftarrow\text{TrainNCM}(\omega, R)\)  \(\triangleright\) Train a Nearest Class Mean classifier on representations
return: \(\omega\); \(\theta_{r}\)
```
**Algorithm 2** Proposed Evaluation procedure
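The NCM step of Algorithm 2 can be sketched as follows; the feature normalization before computing the class means is an assumption of this example, not a detail stated by the paper.

```python
import torch
import torch.nn.functional as F

class NCMClassifier:
    """Nearest Class Mean classifier sketch: class means are computed from the labelled
    memory representations, and prediction returns the class of the closest mean."""

    def fit(self, feats, labels):
        self.classes = labels.unique()
        means = torch.stack([feats[labels == c].mean(dim=0) for c in self.classes])
        self.means = F.normalize(means, dim=1)        # assumed: normalized prototypes
        return self

    def predict(self, feats):
        feats = F.normalize(feats, dim=1)
        dists = torch.cdist(feats, self.means)        # Euclidean distance to each class mean
        return self.classes[dists.argmin(dim=1)]
```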
### Datasets
We use variations of standard image classification datasets [] to build continual learning environments. The original datasets are split into several tasks of non-overlapping classes. Specifically, we experimented on split-CIFAR10, split-CIFAR100 and split-Tiny ImageNet. In this paper, we omit the split- prefix for simplicity. **CIFAR10** contains 50,000 32x32 train images and 10,000 test images and is split into 5 tasks containing 2 classes each for a total of 10 distinct classes. **CIFAR100** contains 50,000 32x32 train images and 10,000 test images and is split into 10 tasks containing 10 classes each for a total of 100 distinct classes. **Tiny ImageNet** is a subset of the ILSVRC-2012 classification dataset and contains 100,000 64x64 train images as well as 10,000 test images and is split into 20 tasks containing 10 classes each for a total of 200 distinct classes.
### Baselines
In the following, we describe the considered baselines. While proposing an unsupervised approach, we compare our method to supervised and unsupervised baselines to better demonstrate its efficiency. For methods using replay strategies, we add the suffix _-ER_ to the name and use reservoir sampling [] for memory update and random retrieval. **fine-tuned**: Supervised lower bound corresponding to training using a cross entropy loss in a continual learning setup without precautions to avoid forgetting.
**offline**: Supervised upper bound. The model is trained without any CL specific constraints.
**Experience Replay** (ER) []: ER is a supervised memory-based technique using reservoir sampling [] for memory update and random retrieval. The model is trained using cross
\begin{table}
\begin{tabular}{c|c|c|c|c} \hline \hline \multicolumn{2}{c}{} & \multicolumn{1}{c}{CIFAR10} & CIFAR100 & Tiny IN \\ \hline \hline \multirow{3}{*}{Memory} & 10 & 34.7\(\pm\)1.8 & 11.3\(\pm\)0.4 & 8.8\(\pm\)0.04 \\ & 20 & 36.3\(\pm\)2.7 & 11.8\(\pm\)1.0 & 10.1\(\pm\)0.2 \\ batch size & 50 & 41.1\(\pm\)2.0 & 16.8\(\pm\)1.0 & 13.2\(\pm\)0.5 \\ \(|\mathcal{B}_{m}|\) & 100 & 42.9\(\pm\)0.1 & 19.2\(\pm\)0.5 & 15.2\(\pm\)0.3 \\ & 200 & **43.2\(\pm\)2.3** & **21.2\(\pm\)0.9** & **16.7\(\pm\)0.5** \\ \hline \multicolumn{2}{c}{} & \multicolumn{1}{c}{\(|\mathcal{B}_{m}|=200\)} & \multicolumn{1}{c}{\(|\mathcal{B}_{m}|=200\)} & \multicolumn{1}{c}{\(|\mathcal{B}_{m}|=200\)} \\ \hline \multirow{3}{*}{Memory} & 1 & 43.2\(\pm\)2.3 & 21.2\(\pm\)0.9 & 16.7\(\pm\)0.5 \\ & 2 & 44.0\(\pm\)1.5 & 23.1\(\pm\)0.2 & 17.2\(\pm\)0.6 \\ iterations & 3 & 44.0\(\pm\)2.0 & 23.0\(\pm\)0.3 & **18.3\(\pm\)0.3** \\ \(q\) & 4 & **45.2\(\pm\)2.7** & 23.8\(\pm\)0.4 & 17.6\(\pm\)0.2 \\ & 5 & 42.6\(\pm\)1.9 & **24.0\(\pm\)0.4** & 18.1\(\pm\)0.5 \\ \hline \multicolumn{2}{c}{} & \multicolumn{1}{c}{\(q=1\)} & \multicolumn{1}{c}{\(q=4\)} & \multicolumn{1}{c}{\(q=1\)} & \multicolumn{1}{c}{\(q=4\)} \\ \hline \multirow{3}{*}{Number} & 1 & 43.2\(\pm\)2.3 & **45.2\(\pm\)2.7** & 21.2\(\pm\)0.9 & 23.8\(\pm\)0.4 & 16.7\(\pm\)0.5 & 17.6\(\pm\)0.2 \\ & 2 & 44.4\(\pm\)0.5 & 42.4\(\pm\)2.0 & 24.6\(\pm\)0.7 & 24.6\(\pm\)1.0 & 17.2\(\pm\)0.6 & 18.8\(\pm\)0.6 \\ \cline{1-1} & 3 & 45.6\(\pm\)1.4 & 41.8\(\pm\)5.0 & 25.7\(\pm\)0.4 & 25.9\(\pm\)0.6 & 18.0\(\pm\)0.4 & 18.7\(\pm\)0.4 \\ \cline{1-1} views & 4 & 45.3\(\pm\)1.7 & 41.5\(\pm\)5.7 & 26.4\(\pm\)0.2 & 26.3\(\pm\)0.3 & 17.9\(\pm\)0.1 & 18.6\(\pm\)0.0 \\ \(p\) & 5 & 45.6\(\pm\)1.0 & 39.0\(\pm\)6.1 & 26.7\(\pm\)0.3 & **27.3\(\pm\)0.7** & **18.2\(\pm\)0.4** & **19.1\(\pm\)0.2** \\ \cline{1-1} & 6 & **45.7\(\pm\)1.0** & 40.0\(\pm\)7.7 & **26.8\(\pm\)0.5** & 26.8\(\pm\)0.1 & 18.1\(\pm\)0.4 & 18.5\(\pm\)0.9 \\ \hline \hline \end{tabular}
\end{table}
Table 1: Impact of \(|\mathcal{B}_{m}|\), \(q\) and \(p\) on the final AA (%) for CIFAR10, CIFAR100 and Tiny ImageNet. The top part shows performances for \(|\mathcal{B}_{m}|\in[10,200]\), \(p=1\), \(q=1\). The middle part shows performances for \(q\in[1,5]\), \(|\mathcal{B}_{m}|=200\), \(p=1\). The bottom part show performances for \(p\in[1,6]\), \(q\in\{1,5\}\), \(|\mathcal{B}_{m}|=200\). The performances are obtained by following algorithm 2. We use standard augmentations described in section 4.4. Each experiment is run 3 times and their average and standard deviation are displayed. The best results are displayed in bold.
entropy.
**Supervised Contrastive Replay** (SCR) []: Replay-based method trained using the SupCon loss [].
**ER-ACE**[]: Replay based method using an Asymmetric Cross Entropy to overcome feature drift.
**GSA**[]: Replay-based method dealing with cross-task class discrimination with a redefined loss objective using Gradient Self Adaptation.
**GDumb**[]: Simple method that stores data from the stream in memory, with the constraint of having a balanced class selection. At inference time, the model is trained offline on memory data.
**SimCLR-ER**[]: Memory-based approach where the model is trained using the unsupervised contrastive loss of SimCLR. The memory management strategy is the same as the one used in ER.
**BYOL-ER**[]: Memory-based approach where the model is trained using the loss defined in BYOL. The memory management strategy is the same as the one used in ER.
**SimSiam-ER**[]: Memory-based approach where the model is trained using the loss defined in SimSiam. The memory management strategy is the same as the one used in ER.
**LUMP**[]: Replay-based approach where every image in the batch is a mixup between memory and stream image. The model is trained using the unsupervised contrastive loss. Originally proposed in a non-online scenario, this method was adapted to the UOGCL.
**SCALE**[]: Replay-based method using a pseudo-labeled contrastive loss. While very recent, the code is not available for this method and we had to report the available performances from the original paper.
**STAM**[]: A method designed for UOGCL using an expandable memory, patch-based clustering and novelty detection.
### Implementation details
We train a ResNet-18 from scratch for every experiment. The projection layer for contrastive approaches is an MLP with 1 hidden layer of size 512, ReLU activation, and output size of 128. Memory batch size for replay-based methods is 200 and stream batch size for any method is 10. Our method uses an SGD optimizer with a fixed learning rate of 0.1. For all methods, a small hyperparameter search is conducted, and the best parameters are kept for training. The search includes learning rate and optimizer. Temperature for contrastive losses is set to 0.07. For standard augmentations, we use random crop, color jitter, random flip, and grayscale. Offline methods are trained for 50 epochs with the same optimizer, model, and augmentation procedure as other methods. Unsupervised methods are evaluated using NCM on memory data at the end of training following Section 4.1. For each experiment, the order of the labels for the training sequence is generated randomly.
### Results
In what follows, we present our experimental results, highlighting the main figures and characteristics that demonstrate the interest and relevance of our approach.
**Scaling memory parameters.** Memory parameters described in 3.2 can have a significant impact on performances. While expanding the amount of data retrieved from memory \(|\mathcal{B}_{m}|\) continuously improves performances, it cannot exceed memory size. Similarly, we
observe that increasing the number of augmentations \(p\) also improves performances on all datasets. However, larger values of the memory iteration count \(q\) do not scale well for \(p\geq 5\) while considerably increasing computation. Therefore, we set \(q=1\) for our final method and scale with the number of augmentations rather than the number of iterations. Nevertheless, experimenting with larger values of \(q\) could lead to even higher performances.
**Final AA.** We report the final AA in table 2 for all methods. Our approach outperforms every other unsupervised method for UOGCL, on all considered setups. Notably, Ours - \((7,1,0,0,0)\), which corresponds to training with \((p,q)=(7,1)\), demonstrates that training with more augmentations can considerably help in UOGCL. These results experimentally confirm the efficiency of focusing on memory usage rather than minimizing forgetting. We cannot report performances for STAM on Tiny IN since the authors did not provide the corresponding parameters for this dataset, and the CIFAR100 parameters gave poor performances.
**Impact of DAA.** To disentangle the impact of DAA compared to standard augmentation, we present in table 2 the results of our method with \((p,q)=(7,1)\), namely \(Ours-(7,1,0,0,0)\) and the results of our method with \((p,q)=(4,1)\) and 1 DAS, 1 DAM, 1 DAC; namely \(Ours-(4,1,1,1,1)\). It can be seen that for the same number of augmentations overall, using DAA gives better performances in all considered scenarios.
**Comparison to supervised methods.** Since very few methods have been designed for UOGCL, we also implemented some typical supervised methods for OGCL. Results displayed in table 2 show that for small memory sizes, our method can achieve performances close to SCR, a state-of-the-art supervised technique. Specifically, on CIFAR10 with \(M=200\), our method performs only 1.5% below SCR. We conjecture that this results from self-supervised methods being less sensitive to overfitting, which is especially important for smaller memory sizes.
\begin{table}
\begin{tabular}{l l|c c|c c|c c c} \hline \hline
 & & \multicolumn{2}{c|}{CIFAR10} & \multicolumn{2}{c|}{CIFAR100} & \multicolumn{3}{c}{Tiny ImageNet} \\
 & Method & M=200 & M=500 & M=2k & M=5k & M=2k & M=5k & M=10k \\ \hline \hline
\multirow{7}{*}{Supervised} & offline & \multicolumn{2}{c|}{86.1\(\pm\)5.7} & \multicolumn{2}{c|}{53.0\(\pm\)1.8} & \multicolumn{3}{c}{42.3\(\pm\)3.9} \\
 & fine-tuned & \multicolumn{2}{c|}{16.6\(\pm\)2.3} & \multicolumn{2}{c|}{3.6\(\pm\)0.7} & \multicolumn{3}{c}{1.4\(\pm\)0.1} \\
 & ER [] & 41.46\(\pm\)3.41 & 52.93\(\pm\)4.39 & 31.37\(\pm\)0.69 & 39.22\(\pm\)1.11 & 11.33\(\pm\)1.17 & 19.42\(\pm\)2.26 & 25.93\(\pm\)3.02 \\
 & GDUMB [] & 34.06\(\pm\)1.81 & 41.42\(\pm\)1.25 & 15.74\(\pm\)0.61 & 25.53\(\pm\)0.44 & 7.08\(\pm\)0.39 & 13.79\(\pm\)0.76 & 22.35\(\pm\)0.23 \\
 & SCR [] & 49.16\(\pm\)3.02 & 60.28\(\pm\)1.21 & 37.79\(\pm\)0.95 & 47.31\(\pm\)0.34 & 19.76\(\pm\)0.24 & 28.80\(\pm\)0.51 & 34.28\(\pm\)0.28 \\
 & ER-ACE [] & 45.25\(\pm\)2.85 & 53.10\(\pm\)2.70 & 33.32\(\pm\)1.14 & 40.60\(\pm\)1.55 & 21.71\(\pm\)0.34 & 27.27\(\pm\)0.95 & 32.57\(\pm\)1.0 \\
 & GSA [] & 52.03\(\pm\)2.14 & 61.30\(\pm\)2.35 & 38.77\(\pm\)1.07 & 48.21\(\pm\)0.99 & 19.35\(\pm\)0.72 & 27.58\(\pm\)0.74 & 34.72\(\pm\)0.82 \\ \hline
\multirow{8}{*}{Unsupervised} & STAM & \multicolumn{2}{c|}{30.54\(\pm\)0.8} & \multicolumn{2}{c|}{8.39\(\pm\)0.4} & \multicolumn{3}{c}{-} \\
 & SCALE [] & \multicolumn{2}{c|}{32\(\pm\)1\({}^{*}\)} & \multicolumn{2}{c|}{22\(\pm\)0.1\({}^{*}\)} & \multicolumn{3}{c}{-} \\
 & LUMP [] & 24.96\(\pm\)1.72 & 25.34\(\pm\)1.06 & 7.42\(\pm\)0.57 & 7.18\(\pm\)0.5 & 4.15\(\pm\)0.5 & 4.55\(\pm\)0.68 & 5.41\(\pm\)0.19 \\
 & SimSiam-ER [] & 27.73\(\pm\)1.18 & 30.59\(\pm\)1.21 & 6.91\(\pm\)0.37 & 7.47\(\pm\)0.11 & 5.69\(\pm\)0.32 & 6.49\(\pm\)0.41 & 6.9\(\pm\)0.52 \\
 & BYOL-ER [] & 29.43\(\pm\)0.55 & 29.30\(\pm\)1.01 & 9.39\(\pm\)0.52 & 10.35\(\pm\)0.61 & 5.07\(\pm\)0.39 & 6.19\(\pm\)0.26 & 6.59\(\pm\)0.38 \\
 & SimCLR-ER [] & 43.20\(\pm\)2.30 & 48.81\(\pm\)0.78 & 21.2\(\pm\)0.9 & 23.62\(\pm\)0.54 & 12.84\(\pm\)0.7 & 16.7\(\pm\)0.5 & 17.97\(\pm\)0.14 \\
 & Ours \((7,1,0,0,0)\) & 45.68\(\pm\)2.38 & 52.89\(\pm\)0.57 & 27.27\(\pm\)0.13 & 31.32\(\pm\)0.64 & 13.16\(\pm\)0.37 & 17.9\(\pm\)0.58 & 20.21\(\pm\)0.13 \\
 & Ours \((4,1,1,1,1)\) & **48.09\(\pm\)1.22** & **56.02\(\pm\)1.34** & **29.02\(\pm\)0.77** & **33.19\(\pm\)0.9** & **14.79\(\pm\)0.49** & **20.35\(\pm\)0.02** & **22.06\(\pm\)0.37** \\ \hline \hline
\end{tabular}
\end{table}
Table 2: Final AA (%) for all methods on CIFAR10, CIFAR100 and Tiny ImageNet with varying memory sizes \(M\). For our method, we report two sets of \((p,q,\#\textit{DAM},\#\textit{DAC},\#\textit{DAS})\), where \(\#\textit{DAM}\), \(\#\textit{DAC}\), \(\#\textit{DAS}\) are the number of DAM, DAC and DAS respectively. The lines corresponding to our method show that 1) using more augmentations can easily improve performances and 2) more improvement is achieved using DAA. Each experiment is run 5 times and the average value and standard deviation are reported. The best results are displayed in bold. Starred values are reported from the original paper.
## 5 Conclusion
In this paper, we addressed the problem of Unsupervised Online General Continual Learning from the perspective of improving memory usage, whereas current state-of-the-art methods focus on coping with catastrophic forgetting. We demonstrated that data augmentation can be enhanced for replay-based methods and proposed a new augmentation strategy, Domain Aware Augmentations, designed for continual learning. We showed the efficiency of focusing on memory usage rather than minimizing forgetting: with such an approach, we not only surpassed current unsupervised approaches to UOGCL but also narrowed the gap between supervised and unsupervised methods for Online General Continual Learning. Our experiments show that better memory utilization through augmentations implies higher computation costs; as these computations can be parallelized, the impact on training time remains manageable. Lastly, it should be pointed out that the proposed approach could be adapted to other memory-based methods with small changes, making it a promising strategy for continual learning.
|
2309.07550 | Naturalistic Robot Arm Trajectory Generation via Representation Learning | The integration of manipulator robots in household environments suggests a
need for more predictable and human-like robot motion. This holds especially
true for wheelchair-mounted assistive robots that can support the independence
of people with paralysis. One method of generating naturalistic motion
trajectories is via the imitation of human demonstrators. This paper explores a
self-supervised imitation learning method using an autoregressive
spatio-temporal graph neural network for an assistive drinking task. We address
learning from diverse human motion trajectory data that were captured via
wearable IMU sensors on a human arm as the action-free task demonstrations.
Observed arm motion data from several participants is used to generate natural
and functional drinking motion trajectories for a UR5e robot arm. | Jayjun Lee, Adam J. Spiers | 2023-09-14T09:26:03Z | http://arxiv.org/abs/2309.07550v1 | # Naturalistic Robot Arm Trajectory Generation via Representation Learning
###### Abstract
The integration of manipulator robots in household environments suggests a need for more predictable and human-like robot motion. This holds especially true for wheelchair-mounted assistive robots that can support the independence of people with paralysis. One method of generating naturalistic motion trajectories is via the imitation of human demonstrators. This paper explores a self-supervised imitation learning method using an autoregressive spatio-temporal graph neural network for an assistive drinking task. We address learning from diverse human motion trajectory data that were captured via wearable IMU sensors on a human arm as the action-free task demonstrations. Observed arm motion data from several participants is used to generate natural and functional drinking motion trajectories for a UR5e robot arm.
Keywords: Human-like Robot Motion, Self-Supervised Learning, Graph Representation Learning, Imitation Learning.
## 1 Introduction
For people with motion impairments, the ability to feed oneself is a major factor of independence [3]. Recently, wheelchair- or desk-mounted robotic manipulators have been implemented with these tasks in mind [1, 2]. In human-robot interaction (HRI), it has been observed that the human comfort and confidence may be increased by generating predictable and naturalistic motion paths [4, 9]. As such, we are aiming to add human-like arm motion to an assistive drinking task.
To generate human-like robot arm motion we collect human arm movement data using wearable IMUs. We then reconstruct action-free human arm trajectories to gain access to low-dimensional states, and use an autoregressive spatio-temporal graph neural network (GNN) to ingest this data in a self-supervised way. We learn internal model representations of human drinking dynamics that exploit the spatial and temporal relation between arm joints based on the Space-Time Separable Graph Convolutional Network (STS-GCN) [8]. By behaviour cloning (BC) from the human motion data collected via IMUs, we were able to generate diverse, human-like drinking robot arm motion that is functional across various bottle positions with heuristics to complete other subtasks in sequence.
In this work we have adapted the STS-GCN architecture from the human pose prediction community into an autoregressive GNN for self-supervised imitation learning for robotics, with the Mean Per Joint Position Error (MPJPE)
as the BC loss. As a result, the new system learns an internal model of the dynamics of naturalistic drinking motion from relatively sparse input data, making it suited to fewer (motion-captured) demonstrations. The resulting model is also more compact and thus better suited for implementation on physical hardware, and its predictions reach further ahead in time, making it suited to functional tasks.
## 2 Related Work
### Human-like Arm Motion Generation for Robots
It has been proposed that human-like behaviours of robotic manipulators can ensure safety, predictability, and social acceptance [4, 9]. Many research efforts have addressed various aspects of human-like robot motion planning. One popular approach is movement primitives (MPs), which decompose motion into a set of primitives that can be combined to generate complex movements and can be learned from human demonstrations [7]. We have adopted a self-supervised learning method that can generate diverse and generalisable human-like motion while learning an internal model of the dynamics with an autoregressive structure and without primitives. It is noted that our approach and MPs could potentially complement each other.
Human motion forecasting deals with the problem of predicting the 3D coordinates of \(V\) body joints for the future \(K\) frames, given the past \(T\) frames. A skeleton-based model of the human body may be used to form a graph structure, where each joint is a node [5, 6]. In [8], the STS-GCN model is introduced, which learns to encode the human body dynamics by factorising the spatio-temporal graph adjacency matrix into separate spatial and temporal adjacency matrices, focusing on the joint-joint and temporal relations. We modify this architecture to learn from relatively sparsely logged data by extending the model to train autoregressively with a self-supervised loss. The result is a more compact learned internal model of human motion dynamics that can predict much further in time. We also take an embodied approach that maps the generated human arm trajectory onto a real robot arm to complete functional tasks, as opposed to visualisations of simulated skeleton models. Unlike in [8], the input trajectory segment to our system is not the initial frames of a continuous action, but rather a preparatory motion to reach and grasp a bottle prior to the generated movement of bringing the bottle to a user's mouth.
Figure 1: Human drinking motion is captured using wearable IMUs, with the arm trajectory reconstructed to form an action-free demonstration. An autoregressive spatio-temporal GNN learns the motion dynamics from diverse drinking data to generate generalised naturalistic drinking motions which are scaled for a UR5e.
## 3 Methods
To collect human drinking motion data, 3 MetaMotionS+ IMUs _(MBientLab)_ are attached using Velcro straps along the participant's right arm, on the upper arm, forearm, and back of the hand. Euler angles are logged at 100 Hz and preprocessed to address discontinuities and noise. Five participants (two female, mean age of 23.2 years) each provided 10 drinking demonstrations for 6 discrete bottle positions on a 2-by-3 grid on a desk. Each recorded trajectory is discretely down-sampled to 150 samples and split into the reaching, drinking and returning phases shown in Fig. 1.
The GNN is shown in Fig. 2. The encoder has four STS-GCN layers with the input graph of \(T=30\), which learns the adjacency matrix of the input to highlight certain space-time edges with feature graphs. The decoder has five TCN layers to generate the output graph of \(K=30\) with 3D joint coordinates. The learned graph representations act as the internal model for the drinking dynamics to generate the subsequent motion segment given the input segment. The model is trained to minimise the MPJPE loss in Eq. 1 between the autoregressively generated 120-frame drinking trajectory and the ground-truth self-supervised label. This requires a recursive forwarding of its output to its input four times.
\[L_{MPJPE}=\frac{1}{VK}\sum_{k=T+1}^{T+K}\sum_{v=1}^{V}\left\|\hat{x}_{vk}-x_{ vk}\right\|_{2} \tag{1}\]
\(x_{vk},\hat{x}_{vk}\in R^{3}\) are the true and predicted joint \(v\) positions at frame \(k\). \(V\) is the number of nodes per frame. \(T\) and \(K\) are the number of input and output frames.
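As an illustration, the following sketch shows how the MPJPE of Eq. (1) and the four-step autoregressive rollout described above could be computed; `model` stands in for the trained STS-GCN encoder-decoder, the \((T,V,3)\)/\((K,V,3)\) array shapes are our assumption, and this is not the authors' code.

```python
import numpy as np

def mpjpe(pred, true):
    """Eq. (1): mean Euclidean error per joint, for arrays of shape (K, V, 3)."""
    return np.linalg.norm(pred - true, axis=-1).mean()

def rollout(model, seed, steps=4):
    """Autoregressive generation: feed each K=30-frame output back in as the next
    T=30-frame input; 4 recursions yield the 120-frame drinking trajectory."""
    segments, current = [], seed          # seed: (T, V, 3) reach-to-grasp segment
    for _ in range(steps):
        current = model(current)          # model: (T, V, 3) -> (K, V, 3)
        segments.append(current)
    return np.concatenate(segments, axis=0)
```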
As the generated human trajectory resides in the human workspace, we map and linearly scale the human wrist 3D trajectory to the robot arm's end-effector (EE) workspace, so that it safely reaches the user's mouth. Future work would integrate sensing solutions to deal with the user moving and with force interactions.
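One simple way to realise this mapping is a per-axis linear rescaling between bounding boxes of the two workspaces; the boxes themselves are assumptions of this sketch, not values from the paper.

```python
import numpy as np

def map_to_robot(wrist_xyz, human_box, robot_box):
    """Rescale a human wrist trajectory of shape (N, 3) from the demonstrator's
    workspace bounding box to the UR5e end-effector workspace, axis by axis."""
    h_min, h_max = np.asarray(human_box[0]), np.asarray(human_box[1])
    r_min, r_max = np.asarray(robot_box[0]), np.asarray(robot_box[1])
    t = (wrist_xyz - h_min) / (h_max - h_min)   # normalise into [0, 1]^3
    return r_min + t * (r_max - r_min)          # rescale into the robot workspace
```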
## 4 Results
A 6 DOF UR5e robot arm was used with a parallel jaw gripper adapted from the ROBOTIS Open-Manipulator X robot (Fig. 1). In Fig. 3 we compare the GNN trajectory with a typical joint-space inverse kinematics (IK) trajectory. Pronounced curves with hysteresis are present in the GNN trajectory; such hysteresis also appears in human reaching motions [9]. We also test our trained model on unseen bottle positions placed within the aforementioned 2-by-3 grid of bottle positions.
Figure 2: An overview of the autoregressive GNN adapted from the STS-GCN.
## 5 Conclusion and Future work
We have proposed a preliminary GNN-based self-supervised imitation learning framework, using human demos to generate human-like robot arm drinking motion from a reach-to-grab motion. In future work, this could be extended by multi-task learning and with a camera to observe scene obstacles for more Activities of Daily Living where human-like motion is beneficial for assistive robots.
|
2309.08663 | Implementing fault-tolerant non-Clifford gates using the [[8,3,2]] color
code | Quantum computers promise to solve problems that are intractable for
classical computers, but qubits are vulnerable to many sources of error,
limiting the depth of the circuits that can be reliably executed on today's
quantum hardware. Quantum error correction has been proposed as a solution to
this problem, whereby quantum information is protected by encoding it into a
quantum error-correcting code. But protecting quantum information is not
enough, we must also process the information using logic gates that are robust
to faults that occur during their execution. One method for processing
information fault-tolerantly is to use quantum error-correcting codes that have
logical gates with a tensor product structure (transversal gates), making them
naturally fault-tolerant. Here, we test the performance of a code with such
transversal gates, the [[8,3,2]] color code, using trapped-ion and
superconducting hardware. We observe improved performance (compared to no
encoding) for encoded circuits implementing non-Clifford gates, a class of
gates that are essential for achieving universal quantum computing. In
particular, we find improved performance for an encoded circuit implementing
the control-control $Z$ gate, a key gate in Shor's algorithm. Our results
illustrate the potential of using codes with transversal gates to implement
non-trivial algorithms on near-term quantum hardware. | Daniel Honciuc Menendez, Annie Ray, Michael Vasmer | 2023-09-15T18:00:02Z | http://arxiv.org/abs/2309.08663v1 | # Implementing fault-tolerant non-Clifford gates using the [[8,3,2]] color code
###### Abstract
Quantum computers promise to solve problems that are intractable for classical computers, but qubits are vulnerable to many sources of error, limiting the depth of the circuits that can be reliably executed on today's quantum hardware. Quantum error correction has been proposed as a solution to this problem, whereby quantum information is protected by encoding it into a quantum error-correcting code. But protecting quantum information is not enough, we must also process the information using logic gates that are robust to faults that occur during their execution. One method for processing information fault-tolerantly is to use quantum error-correcting codes that have logical gates with a tensor product structure (transversal gates), making them naturally fault-tolerant. Here, we test the performance of a code with such transversal gates, the [[8,3,2]] color code, using trapped-ion and superconducting hardware. We observe improved performance (compared to no encoding) for encoded circuits implementing non-Clifford gates, a class of gates that are essential for achieving universal quantum computing. In particular, we find improved performance for an encoded circuit implementing the control-control \(Z\) gate, a key gate in Shor's algorithm. Our results illustrate the potential of using codes with transversal gates to implement non-trivial algorithms on near-term quantum hardware.
## I Introduction
Quantum error correction (QEC) promises to unlock the full potential of quantum computing, by protecting fragile qubits from the effects of decoherence [1; 2; 3]. But it is not enough to merely preserve the quantum information stored in a qubit register, we also need to perform a universal set of logical gates in a fault-tolerant manner [4]. Logical gates in the Clifford group (the unitaries that map Pauli operators to Pauli operators) are often relatively straightforward to implement fault-tolerantly in a given QEC code, however they are not universal. In fact, no QEC code can have a transversal and universal set of logical gates [5]. To obtain a universal gate set we need an additional non-Clifford gate [6], but implementing gates from this class fault-tolerantly is often difficult, usually requiring complex procedures such as magic state distillation [7; 8].
Certain QEC codes with special structure have transversal non-Clifford gates, where a transversal gate is a gate that acts as a tensor product of unitaries that do not entangle different qubits in the same QEC code block. Examples of such gates include the transversal CNOT available in all CSS codes, and any gate acting as a tensor product of single-qubit unitaries. Transversal gates are naturally fault-tolerant as they do not spread errors within a code block.
There exists a family of codes known as triorthogonal codes [9] with transversal non-Clifford gates, implemented by tensor products of \(T=\operatorname{diag}\left(1,\exp(i\pi/4)\right)\) gates. Certain (generalized) triorthogonal codes have transversal entangling non-Clifford gates, the smallest of which (to our knowledge) is the [[8,3,2]] color code [10; 11], which has a transversal \(\operatorname{CCZ}=\operatorname{diag}(1,1,1,1,1,1,1,-1)\) gate. From a fault-tolerance perspective, it is particularly desirable to implement complex entangling gates using single-qubit gates, as single-qubit gates are often an order of magnitude less noisy than entangling gates in many hardware platforms [12; 13; 14; 15; 16; 17]. Using small codes to demonstrate fault-tolerant Clifford and non-Clifford operations has previously been suggested [18] and implemented in NMR [19; 20], trapped-ion [21; 22; 23; 24], and superconducting hardware [25; 26; 27].
Here, we perform experiments on superconducting and trapped-ion hardware platforms to compare the performance of the encoded gates of the [[8,3,2]] code with the same gates executed with no encoding. We find that the encoded gates perform better than their non-encoded counterparts in every case where the encoded gate is non-Clifford, even though the encoded circuits contain more entangling gates than the unencoded circuits. Notably, we observe improved performance for the CCZ gate, which is the dominant gate in circuits such as adders [28; 29] and the modular exponentiation used in Shor's algorithm [30; 31].
The remainder of this article is structured as follows. In Section II, we review the definition of the [[8,3,2]] code and its transversal logical gates. In Section III, we give fault-tolerant circuits for preparing encoded states of the [[8,3,2]] code and for logical measurements. In Section IV, we describe our experiments on quantum hardware and their results, and we conclude with Section V.
## II The [[8,3,2]] color code
The [[8,3,2]] color code is a stabilizer code [32], encoding 3 logical qubits into 8 physical qubits with distance 2 (meaning that it can detect any single-qubit error). It is convenient to define the code using a geometric representation, where the physical qubits reside at the vertices of a cube, as shown in Fig. 1. The stabilizer group is generated by an \(X\)-type operator acting on all the qubits, and by \(Z\)-type operators associated with the faces of the cube. Concretely, using the qubit indices in Fig. 1, the stabilizer group is
\[\begin{split}\mathcal{S}=\langle X^{\otimes 8},& Z_{0}Z_{1}Z_{2}Z_{3},Z_{4}Z_{5}Z_{6}Z_{7},\\ & Z_{0}Z_{1}Z_{4}Z_{5},Z_{0}Z_{2}Z_{4}Z_{6}\rangle,\end{split} \tag{1}\]
where \(Z_{i}\) denotes a Pauli \(Z\) operator acting on qubit \(i\) etc. We note that the stabilizer generators in Eq. (1) are either \(X\)-type or \(Z\)-type, meaning that the [[8,3,2]] code is a CSS code [33, 34].
The logical operators of the [[8,3,2]] code also have a geometric interpretation. Logical \(X\) operators are associated with the faces of the cube, and logical \(Z\) operators with the edges of the cube. We can choose the following basis of logical Pauli operators
\[\begin{split}\overline{X}_{1}=X_{0}X_{1}X_{2}X_{3},& \overline{Z}_{1}=Z_{0}Z_{4},\\ \overline{X}_{2}=X_{0}X_{1}X_{4}X_{5},&\overline{Z }_{2}=Z_{0}Z_{2},\\ \overline{X}_{3}=X_{0}X_{2}X_{4}X_{6},&\overline{Z }_{3}=Z_{0}Z_{1},\end{split} \tag{2}\]
where we use overlines to distinguish operators acting on the logical qubits from operators acting on the physical qubits.
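As a quick sanity check of Eqs. (1) and (2) (ours, not part of the paper), one can verify over GF(2) that every generator pair commutes and that \(\overline{X}_{i}\) anticommutes with \(\overline{Z}_{j}\) exactly when \(i=j\), using the fact that an \(X\)-type and a \(Z\)-type Pauli anticommute iff their supports overlap on an odd number of qubits:

```python
import numpy as np

def supp(qubits):
    v = np.zeros(8, dtype=int)
    v[list(qubits)] = 1
    return v

x_stabs = [supp(range(8))]                                   # X on all 8 qubits
z_stabs = [supp(f) for f in ([0,1,2,3], [4,5,6,7], [0,1,4,5], [0,2,4,6])]
log_x = [supp([0,1,2,3]), supp([0,1,4,5]), supp([0,2,4,6])]  # Xbar_1..3
log_z = [supp([0,4]), supp([0,2]), supp([0,1])]              # Zbar_1..3

overlap = lambda a, b: int(a @ b) % 2
# stabilizer generators commute with each other and with all logical operators
assert all(overlap(x, z) == 0 for x in x_stabs for z in z_stabs + log_z)
assert all(overlap(z, x) == 0 for z in z_stabs for x in log_x)
# logical Xbar_i and Zbar_j anticommute exactly when i == j
assert all(overlap(log_x[i], log_z[j]) == (i == j) for i in range(3) for j in range(3))
```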
The [[8,3,2]] code is notable for having a non-Clifford transversal gate, CCZ implemented by \(T\) and \(T^{\dagger}\) gates. Specifically,
\[\overline{\text{CCZ}}=T_{0}T_{1}^{\dagger}T_{2}^{\dagger}T_{3}T_{4}^{\dagger}T _{5}T_{6}T_{7}^{\dagger}. \tag{3}\]
This gate again has a geometric interpretation: vertices and edges of the cube form a bipartite graph and CCZ is implemented by applying \(T\) to (the qubits on) one set of the vertices and \(T^{\dagger}\) to the other. The transversality of CCZ and Pauli \(X\) imply that the [[8,3,2]] code also has transversal \(\text{CZ}=\text{diag}(1,1,1,-1)\) gates, as follows
\[\begin{split}\overline{\text{CZ}}_{12}=S_{0}S_{2}^{\dagger}S_{4}^ {\dagger}S_{6},\\ \overline{\text{CZ}}_{13}=S_{0}S_{1}^{\dagger}S_{4}^{\dagger}S_{5 },\\ \overline{\text{CZ}}_{23}=S_{0}S_{1}^{\dagger}S_{2}^{\dagger}S_{3}, \end{split} \tag{4}\]
where \(S=T^{2}\) and \(\text{CZ}_{ij}\) acts on logical qubits \(i\) and \(j\).
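The claim in Eq. (3) can be checked numerically without building the full code space: each logical basis state \(|abc\rangle\) is an equal superposition of a bit string \(v_{abc}\), built from the logical \(X\) supports of Eq. (2), and its complement, and the diagonal gate of Eq. (3) multiplies a bit string by \(\exp(i\pi(n_{T}-n_{T^{\dagger}})/4)\). The following check is ours, not from the paper; the bit-ordering convention (qubit 0 as the most significant bit) is an assumption of the sketch.

```python
import numpy as np

X1, X2, X3 = 0b11110000, 0b11001100, 0b10101010     # supports of the logical X operators
T_QUBITS, TDAG_QUBITS = [0, 3, 5, 6], [1, 2, 4, 7]  # T / T-dagger pattern of Eq. (3)

def phase(v):
    bit = lambda q: (v >> (7 - q)) & 1
    n = sum(bit(q) for q in T_QUBITS) - sum(bit(q) for q in TDAG_QUBITS)
    return np.exp(1j * np.pi / 4 * n)

for a in (0, 1):
    for b in (0, 1):
        for c in (0, 1):
            v = (a * X1) ^ (b * X2) ^ (c * X3)       # one branch of the codeword
            want = -1 if (a and b and c) else 1       # phase applied by logical CCZ
            assert np.isclose(phase(v), want)         # both branches must agree
            assert np.isclose(phase(v ^ 0b11111111), want)
```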
## III Fault-tolerant circuits
For an error-detecting code such as the [[8,3,2]] code, we say that a circuit is fault-tolerant if any single-qubit error on the input state or an error at any single location in the circuit can at worst lead to a detectable error on the output state. A circuit location can be a state preparation, gate, or measurement. We need only consider Pauli errors due to error discretization [35]. And we note that as the [[8,3,2]] code is a CSS code, it is sufficient to analyse \(X\) and \(Z\) errors independently. We remark that the logical CCZ and CZ gates discussed in Section II are transversal and are therefore trivially fault-tolerant. We also need fault-tolerant circuits for logical measurement and logical state preparation, and we now discuss each of these in turn.
As the [[8,3,2]] code is a CSS code, we can do a fault-tolerant measurement of the logical qubits in the \(X\) or \(Z\) basis by measuring all of the physical qubits in the \(X\) or \(Z\) basis, respectively, and processing the classical outcomes [35]. In the case of an error-detecting code such as the [[8,3,2]] code, the classical processing is especially simple: we simply discard any measurement result that corresponds to a state that is not a \(+1\) eigenvalue of the stabilizers. For example, when measuring in the \(X\) basis we accept any result whose parity is even, i.e., a \(+1\) eigenstate of \(X^{\otimes 8}\). This is fault-tolerant because single-qubit errors before the measurements are detectable by definition, and any single measurement error is equivalent to a single-qubit error before the measurement.
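Concretely, the accept/reject rule described above amounts to a few lines of classical post-processing; the bit-string convention (characters ordered \(q_{0}\ldots q_{7}\)) and the toy counts below are illustrative assumptions, not data from our experiments.

```python
Z_FACES = [[0, 1, 2, 3], [4, 5, 6, 7], [0, 1, 4, 5], [0, 2, 4, 6]]   # Eq. (1)

def accept_x_basis(bits):
    """X-basis readout: keep a shot iff the total parity is even (+1 for X on all qubits)."""
    return bits.count('1') % 2 == 0

def accept_z_basis(bits):
    """Z-basis readout: keep a shot iff every face stabilizer has even parity."""
    return all(sum(int(bits[q]) for q in face) % 2 == 0 for face in Z_FACES)

counts = {'00000000': 480, '00000001': 21, '11111111': 499}           # toy example
kept = {s: n for s, n in counts.items() if accept_z_basis(s)}          # drops '00000001'
```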
### GHZ state preparation
First we consider a fault-tolerant circuit for preparing the logical GHZ state, \(|\text{GHZ}\rangle=(|000\rangle+|111\rangle)/\sqrt{2}\). Our circuit (shown in Fig. 2) factorizes into two independent and identical sub-circuits acting on qubits 0, 3, 5, 6 and qubits 1, 2, 4, 7 (the two bipartite sets discussed in Section II). The [[8,3,2]] code can detect any
Figure 1: Geometric representation of the [[8,3,2]] code. (a) The physical qubits reside at the vertices of the cube. (b) \(Z\)-type stabilizers are associated with faces, for example the blue face has an associated stabilizer \(Z_{0}Z_{1}Z_{2}Z_{3}\). (c) The \(X\)-type stabilizer acts on all the qubits.
Figure 2: Fault-tolerant circuit for preparing the \(|\text{GHZ}\rangle\) state in the [[8,3,2]] code.
weight \(\leq 3\) \(X\) error and so we only need to consider the four-qubit errors \(X_{0}X_{3}X_{5}X_{6}\) and \(X_{1}X_{2}X_{4}X_{7}\). However, each of these errors is in fact a logical \(\overline{X}_{1}\overline{X}_{2}\overline{X}_{3}\) operator and so leaves the target \(|\)GHZ\(\rangle\) state invariant. The only possible \(Z\) errors are weight one (detectable) and weight two (non-detectable). However, one can verify that all the non-detectable errors have trivial action on the target \(|\)GHZ\(\rangle\) state. For example, the first CNOT could fail giving a \(Z_{1}Z_{2}\) error, but this implements a logical \(\overline{Z}_{2}\overline{Z}_{3}\) operator (see Eq. (2)) and hence leaves the target \(|\)GHZ\(\rangle\) state invariant.
### \(|\)+++\(\rangle\) state preparation
Next, we provide a fault-tolerant circuit for preparing the \(|\)+++\(\rangle\) state, shown in Fig. 3. In this circuit, the potentially problematic errors are those that can propagate through the CNOT gates. Consider, for example, the CNOT gates with qubit 0 as the control. The possible multi-qubit \(X\) errors that can arise from these gates are
\[\begin{split}& X_{0}X_{3}\quad(\text{detectable}),\\ & X_{0}X_{2}X_{3}\quad(\text{detectable}),\\ & X_{0}X_{1}X_{2}X_{3}\quad(\overline{X}_{1}),\end{split} \tag{5}\]
where the only non-detectable error has trivial action on the target encoded state. The same is true for the other groups of CNOT gates with the same target. Certain \(Z\) errors can also propagate through CNOT gates. For example, consider the CNOT gates with qubit 1 as the target. The possible multi-qubit \(Z\) errors that can arise from these gates are
\[\begin{split}& Z_{1}Z_{a_{0}}\quad(\text{detectable}),\\ & Z_{1}Z_{7}Z_{a_{0}}\quad(\text{detectable}),\\ & Z_{1}Z_{6}Z_{7}\quad(\text{detectable}),\\ & Z_{1}Z_{6}Z_{7}Z_{a_{0}}\quad(\text{detectable}),\\ & Z_{0}Z_{1}Z_{6}Z_{7}\quad(\text{stabilizer}).\end{split} \tag{6}\]
The purpose of the flag qubit [36], \(a_{0}\), is to make the error \(Z_{1}Z_{7}=\overline{Z}_{1}\overline{Z}_{2}\) detectable. Similarly, the flag qubits \(a_{1}\) and \(a_{2}\) catch the errors \(Z_{2}Z_{7}\), \(Z_{3}Z_{6}\) and \(Z_{4}Z_{6}\).
## IV Experimental results
We investigate the performance of circuits comprised of three parts: state preparation, a transversal logical gate, and logical measurement.
For the state preparation part, we consider either \(|\)GHZ\(\rangle\) or \(|\)+++\(\rangle\) state preparation, using the circuits described in Section III. For the logical gate part, we consider one of the 16 possible products of the transversal logical CCZ, CZ\({}_{12}\), CZ\({}_{13}\) and CZ\({}_{23}\) gates available in the [[8,3,2]] code. For the logical measurement part, we consider transversal \(Z\) basis and \(X\) basis measurements. In the encoded case, the fault-tolerant measurement involves post-selection and we provide the post-selection rates for each of the experiments in Appendix B.2.
We test these circuits on two quantum computers: ibmq_mumbai, a 27-qubit device developed by IBM [37], and ionq-11q, an 11-qubit device developed by IonQ [13]. The IonQ device has all-to-all qubit connectivity, whereas
Figure 3: Fault-tolerant circuit for preparing the state \(|\)+++\(\rangle\) in the [[8,3,2]] code. The qubits \(a_{1}\), \(a_{2}\) and \(a_{3}\) are flag qubits whose purpose is to detect certain \(Z\) errors that could cause logical errors. If we measure the three flag qubits to be in the \(|0\rangle\) state then we accept the output.
the IBM device has "heavy-hexagon" qubit connectivity [38], see Appendix A. We only consider \(\ket{\text{GHZ}}\) state preparation on the IBM device, as our circuit for preparing logical \(\ket{+++}\) states (Fig. 3) is only implementable on the IBM device with SWAP gates, and as a result is no longer fault-tolerant. We compare the performance of the encoded circuits against the performance of the bare (no encoding) circuits, using the statistical distance of the output distribution from the ideal output distribution as our metric.
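For reference, the metric and error bars can be computed as below; this is a sketch of the stated procedure (total variation distance plus a bootstrap over shots) with function names of our own choosing, not the code used for the experiments.

```python
import numpy as np

def statistical_distance(counts, ideal):
    """Total variation distance between a measured counts dict and an ideal
    probability distribution over bit strings."""
    shots = sum(counts.values())
    keys = set(counts) | set(ideal)
    return 0.5 * sum(abs(counts.get(k, 0) / shots - ideal.get(k, 0.0)) for k in keys)

def bootstrap_std(counts, ideal, resamples=1000, seed=0):
    """Standard deviation of the statistical distance under resampling of the shots."""
    rng = np.random.default_rng(seed)
    outcomes = np.array([k for k, n in counts.items() for _ in range(n)])
    dists = []
    for _ in range(resamples):
        sample = rng.choice(outcomes, size=outcomes.size, replace=True)
        keys, cts = np.unique(sample, return_counts=True)
        dists.append(statistical_distance(dict(zip(keys, cts)), ideal))
    return float(np.std(dists))
```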
We show the results for \(\ket{\text{GHZ}}\) state preparation and \(X\) basis measurement in Fig. 4. For both devices and for every transversal gate, we observe improved performance of the encoded version of the circuit. The results for \(Z\) basis measurement are qualitatively similar; see Appendix B.1.
We show the results for \(\ket{+++}\) state preparation and \(X\) basis measurement in Fig. 5. The bare version of the circuit performs better for transversal Clifford gates, whereas the encoded version performs better for transversal non-Clifford gates. Notably, we observe lower statistical distances for the preparation of the encoded magic state CCZ\(\ket{+++}\). We can attribute the difference between the results for Clifford and non-Clifford gates to the compilation of the three-qubit CCZ gate into a circuit involving multiple two-qubit gates on the IonQ device [39]. And the discrepancy between the results for \(\ket{+++}\) and \(\ket{\text{GHZ}}\) state preparation is expected, given
Figure 4: Performance of bare (unencoded) and encoded versions of circuits for preparing states of the form \(g\ket{\text{GHZ}}\), where \(g\) is a transversal gate of the [[8,3,2]] code. In each case, we measure the qubits in the \(X\) basis and we plot the statistical distance of the observed measurement distribution from the ideal distribution. The upper two plots show the data for ionq-11q, where we ran 1024 shots for each circuit, and the lower two plots show the data for ibmq_mumbai where we ran 10,000 shots for each circuit. In both cases, the error bars are calculated using bootstrap resampling.
that the bare circuit for preparing the former requires only single-qubit gates and the latter requires two entangling gates. We again relegate the results for \(Z\) basis measurement to Appendix B.1, as they are qualitatively similar to the results for \(X\) basis measurement.
## V Discussion
We have shown that using the [[8,3,2]] code allows us to prepare certain (encoded) states more accurately (as measured by the statistical distance) than using the native gates to prepare the same (unencoded) states. We observe this advantage across a range of circuits on two different hardware platforms: IBM's superconducting qubits and IonQ's trapped-ion qubits. The all-to-all connectivity of the IonQ device that we used enabled us to run more circuits fault-tolerantly than we could on the IBM device. In particular, we were able to interrogate the performance of the [[8,3,2]] code for preparing magic states of the form \(g\,|+++\rangle\), where \(g\in\mathrm{CCZ}\times\{I,\mathrm{CZ}_{12},\mathrm{CZ}_{13},\mathrm{CZ}_{23}\}\). We observe an improved performance for the encoded version of circuits for preparing these states, illustrating the utility of codes like the [[8,3,2]] code, where multi-qubit non-Clifford gates can be applied using single-qubit operations.
The [[8,3,2]] code is one example of a family of codes, known as generalized triorthogonal codes [40, 41, 42], with transversal multi-qubit \(Z\) rotations implemented by single-qubit gates. In future work it would be interesting to test the performance of larger codes in this family with higher distance. For example, Ref. [42] gives a [[64,6,4]] code with a transversal CCZ\({}^{\otimes 2}\) gate, and it is possible that smaller examples could be found using the techniques of [43, 44, 45].
As with any stabilizer code, the transversal gates of the [[8,3,2]] code do not form a universal set of gates. Therefore, in order to use the [[8,3,2]] code or a similar code to implement an actual quantum algorithm, we would need to supplement the transversal gates with additional fault-tolerant gates in order to obtain a universal gate set. One possibility worth considering would be to explore the implementation of logical gates via permutations of the physical qubits [46, 47], which can be fault-tolerant if implemented by qubit relabelling or physically moving the qubits.
## Acknowledgements
Research at Perimeter Institute is supported in part by the Government of Canada through the Department of Innovation, Science and Economic Development Canada and by the Province of Ontario through the Ministry of Colleges and Universities. We acknowledge the support of the Natural Sciences and Engineering Research Council of Canada (NSERC). We thank IonQ for giving us access to their hardware through the IonQ Research Credits Program. We acknowledge CMC Microsystems for facilitating this research, specifically through their member access to the IBM Quantum Hub at PINQ\({}^{2}\). We thank Benjamin Brown, Joel Klassen and James Seddon for useful discussions. We thank Raymond Laflamme for comments on an earlier version of this manuscript.
_Note added_: We would like to bring the reader's attention to a related work by Wang, Simsek and Criger [48], which appears in the same arXiv posting.
Figure 5: Performance of bare (unencoded) and encoded versions of circuits for preparing states of the form \(g\,|+++\rangle\), where \(g\) is a transversal gate of the [[8,3,2]] code. In each case, we measure the qubits in the \(X\) basis and we plot the statistical distance of the observed measurement distribution from the ideal distribution. Each data point represents 1024 shots of the circuit performed on ionq-11q, and we use bootstrap resampling to calculate the error bars. |
2309.11332 | Software Compartmentalization Trade-Offs with Hardware Capabilities | Compartmentalization is a form of defensive software design in which an
application is broken down into isolated but communicating components.
Retrofitting compartmentalization into existing applications is often thought
to be expensive from the engineering effort and performance overhead points of
view. Still, recent years have seen proposals of compartmentalization methods
with promises of low engineering efforts and reduced performance impact. ARM
Morello combines a modern ARM processor with an implementation of Capability
Hardware Enhanced RISC Instructions (CHERI) aiming to provide efficient and
secure compartmentalization. Past works exploring CHERI-based
compartmentalization were restricted to emulated/FPGA prototypes.
In this paper, we explore possible compartmentalization schemes with CHERI on
the Morello chip. We propose two approaches representing different trade-offs
in terms of engineering effort, security, scalability, and performance impact.
We describe and implement these approaches on a prototype OS running bare metal
on the Morello chip, compartmentalize two popular applications, and investigate
the performance overheads. Furthermore, we show that compartmentalization can
be achieved with an engineering cost that can be quite low if one is willing to
trade off on scalability and security, and that performance overheads are
similar to other intra-address space isolation mechanisms. | John Alistair Kressel, Hugo Lefeuvre, Pierre Olivier | 2023-09-20T14:07:20Z | http://arxiv.org/abs/2309.11332v2 | # Software Compartmentalization Trade-Offs
###### Abstract.
Compartmentalization is a form of defensive software design in which an application is broken down into isolated but communicating components. Retrofitting compartmentalization into existing applications is often thought to be expensive from the engineering effort and performance overhead points of view. Still, recent years have seen proposals of compartmentalization methods with promises of low engineering efforts and reduced performance impact. ARM Morello combines a modern ARM processor with an implementation of Capability Hardware Enhanced RISC Instructions (CHERI) aiming to provide efficient and secure compartmentalization. Past works exploring CHERI-based compartmentalization were restricted to emulated/FPGA prototypes.
In this paper, we explore possible compartmentalization schemes with CHERI on the Morello chip. We propose two approaches representing different trade-offs in terms of engineering effort, security, scalability, and performance impact. We describe and implement these approaches on a prototype OS running bare metal on the Morello chip, compartmentalize two popular applications, and investigate the performance overheads. Furthermore, we show that compartmentalization can be achieved with an engineering cost that can be quite low if one is willing to trade off on scalability and security, and that performance overheads are similar to other intra-address space isolation mechanisms.
Compartmentalization, Hardware Capabilities
Software compartmentalization is one of the ways to enforce the principle of least privilege (Mireire et al., 2016). Compartmentalization enforces isolation between components of a software system, granting compartments only the minimal privileges they need to function. If a component of a compartmentalized system is subverted, the damage the attacker can do is limited to the privileges granted to the compromised compartment (Kressel et al., 2017; O'Malley et al., 2017). Contrary to many other protection techniques, compartmentalization allows defending against yet unknown/future vulnerabilities in existing code bases (Kressel et al., 2017). Many approaches have been proposed in recent years, utilizing different hardware and software isolation mechanisms to compartmentalize libraries (Kressel et al., 2017; O'Malley et al., 2017; Kressel et al., 2017; Kressel et al., 2017; Kressel et al., 2017; Kressel et al., 2017; Kressel et al., 2017; Kressel et al., 2017; Kressel et al., 2017; Kressel et al., 2017) as well as smaller pieces of code such as functions (Kressel et al., 2017; Kressel et al., 2017; Kressel et al., 2017; Kressel et al., 2017).
Morello (Morello, 2018) is an extension to the ARMv8-A architecture implementing the Capability Hardware Enhanced RISC Instructions (CHERI), designed specifically to enable high-performance and scalable compartmentalization (Kressel et al., 2017; Kressel et al., 2017). This is achieved by enforcing compartment bounds on most memory loads and stores in hardware, and letting communicating compartments securely lend memory to each other using so-called hardware capabilities, a mechanism similar to fat pointers (Kressel et al., 2017) implemented in hardware to restrict accesses to shared memory at a fine (byte-level) granularity.
When retrofitting compartmentalization to existing code bases, a key challenge is keeping refactoring costs low (Kressel et al., 2017). This is crucial not only for reducing the cost of deployment, but also to reduce the number of errors made during the compartmentalization, which can undermine its efficiency or security guarantees (Kressel et al., 2017). Work exploring compartmentalization with CHERI is so far limited to solutions with relatively high porting costs (Kressel et al., 2017; Kressel et al., 2017; Kressel et al., 2017), that require a non-negligible reworking to the code corresponding to inter-compartment communications. These existing works are further limited to MIPS/RISC-V emulated or FPGA prototypes, making it hard to understand the real-world performance one would observe on an ASIC processor. In that context, the recent availability of Morello raises the research questions we tackle in this paper:
1. Which compartment models are possible using Morello, using what programming abstractions, at which refactoring costs?
2. How does Morello's compartmentalization performance and security guarantees compare to other intra-address space compartmentalization mechanisms (e.g., MPK)?
For this purpose, we adapt an existing compartmentalization-oriented library OS (libOS), FlexOS (Zhou et al., 2017), to Morello, and extend it by developing two compartmentalization programming abstractions relying on hardware capabilities, each representing a particular trade-off in terms of porting costs, security guarantees, and scalability to multiple compartments. The first is based on manual sandboxing as advocated by CHERI's designers (Morello et al., 2017), with every shared buffer protected by a capability. Further, we propose a second approach relying on a single region of shared data between two mutually distrusting compartments. These abstractions are used to compartmentalize popular open source software, SQLite (Bordes et al., 2016) and LibSodium (Bordes et al., 2016), at different isolation granularities: functions and libraries. We evaluate the porting costs, degree of security of these solutions, and further evaluate their performance when executing on the Morello chip, comparing these results to that of another intra-address space isolation mechanism: Intel MPK. We show that manual porting as advocated by CHERI's designers offers good performance, a good level of security, and scales well to high numbers of compartments. However, it can require a significant engineering effort when applied to large compartments. The second approach trades off security guarantees and scalability to more than 2 compartments, to achieve low porting costs, requiring only annotations indicating shared data at declaration time in the code.
## 2. CHERI Hardware Capabilities
A hardware _capability_(Morello et al., 2017) is an architectural data type used to represent a contiguous region of virtual memory with byte-level granularity. CHERI hardware capabilities define a base address, bounds and permissions information. The capability can be dereferenced to access the memory it refers to, with the hardware performing bounds and permissions checks. Capabilities are made unforgeable by a validity tag, stored separately, and the restriction that capabilities can only be used/manipulated via capability-aware instructions.
When using Morello for compartmentalization (commonly referred to as _hybrid_ mode), all compartments share a single address space, and the vast majority of the program's machine code is unchanged, consisting of traditional ARMv8 instructions. Every memory access made by a core is constrained by two global capabilities that delimit the memory regions the currently executing compartment can access: the _Program Counter Capability_ (PCC) and the _Default Data Capability_ (DDC). There is one PCC and one DDC register per core holding these, restricting the ARMv8 code's ability to perform instruction (PCC) and data (DDC) memory accesses within the relevant bounds. Additional capabilities can be used for sharing data between compartments. That way, the caller is lending access to the smallest region of memory (the data structure) needed by the callee. This is secure due to the fine-grained bounds enforcement and efficient as no data copy happens. Capabilities are also used to control exceptionless security domains (compartment) switches, realized by a privileged security monitor (referred to as the _switcher_ in this paper). This is achieved through a special type of capability referred to as _sealed_, which is immutable and nondereferenceable, and can only be unsealed via a jump to a pre-determined instruction in the switcher.
## 3. Design
We propose two design approaches suitable for compartments on Morello, guided by four main considerations: the engineering effort to retrofit compartmentalization in existing software, compartmentalization's performance overhead, the security of the given approach, and how it scales to many compartments.
_Engineering effort_ represents the effort to retrofit isolation into legacy software, or to write new compartmentalized software, using a given abstraction. It consists of marking compartment boundaries and shared data (Zhou et al., 2017), e.g. with annotations, but also sometimes redesigning part of the software with security in mind (Morello et al., 2017). Ifhigh, it can be a significant barrier to the adoption of compartmentalization, as it increases costs and development complexity (Zhou et al., 2017). _Performance overheads_ are another factor hindering the popularity of compartmentalization (Zhou et al., 2017; Sompan et al., 2017) and should be minimized. For example, the recent DARPA Compartmentalization and Privilege Management call (Bordes et al., 2016) specifies this overhead to be <5% for application-level compartmentalization with function granularity compartments. Higher overhead is allowed for proportionally higher security gains. _Security_ is a spectrum of guarantees over an uncompartmentalized system. The precise security requirements must be judged against the other requirements to strike an acceptable balance. As a minimum, solutions must enforce strong isolation between compartments and provide access to only a subset of data which has been selectively shared for communication. Finally, the _scalability_ of a solution denotes its capacity to efficiently scale to many compartments.
### Design Overview
In line with Morello/CHERI single address space compartmentalization model (Morello et al., 2017), and with many existing works (Morello et al., 2017; Zhou et al., 2017; Zhou et al., 2017; Sompan et al., 2017; Sompan et al., 2017; Sompan et al., 2017), we assume a libOS-based environment in which two or more user space and/or kernel components share a single address space and are isolated from each other. Compartments are defined statically at build time: each compartment is given a pair of memory regions, each contiguous in the virtual address space, to hold its private 1) code and 2) data. Global compartment capabilities constraining each compartment's memory accesses to the corresponding pair of regions are initialized at boot time. The static memory layout of the application and the systems software's dynamic memory allocation primitives (malloc/mmap/brk) are designed to allocate data accordingly. All data declared in the scope of a given compartment is treated as private to that compartment, unless it is specifically annotated as shared. Shared data is managed differently in the two abstractions we propose, and this is presented in detail in the next subsection.
```
1  void foo(mystruct *stat, int index) {             // Original
2      char *str = stat->str;
3      bar(str);
4      float *element = stat->array[index];
5      float *next = element + 1;
6  }
7
8  void foo(mystruct * __capability stat, int index) {   // Ported
9      char * __capability str = stat->str;
10     bar((__cheri_fromcap char *)str);
11     float * __capability element = stat->array[index];
12     float * __capability next = stat->array[index + 1];
13 }
```
Listing 1: Example of a function annotated to use capabilities.
Gates are inserted in the code in place of function calls, where these calls now cross compartment boundaries. They invoke the switcher, which performs, on the relevant CPU core, the security domain switch by switching the stack and the global compartment code/data capability registers. Finally, a trampoline is called to enable a fast return to the caller compartment from capability-unaware code, with less overhead than re-invoking the switcher. The switcher is trusted and privileged, hence its code and data (including the global capabilities for all compartments) cannot be accessed by the compartments directly; instead they must use a sealed capability (§2). The switching mechanism is kept as lightweight as possible to minimize overhead while preserving strong security. We define the trusted computing base as the switcher, gates, trampoline, early boot code (including capability initialization code), memory manager, scheduler and interrupt handler.
### Two Approaches To Sharing Data
Compartments cannot access memory outside the regions constrained by their _DDC/PCC_. This raises the issue of how to selectively share data between compartments, and how to do so efficiently (i.e. without data copy). We discuss two approaches to data sharing, and reason about their performance, security, and scalability properties.
#### 3.2.1. Approach 1: Replacing Pointers with Capabilities
With CHERI, certain pointers can be transformed into fine-grained capabilities encompassing only the pointed data structure(s). This is the standard way to manage shared data as designed by the CHERI/Morello authors (Morello, 2017). A compartment _C1_ wishing to share a subset of its dedicated memory region with another compartment _C2_ can pass such a fine-grained capability _cap_ as parameter/return value of a gate. Morello is designed so that the memory access made by _C2_ dereferencing _cap_ will not be subject to _C2_'s global capability: _C2_ will thus be able to access _C1_'s memory region, but only the few bytes represented by _cap_.
_Engineering Cost._ The sandboxing effort for a legacy function foo is illustrated in Listing 1. As described earlier, pointer parameters (stat, line 8) and any pointers created out of a capability (str, element and next, lines 9, 11 and 12) must be transformed into capabilities with annotations. Capabilities flowing out of the compartment must be changed back into pointers with a cast (line 10). Capability monotonicity must also be respected, i.e. a capability cannot extend the bounds of the capability it is derived from: deriving next from element+1 (line 5) is forbidden because it would refer to memory outside the bounds of element, so that code must be redesigned. Other types of changes may be needed depending on the ported code (Zhou et al., 2017). In the general case, we hint that such manual porting may only be amenable to small-scale scenarios (e.g. sandboxing one or a few functions), because the engineering cost of rewriting pointers into capabilities becomes too high as compartments' sizes increase.
_Trust Model._ Engineering costs can be kept low with this approach in scenarios with 1) small compartment sizes, which limits the amount of code rewriting within the compartments; and 2) no capability flowing outside the compartment, to avoid costly rewriting of the data flow in the rest of the application. This fits very well function sandboxing scenarios. The isolated function represents a distrusted compartment, and is isolated from the rest of the system. Pointer arguments entering the compartment are replaced with capabilities. The rest of the system is trusted and can access any memory within the sandbox compartment.
_Performance, Security, and Scalability._ Data is shared through capabilities, leading to a low performance impact (no copy or marshaling). In terms of security, the sandbox is isolated by the _PCC/DDC_, constraining all non-capability operations made by the sandbox, and shared data is tightly bounded by argument capabilities, resulting in strong isolation. Regarding scalability, this approach scales to an unlimited number of compartments (e.g. function sandboxes) with a constant porting effort (porting complexity does not grow with the number of compartments).
#### 3.2.2. Approach 2: Overlapping Shared Region
This approach drops the fine-grained capabilities to rely on a single region of shared data. The _DDC_ bounds of communicating compartments are extended to cover this region, so both can access shared data, as illustrated in Figure 1. Since capability bounds must cover contiguous memory, the shared data region is located between two compartments in memory. The linker and dynamic memory allocation primitives ensure that shared data is correctly placed in the relevant memory.
_Engineering Cost._ With this approach shared data needs to be marked as such with annotations in the source code.
Figure 1. Compartment bounds with shared memory regions. The compartment bounds overlap to encompass shared data.
```
void foo() {                    // Original
    int x;
    bar(&x);
}

void foo() {                    // Ported
    int _shared x;
    __gate(bar, &x, compartment1);
}
```
Listing 2: Example of annotations for shared regions.
The function calls at compartment boundaries are similarly annotated. This is illustrated on Listing 2, where foo and bar are placed in different compartments. Code transformations use these annotations to automatically allocate shared data in memory which is accessible from all compartments, and to instantiate gates. The engineering effort of this approach is relatively low, and significantly lower than replacing pointers with capabilities, as data must only be annotated at declaration and/or allocation sites.
_Trust Model._ Mutual distrust is enforced between compartments, with none able to access the others' private data.
_Performance, Security, and Scalability._ Unlike Approach 1, shared stack variables must be allocated on a heap in the shared region, resulting in an additional allocation cost. Techniques to address this problem like data shadow stacks (Krishnan, 2015) cannot be applied as-is due to the requirements of the _DDC_. Nevertheless, we expect performance to be comparable to the previous approach. Regarding security, isolation of memory accesses is also enforced by the compartment _PCC_ and _DDC_. Data sharing is however made at a coarser granularity, with the entire shared memory region accessible to both compartments at all times. This trades off security in two ways: 1) bounds are not tight to individual objects, thus not offering CHERI's spatial safety for shared objects; 2) even assuming no revocation in Approach 1, the number of objects effectively accessible by each compartment at any execution time will remain larger, resulting in more potential for compartment interface vulnerabilities (Krishnan, 2015). In terms of scalability, this approach only scales to a small number of compartments: indeed one can create only a single overlapping region per pair of communicating compartments, hence a scenario with e.g. 3 compartments wishing to access a shared data structure is not possible.
## 4. Implementation
We have selected FlexOS (Krishnan, 2015; Krishnan, 2015), a compartmentalization-focused library operating system, to implement a prototype system. FlexOS originally supported isolation with Intel Memory Protection Keys and Extended Page Tables. The OS allows easy extension to new isolation mechanisms. Further, its design is based on the Unikraft (Unikraft, 2015) unikernel, so it inherits its high performance, small attack surface, and good compatibility with popular applications. We ported FlexOS to the Morello platform, and implemented on top of it the two compartmentalization abstractions described earlier, in a total of about 2200 lines of code. Below we give implementation details regarding the system's initialization, the compartments' structure, and the security domain switching process.
### Compartment Structure
Compartments are defined at build time in a configuration file provided to the FlexOS build tool. At link time, isolated data are placed into their respective, separate, non overlapping ELF section with the help of a custom linker script automatically generated by the toolchain. Non-isolated data are placed into a default compartment. The linker script also reserves space for dynamically-allocated data (stack and heap) in each compartment's memory.
The compartment switcher's code and data is isolated from all other compartments' code. This is done to control access to the switcher, which is a privileged entity. In addition, compartment capability pairs (one _DDC_ and _PCC_ pair per compartment) are stored in memory which is not accessible from any compartment but that of the switcher. This is done to avoid a compartment arbitrarily granting itself access to another compartment's memory.
### Initialization
Based on the compartment boundaries defined in the linker script, compartments are initialized at boot time: we trust the boot code of FlexOS to initialize compartment capabilities correctly. Compartment's _PCC_ and _DDC_ bounds are initialized to cover the statically defined compartment memory region. Capability bounds can only cover a contiguous region of memory, meaning that all of the code and data of a compartment must be present in contiguous memory. Once compartment capabilities have been created, they are stored in the memory reserved for compartment capability pairs.
During boot time, a capability pair for the switcher is also initialized. This pair grants access to the switcher code, and the compartment capability pairs. To prevent unauthorized execution of the switcher, the capability pair granting access to the switcher is also placed in memory which is out of bounds of any compartment. To access this pair, a sealed capability is created for each compartment, which is unsealed using a lpb (load pair and branch) instruction. The sealed capability is thus the only way for compartments to invoke the switcher. Each compartment is given one such sealed capability. Finally, each compartment receives a private allocator, which manages the per-compartment portion of the virtual address space previously reserved in the linker script. Using this allocator, a private stack and heap for each compartment are initialized.
At the end of the boot process, the capability pair for the default compartment is loaded and execution then enters the default compartment.
### Switching Security Domains
The security domain switch process is illustrated on Figure 2: the caller compartment invokes the privileged switcher safely through a sealed capability, which switches the architectural state representing compartment permissions on the
relevant CPU core. Using a trampoline, the switcher then branches to the callee compartment. On the return path, a trampoline is used to branch back to the caller.
We implement compartment switch gates as C macros. This allows the instructions invoking the switcher to be directly inlined at the call site, avoiding the need for a function call. The call gate is in the caller compartment. Unlike in other implementations of FlexOS call gates, such as MPK (Krishnan et al., 2017), the domain transition is not realized within the security context of the caller: this is done to prevent compartments from accessing the capability pairs of other compartments. Instead, it invokes the switcher after having loaded the parameters needed for the callee. When initiating a switch, the compartment switch gate takes the caller and callee compartment IDs, the callee function pointer, a return variable pointer (if needed) and arguments to be passed.
The compartment switch gates follow the AArch64 calling convention for argument registers. The procedure used to invoke the compartment switcher is as follows: caller-saved registers are pushed to the stack, the current stack and frame pointers are saved, the switcher parameters are loaded and finally, the sealed capability granting access to the switcher capabilities is loaded, unsealed and the switcher is invoked. By invoking the switcher, the PCC is restricted to only execute switcher code.
The switcher is an isolated entity which is trusted to perform the compartment switches. The switcher \(PCC\) is the only capability able to execute switcher code, which is isolated from all other code. The switcher \(DDC\) is the only way to access compartment capability pairs. Once the switcher is invoked, the following steps are taken:
1. Upon first entering the switcher, the caller compartment \(DDC\) is still in place. This, along with the return capability generated by the call to the switcher are stored on the caller compartment stack. A sealed capability is generated which grants access to this stored capability pair.
2. The \(DDC\) is changed to the switcher \(DDC\).
3. Callee compartment capabilities (PCC, DDC) are loaded/set.
4. The stack is switched for the callee.
5. The callee compartment \(PCC\) is used to leave the switcher and jump to the trampoline.
The trampoline serves as both the entry and exit point for a compartment. Return to the caller compartment can only be performed via the capability pair stored on the caller stack, accessed via a sealed capability. This avoids the need to go through the switcher on the return path. At call time, the trampoline stores the sealed capability created by the switcher onto the callee's stack before calling the target function. Upon return to the trampoline, the sealed capability is popped, unsealed, and a return to the caller compartment is performed. The unsealed capability is used to load the caller compartment capability pair: the \(PCC\) is set as part of the return and the \(DDC\) is restored by the call gate in the caller. Upon return from the callee compartment, the gate restores the stack and any saved registers. If the function call returned a value, the gate will store the returned value in a provided variable pointer.
## 5. Evaluation
In this section we evaluate the impact on performance and engineering effort of our proposed approaches. We use the abstractions we developed to compartmentalize two popular applications, the SQLite (Beck et al., 2016) database management system and the libsodium (Beck et al., 2016) crypto library. For libsodium we use our first approach to data sharing, function sandboxing with fine-grained capabilities, and sandbox 5 functions manipulating external input, listed in Table 1. The library is integrated with a benchmark we derived from its test suite, running representative tests (e.g. encrypting a buffer, generating a key) 200 times in a loop. SQLite's compartmentalization uses our second approach, coarse-grain shared data regions with overlapping DDCs between communicating compartments. We create two mutually distrusting compartments: the filesystem management code, and the rest of the system. This application is benchmarked with 5000 INSERT operations on an in-memory (ramfs) database. The compartmentalization scenario and the benchmark are both taken from the FlexOS paper (Krishnan et al., 2017) and represent a system call intensive application.
We run all experiments on our port of FlexOS, bare-metal on the Morello evaluation board (Krishnan et al., 2017; Krishnan et al., 2017) with 16 GB of RAM and the capability-enabled SoC clocked at 2.5 GHz. For comparison, we also gather data for Linux on Morello, running a capability-unaware AArch64 Debian 11, as well as for the other isolation mechanisms supported by FlexOS (MPK, EPT) on an x86-64 Xeon Silver 4114 clocked at 2.2 Ghz with 128 GB of RAM, running Debian 11. Results are averages of 10 runs; since they show little variance we omit error bars.
### Engineering Cost
Table 1 shows the porting effort associated with the compartmentalization of libsodium and SQLite.
Concerning libsodium, the engineering cost of sandboxing functions with our first approach involves rewriting the function's code to be capability-aware, something that can be a task of non-negligible complexity (Krishnan et al., 2017). Still, because we deliberately selected functions with relatively small sizes (11 to 141 LoC), that effort was relatively low (1 or 2 hours per
Figure 2. Control flow of a compartment switch (call and return paths). Dashed boxes represent protection domains. The trampoline is available in the domain of the callee compartment.
function for a programmer with good knowledge of capability programming), and mostly consisted of annotating the relevant pointers to be transformed into capabilities.
Regarding SQLite, we achieved the compartmentalization in a couple of days using the overlapping shared data region approach. The effort involved can be broken down into two tasks: 1) gate insertion and 2) shared data identification. Gate insertion is mostly automated by the FlexOS toolchain, with the programmer only needing to insert annotations at the desired compartment boundary. The majority of the work comes from identifying shared data. With FlexOS, this is currently a manual process, during which the programmer must analyze the code carefully to pinpoint what needs to be shared with annotations. Although the engineering cost for this approach seems higher than for the sandboxing method, the compartment size is also much larger for the overlapping DDC, e.g. 5.8K LoC for the filesystem compartment.
### Performance
_Libsodium._ We analyze different configurations of the 5 libsodium functions we sandboxed by replacing pointers with capabilities. The results are presented in Figure 3. The overhead is very modest, due to a relatively low number of compartment switches; the highest is
chacha20_encrypt_bytes with 0.669 compartment switches/1k instructions. The lowest performance overhead is achieved when sodium_hex2bin and sodium_bin2hex are isolated, adding only a 0.144% performance overhead. In contrast, the highest performance overhead comes from compartmentalizing chacha20_encrypt_bytes only, with an overhead of 12.207%. This is higher than the scenario where all are isolated, because chacha20_encrypt_bytes makes calls to store32_le. When only chacha20_encrypt_bytes is isolated, a compartment switch is required for each call, hence the overhead is higher. Evaluating against the DARPA requirements for function granularity isolation (Brandt et al., 2015), most of these results are in range of the required 5% overhead.
To further understand this behavior, we enabled hardware performance counters support in our prototype OS and gathered the data presented in Table 2. The number of instructions executed, memory accesses performed including cache accesses/misses, and branches executed, increase proportionally. The reduction in instruction cache misses for store32_le and store64_be is due to the new wrapper function being no longer inlined as the original functions were, resulting in more efficient cache utilization. We also note that the branch predictor struggles with increased use of indirect branches, a direct consequence of gates as a layer of indirection.
_SQLite._ Results for SQLite are presented in Figure 4. Here the overlapping shared data region approach is used for CHERI configurations. Compared to an uncompartmentalized FlexOS baseline, isolating the filesystem adds an overhead of 119.9%. This is because the isolated code lies on the hot path, meaning that isolated primitives are frequently called: we measured a high frequency of domain transitions (2.49/1k instructions). However, CHERI-based filesystem isolation still outperforms the same benchmark running on an unmodified Debian Linux installation by a factor of 1.4x. Running on Debian Linux is equivalent to a relatively costly two-compartments page-table-based scenario (PT2), due to the page table-based user-kernel separation.
The data measured from the hardware performance counters for SQLite is presented in Table 2. Compartmentalization increases the number of instructions executed by 27.1% and memory accesses by 48.3%. Correspondingly, the number of L1 instruction cache and L1 data cache access increase, while the number of misses for both increase by a smaller proportion. Interestingly, the branch misprediction rate rises by a far greater amount than the number of branches executed. This may be attributed to the increased number of indirect branches used as a result of the switching process, which are harder for the predictor to predict.
The relative overhead figures for SQLite are presented in Figure 5. We also include numbers for FlexOS running on x86-64 with MPK and EPT. We present results for the previously-mentioned compartmentalization scenario, 2 compartments (filesystem and rest of the system, CHERI
| **Software** | **Sharing approach** | **Compartments** | **Porting cost** | **Changes (LoC)** |
| --- | --- | --- | --- | --- |
| libsodium | Function sandboxing | sodium_hex2bin | <1h | 9 |
| | | sodium_bin2hex | <1h | 8 |
| | | chacha20_encrypt_bytes | <2h | 73 |
| | | store32_le | <1h | 5 |
| | | store64_be | <1h | 5 |
| SQLite | Overlapping DDCs | vfscore + ramfs | <2d | <300 |
Table 1. Porting effort required to compartmentalize.
Figure 4. Execution times of different configurations of SQLite running on Morello.
Figure 3. Overhead of various compartmentalization scenarios on libsodium (X labels are sandboxed functions).
an additional scenario with 3 compartments (filesystem, time manager, rest of the system, CHERI3/MPK3). Morello's numbers are in line with the relative overheads of these other isolation mechanisms. Compared to the overhead of MPK3, CHERI is slightly more expensive. This can be attributed to the switching mechanism that with CHERI requires additional bookkeeping and jumps (e.g. to the switcher) compared to MPK.
_Security Domain Switch Latency Breakdown._ We observe that the majority of the overhead comes from security domain switches (Srivastava et al., 2016), hence we use microbenchmarks to obtain a breakdown of the cost in CPU cycles associated with a switch. Results are presented in Figure 6. Compartment switches can be broken down into hot and cold switches. This is a result of cache utilization. The vast majority of switches (>99.9%) observed in all configurations fall into the category of hot switches. The cold switches, therefore, represent a worst-case cycle latency for compartment switches. These can be expected in compartmentalization scenarios where compartment switches rarely occur, and cache utilization is worse. This results in a best switching latency of <400 cycles, and a worst case of 900-1000 cycles.
## 6. Related Work
_Compartmentalization._ In recent years, many works have looked at implementing various forms of compartmentalization (Srivastava et al., 2016; Chen et al., 2016; Chen et al., 2017; Chen et al., 2018; Chen et al., 2019; Chen et al. |
2309.05347 | Asynchrony-Resilient Sleepy Total-Order Broadcast Protocols | Dynamically available total-order broadcast (TOB) protocols tolerate
fluctuating participation, e.g., as high as 99% of their participants going
offline, which is especially useful in permissionless blockchain environments.
However, dynamically available TOB protocols are synchronous protocols, and
they lose their safety guarantees during periods of asynchrony. This is a major
issue in practice.
In this paper, we propose a simple but effective mechanism for tolerating
bounded periods of asynchrony in dynamically available TOB protocols that
ensure safety deterministically. We propose to trade off assumptions limiting
the online/offline churn rate in exchange for tolerating bounded asynchronous
periods through the use of a configurable message-expiration period.
In practice, this allows picking a small synchrony bound $\delta$, and
therefore obtain a fast protocol in the common case, knowing that the protocol
tolerates occasional periods of duration at most $\pi>\delta$ during which the
bound does not hold. We show how to apply this idea to a state-of-the-art
protocol to make it tolerate bounded periods of asynchrony. | Francesco D'Amato, Giuliano Losa, Luca Zanolini | 2023-09-11T09:46:13Z | http://arxiv.org/abs/2309.05347v2 | # Improving Asynchrony Resilience in Dynamically Available Total-Order Broadcast Protocols
###### Abstract
Dynamically available total-order broadcast (TOB) protocols are essential in permissionless systems in which participants may unpredictably go offline and later come back online. Existing dynamically-available protocols are synchronous protocols, and they lose their safety guarantees during periods of asynchrony. This is a major issue in practice.
In this paper, we explore the challenge of tolerating bounded periods of asynchrony in dynamically-available TOB protocols that ensure safety deterministically. We propose to trade off assumptions limiting the online/offline churn rate in exchange for tolerating bounded asynchronous periods through the use of a configurable message-expiration period. We show how to apply this idea to a state-of-the-art protocol to make it tolerate bounded periods of asynchrony.
## 1 Introduction and Technical Outline
A design requirement for permissionless systems such as Bitcoin [9] or Ethereum [1] is to solve total-order broadcast (TOB) while allowing participants to join or leave the systems at any time and without coordination. We say that they tolerate dynamic participation, and we call such protocols _dynamically available_.
Dynamically available TOB protocols are typically synchronous (the first being the Bitcoin protocol) and are usually formally analyzed in the sleepy model of Pass and Shi [11]. Execution of a protocol in the sleepy model proceeds in rounds. There is a fixed set of participants, but each round, only a subset of the participants are online. The set of online participants is constant during a round, but it can change arbitrarily from one round to the next as long as, each round, the failure ratio (that is, the ratio of misbehaving participants, controlled by an adversary, to online participants) is below a predetermined threshold \(\beta\) (usually \(\beta=\frac{1}{3}\) or \(\beta=\frac{1}{2}\)).
Operationally, in each round \(r\), first every online participant sends messages determined by the messages it received in the previous round \(r-1\) (or by an external input if \(r=0\)), and then every participant, online or not, receives all the messages that have been sent to it in the current round1. Well-behaved participants follow an algorithm, while misbehaving participants are controlled by an adversary and can send arbitrary messages, except that they cannot forge cryptographic signatures. In practice, this abstract model can be simulated in a system that has a known upper bound \(\delta\) both on the message delay and on the difference between local clocks, obtaining rounds of duration \(\Delta=3\delta\)[11, Section 2.1].
Footnote 1: Assuming that all participants receive messages, whether they are online or not, is a convenient abstraction. In practice, a participant that is offline may only receive messages the next time it comes online.
In this paper, we are interested in TOB protocols that guarantee safety _deterministically_ in the sleepy model, such as the protocol of Malkhi, Momose, and Ren [7] (the MMR protocol). These protocols consist of a sequence of views where each view spans a fixed number of rounds (2 in the MMR protocol). In each view, participants may introduce new values at the beginning of the view, and they then go through a series of voting rounds that culminate in a decision round whose votes determine the decision on the next value to be ordered.
Each participant decides on a value \(b\) when a sufficient fraction \(\alpha=1-\beta\) (so \(\alpha=\frac{2}{3}\) for the MMR protocol) of the votes it receives in a decision round are for \(b\).
In order to guarantee progress in the face of changing participation, in each round \(r\), each participant only uses votes cast in the same round \(r\). Otherwise, a drop in participation might stall the protocol because the currently-online participants would not be numerous enough, compared to the number of previously online participants, to reach the decision threshold. For example, if participation drops from 100 to 10 participants from one round \(r\) to the next round \(r+1\), then the votes of the participants that are online in round \(r+1\) can obviously not account for \(\frac{2}{3}\) or even \(\frac{1}{2}\) of participants' latest vote over rounds \(r\) and \(r+1\).
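For illustration, the per-round decision rule just described can be sketched in Python as follows; the function name and data representation are ours, and we assume one vote per online participant and the strict two-thirds threshold used by MMR.

```python
from collections import Counter

def decide_current_round_only(votes, alpha=2/3):
    """Tally a decision round that uses only votes cast in the current
    round. `votes` maps each participant heard from to the value it voted
    for; returns the decided value, or None if no value reaches the
    threshold."""
    if not votes:
        return None
    value, count = Counter(votes.values()).most_common(1)[0]
    # Decide only if strictly more than a fraction `alpha` of the
    # participants heard from voted for the same value.
    return value if count > alpha * len(votes) else None

# A drop in participation does not stall the round: 10 online participants
# voting unanimously still clear the two-thirds threshold.
assert decide_current_round_only({f"p{i}": "b" for i in range(10)}) == "b"
```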
Unfortunately, protocols that only use votes from the current round lose all safety guarantees if there are periods of asynchrony during which the message-delivery guarantees do not hold. For example, suppose that the network delivers only adversarial messages in the decision round of the MMR protocol. Trivially, if the adversary sends only votes for \(b\) to a participant \(p_{i}\) and only votes for \(b^{\prime}\neq b\) to another participant \(p_{j}\), then \(p_{i}\) decides \(b\) because it receives unanimous votes for \(b\) and, similarly, \(p_{j}\) decides \(b^{\prime}\). This violates the agreement property of total-order broadcast.
Since participants that come online during periods of asynchrony might not have any reliable information at hand (because all they might ever have received are adversarial messages), we cannot guarantee anything to them during asynchrony. However, at a minimum, we would like to ensure that, after asynchrony ends, the system eventually converges to a regime in which all online participants agree on the decisions made before and after the asynchronous period. This is, roughly speaking, what we mean by _tolerating a period of asynchrony_.
To sum up, on the one hand it seems that, each round, dynamically-available protocols like the MMR protocol must only use votes cast in the current round or they lose progress guarantees. On the other hand, using only votes cast in the current round means losing safety in asynchronous rounds. In this paper, we offer a solution to this conundrum.
A simplistic solution to tolerating periods of asynchrony of duration at most \(\tau\) times the round duration \(\Delta\) is to increase the duration of each round to at least \(\tau\Delta\). However, this means multiplying the latency of the protocol by \(\tau\) even during synchronous periods.
Instead, we observe that we can use votes from a fixed number of previous rounds, called the _expiration period_, without losing safety or progress guarantees if we fix a maximum _drop-off rate_ \(\gamma\) (roughly, the fraction of participants online during the last expiration period that are allowed to go offline; see Section 4) and set the maximum failure ratio to a function of \(\gamma\) (see Figure 1 for the case of a protocol with a decision threshold of \(\frac{2}{3}\)).
### An expiration period protects safety during asynchrony
To understand how an expiration period of multiple rounds helps tolerate asynchrony, consider a view \(v\) of the MMR protocol starting in round \(r\) (thus \(v\) consists of the two rounds \(r\) and \(r+1\)). According to the MMR protocol, round \(r+1\) is the decision round of view \(v\), and any process that receives unanimous votes for a value \(b\) from more than two thirds of the participants it hears from in round \(r+1\) decides \(b\).
Now suppose that round \(r+1\) is asynchronous but that we use an expiration period of 2 rounds. So, to try to make a decision in round \(r+1\), each participant \(p_{i}\) tallies the latest vote of each participant it hears from in rounds \(r+1\) or \(r\). To simplify, suppose that participation is constant, that round \(r\) is synchronous, and that we have \(2n+1\) online, well-behaved participants for some \(n>0\) and \(n\) adversarially-controlled participants. In the worst case: a) in round \(r\), well-behaved participants cast \(n\) votes for a value \(b\) and \(n+1\) votes for a value \(b^{\prime}\neq b\) and b) in round \(r\), for each participant \(p_{i}\), the adversary can send at most \(n\) votes for a value of its choosing to \(p_{i}\) and c) in round \(r+1\), the network only delivers adversarial messages.
Since all messages sent in round \(r\) are received (\(r\) is synchronous), in round \(r+1\) each participant makes its decision based on the \(2n+1\) messages from well-behaved participants received in round \(r\) plus the \(n\) adversarial messages received in round \(r+1\). Note that the only way for the adversary to cause a process \(p_{i}\) to decide is to cast \(n\) votes for \(b^{\prime}\) and cause \(p_{i}\) to decide \(b^{\prime}\), as no other value can reach the two-thirds decision threshold. Thus there can be no disagreement between well-behaved participants.
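The counting argument can be checked mechanically. The following Python sketch (purely illustrative) enumerates the adversary's options in the worst case above and confirms that at most one value can clear the threshold.

```python
def decidable_values(n):
    """Worst case from the text: a participant tallies 2n+1 well-behaved
    round-r votes (n for b, n+1 for b') plus up to n adversarial round-(r+1)
    votes, all cast for a single value of the adversary's choosing."""
    winners = set()
    for adversarial_choice in ("b", "b'"):
        tally = {"b": n, "b'": n + 1}
        tally[adversarial_choice] += n        # adversary casts its n votes
        heard_from = 3 * n + 1                # distinct voters tallied
        winners |= {v for v, c in tally.items() if c > (2 / 3) * heard_from}
    return winners

# For every n >= 1 only b' can ever be decided, so well-behaved
# participants cannot be driven to conflicting decisions.
assert all(decidable_values(n) == {"b'"} for n in range(1, 100))
```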
In general, assume an expiration period of \(\tau\) rounds and assume, for simplicity, that participation is constant. Then, as long as the failure ratio is below \(1-\alpha\) (recall that \(\alpha\) is the decision threshold), we can tolerate asynchronous periods lasting less than \(\tau\) rounds without any safety violations.
The situation becomes more complicated if participation fluctuates during asynchrony because processes that come online during asynchrony can be manipulated by the adversary. In Section 4, we give precise conditions that relate churn to the failure ratio and that ensure asynchrony tolerance.
### Churn must be limited even during synchrony
Although using messages from multiple rounds allows tolerating bounded asynchrony, it comes at a cost: Even during synchrony, to ensure safety and progress of the protocols, we must introduce bounds on the fraction of participants that are offline after participating at some point during the last expiration period. Otherwise, safety can be violated because a consensus decision may be witnessed by too few participants, compared to the number of participants that have been active during the expiration period, and then overridden in the following rounds. Progress may also be hampered, as old votes may be too numerous and prevent votes for a new value from reaching the decision threshold. Let us examine the latter case in more detail.
Assume an expiration period of 2 views (i.e., 4 rounds), and consider two views \(v\) and \(v+1\) of the MMR protocol such that participation drops by more than \(\frac{1}{3}\) from view \(v\) to view \(v+1\). Then, even if no adversarial participants are online, no value \(b\) introduced in view \(v+1\) can be decided: Since \(b\) is introduced in view \(v+1\), no vote cast in view \(v\) can possibly be for \(b\); moreover, because participation dropped by at least \(\frac{1}{3}\), the votes cast in view \(v\) must consist of more than \(\frac{1}{3}\) of the votes taken into account in view \(v+1\). Therefore, votes for value \(b\) cannot reach the \(\frac{2}{3}\) decision threshold required to decide \(b\) in view \(v+1\).
To characterize the changes in participation that can be tolerated during synchrony with an expiration period of \(\tau\) rounds, we assume that, for every round \(r\), out of all the well-behaved participants that are online in any round between \(r\) and \(r+\tau-1\) included, at most a fraction \(\gamma\) are not online in round \(r+\tau\). We call \(\gamma\) the drop-off rate.
As we shall see, to ensure safety and progress during synchronous periods, we must assume that the failure ratio is bounded by a function \(\tilde{\beta}_{\alpha}(\gamma)\), where \(\alpha\) is the decision threshold of the algorithm. Figure 1 below represents \(\tilde{\beta}_{2/3}(\gamma)\), which is applicable, for example, to the MMR algorithm (which has a decision threshold of \(\frac{2}{3}\)).
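For instance, instantiating the general expression for \(\tilde{\beta}\) given in Section 4 with the failure ratio \(\beta=1-\alpha=\frac{1}{3}\) tolerated by MMR gives

\[\tilde{\beta}_{2/3}(\gamma)\;=\;\frac{\frac{1}{3}-\gamma}{\gamma\left(\frac{1}{3}-2\right)+1}\;=\;\frac{1-3\gamma}{3-5\gamma},\]

which equals \(\frac{1}{3}\) at \(\gamma=0\) and vanishes at \(\gamma=\frac{1}{3}\), matching the endpoints of the curve in Figure 1.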
### Main contributions and roadmap
To summarize, we present a TOB protocol that tolerates bounded periods of asynchrony and a maximum failure ratio of \(\frac{1}{3}\). This protocol is inspired by the MMR protocol. Moreover, we give precise bounds on
Figure 1: Allowable failure ratio, noted \(\tilde{\beta}_{2/3}\), to ensure progress during synchrony, for an algorithm using a decision threshold of \(\frac{2}{3}\). \(\tilde{\beta}_{2/3}\) is a function of the drop-off rate \(\gamma\). If participation is static (\(\gamma=0\)), the maximum tolerable failure ratio is \(\frac{1}{3}\), and this matches the upper bound for a decision threshold of \(\frac{2}{3}\). At a drop-off rate of \(\gamma\geq\frac{1}{3}\), the system may stall even without failures. For the general formula for \(\tilde{\beta}\), see Section 4.
churn and failures that guarantee a) safety and liveness during synchrony and b) tolerating bounded periods of asynchrony.
The remainder of this work is structured as follows. In Section 2, we introduce the system model and provide necessary definitions. Section 3 delivers a comprehensive overview of the protocol as proposed by Malkhi, Momose, and Ren [7], particularly focusing on its limitations in relation to asynchrony. Following this, adversarial restrictions and the \(\tau\)-sleepy model [3] are revisited in Section 4. Our modified version of the Malkhi, Momose, and Ren protocol [7], which is designed to be resilient to asynchrony, is thoroughly explained in Section 5. Section 6 contains a discussion on related work. Conclusions are drawn in Section 7.
## 2 Model and Definitions
### System model
_Processes._ We consider a system consisting of a set \(\mathcal{P}=\{p_{1},\dots,p_{n}\}\) of \(n\) _processes_ in a message-passing system with point-to-point communication links between every pair of processes. Each process is assigned a protocol to follow, consisting of a collection of programs with instructions for all processes. Processes are divided into _well-behaved_ processes and _Byzantine_ processes. Well-behaved processes follow their assigned protocol and send the messages stipulated by it, while Byzantine processes are controlled by an adversary which can make them send arbitrary messages. Messages sent by processes come with a signature. Messages without this signature are discarded.
_Time and network._ An execution of the system proceeds in an infinite sequence of rounds \(1,2,3,\dots\) We assume the existence of a single asynchronous period starting after round \(r_{a}\), which could extend up to \(\pi\in\mathbb{N}\) rounds. In other words, the rounds in \([r_{a}+1,r_{a}+\pi]\) may experience asynchrony.
_Sleepiness._ Each round has two phases, one occurring at its beginning and one at its end. In either phase, only an adversarially chosen subset of the processes are said to be _awake_ [11]. Processes that are not awake are said to be _asleep_. The subset of processes awake at the beginning of round \(r\) is \(S_{r}\), and they coincide with the processes awake at the end of the previous round, \(r-1\). In other words, the processes awake at the beginning of a round are potentially different from those awake at the end of it, as in [7]. Asleep processes do not execute the protocol, and messages for that round are queued and delivered in the first round in which the process is awake again. When a process \(p_{i}\) goes from being awake to being asleep, we say that \(p_{i}\) _goes to sleep_. We denote with \(H_{r}\) and \(B_{r}\) the sets of well-behaved and Byzantine processes, respectively, that are awake at the beginning of round \(r\). From now on, we refer to \(H_{r}\), \(B_{r}\), and \(S_{r}\) simply as processes that are awake at round \(r\), leaving it implicit that they are awake at the beginning of it. The Byzantine processes never go to sleep: the adversary is either _constant_, in which case \(B_{r}\) is the same at every round \(r\), or the adversary is _growing_ [7], in which case \(B_{r}\subseteq B_{r+1}\) for every round \(r\).
_Round structure._ A round starts with a _send phase_, and ends with a _receive phase_, immediately prior to the beginning of the next round. Processes in \(H_{r}\) participate in the send phase, while processes in \(H_{r+1}\) in the receive phase.
In the send phase of round \(r\), each process \(p_{i}\in S_{r}\) sends messages. A process \(p_{i}\in B_{r}\) may send arbitrary messages, and processes that are not awake in round \(r\) do not send any messages. If process \(p_{i}\) is well-behaved, then \(p_{i}\) sends the messages dictated by the protocol.
In the receive phase of round \(r\), each well-behaved process that is awake at the end of round \(r\), i.e., a process in \(H_{r+1}\), receives the following messages:
* If round \(r\) belongs to a synchronous period, then \(p_{i}\) receives all the messages that it has not received yet and that were sent in any round \(r^{\prime}\leq r\).
* Otherwise, if \(r\) belongs to an asynchronous period, then \(p_{i}\) receives an arbitrary subset of such messages.
Moreover, processes that are not awake in the end of round \(r\) do not receive any messages.
_Expiration of (latest) messages._ We require in our model that every message has an _expiration period_ \(\eta\) [2, 3]. In particular, given a round \(r\) and a constant \(\eta\in\mathbb{N}\) with \(\eta\geq 0\), the _expiration period_ for round \(r\) is the interval \([r-1-\eta,r-1]\), and only messages sent within this period influence the protocol's behavior at round \(r\). Moreover, only the _latest_ messages sent by processes are taken into account during each round of a protocol's execution.
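For illustration, the expiration rule can be phrased as the following Python filter (a sketch; the message representation and names are ours).

```python
def unexpired_latest(messages, r, eta):
    """Keep, for each sender, only its latest message among those sent in
    the expiration period [r - 1 - eta, r - 1]. `messages` is an iterable
    of (sender, round_sent, payload) triples."""
    latest = {}
    for sender, round_sent, payload in messages:
        if not (r - 1 - eta <= round_sent <= r - 1):
            continue                                  # expired message
        if sender not in latest or round_sent > latest[sender][0]:
            latest[sender] = (round_sent, payload)    # keep latest only
    return {sender: payload for sender, (_, payload) in latest.items()}
```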
### Graded Agreement and Total-Order Broadcast
**Definition 1** (Log).: A _log_ is a finite sequence of values \([v_{1},v_{2},\ldots,v_{k}]\). For two logs \(\Lambda\) and \(\Lambda^{\prime}\), we write \(\Lambda\preceq\Lambda^{\prime}\) when \(\Lambda\) is a prefix of \(\Lambda^{\prime}\), and we also say that \(\Lambda^{\prime}\) extends \(\Lambda\). We say that two logs are _compatible_ when one is a prefix of the other, and that they conflict when they are not compatible (i.e., none is a prefix of the other).
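For concreteness, the prefix and compatibility relations can be expressed as a small Python check, representing logs as plain lists (an illustrative sketch).

```python
def is_prefix(log_a, log_b):
    """Λ ⪯ Λ': log_a is a prefix of log_b."""
    return log_b[:len(log_a)] == log_a

def compatible(log_a, log_b):
    """Two logs are compatible when one is a prefix of the other."""
    return is_prefix(log_a, log_b) or is_prefix(log_b, log_a)

assert compatible(["v1", "v2"], ["v1", "v2", "v3"])
assert not compatible(["v1", "v4"], ["v1", "v2", "v3"])
```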
**Definition 2** (Byzantine total-order broadcast).: A Byzantine total-order broadcast (TOB) protocol ensures that all the well-behaved processes deliver the same log. In a Byzantine total-order broadcast protocol, every process can _input_ a value \(v\) and the broadcast primitive repeatedly _delivers_ logs \(\Lambda\).
A protocol for Byzantine total-order broadcast satisfies the following properties (after round \(r\)).
**Safety after round \(r\)**: For any two rounds \(r^{\prime},r^{\prime\prime}>r\), and any two well-behaved processes \(p_{i}\) and \(p_{j}\) awake at rounds \(r^{\prime}\) and \(r^{\prime\prime}\), respectively, either \(\Lambda^{r^{\prime}}_{i}\preceq\Lambda^{r^{\prime\prime}}_{j}\) or \(\Lambda^{r^{\prime\prime}}_{j}\preceq\Lambda^{r^{\prime}}_{i}\)2.
**Liveness after round \(r\)**: If a well-behaved process inputs a value \(v\) and remains awake long enough, then, eventually after \(r\), every well-behaved process that becomes awake long enough delivers a log \(\Lambda\) that includes \(v\).
Footnote 2: We use this terminology here only, denoting with \(\Lambda^{r}_{i}\) the log of process \(p_{i}\) at round \(r\).
Observe that if Definition 2 holds for every round \(r\), we get back the usual notion of total-order broadcast, satisfying (Safety) If two well-behaved processes deliver logs \(\Lambda_{1}\) and \(\Lambda_{2}\), then \(\Lambda_{1}\) and \(\Lambda_{2}\) are compatible, and (Liveness) If a well-behaved process inputs a value \(v\) and remains awake long enough, then, eventually, every well-behaved process that becomes awake long enough delivers a log \(\Lambda\) that includes \(v\).
**Definition 3** (Dynamically Available Total-Order Broadcast).: Under synchrony, a protocol for _total-order broadcast_ is dynamically available if and only if the protocol satisfies safety and liveness (Definition 2) provided that at every round \(r\) it holds \(|B_{r}|<\beta|S_{r}|\), for a _failure ratio_\(\beta\) (typically \(\frac{1}{3}\)[7] or \(\frac{1}{2}\)[2, 8, 6]).
The Byzantine total-order broadcast protocol we consider [7] is implemented through a _weak graded agreement_ protocol, defined as it follows.
**Definition 4** (Weak Graded agreement [7]).: In a weak graded agreement protocol, each process has an input log and, at the end of the protocol, outputs a set of logs with each log assigned a grade bit, such that the following properties are satisfied.
**Graded consistency:**: If a well-behaved process outputs a log with grade 1, then all well-behaved processes output the log with at least grade 0.
**Integrity:**: If a well-behaved process outputs a log with any grade, then there exists a well-behaved process that inputs the log.
**Validity:**: Processes output with grade 1 the longest common prefix among well-behaved processes' input logs.
**Uniqueness:**: If a well-behaved process outputs a log with grade 1, then no well-behaved process outputs any conflicting log with grade 1.
**Bounded divergence:**: Each well-behaved process outputs at most two conflicting logs (with grade 0).
_Asynchrony resilience and healing after asynchrony._ Let \(D_{r_{a}}\) be the set of logs decided by well-behaved processes in rounds \(\leq r_{a}\).
**Definition 5** (Asynchrony resilience).: We say that a Byzantine total-order broadcast protocol _preserves safety during the period of asynchrony_\([r_{a}+1,r_{a}+\pi]\) if during \([r_{a}+1,r_{a}+\pi+1]\) no well-behaved process that is awake at round \(r_{a}\) decides a log conflicting with \(D_{r_{a}}\), and after round \(r_{a}+\pi+1\) no well-behaved process decides a log conflicting with \(D_{r_{a}}\). We say that a Byzantine total-order broadcast is \(\pi\)_-asynchrony resilient_ if it preserves safety during all periods of asynchrony of length \(\leq\pi\).
**Definition 6** (Healing after asynchrony).: We say that a Byzantine total-order broadcast protocol _heals from asynchrony_ if and only if safety and liveness hold after round \(r+k\), where \(r\) is the last asynchronous round and \(k>0\) is some constant.
## 3 1/3-resilient protocol of Malkhi, Momose, and Ren [7]
Recent studies on dynamically available total-order broadcast protocols have explored diverse techniques to resolve consensus in the sleepy model and its variants [2, 3, 6, 7, 8, 10]. However, these protocols share a common limitation - they are strictly applicable to synchronous models. This restriction is due to the CAP theorem [4, 5], which stipulates that no consensus protocol can accommodate dynamic participation and simultaneously tolerate network partitions [10].
This section focuses on the total-order broadcast protocol proposed by Malkhi, Momose, and Ren [7], demonstrating its inability to withstand periods of asynchrony. We examine this protocol in the context of the initial framework it was presented in -- the growing adversary model [7]. In the full version of this work, we show how all other suggested solutions [2, 6, 8, 10] also falter under asynchronous periods of any duration.
Malkhi, Momose, and Ren [7] propose a total-order broadcast protocol with a resilience of \(\frac{1}{3}\), and expected termination in 6 rounds, without the assumption of participation stabilization, differently from previous work [8]. The authors extended the sleepy model [11] to allow for a growing number of faulty processes, and developed a simple graded agreement protocol with a fault tolerance of \(\frac{1}{3}\) in this model, upon which the TOB protocol is based.
Figure 2 describes an instance of the weak graded agreement protocol of Malkhi, Momose, and Ren [7]. As in the original formulation, different processes can be awake in the two phases. Every well-behaved process awake at round \(r\) multi-casts a vote message for a log \(\Lambda\). Then, during the receive phase of round \(r\), every awake process \(p_{i}\) tallies vote messages for the received logs, counting votes for a log extending \(\Lambda\) as one for \(\Lambda\), and ignoring multiple votes from the same process. If there exists a log \(\Lambda\) that has been voted by more than \(\frac{2}{3}\) (or more than \(\frac{1}{3}\)) of the processes that \(p_{i}\) heard from, then \(p_{i}\) outputs \(\Lambda\) with grade 1 (or with grade 0).
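For illustration, the output step of this graded agreement can be sketched in Python as follows, assuming equivocations have already been discarded so that each process contributes one vote, given as the log it voted for (all names are ours).

```python
def ga_outputs(received, hi=2/3, lo=1/3):
    """`received` maps each process heard from to the log (a tuple) it voted
    for. A vote for a log extending a prefix also counts as a vote for that
    prefix. Returns a dict mapping output logs to their grade (1 or 0)."""
    heard_from = len(received)
    candidates = {log[:k] for log in received.values()
                  for k in range(len(log) + 1)}
    outputs = {}
    for cand in candidates:
        support = sum(1 for log in received.values()
                      if log[:len(cand)] == cand)      # vote extends cand
        if support > hi * heard_from:
            outputs[cand] = 1
        elif support > lo * heard_from:
            outputs[cand] = 0
    return outputs

votes = {"p1": ("b0", "b1"), "p2": ("b0", "b1"), "p3": ("b0",)}
assert ga_outputs(votes)[("b0",)] == 1        # unanimous prefix: grade 1
assert ga_outputs(votes)[("b0", "b1")] == 0   # 2 of 3 votes: grade 0
```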
In their work, Malkhi, Momose, and Ren [7] implement their total-order broadcast protocol (Algorithm 1) via two instances of weak graded agreement. Algorithm 1 is executed in views spanning two rounds each, corresponding to two instances of graded agreement (Figure 2). The exception is view 0, which requires only a single round. Specifically, at round 1 of view 0, every awake process \(p_{i}\) multi-casts [propose, \(\Lambda\), \(\textsc{VRF}_{p_{i}}(1)\)]\(_{p_{i}}\), proposing \(\Lambda:=[b_{0}]\).
Figure 2: Graded Agreement \(GA\) - Malkhi, Momose, and Ren [7]
Subsequently, at round 1 of any other view \(v\geq 1\), each awake and well-behaved process calculates the outputs of \(GA_{v-1,2}\), deciding for any log \(\Lambda\) that is output with a grade 1. In addition, it sets \(\mathcal{L}_{v-1}\) as the longest log \(\Lambda^{\prime}\) for which \(GA_{v-1,2}\) generates output at any grade. It then initiates a graded agreement instance \(GA_{v,1}\), inputting a log contained in the propose message with the largest valid \(\text{VRF}(v)\), ensuring it doesn't conflict with \(\mathcal{L}_{v-1}\).
At round 2 of this view, every awake and well-behaved process \(p_{i}\) computes its outputs from \(GA_{v,1}\), and starts a graded agreement instance \(GA_{v,2}\) with the input being the longest log \(\Lambda\) that \(GA_{v,1}\) outputs with a grade 1. Notably, due to the validity property, it's always possible to identify such a \(\Lambda\). Furthermore, process \(p_{i}\) proposes for view \(v+1\) a block \(b\) extending the longest log \(\mathcal{C}_{v}\) where \(GA_{v,1}\) outputs \((\mathcal{C}_{v},*)\). This means process \(p_{i}\) multi-casts [propose, \(\Lambda^{\prime},\text{ VRF}_{p_{i}}(v+1)\)] with \(\Lambda^{\prime}:=b\|\mathcal{C}_{v}\).
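For illustration, the bookkeeping performed at round 1 of a view \(v\geq 1\) can be sketched as follows (a simplified Python sketch; the representation of GA outputs as a map from logs to grades is ours).

```python
def start_of_view(ga_prev_outputs):
    """Round-1 bookkeeping of Algorithm 1: decide every log output by
    GA_{v-1,2} with grade 1, and set L_{v-1} to the longest log output
    with any grade. Logs are tuples; `ga_prev_outputs` maps log -> grade."""
    decided = [log for log, grade in ga_prev_outputs.items() if grade == 1]
    L_prev = max(ga_prev_outputs, key=len, default=())
    return decided, L_prev
```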
**Proposition 1**.: _The total-order broadcast protocol of Malkhi, Momose, and Ren (Algorithm 1) is not asynchrony resilient. In particular, the output of a graded agreement instance is adversarially controlled under asynchrony._
Proof.: Let us assume that during a synchronous view \(v\) every well-behaved process decides a log \(\Lambda\). Note that this assumption can be done due to the liveness property of Algorithm 1 (Lemma 7 [7]). If every well-behaved process decides for \(\Lambda\) in view \(v\) (Line 3, Algorithm 1), then every well-behaved process sets \(\mathcal{L}_{v}\) to be \(\Lambda\) (Line 4, Algorithm 1).
Let us assume that view \(v+1\) is asynchronous and throughout it the adversary (which is assumed to control at least two processes \(p_{a}\) and \(p^{\prime}_{a}\)) does not deliver any message sent by well-behaved processes to other well-behaved processes. At round 1 of view \(v+1\), all well-behaved processes input into \(GA_{v+1,1}\) a log in the propose message (that was sent during view \(v\)) with the largest valid VRF not conflicting with \(\mathcal{L}_{v}\) (Line 8, Algorithm 1). During \(GA_{v+1,1}\) the adversary does not deliver any message sent by well-behaved processes to well-behaved processes, but it delivers only an adversarially controlled log \(\Lambda^{\prime}\), which conflicts with \(\mathcal{L}_{v}\). This implies that every well-behaved process outputs log \(\Lambda^{\prime}\) with grade 1 from \(GA_{v+1,1}\). The same reasoning applies to \(GA_{v+1,2}\). In other terms, the adversary can get all well-behaved processes to output a log conflicting with any previous decision made during synchrony. After that, even if synchrony is restored in view \(v+2\), all well-behaved processes keep inputting descendants of \(\Lambda^{\prime}\) in every round.
Algorithm 1 heals from asynchronous periods of any length, i.e., it recovers both liveness and safety after asynchrony stops. In fact, as the safety and liveness proof of MMR [7] shows, as long as the graded agreement properties hold in any graded agreement round \(GA_{v^{\prime},*}\) of a view \(v^{\prime}\geq v\), a decision from view \(v\) is safe, and a proposal from a well-behaved leader at view \(v\) has a probability \(\frac{1}{2}\) of being decided. Crucially, it is irrelevant what happened in views \(<v\), and moreover each graded agreement round fulfills the graded agreement properties as long as it is itself synchronous and more than \(\frac{2}{3}\) of its participants are well-behaved, independently of other rounds. Therefore, starting from the first fully synchronous view (where both rounds are synchronous), liveness and safety properties are recovered.
## 4 Adversarial restrictions and \(\tau\)-sleepy model [3]
For dynamically available protocols which consider only messages from the previous round, it is usually sufficient to restrict the adversary, by imposing a limit on the failure ratio \(\beta\), i.e., \(|B_{r}|<\beta|S_{r}|\), typically with \(\beta=\frac{1}{2}\)[2, 6, 8] or \(\beta=\frac{1}{3}\)[7]. No restrictions on dynamic participation are necessary. However, as discussed in Section 3, MMR fails to ensure safety during periods of asynchrony. A similar conclusion regarding Goldfish [2] is drawn in [3]. In contrast, as we will see, taking into account messages from a broader span of prior rounds can weaken the tolerance to dynamic participation, but strengthen the tolerance to bounded periods of asynchrony.
If there are many processes whose _latest_ messages are _unexpired_ (Section 2), because they were recently awake, but which are no longer awake in round \(r\), the protocol's behavior may be adversely affected. This is because the adversary could in principle exploit these latest messages to their advantage, as they are not entirely up to date. We prevent this by bounding the _churn rate_ of well-behaved processes, i.e., by requiring that the rate at which awake and well-behaved processes go to sleep is bounded by \(\gamma\) per \(\tau\) rounds. Letting \(H_{s,r}=\bigcup_{s\leq r^{\prime}\leq r}H_{r^{\prime}}\) (with \(H_{s}\coloneqq\emptyset\) if \(s<0\)) be the set of processes that are awake and well-behaved _at some point_ in rounds \([s,r]\), the requirement is then:
\[|H_{r-\tau,r-1}\setminus H_{r}|\leq\gamma|H_{r-\tau,r-1}| \tag{1}\]
In other words, at most a fraction \(\gamma\) of the well-behaved processes of the last \(\tau\) rounds are allowed to not be well-behaved processes of the current round \(r\). Besides bounding the churn rate, we also as usual need to bound the failure rate of each round, which we do by requiring a failure rate \(\tilde{\beta}\leq\beta\), in particular \(\tilde{\beta}=\frac{\beta-\gamma}{\gamma(\beta-2)+1}\):
\[|B_{r}|<\tilde{\beta}|S_{r}| \tag{2}\]
Here, \(\beta\) is meant to be the failure ratio tolerated by the original dynamically available protocol, which is modified to use unexpired latest messages in order to strengthen its resilience to asynchrony. The failure rate of the modified protocol needs to be appropriately lowered, in particular to \(\frac{\beta-\gamma}{\gamma(\beta-2)+1}\) if the churn rate is bounded by \(\gamma\), to account for the additional power derived from exploiting latest messages of asleep processes.
Observe that, if \(\gamma=0\), our first requirement reduces to \(|H_{r-\tau,r-1}\setminus H_{r}|=0\), i.e., awake processes do not go to sleep, so that the model reduces to one without dynamic participation. Moreover, \(\tilde{\beta}=\beta\), so Equation 2 simply requires the failure ratio \(\beta\) of the original protocol. In other words, no extra stronger assumption is required under the standard synchronous model with constant participation. Observe also that \(\tau=0\) implies \(H_{r-\tau,r-1}=\emptyset\), so that the first requirement does not introduce any restriction, regardless of which \(\gamma\) we choose. In other words, fully dynamic participation is allowed. We can in particular let \(\gamma=0\), meaning that the required failure ratio \(\tilde{\beta}\) is once again just \(\beta\), recovering the original model. Finally, note that \(\gamma\) must be \(<\beta\), since otherwise Equation 2 requires \(|B_{r}|<0\): we cannot allow a fraction \(\beta\) of \(H_{r-\tau,r-1}\) to fall asleep before round \(r\), even if there is no adversary, because then \(H_{r}\) cannot possibly meet a \(1-\beta\) quorum over all unexpired messages (if no more processes wake up).
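For illustration, Equations 1 and 2 can be checked per round as in the following Python sketch; the representation of the awake sets as dictionaries of Python sets is ours.

```python
def model_conditions_hold(H, B, r, tau, beta, gamma):
    """Check Equations (1) and (2) at round r. H[r'] and B[r'] are the sets
    of awake well-behaved / Byzantine processes at round r' (missing rounds,
    e.g. negative ones, count as empty)."""
    H_window = set().union(*(H.get(rp, set()) for rp in range(r - tau, r)))
    S_r = H.get(r, set()) | B.get(r, set())
    beta_tilde = (beta - gamma) / (gamma * (beta - 2) + 1)
    churn_ok = len(H_window - H.get(r, set())) <= gamma * len(H_window)   # Eq. (1)
    failure_ok = len(B.get(r, set())) < beta_tilde * len(S_r)             # Eq. (2)
    return churn_ok and failure_ok
```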
_Bounded asynchrony._ As discussed above, a round \(r\) might belong to the period of asynchrony and, if that's the case, a well-behaved process \(p_{i}\) might receive in \(r\) an arbitrary subset of the messages sent during such period. It is therefore necessary to forbid the awakening of too many well-behaved processes during asynchronous periods, because the messages they receive upon waking up are adversarially controlled, and thus they can be manipulated into sending messages that jeopardize the safety of decisions made _before the period of asynchrony_. To preserve it, we must prevent the adversary from overwhelming the well-behaved processes which were awake in the last round before asynchrony started, round \(r_{a}\), either with its own messages or with those of newly awake well-behaved processes, or with corruption, since the adversary can grow. Analogously to \(H_{s,r}\), we define \(S_{s,r}=\bigcup_{s\leq r^{\prime}\leq r}S_{r^{\prime}}\) (with \(S_{r}\coloneqq\emptyset\) if \(r<0\)). We require the following conditions to hold whenever analyzing behavior related to asynchrony:
\[|H_{r_{a}}\setminus B_{r}|>(1-\beta)|S_{r-\tau,r}|\quad\forall r\in[r_{a}+1,r _{a}+\pi+1] \tag{3}\]
\[H_{r_{a}}\subseteq H_{r_{a}+1} \tag{4}\]
The first condition must hold for all rounds in the period of asynchrony _and for the first synchronous round after it_. For such rounds, we require that the well-behaved processes which were awake in the last synchronous round \(r_{a}\), _and have not since been corrupted_, sufficiently (meaning, with the usual failure ratio) outnumber all other processes awake in the interval. Intuitively, the processes in \(H_{r_{a}}\) attempt to preserve the safety of decisions made before asynchrony, unless they are corrupted, and they must sufficiently outnumber all other processes in order to do so. The reason why we include round \(r_{a}+\pi+1\), which is itself synchronous, is that round \(r_{a}+\pi\) being asynchronous means that messages are not guaranteed to be received in its receive phase, and thus that processes in \(H_{r_{a}+\pi+1}\) still do not necessarily have access to up-to-date messages. The second condition simply requires that all processes in \(H_{r_{a}}\) are still awake _at the end of round \(r_{a}\)_, so that they participate in the receive phase and in particular obtain messages for the current round, from the other processes in \(H_{r_{a}}\). Knowledge of these messages is what prevents them from "changing their mind" during the period of asynchrony.
_Relationship with \(\tau\)-sleepiness [3]._ The \(\tau\)-sleepy model in the work of D'Amato and Zanolini [3] deals with the same problem. There, the churn rate is not bounded explicitly, and instead a single all-encompassing assumption is made, called the \(\tau\)-sleepiness condition, the equivalent of which in our framework is 3:
Footnote 3: The original formulation more closely resembles \(|B_{r}\cup H_{r-\tau,r-1}\setminus H_{r}|<\rho|H_{r}|\), where \(\rho=\frac{\beta}{1-\beta}\). This is equivalent to \(|B_{r}\cup H_{r-\tau,r-1}\setminus H_{r}|<\beta|S_{r-\tau,r}|\), and in turn to \(|H_{r}|>(1-\beta)|S_{r-\tau,r}|\).
\[|H_{r}|>(1-\beta)|S_{r-\tau,r}| \tag{5}\]
As in this work, they consider all processes whose messages might be unexpired, precisely \(S_{r-\tau,r}\), and thus affect the protocol in the following round, and out of these they consider all such processes other than \(H_{r}\), i.e., the processes in \(B_{r}\cup H_{r-\tau,r-1}\setminus H_{r}\), as adversarial. These are therefore restricted by the failure rate \(\beta\) of the original protocol. While the \(\tau\)-sleepiness condition bounds \(|B_{r}\cup H_{r-\tau,r-1}\setminus H_{r}|\) at once, Equations 1 and 2 bound \(|H_{r-\tau,r-1}\setminus H_{r}|\) and \(|B_{r}|\) separately, allowing for a clearer interpretation of the modelling assumptions. Moreover, our two conditions imply \(\tau\)-sleepiness (proof in the full version). In order to keep the terminology consistent with the work of D'Amato and Zanolini [3], we also refer to our model with bounded churn as the \(\tau\)-sleepy model with periods of asynchrony of length \(\pi<\tau\).
_Relationship between \(\tau\), \(\eta\), and \(\pi\)._ This work introduces three key parameters, \(\tau\), \(\pi\), and \(\eta\). The first two are model parameters, whereas \(\eta\) is a protocol parameter. The parameter \(0\leq\eta\leq\infty\) defines the expiration period of a protocol, i.e., how old messages can be while still affecting the protocol. The parameter \(0\leq\tau\leq\infty\) is the number of rounds for which the system's churn rate is taken into account, and restricted, by the model. Specifically, the churn rate is bounded to ensure that the frequency at which active processes go to sleep does not exceed \(\gamma\) per \(\tau\) rounds, as illustrated in Equation 1. When analyzing a protocol with expiration period \(\eta\), it is sensible to consider the model with parameter \(\tau=\eta\), so that bounding the churn exactly corresponds to bounding the unexpired messages from asleep validators. Though one could more generally consider \(\tau\geq\eta\), this needlessly strengthens the assumptions, so we will henceforth take \(\tau=\eta\). Lastly, the parameter \(1\leq\pi\leq\infty\) specifies the duration of the assumed asynchronous period in the model. We are going to require that \(\pi<\tau=\eta\) in order for messages from the last synchronous rounds to remain unexpired throughout the period of asynchrony, and as long as necessary to ensure asynchrony resilience. Given that \(1\leq\pi<\tau\), it is evident that for the concept of "asynchrony resilience" (Definition 5) to be applicable in our model, both \(\tau\) and \(\eta\) must be set to at least \(2\), or no round of asynchrony can be resisted.
**Definition 7** (Byzantine total-order broadcast in the \(\tau\)-sleepy model).: A Byzantine total-order broadcast protocol satisfies \(\tau\)-safety and \(\tau\)-liveness if it satisfies safety and liveness (Definition 2), respectively, in the \(\tau\)-sleepy model.
**Definition 8** (\(\tau\)-Dynamically available Byzantine TOB).: We say that a Byzantine total-order broadcast protocol is \(\tau\)_-dynamically-available_ if and only if the protocol is a Byzantine total-order broadcast in the \(\tau\)-sleepy model i.e., it satisfies \(\tau\)-safety and \(\tau\)-liveness.
## 5 Asynchrony resilient Byzantine total-order broadcast
Proposition 1 shows how the dynamically available total-order broadcast of Malkhi, Momose, and Ren [7] fails to preserve safety of logs decided before a period of asynchrony starts. The reason is that during asynchrony the adversary has control over the message delivery schedule and can make well-behaved processes perceive an adversarially manipulated participation level, forcing them to decide conflicting logs.
In this section we show how to make such total-order broadcast resilient to bounded periods of asynchrony by working in the \(\tau\)-sleepy model (Section 4).
In order to devise a Byzantine total-order protocol with deterministic safety that can effectively handle periods of bounded asynchrony, it becomes essential to extend the concept of a graded agreement protocol. This adjustment is crucial in facilitating discussions about the "messages received in previous rounds."
Graded agreement, by nature, is a _one-shot_ primitive. This means it does not produce a sequence of logs but rather is instantiated with specific inputs and, once it provides output, its execution terminates. In this framework, arguments pertaining to "unexpired messages from previous rounds" do not fit seamlessly.
In the subsequent sections, we demonstrate how to enhance the graded agreement protocol initially presented in Section 3 and we also establish that this improved primitive upholds the properties of graded agreement, as outlined in Definition 4.
### Extended weak graded agreement protocol
In this section, we elaborate on the extension of the graded agreement protocol initially presented in Figure 2. This extended protocol retains a send and receive phase at round \(r\), with the send phase remaining unchanged.
In the output phase, each process \(p_{i}\in H_{r+1}\) comes equipped with an initial set of vote messages, denoted as \(\mathcal{M}_{0}^{i}\). These messages originate from a set of processes \(\mathcal{P}_{0}\), each supporting a specific log \(\Lambda\). We require that the cardinality of \(H_{r}\) exceeds \(\frac{2}{3}\) of the cardinality of \(S_{r}\cup\mathcal{P}_{0}\), and each set \(\mathcal{M}_{0}^{i}\) contains a maximum of one message per process.
Process \(p_{i}\) tallies all the votes it has accumulated from round \(r\) and discards equivocations. Furthermore, it discards votes in \(\mathcal{M}_{0}^{i}\) sent by processes from which \(p_{i}\) has received a new vote message in round \(r\). As a result, by the end of the protocol, process \(p_{i}\) holds at most one vote per process in \(\mathcal{P}_{0}\cup S_{r}\). The vote message from round \(r\) takes precedence over the initial set of votes \(\mathcal{M}_{0}^{i}\).
The set of all remaining vote messages, referred to as \(\mathcal{M}_{r}^{i}\), is then employed to output logs with a grade, aligning with the methodology in Figure 2. The requirement for grade \(0\) is a quorum of \(\frac{1}{3}\) and for grade \(1\), a quorum of \(\frac{2}{3}\).
It is worth noting that when \(\mathcal{M}_{0}^{i}=\emptyset\) for all \(p_{i}\), we revert to the standard graded agreement from Figure 2.
Figure 3: Extended Weak Graded Agreement \(GA\)[7] initialized with a set \(\mathcal{M}_{0}^{i}\) of vote messages from a set of processes \(\mathcal{P}_{0}\), each supporting some log \(\Lambda\) – protocol for process \(p_{i}\).
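To make the output phase above concrete, the following is a minimal sketch of the tallying and grading logic, not the protocol's actual pseudocode from Figure 3: the function and variable names (`output_phase`, `initial_votes`, `round_r_votes`) are ours, logs are modeled as tuples of entries, and the \(\frac{1}{3}\) and \(\frac{2}{3}\) quorums are taken relative to the perceived participation described in the text.

```python
def is_extension(log, prefix):
    """True if `log` extends `prefix`, i.e., `prefix` is a prefix of `log`."""
    return len(log) >= len(prefix) and tuple(log[:len(prefix)]) == tuple(prefix)


def output_phase(initial_votes, round_r_votes):
    """
    initial_votes:  dict sender -> log, the set M_0^i (at most one vote per process).
    round_r_votes:  dict sender -> set of logs received from that sender in round r.
    Returns a dict log -> grade in {0, 1}.
    """
    merged = {}
    # Round-r votes take precedence; senders that equivocated in round r are dropped.
    for sender, logs in round_r_votes.items():
        if len(logs) == 1:
            merged[sender] = next(iter(logs))
    # Keep an initial vote only if no fresher round-r vote overrides it.
    for sender, log in initial_votes.items():
        if sender not in round_r_votes:
            merged[sender] = log

    votes = [tuple(v) for v in merged.values()]
    m = len(votes)                      # perceived participation |M_r^i|
    # Candidate logs: every prefix of some received vote.
    candidates = {v[:k] for v in votes for k in range(len(v) + 1)}

    graded = {}
    for cand in candidates:
        support = sum(1 for v in votes if is_extension(v, cand))
        if support > 2 * m / 3:
            graded[cand] = 1            # grade-1 quorum: > 2/3 of perceived participation
        elif support > m / 3:
            graded[cand] = 0            # grade-0 quorum: > 1/3 of perceived participation
    return graded
```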
**Lemma 1**.: _The extended weak graded agreement presented in Figure 3 satisfies the original properties of weak graded agreement (Definition 4). It moreover satisfies the following property, both for synchronous and asynchronous rounds._
**Clique validity:**: _Consider_ \(H^{\prime}\subset H_{r}\cup H_{r+1}\) _such that all_ \(p_{i}\in H^{\prime}\cap H_{r}\) _have an extension of_ \(\Lambda\) _as input, and such that, for any_ \(p_{i}\in H^{\prime}\cap H_{r+1}\)_,_ \(\mathcal{M}_{0}^{i}\) _contains a message from each process in_ \(H^{\prime}\)_, also all for some extension of_ \(\Lambda\)_. Moreover, suppose that_ \(|H^{\prime}|>\frac{2}{3}|S_{r}\cup P_{0}|\)_. Then, all processes in_ \(H^{\prime}\cap H_{r+1}\) _output_ \(\Lambda\) _with grade 1._
Proof.: The proofs of the shared properties are similar to those in the original protocol. There, we use that \(|H_{r}|>\frac{2}{3}|S_{r}|\), whereas here we use \(|H_{r}|>\frac{2}{3}|S_{r}\cup\mathcal{P}_{0}|\) in an analogous manner, as \(S_{r}\cup\mathcal{P}_{0}\) is the set of all processes whose messages can influence the outputs, much like \(S_{r}\) in the original protocol.
Let us consider a round \(r\), and let \(n_{r}\) be the maximum possible perceived participation by any well-behaved participant awake in round \(r\), i.e., \(n_{r}=\left|\mathcal{P}_{0}\cup S_{r}\right|\). We repeatedly use the assumption that \(|H_{r}|>\frac{2}{3}n_{r}\). Moreover, for all properties other than clique validity, network synchrony is assumed, so we repeatedly use that, for all \(p_{i}\in H_{r+1}\), \(H_{r}\subset\mathcal{M}_{r}^{i}\), since all well-behaved messages from \(H_{r}\) are broadcast on time and thus received by the end of the round.
For the _graded consistency_ property, let us assume that process \(p_{i}\) outputs a log \(\Lambda\) with grade 1 and let \(m=|\mathcal{M}_{r}^{i}|\leq n_{r}\) be the perceived participation of process \(p_{i}\). Moreover, let \(S\) be the set of processes whose message in \(\mathcal{M}_{r}^{i}\) is for an extension of \(\Lambda\). By assumption, \(|S|>\frac{2}{3}m\), and \(|H_{r}|>\frac{2}{3}n_{r}\). Moreover, \(|S|+|H_{r}|-|S\cap H_{r}|=|S\cup H_{r}|\leq m\), since \(S,H_{r}\subset\mathcal{M}_{r}^{i}\). Therefore, \(|S\cap H_{r}|\geq|S|+|H_{r}|-m>\frac{2}{3}(n_{r}+m)-m=\frac{2}{3}n_{r}-\frac{m }{3}\geq\frac{2}{3}n_{r}-\frac{n_{r}}{3}=\frac{n_{r}}{3}\), i.e., \(|S\cap H_{r}|>\frac{n_{r}}{3}\). For any process \(p_{j}\in H_{r+1}\), \(S\cap H_{r}\subset\mathcal{M}_{r}^{j}\), so \(p_{j}\) counts \(>\frac{n_{r}}{3}\) votes for extensions of \(\Lambda\), and it thus outputs \(\Lambda\) with at least grade 0.
The proof for the _integrity_ property follows from a very similar argument as for graded consistency, in this case with \(|S|>\frac{m}{3}\). In particular, \(|S\cap H_{r}|\geq|S|+|H_{r}|-m>\frac{m}{3}+\frac{2}{3}n_{r}-m=\frac{2}{3}(n_{r} -m)\). Since \(m\leq n_{r}\), it follows that \(S\cap H_{r}\neq\emptyset\), implying that at least a well-behaved process voted for a log extending \(\Lambda\).
For _validity_, let \(\Lambda\) be the longest common prefix among well-behaved processes' input logs at round \(r\). Every process in \(H_{r}\) multi-casts a vote message for an extension of \(\Lambda\). The proof easily follows from the assumption that \(|H_{r}|>\frac{2}{3}n_{r}\), and from \(H_{r}\subset\mathcal{M}_{r}^{i}\) for all \(p_{i}\in H_{r+1}\).
To prove _uniqueness_, let us assume that a well-behaved process \(p_{i}\) awake at round \(r\) outputs a log \(\Lambda\) with grade 1. By the same logic as in the graded consistency property, we have that every other well-behaved process \(p_{j}\) awake at round \(r\) sees \(|S\cap H_{r}|>\frac{n_{r}}{3}\) vote messages for an extension of \(\Lambda\). This implies that there cannot be a well-behaved process \(p_{j}\) that sees more than \(\frac{2}{3}m\) vote messages for a conflicting log.
For _bounded divergence_, observe that in order to be output with any grade by process \(p_{i}\), a log \(\Lambda\) must be voted by more than \(\frac{m}{3}\) processes, where \(m=|\mathcal{M}_{r}^{i}|\) is the perceived participation of \(p_{i}\). Recall that \(\mathcal{M}_{r}^{i}\) contains at most one message per process. Thus, each process outputs at most two conflicting logs.
Finally, for the _clique validity_ property, let us consider a process \(p_{i}\in H^{\prime}\cap H_{r+1}\). By assumption, \(\mathcal{M}_{0}^{i}\) contains a vote message for some extension of \(\Lambda\) from each process in \(H^{\prime}\). Since all vote messages from \(H^{\prime}\cap H_{r}\) are also by assumption for some extension of \(\Lambda\), it is the case that \(\mathcal{M}_{r}^{i}\) also contains a vote message for each process in \(H^{\prime}\), all for extensions of \(\Lambda\). By assumption, \(|H^{\prime}|>\frac{2}{3}|S_{r}\cup P_{0}|=\frac{2}{3}n_{r}\) which implies that \(\Lambda\) is output with grade 1 by \(p_{i}\).
### Extended Byzantine total-order broadcast protocol
In this section, we show that the extended weak graded agreement protocol from the previous section can be used to capture expiration of messages in the \(\eta\)-sleepy model, allowing us to simply prove safety and liveness of Algorithm 1 in the \(\eta\)-sleepy model with messages subject to expiration.
Recall that Algorithm 1 proceeds in views of two rounds each, and in each round an instance of graded agreement (Figure 2) is started. In order to make Algorithm 1 asynchrony resilient, we modify it to use the latest unexpired messages as inputs in its graded agreement instances, i.e., a process \(p_{i}\in H_{r+1}\) computes its outputs from a \(GA\) instance started in round \(r\) based on the set of unexpired, latest messages, i.e., the latest among those from rounds \([r-\eta,r]\), with equivocating latest messages being discarded. From this point, "Algorithm 1 modified to use latest unexpired messages", or simply "the modified Algorithm 1", refers precisely to this modified protocol.
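As a minimal illustration of this selection rule (not the authors' code), the sketch below keeps, for each sender, only its latest vote among rounds \([r-\eta,r]\) and drops senders whose latest unexpired votes equivocate; the function name and the `(sender, round, log)` message layout are assumptions of ours.

```python
def latest_unexpired_votes(received, r, eta):
    """
    received: iterable of (sender, round, log) vote messages seen so far.
    Returns a dict sender -> log containing each sender's latest unexpired vote,
    with senders whose latest unexpired votes equivocate discarded.
    """
    latest = {}  # sender -> (round, set of logs voted at that round)
    for sender, rnd, log in received:
        if not (r - eta <= rnd <= r):          # keep only unexpired rounds [r - eta, r]
            continue
        if sender not in latest or rnd > latest[sender][0]:
            latest[sender] = (rnd, {tuple(log)})
        elif rnd == latest[sender][0]:
            latest[sender][1].add(tuple(log))  # same-round duplicate: possible equivocation
    return {s: next(iter(logs)) for s, (_, logs) in latest.items() if len(logs) == 1}
```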
Note that a \(GA\) instance in the modified Algorithm 1 corresponds exactly to a specific instance of the extended weak graded agreement depicted in Figure 3, in the sense that the inputs and outputs of well-behaved processes are the same in the two instances. In particular, the \(GA\) instance at round \(r\) corresponds to an instance of extended weak graded agreement protocol where the initial set \(\mathcal{M}_{0}^{i}\) of process \(p_{i}\in H_{r+1}\) is taken to contain the set of all latest messages among those from rounds \([r-\eta,r)\) seen by \(p_{i}\) by the receive phase of round \(r\), with equivocating latest messages being discarded. In fact, the set \(\mathcal{M}_{r}^{i}\), which \(p_{i}\) uses to determine its output in this instance of the extended weak graded agreement protocol, then contains simply all latest unexpired messages, i.e., the latest messages among those from rounds \([r-\eta,r]\) (without equivocations). Therefore, the output of \(p_{i}\) in such an instance corresponds to its output in this round of the modified Algorithm 1.
Since we have this correspondence of outputs, it is helpful to show that the assumptions of the extended weak graded agreement protocol hold in these particular instances we have now constructed, because then its properties (which are all statements on the relationship between inputs and outputs) extend to the graded agreement instances in the modified Algorithm 1. For an extended weak graded agreement protocol happening at a synchronous round \(r\), we have required that \(|H_{r}|>\frac{2}{3}|S_{r}\cup\mathcal{P}_{0}|\). In the particular instance we have constructed above, \(\mathcal{P}_{0}\), the set of senders of messages in \(\mathcal{M}_{0}^{i}\), is contained in \(H_{r-\eta,r-1}\cup B_{r}\subseteq S_{r-\eta,r}\), since by construction \(\mathcal{M}_{0}^{i}\) contains only messages from rounds \([r-\eta,r)\), and we are at round \(r\). Therefore, \(|S_{r}\cup\mathcal{P}_{0}|\leq|S_{r-\eta,r}|\), and thus \(|H_{r}|>\frac{2}{3}|S_{r}\cup\mathcal{P}_{0}|\) immediately follows from the \(\eta\)-sleepiness assumption, \(|H_{r}|>\frac{2}{3}|S_{r-\eta,r}|\). Since all the assumptions hold, Lemma 1 guarantees that graded consistency, integrity, validity, uniqueness, and bounded divergence all apply to the extended weak graded agreement instances in our modification of Algorithm 1.
**Theorem 1**.: _Algorithm 1 with the extended weak graded agreement protocol implements Byzantine total-order broadcast._
Proof.: As we have just discussed, each instance of the extended weak graded agreement protocol utilized in the modified Algorithm 1 satisfies the five properties of the graded agreement primitive from Malkhi, Momose, and Ren [7]. Since the safety and liveness proofs of Algorithm 1 (Lemma 6 and Lemma 7 of [7]) rely entirely on these properties, they apply to the modified Algorithm 1 as well.
Recall that \(r_{a}\) is the last round before asynchrony starts. We have the following results for Algorithm 1 modified to use latest messages.
**Lemma 2**.: _Let \([r_{a}+1,r_{a}+\pi]\) with \(\pi<\tau\) be the period of asynchrony. If every process \(p_{i}\) in \(H_{r_{a}}\) multi-casts a vote message for an extension of a log \(\Lambda\) in round \(r_{a}\), then every process \(p_{i}\in H_{r_{a}}\cap H_{r}\setminus B_{r}\) multi-casts a vote message for an extension of log \(\Lambda\), for every round \(r\in[r_{a}+1,r_{a}+\pi+1]\)._
Proof.: We prove this lemma through an inductive argument.
The base case is round \(r_{a}+1\). By assumption we have that \(H_{r_{a}}\subseteq H_{r_{a}+1}\), i.e., processes participating in the send phase of the extended weak graded agreement (GA) of round \(r_{a}\) also participate in its receive phase. In particular, since round \(r_{a}\) is synchronous by assumption, each process \(p_{i}\in H_{r_{a}}\) receives all the vote messages for an extension of \(\Lambda\) sent by other processes in \(H_{r_{a}}\) in round \(r_{a}\). By the validity property of the extended weak graded agreement, every process in \(H_{r_{a}+1}\) outputs log \(\Lambda\) from GA with grade \(1\), and thus multi-casts a vote message for an extension of it in the next instance of the extended weak graded agreement of round \(r_{a}+1\).
For the inductive step, suppose that every process \(p_{i}\in H_{r_{a}}\cap H_{r}\setminus B_{r}\) multi-casts a vote message for an extension of log \(\Lambda\), for every round \(r\in[r_{a}+1,r^{\prime}]\), \(r^{\prime}<r_{a}+\pi+1\). Let \(H^{\prime}=H_{r_{a}}\setminus B_{r^{\prime}}\), and observe that for every \(p_{i}\in H^{\prime}\cap H_{r^{\prime}+1}\), the set \(\mathcal{M}_{0}^{i}\) contains all latest unexpired vote messages from rounds \(<r^{\prime}\) that \(p_{i}\) has received. In particular, it contains a latest, unexpired message from each process in \(H_{r_{a}}\setminus B_{r^{\prime}}\), all from rounds no later than \(r_{a}\). This is because messages from round \(r_{a}\) from \(H_{r_{a}}\) were previously received in round \(r_{a}\), and these are still unexpired, since \(r^{\prime}+1-\eta\leq r_{a}+\pi+1-\eta\leq r_{a}\). By the inductive assumption, all such latest messages are for an extension of \(\Lambda\). It is then the case that all \(p_{i}\in H^{\prime}\cap H_{r^{\prime}}\) have an extension of \(\Lambda\) as input, and that, for any \(p_{i}\in H^{\prime}\cap H_{r^{\prime}+1}\), \(\mathcal{M}_{0}^{i}\) contains a message from each process in \(H^{\prime}\), also all for some extension of \(\Lambda\), as required by the assumptions of clique validity. To apply clique validity, we only need to show that \(|H^{\prime}|>\frac{2}{3}|S_{r^{\prime}}\cup P_{0}|\). Equation 4 gives us that \(|H^{\prime}|=|H_{r_{a}}\setminus B_{r^{\prime}}|>\frac{2}{3}|S_{r^{\prime}-\eta,r^{\prime}}|\), which immediately implies the desired result, because by construction \(\mathcal{P}_{0}\subset S_{r^{\prime}-\eta,r^{\prime}}\), so \(|S_{r^{\prime}}\cup\mathcal{P}_{0}|\leq|S_{r^{\prime}-\eta,r^{\prime}}|\).
The following Lemma describes the behavior of MMR under synchrony, which is preserved when modifying Algorithm 1 to use latest unexpired messages, as we have already argued.
**Lemma 3**.: _Let \(\Lambda\in D_{r_{a}}\) be a log decided in a round \(r\leq r_{a}\). In every round \(r^{\prime}\in[r,r_{a}]\), every process \(p_{i}\in H_{r^{\prime}}\) multi-casts a vote message for an extension of log \(\Lambda\)._
Proof.: Let \(\Lambda\in D_{r_{a}}\) be a log decided in a round \(r\leq r_{a}\) by process \(p_{i}\) awake at round \(r\). We prove this result through induction on rounds \(r^{\prime}\in[r,r_{a}]\).
The base case, i.e., \(r^{\prime}=r\) follows from the graded consistency property of the extended weak graded agreement. In particular, if an awake and well-behaved process \(p_{i}\) decides \(\Lambda\) in round \(r\leq r_{a}\), then, since \(r\) is a synchronous round, all processes \(p_{i}\in H_{r}\) multi-cast a vote message for an extension of \(\Lambda\).
For the induction step, suppose that every process in \(H_{r^{\prime}}\) multi-casts a vote message for an extension of \(\Lambda\). Then, by the validity property of the extended weak graded agreement, every process in \(H_{r^{\prime}+1}\) outputs \(\Lambda\) with grade \(1\). Regardless of whether \(r^{\prime}\) corresponds to round \(1\) or \(2\) of its view, this implies that all such processes multi-cast a vote message for an extension of \(\Lambda\).
**Theorem 2**.: _Algorithm 1 with the extended weak graded agreement protocol is \(\pi\)-asynchrony resilient for \(\pi<\tau\)._
Proof.: Let \(\Lambda\in D_{r_{a}}\) be a log decided in a round \(r\leq r_{a}\) and let \([r_{a}+1,r_{a}+\pi]\) with \(\pi<\tau\) be the period of asynchrony. By Lemma 3, all processes in \(H_{r_{a}}\) multi-cast a vote message for an extension of \(\Lambda\) in rounds \([r,r_{a}]\). In particular they do so in round \(r_{a}\), so we can apply Lemma 2 and conclude that every process in \(H_{r_{a}}\cap H_{r^{\prime}}\setminus B_{r^{\prime}}\) also multi-casts a vote message for an extension of \(\Lambda\) in round \(r^{\prime}\), for any round \(r^{\prime}\in[r_{a}+1,r_{a}+\pi+1]\). Firstly, this shows that no process \(p_{i}\in H_{r_{a}}\) ever decides a log \(\Lambda^{\prime}\) conflicting with \(\Lambda\) in rounds \([r,r_{a}+\pi+1]\), as this would imply multi-casting a vote message for an extension of \(\Lambda^{\prime}\). Moreover, since round \(r_{a}+\pi+1\) is synchronous by assumption, all vote messages from rounds \([r_{a},r_{a}+\pi+1]\) are delivered in the receive phase of the round, to all well-behaved processes which are awake during it, i.e., to processes in \(H_{r_{a}+\pi+2}\). Any such process would then have received all messages sent by processes in \(H_{r_{a}}\) in rounds \([r_{a},r_{a}+\pi+1]\), which are all unexpired at round \(r_{a}+\pi+2\), since the expiration period for it starts at round \((r_{a}+\pi+2)-1-\eta=r_{a}+1+\pi-\eta\leq r_{a}\), as \(\pi<\eta\). Therefore, any process \(p_{i}\in H_{r_{a}+\pi+2}\) has received an unexpired message from each process in \(H_{r_{a}}\setminus B_{r_{a}+\pi+1}\), all for an extension of \(\Lambda\), since no other messages were cast during rounds \([r_{a},r_{a}+\pi+1]\) by such processes. In particular, the latest of these messages is then for an extension of \(\Lambda\). Equation 4 then gives us \(|H_{r_{a}}\setminus B_{r_{a}+\pi+1}|>\frac{2}{3}|S_{r_{a}+\pi+1-\eta,r_{a}+\pi+1}|\), so more than \(\frac{2}{3}\) of all latest unexpired messages seen by \(p_{i}\) are for an extension of \(\Lambda\), and thus \(p_{i}\) outputs \(\Lambda\) with grade \(1\) and multi-casts a vote message for an extension of it in round \(r_{a}+\pi+2\). Since rounds \(\geq r_{a}+\pi+2\) are synchronous, we can then apply the same inductive reasoning as in the original protocol (MMR) and conclude that all processes in \(H_{r^{\prime}}\) multi-cast a vote message for an extension of \(\Lambda\) in any round \(r^{\prime}\geq r_{a}+\pi+2\). In particular, this rules out any decision for a conflicting log in such rounds. Overall, we have shown that no process in \(H_{r_{a}}\) ever decides a log conflicting with \(D_{r_{a}}\), and after round \(r_{a}+\pi+1\) no well-behaved process at all decides a log conflicting with \(D_{r_{a}}\), i.e., that the protocol is \(\pi\)-asynchrony-resilient.
**Theorem 3**.: _Algorithm 1 with the extended weak graded agreement protocol heals after any period of asynchrony, after \(k=1\) slot._
Proof.: The argument is the same as for the original MMR protocol, except using \(\eta\)-sleepiness to ensure that the graded agreement properties hold. Let \(r\) be the first round after asynchrony, and let \(v\) be any view whose first round is \(\geq r\). \(\eta\)-sleepiness holds at all rounds of views \(\geq v\), so all such rounds satisfy the graded agreement properties. Thus, all decisions made in views \(\geq v\) are safe, and all proposals from well-behaved leaders made in such views have a probability \(\frac{1}{2}\) of being decided. In other words, the protocol is safe and live after round \(r\).
## 6 Related work
The literature on distributed consensus protocols has evolved over time to cater to the realities of large-scale permissionless networks and fluctuating participation. One of the foundational works in this area is the
"Sleepy Model of Consensus" [11] by Pass and Shi. This paper presented a significant shift in consensus protocols, introducing the concept of participants fluctuating between being online (alert or, in our terminology, awake) or offline (asleep). It proposed a model that remains resilient under "sporadic participation," where at any given point, only a subset of participants are actually online.
In response to the latency challenge inherent in the longest-chain protocols such as Bitcoin [9], Momose and Ren [8] present a protocol that supports dynamic participation while achieving constant latency. The authors do this by extending the classic Byzantine Fault Tolerance (BFT) approach from a static quorum size to a dynamic one, adjusting according to the current level of participation.
Another stride towards accommodating fluctuating participation was made by Malkhi, Momose, and Ren [7]. This work presents a protocol with a significantly reduced latency of three rounds, which tolerates one-third malicious participants and allows fully dynamic participation of both honest and malicious participants.
Malkhi, Momose, and Ren [12] informally present in a blog post an extension of the work of Momose and Ren [8], providing a Byzantine consensus solution under dynamic and unknown participation with an assumption of minority corruption. The original work did not fully support fluctuation, as it made progress only under stable participation. The authors propose a solution that removes this limitation, allowing optimal \(\frac{1}{2}\) corruption threshold.
Gafni and Losa [6] present two consensus algorithms that tolerate a ratio of \(\frac{1}{2}\) malicious failures in the sleepy model. The first algorithm achieves deterministic safety and probabilistic liveness with constant expected latency, while the second, albeit mainly of theoretical interest due to its high round and message complexity, offers deterministic safety and liveness.
Focusing on Ethereum's consensus protocol, D'Amato _et al._[2] propose a simplified protocol, Goldfish, intended to replace the LMD-GHOST consensus protocol. Goldfish is secure in synchronous networks under dynamic participation, also tolerating a failure ratio up to \(\frac{1}{2}\), and introduces a coordination mechanism to synchronize the honest participants' actions under dynamic participation.
Finally, D'Amato and Zanolini [3] tackle the issue of dynamic availability with respect to asynchrony. The work presents RLMD-GHOST, a synchronous consensus protocol that not only ensures dynamic availability but also maintains safety during bounded periods of asynchrony, introducing the "generalized sleepy model" to analyze dynamically available protocols under varied conditions.
## 7 Conclusions
This paper studied the problem of handling asynchrony in dynamically available protocols that are _deterministically safe_. Our main contribution revolves around the novel concept of a configurable message-expiration period, initially introduced by D'Amato and Zanolini (CSF 2024), applied to the dynamically available protocol of Malkhi, Momose, and Ren. By leveraging the latest votes of participants across multiple prior rounds instead of restricting to the current round, we introduced a mechanism to enhance the resilience of the protocol during asynchrony. In order to benefit from this approach, we introduced a "drop-off rate" to quantify the maximal fraction of online participants that can transition to an offline state. We show that this drop-off rate plays a central role in determining the maximum tolerable failure ratio during synchronous operations. The techniques utilized in this work are extensible and can be directly applied to other deterministically safe, dynamically available protocols. An in-depth analysis of this will be a subject for future research.
|
2309.03736 | TradingGPT: Multi-Agent System with Layered Memory and Distinct
Characters for Enhanced Financial Trading Performance | Large Language Models (LLMs), prominently highlighted by the recent evolution
in the Generative Pre-trained Transformers (GPT) series, have displayed
significant prowess across various domains, such as aiding in healthcare
diagnostics and curating analytical business reports. The efficacy of GPTs lies
in their ability to decode human instructions, achieved through comprehensively
processing historical inputs as an entirety within their memory system. Yet,
the memory processing of GPTs does not precisely emulate the hierarchical
nature of human memory. This can result in LLMs struggling to prioritize
immediate and critical tasks efficiently. To bridge this gap, we introduce an
innovative LLM multi-agent framework endowed with layered memories. We assert
that this framework is well-suited for stock and fund trading, where the
extraction of highly relevant insights from hierarchical financial data is
imperative to inform trading decisions. Within this framework, one agent
organizes memory into three distinct layers, each governed by a custom decay
mechanism, aligning more closely with human cognitive processes. Agents can
also engage in inter-agent debate. In financial trading contexts, LLMs serve as
the decision core for trading agents, leveraging their layered memory system to
integrate multi-source historical actions and market insights. This equips them
to navigate financial changes, formulate strategies, and debate with peer
agents about investment decisions. Another standout feature of our approach is
to equip agents with individualized trading traits, enhancing memory diversity
and decision robustness. These sophisticated designs boost the system's
responsiveness to historical trades and real-time market signals, ensuring
superior automated trading accuracy. | Yang Li, Yangyang Yu, Haohang Li, Zhi Chen, Khaldoun Khashanah | 2023-09-07T14:25:35Z | http://arxiv.org/abs/2309.03736v1 | TradingGPT: Multi-Agent System with Layered Memory and Distinct Characters for Enhanced Financial Trading Performance
###### Abstract
Large Language Models (LLMs), prominently highlighted by the recent evolution in the Generative Pre-trained Transformers (GPT) series, have displayed significant progress across various domains, such as aiding in healthcare diagnostics and curating analytical business reports. The efficacy of GPTs lies in their ability to decode human instructions, achieved through comprehensively processing historical inputs as an entirety within their memory system. Yet, the memory processing of GPTs does not precisely emulate the hierarchical nature of human memory, which is categorized into long, medium, and short-term layers. This can result in LLMs struggling to prioritize immediate and critical tasks efficiently. To bridge this gap, we introduce an innovative LLM multi-agent framework endowed with layered memories. We assert that this framework is well-suited for stock and fund trading, where the extraction of highly relevant insights from hierarchical financial data is imperative to inform trading decisions. Within this framework, one agent organizes memory into three distinct layers, each governed by a custom decay mechanism, aligning more closely with human cognitive processes. Agents can also engage in inter-agent communication and debate. In financial trading contexts, LLMs serve as the decision core for trading agents, leveraging their layered memory system to integrate multi-source historical actions and market insights. This equips them to navigate financial changes, formulate strategies, and debate with peer agents about investment decisions. Another standout feature of our approach is to enable agents with individualized trading characters, which enrich the diversity of their highlighted essential memories and improve decision-making robustness. By leveraging agents' layered memory processing and consistent information interchange, the entire trading system demonstrates augmented adaptability to historical trades and real-time market cues. This synergistic approach guarantees premier automated trading with heightened execution accuracy.
Financial AI, Multi-Modal Learning, Trading Algorithms, Deep Learning, Financial Technology
## 1 Introduction
As the influx of diverse data streams continues to rise, there is a growing need for individuals to effectively harness information. This trend is particularly pronounced in the realm of finance, where traders must consider multiple sources to inform their investment decisions. In light of this demand, researchers design intelligent trading robot-agents that can synthesize and interpret data objectively [14, 5]. These robot-agents harness diverse machine learning algorithms, assimilate a broader spectrum of data, autonomously refine trading strategies via methodical planning, and even potentially collaborate [7]. Here, we introduce an advanced LLM-powered multi-agent trading framework, supported by layered memories and customized characters. By fostering collaborative interactions among agents and capturing the intricate market dynamics from varied perspectives, this approach substantially elevates the performance of automated trading.
Previous studies have introduced multi-agent trading algorithms that employ machine learning techniques, such as reinforcement learning and have reported significant performance outcomes [5]. Yet, these methods exhibit limitations in precisely identifying, representing, and emulating crucial components of trading systems. This includes aspects like agents' memory archives and the evolving social interplay among agents.
LLMs, with a particular focus on their recent advancements, such as the Generative Pre-trained Transformer (GPT), have demonstrated remarkable effectiveness in enhancing human decision-making across various domains [9]. Notably, a growing body of research has focused on harnessing this technology to make informed trading decisions for stocks and funds by continuously interacting with financial environment information [17; 16]. While current financial LLM applications predominantly operate within single-agent systems based on textual uni-modality, their immense potential to elevate trading performance is becoming increasingly evident. Moreover, these financial agent systems make trading decisions relying solely on pre-trained LLMs or a memory system processing received information streams as an entirety. This can lead to a challenge for LLMs in efficiently prioritizing immediate and critical memory events for optimized trading.
Park et al. [10] recently introduced a generative agent framework aiming to enhance the efficient retrieval of critical events from agents empowered by LLMs. This structure comprises several agents, each distinguished by separate memory streams and unique character profiles configured by LLMs. Each agent, owning its seed memories, not only tracks its actions but also monitors other agents and environmental behaviors. Faced with a task, agents sift through memory segments to input into the language model, ranking them by recency, significance, and relevance. By archiving an agent's experiences, the system integrates individual weighted memories and the nuances of group dynamics. As a result, agents can collaboratively strategize, leveraging their collective knowledge. Moreover, Du et al. [3] presented a debate mechanism for LLM agents, emphasizing enhanced cooperative decision-making through debate phases in inter-agent memory interactions. These advancements align the LLM-driven multi-agent system more with human memory structures, paving the way for a more adept financial automated trading system.
Leveraging the capabilities of LLMs, we propose a novel trading agent framework, "TradingGPT". It offers a realistic scenario simulation through the integration of the trader's layered memory streams and character analysis. This framework is characterized by a remarkable self-enhancement ability and strong performance in automated trading and optimal execution. The primary contributions of our work include:
**This represents a pioneering multi-agent trading system that integrates memory streams and debate mechanisms,** anchored on LLMs. Building on Park et al.'s weighted memory mechanisms, our system innovatively categorizes the agent's memories into short-term, middle-term, and long-term layers, which are closely aligned with the structure of the human cognitive system. We adapt this layered memory framework to the financial trading system, equipping agents to reflect on past and present events, derive insights from trading performance, and leverage collective wisdom for future decisions. This approach improves the system's robustness.
**This marks the debut of an LLM agent trading system that incorporates character design.** The design assigns agents varying risk preferences, such as risk-seeking, risk-neutral, and risk-averse, and various investment subscopes across industries. This design enables these collaborative agents to resonate more with human intuition and gives them the potential to uncover latent market opportunities.
**Our trading system also integrates real-time multi-modal data from diverse information sources**, offering a comprehensive view of the financial landscape by encompassing both macro and micro perspectives, as well as historical trading records. With updates available on both daily and minute-by-minute frequencies, our system ensures prompt reactions to daily trades and offers the capability for high-frequency trading.
In this paper, we commence with an in-depth exposition of TradingGPT. We then present multi-modal datasets for the effective training of TradingGPT. We methodically evaluate the pivotal components of the system, illustrating their ability to yield notable results. We anticipate that, when deployed on representative fund firms like ARK, TradingGPT will markedly outperform other automated trading strategies.
## 2 Related Work
### Large language models (LLMs)
The evolution of LLMs has reshaped artificial intelligence and natural language processing. From foundational embeddings like Word2Vec [4] and GloVe [11], the field advanced with the introduction of BERT [2]. Today, the new-generation LLMs, like Generative Pre-trained Transformer series (GPTs) [12; 9] and Large Language Model Meta AI (Llamas) [15], demonstrate expressive proficiency across diverse applications.
### Generative agent system with memory streams and customized character design
Park et al. [10] introduced generative agents' memory streams and innovatively employed character design concepts from gaming, expanding LLM capabilities for the multi-agent system [13]. In their design, agents display human-like behaviors while retaining individual characters. They dynamically interact with peers and their environment, forging memories and relationships. Moreover, these agents coordinate collaborative tasks through natural language, creating a captivating fusion of artificial intelligence and interactive design.
### Multi-agent debate mechanism
Du et al. [3] introduced a debate mechanism leveraging multiple language models in a multi-agent system. Within this framework, various model instances propose answers, debate them, and collaboratively converge on a unified answer. This approach bolsters mathematical and strategic reasoning while enhancing the factual accuracy of the generated content.
## 3 Dataset and Database Structure
For TradingGPT's development, we systematically integrated an extensive array of multi-modal financial data from August 15, 2020, to August 15, 2023. These datasets were sourced from financial databases and APIs, exemplified by the Databento Stock Price Database, Alpaca News API, publicly available daily holdings history records from ARK, etc. This data serves two purposes: (a) to formulate multi-layer memories for agents, and (b) to train, guide, and back-test the agents using ARK funds' historical trading records, refining their trading decisions and actions. In our study, we employed FAISS [6], an open-source vector database, due to its capacity to store data as high-dimensional vectors, enabling semantic searches rather than exact matches. Two primary reasons informed our decision: (a) the majority of our data, including audio transcriptions from ARK Invest videos (translated to texts via the Whisper API), benefits from FAISS's unique underlying structure for fast data queries; (b) FAISS's compatibility with OpenAI embeddings and its efficient computation of cosine similarities for specific tickers. Incoming data is first stored in the Raw Input schema and is then channeled into the Agents' Cognition Schema, guided by both the system's foundational logic and LLM-agent processing. A comprehensive schema structure is shown in Figure 1.
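As an illustration of how such a FAISS-backed memory store can be wired up (a sketch under our own assumptions, not the authors' implementation: the `embed` callable stands in for an embedding service such as an OpenAI endpoint, and the helper names are ours):

```python
import numpy as np
import faiss  # open-source vector index used as the memory store


def build_memory_index(texts, embed, dim):
    """Index a list of memory texts; `embed(text)` must return a `dim`-dimensional vector."""
    index = faiss.IndexFlatIP(dim)               # inner product == cosine on normalised vectors
    vectors = np.vstack([embed(t) for t in texts]).astype("float32")
    faiss.normalize_L2(vectors)
    index.add(vectors)
    return index


def query_memory(index, texts, query, embed, k=5):
    """Return the k stored texts most similar (by cosine similarity) to `query`."""
    q = np.asarray(embed(query), dtype="float32").reshape(1, -1)
    faiss.normalize_L2(q)
    scores, ids = index.search(q, k)
    return [(texts[i], float(s)) for i, s in zip(ids[0], scores[0]) if i != -1]
```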
## 4 Proposed Method
Our methodology integrates LLM across multiple facets of the trading agent workflow. Details and associated notation are provided in the subsequent sections.
Figure 1: TradingGPT Data Warehouse.
### Trading Agents Layered Generative Memory Formulation
In our LLM-based trading system, agents autonomously manage their actions and memory trajectories, engaging in communication and deliberation as needed.
#### 4.1.1 Layered-memory structure
Each agent within TradingGPT discerns and categorizes perceived information into three distinct memory layers: long-term, middle-term, and short-term. Compared to the approach of extracting key insights through the computation of ranked retrieval scores from all memories in the generative agent system [10], this layered memory approach introduces a more nuanced ranking mechanism for retrieving crucial events from individual layers. This closely aligns with the human cognition proposed by Atkinson et al.[1]. Our framework initially categorizes memories into separate lists for each layer, guided by predefined rules tailored to specific situations and the nature of events. Subsequently, within each memory layer, we leverage three crucial metrics, inspired by the work of Park et al. - recency, relevancy, and importance - to establish the hierarchical arrangement of events within an agent's memory. However, we have reconstructed their mathematical representations to attain a more logical and advanced formulation.
For a memory event \(E\) within the memory layer \(i\in\{\text{short, middle, long}\}\), upon the arrival of a prompt \(P\) from the LLM, the agent computes the recency score \(S^{E}_{\text{Recency}}\) as per Equation 1. This score inversely correlates with the time difference between the prompt's arrival and the event's memory timestamp, aligning with Ebbinghaus's forgetting curve on memory decay [8]. \(Q_{i}\) in Equation 1 represents the stability term, employed to control the memory decay rates across layers. A higher stability value in the long-term memory layer compared to the short-term layer suggests that memories persist longer in the former. The relevancy score \(S^{E}_{\text{Relevancy}}\) represents the cosine similarity between the embedding vectors for the textual content of the memory event \(\mathbf{m_{\text{E}}}\) and the prompt query \(\mathbf{m_{\text{P}}}\). The importance score \(S^{E}_{\text{Importance}}\) is determined using a uniform piecewise function as described in Equation 3, adhering to the relationship \(c_{\text{short}}<c_{\text{middle}}<c_{\text{long}}\). After normalizing their values to the [0,1] range using min-max scaling, these scores, \(S^{E}_{\text{Recency}}\), \(S^{E}_{\text{Relevancy}}\) and \(S^{E}_{\text{Importance}}\), are linearly combined to produce the final ranking score \(\gamma^{E}_{i}\) for each memory layer in Equation 4 (equivalent to the retrieval score in the study of Park et al.). In our setup, the ranking score thresholds, \(\gamma^{E}_{i}\), are 80 for long-term, 60 for middle-term, and 40 for short-term memory. Events scoring below 20 are removed.
\[S^{E}_{\text{Recency}}=e^{-\frac{\delta^{E}}{Q_{i}}},\quad\delta^{E}=t_{\text{P}}-t_{E} \tag{1}\]
, where \(Q_{\text{long}}=365\) for long-term, \(Q_{\text{middle}}=90\) for middle-term, and \(Q_{\text{short}}=3\) for short-term events.
\[S^{E}_{\text{Relevancy}}=\frac{\mathbf{m_{\text{E}}}\cdot\mathbf{m_{\text{P} }}}{\|\mathbf{m_{\text{E}}}\|_{2}\times\|\mathbf{m_{\text{P}}}\|_{2}} \tag{2}\]
\[S^{E}_{\text{Importance}}=\begin{cases}c_{\text{short}}&\text{if short-term memory}\\ c_{\text{middle}}&\text{if middle-term memory}\\ c_{\text{long}}&\text{if long-term memory}\end{cases} \tag{3}\]
, where \(c_{\text{short}},c_{\text{middle}}\) and \(c_{\text{long}}\) are all constants.
\[\gamma^{E}_{i}=\alpha^{E}_{i}\times S^{E}_{\text{Recency}_{i}}+\beta^{E}_{i}\times S^{E}_{\text{Relevancy}_{i}}+\lambda^{E}_{i}\times S^{E}_{\text{Importance}_{i}} \tag{4}\]
where each memory event is only associated with one score, as it can only belong to one of the memory layers.
To ensure dynamic interactions across memory layers, we define upper and lower thresholds for memory event ranking scores in each layer. We also utilize an add-counter function to boost the scores of events that are triggered by trading executions resulting from significant trading profits and losses. This promotes frequent events to transition from short-term to potentially longer-term memory, enhancing their retention and recall by agents. The hyperparameters \(\alpha^{E}_{i}\), \(\beta^{E}_{i}\), and \(\lambda^{E}_{i}\) exhibit variations across different layers. The transferable layered memory system allows the agents to capture and prioritize crucial memory events by considering both their types and frequencies when conducting queries.
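A compact sketch of Equations 1-4 follows. It is illustrative only: the importance constants \(c_{i}\), the weights \(\alpha\), \(\beta\), \(\lambda\), and the omission of the min-max normalization step are our simplifications, while the stability values \(Q_{i}\) follow the text.

```python
import math
import numpy as np

# Stability Q_i from the text (in days); importance constants and weights are
# illustrative placeholders, not the authors' tuned values.
STABILITY = {"short": 3.0, "middle": 90.0, "long": 365.0}
IMPORTANCE = {"short": 0.3, "middle": 0.6, "long": 0.9}
WEIGHTS = {"short": (1.0, 1.0, 1.0), "middle": (1.0, 1.0, 1.0), "long": (1.0, 1.0, 1.0)}


def recency(delta_days, layer):
    """Eq. (1): exponential decay of a memory with the layer's stability Q_i."""
    return math.exp(-delta_days / STABILITY[layer])


def relevancy(m_event, m_prompt):
    """Eq. (2): cosine similarity between event and prompt embeddings."""
    m_event, m_prompt = np.asarray(m_event), np.asarray(m_prompt)
    return float(m_event @ m_prompt) / (np.linalg.norm(m_event) * np.linalg.norm(m_prompt))


def ranking_score(delta_days, m_event, m_prompt, layer):
    """Eq. (4): weighted combination of recency, relevancy and layer importance."""
    alpha, beta, lam = WEIGHTS[layer]
    return (alpha * recency(delta_days, layer)
            + beta * relevancy(m_event, m_prompt)
            + lam * IMPORTANCE[layer])
```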
#### 4.1.2 Memory formulated by individual experience
In the trading paradigm, macro-level market indicators are stored in the long-term memory, quarterly investment strategies are allocated to the mid-term memory, and daily investment messages are channeled into the short-term
memory. These three memory classes constitute the initial structure within the Agents' Cognition Schema of our data warehouse in Figure. 1. In our trading system, agents make informed trading decisions relying on the outcomes of two distinct workflows: the single-agent workflow and the multi-agent workflow, as depicted on the left side of Figure 2.
In the single-agent workflow, when presented with a specific stock ticker, the agents' LLM core generates evaluations and reflections, which encompass trading recommendations and the reasons behind them, based on the essential events retrieved from their layered memory. Subsequently, the agent can proceed to execute trading actions in accordance with these generated insights. The key features that empower our system are (a) Immediate reflection: Conducted daily, this mechanism allows agents to consolidate top-ranked events of each memory layer and market facts, such as daily stock prices and ARK fund trading records. Using the LLM and specific prompts, agents generate one of five trading recommendations: "significantly increase position", "slightly increase position", "hold", "slightly decrease position", and "significantly decrease position", along with its justification. Each option is associated with a predetermined trade value, which can be adjusted to suit the business scale represented by the agents. Additionally, this reflection captures the agent's trade volumes and returns. (b) Extended reflection: This provides a broader performance overview over a designated period, like a week. It includes stock prices, the agent's trading trends, and self-evaluation. The immediate reflection guides trade execution directly, while the extended reflection acts as a supplementary reference for recalling recent investment transactions. Both types of reflections are stored in the Agents' Cognition Schema's reflection index, as shown in Figure 1, distinguished by a specific flag.
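For illustration, the mapping from the five recommendation labels to predetermined trade values could look like the sketch below; the numbers and names are placeholders of ours and would be scaled to the business size the agent represents.

```python
# Illustrative predetermined trade values (fraction of a base position size);
# these numbers are placeholders, not the paper's calibrated settings.
TRADE_VALUES = {
    "significantly increase position": +0.10,
    "slightly increase position":      +0.05,
    "hold":                             0.00,
    "slightly decrease position":      -0.05,
    "significantly decrease position": -0.10,
}


def reflection_to_order(ticker, recommendation, base_value=1_000_000.0):
    """Turn an immediate-reflection recommendation into a signed trade order for `ticker`."""
    fraction = TRADE_VALUES[recommendation]
    side = "buy" if fraction > 0 else "sell" if fraction < 0 else "none"
    return {"ticker": ticker, "side": side, "value": abs(fraction) * base_value}
```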
#### 4.1.3 Memory gained by interacting with other agents
For stocks that appear in multiple agents' trading portfolios, TradingGPT enables inter-agent dialogue via a debate mechanism. This mechanism encourages collaboration between agents typically specializing in distinct sectors, with the goal of optimizing trading outcomes. Within these debates, agents present their top-K layered memories as well as immediate reflections, encompassing recommendations, trade values, volumes, and returns, inviting feedback from their peers. All feedback is subsequently stored in the debate class of the Agents' Cognition Schema, tagged with the receiver's index, as shown in Figure. 1.
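A minimal data layout for such debate exchanges might look as follows; the field names and the keyed storage are our own assumptions, mirroring the description above.

```python
from dataclasses import dataclass
from typing import Dict, List


@dataclass
class DebateMessage:
    sender: str                 # agent sharing its view on the stock
    ticker: str                 # stock under debate
    top_memories: List[str]     # top-K layered memory snippets shared with peers
    recommendation: str         # e.g. "slightly increase position"
    trade_value: float
    volume: float
    returns: float


# Debate class of the Agents' Cognition Schema: feedback stored per receiving agent.
debate_store: Dict[str, List[DebateMessage]] = {}


def record_feedback(receiver: str, message: DebateMessage) -> None:
    debate_store.setdefault(receiver, []).append(message)
```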
### Design of Training and Testing Workflows
The distinct design of our training and testing workflows is crucial for curating valuable past memory events and strategizing optimal future trading actions.
#### 4.2.1 Training
The training process is twofold: a single-agent workflow followed by a multi-agent phase, as detailed in the left section of Fig. 2. In the single-agent phase, the LLM-driven agent is prompted with key data like stock ticker, date, and trader characters. Using this context, it evaluates top-K-ranked memories across each layer to derive preliminary investment signals, where K is a predefined hyperparameter. The LLM then synchronizes and analyzes these signals with market
Figure 2: TradingGPT training and test workflow.
data, such as daily records from fund firms like ARK and stock closing prices, leading the agent to formulate an immediate reflection and trade accordingly. Subsequently, the agent collaborates in the multi-agent phase, joining debates with agents trading the same stock from varied sectors on that day (refer to 4.1.3).
#### 4.2.2 Test
The testing process, illustrated in the right section of Figure. 2, blends single-agent and multi-agent operations. Both individually processed memories and insights from inter-agent exchanges are concurrently inputted into the LLM to inform trading decisions. Key differences from the training phase include: (a) During testing, agents operate without the guidance of trading records from the representative fund firm, relying solely on daily stock prices as market facts. (b) Time series patterns of prior training reflections and debates, covering a week in our setup, act as auxiliary references in the absence of substantial market ground truths, as noted in (a). Other aspects of the test workflow align with the training phase.
## 5 Current Stage And Future work
Our research consists of two phases: prompt design and ablation studies. We have crafted efficient LLM prompts using GPT3.5 turbo as the backbone, with examples of prompts that encapsulate the necessary insights for each phase of the TradingGPT training and testing workflow. The specific design of these prompts is illustrated in Figure 3.
With our established prompt template, we're poised to undertake ablation studies to assess the trading efficacy of agent systems based on various backbone models. This will involve comparisons within LLMs, such as GPT3.5 turbo versus CodeLlama 34B, and against models like multi-agent reinforcement learning. The training phase will utilize data spanning from August 15, 2020, to February 15, 2023, while the testing phase will extend until August 15, 2023. We'll assess performance using financial metrics like cumulative trade returns, volatility, and the Sharpe Ratio (see 4.1.2).
Harnessing an innovative multi-layer memory system and character design, our main goal is to establish a state-of-the-art LLM-based multi-agent automated trading system adaptable to various LLMs as its core. This system aspires to achieve superior trading performance over other leading trading agent systems by emulating human traders' cognitive behaviors and ensuring responsiveness in the constantly changing market scenario. We also posit that this LLM-based multi-agent design can improve working efficiency and collaborative performance in artificial systems across diverse sectors. Potential applications range from character development in video games to the creation of robo-consultants in business, healthcare, and technology domains.
Figure 3: Prompt template for key steps of TradingGPT workflow. |
2309.06128 | Exploring Anisotropic flow via the Boltzmann Transport Equation
Employing the Tsallis Blast Wave Description at LHC energies | Anisotropic flows $i.e.$ azimuthal anisotropies in particle production are
one of the important probes in characterizing the properties of the strongly
interacting matter created in the relativistic heavy-ion collisions. These
observables are sensitive to both the transport properties as well as the
equation of state (EOS) of Quantum Chromodynamics (QCD) matter. We have adopted
the Boltzmann transport equation (BTE) in the relaxation time approximation
(RTA) to describe the experimental data for harmonic flows such as elliptic
flow ($v_2$), triangular flow ($v_3$), quadrangular flow ($v_4$) obtained in
heavy-ion collisions at Large Hadron Collider (LHC) energies. In this analysis,
we have used Tsallis statistics as an initial distribution and the Tsallis
Blast wave (TBW) description is used as the equilibrium distribution function
while describing the evolution of the particle production in BTE. We have
fitted the transverse momentum spectra, $v_2$, $v_3$, and $v_4$ of identified
hadrons such as pion, kaon, and proton for Pb-Pb and Xe-Xe collisions at the
LHC energies of $\sqrt{s_{NN}}$ = 5.02 TeV and $\sqrt{s_{NN}}$ = 5.44 TeV,
respectively for various centralities. Our study offers a comparative analysis
between the two distinct collision systems operating at comparable collision
energies. The present formulation successfully fits the experimental data for
$p_T$-spectra upto $p_T$ = 8 GeV and effectively explains the anisotropic flows
data upto $p_T$ = 10 GeV with a very favourable $\chi^2/ndf$. We observe that
the average transverse flow velocity ($<\beta_r>$) and the kinetic freeze-out
temperature ($T$) extracted in our analysis decrease as we go towards the
peripheral collisions. The azimuthal modulation amplitudes ($\rho_a$) exhibit
an increasing pattern as one moves from central to peripheral collisions in
both the Pb-Pb and Xe-Xe nuclei interactions. | Aviral Akhil, Swatantra Kumar Tiwari | 2023-09-12T11:06:51Z | http://arxiv.org/abs/2309.06128v3 | Exploring Anisotropic flow via the Boltzmann Transport Equation Employing the Tsallis Blast Wave Description at LHC energies
###### Abstract
Anisotropic flows _i.e._ azimuthal anisotropies in the particle production are one of the important probes in characterizing the properties of the strongly interacting matter created in the relativistic heavy-ion collisions. These observables are sensitive to both the transport properties as well as the equation of state (EOS) of the Quantum Chromodynamics (QCD) matter. We have adopted the Boltzmann transport equation (BTE) in the relaxation time approximation (RTA) to describe the experimental data for harmonic flows such as elliptic flow (\(v_{2}\)), triangular flow (\(v_{3}\)), quadrangular flow (\(v_{4}\)) obtained in heavy- ion collisions at Large Hadron Collider (LHC) energies. In this analysis, we have used Tsallis statistics as an initial distribution and the Tsallis Blast wave (TBW) description is used as the equilibrium distribution function while describing the evolution of the particle production in BTE. We have fitted the transverse momentum spectra, \(v_{2}\), \(v_{3}\), and \(v_{4}\) of identified hadrons such as pion, kaon, and proton for Pb-Pb and Xe-Xe collisions at the LHC energies of \(\sqrt{s_{NN}}=5.02\) TeV and \(\sqrt{s_{NN}}=5.44\) TeV, respectively for various centralities. Our study offers the comparative analysis between the two distinct collision systems operating at comparable collision energies. The present formulation successfully fits the experimental data for \(p_{T}\)- spectra upto \(p_{T}\) = 8 GeV and effectively explains the anisotropic flows data upto \(p_{T}\) = 10 GeV with a very favourable \(\chi^{2}/ndf\). We observe that the average transverse flow velocity (\(<\beta_{r}>\)) and the kinetic freeze-out temperature (\(T\)) extracted in our analysis decrease as we go towards the peripheral collisions. Non-extensive parameters (\(q_{AA}\) and \(q_{pp}\)) exhibit an ascending trend from central to peripheral collisions, signifying an almost thermalized system in the most central collisions and a non-equilibrium state in peripheral ones. The azimuthal modulation amplitudes (\(\rho_{a}\)) for \(v_{2}\), \(v_{3}\), and \(v_{4}\) exhibit an increasing pattern as one moves from the most central to peripheral collisions in both the Pb-Pb and Xe-Xe nuclei interactions.
pacs: 25.75.-q,25.75.Nq,25.75.Gz, 25.75.Dw,12.38.Mh, 24.85.+p
## I Introduction
The investigation of high-energy heavy-ion collisions has emerged as a cornerstone of modern nuclear and particle physics, offering a unique window into the fundamental properties of matter under extreme conditions. A key aspect of these collisions is the intricate interplay between the participating particles, leading to complex momentum transfer phenomena that shape the evolution of the collision dynamics. The quantitative understanding of this momentum transfer is pivotal not only for unraveling the underlying physics but also for informing the development of advanced theoretical models and experimental strategies. One of the prime goals of relativistic heavy-ion collision programs is to characterize the properties of the hot and dense medium known as Quark Gluon Plasma (QGP) created in these collisions. Earlier investigations conducted at the Super Proton Synchrotron (SPS), CERN [1], at the Relativistic Heavy Ion Collider (RHIC) [2; 3; 4; 5; 6] and at the Large Hadron Collider (LHC) [7; 8; 9; 10] have yielded compelling evidence suggesting the presence of the QGP, setting the stage for further in-depth exploration in heavy-ion collision experiments. The properties of this medium can be studied via azimuthal anisotropies of the produced particles in the momentum space. These momentum anisotropies arise due to the initial state geometry asymmetries. These asymmetries are characterized by the Fourier expansion coefficients \(v_{2}\), \(v_{3}\), \(v_{4}\), etc., of the azimuthal distribution of the particles. Experimentally, the azimuthal anisotropies for the hadrons created in heavy-ion collisions have been studied at the RHIC [11; 12; 13; 14] and at the LHC [15; 16] energies. Theoretically, relativistic hydrodynamics is extensively used to study the anisotropic flows measured in heavy-ion collisions.
The Boltzmann Transport Equation (BTE), a fundamental concept in statistical mechanics, finds a profound application in the study of the medium created in heavy-ion collisions. Emerging from the kinetic theory of gases, it provides a mathematical framework for understanding the intricate dynamics of particles within the extreme conditions generated during high-energy heavy-ion collisions. In the context of heavy-ion collisions, this equation serves several critical purposes. The equation can be adapted to study collective phenomena, such as the development of flow patterns within the medium. This helps us in understanding how the initial state of the colliding nuclei evolves into a complex, collective behaviour, shedding light on the properties of the created medium. In this investigation, we have employed Tsallis statistics as the initial distribution, and for elucidating the particle production evolution within the Boltzmann Transport Equation (BTE), we have adopted the Tsallis Blast Wave (TBW) description as the equilibrium distribution function.
Boltzmann-Gibbs statistics, which underlies classical thermodynamics, assumes that systems in equilibrium are described by the exponential distribution and relies heavily on the concept of entropy. However, in some complex systems, such as those with long-range interactions, fractal structures, or in non-extensive thermodynamics, Boltzmann-Gibbs statistics may not be adequate. Tsallis statistics [17] introduces a modified form of entropy, now called the Tsallis entropy (\(S_{q}\)), which is parametrized by a non-extensive parameter, "\(q\)". The Tsallis entropy leads to a generalized probability distribution function, known as the Tsallis distribution (or Tsallis q-distribution). This distribution plays a crucial role in the study of complex systems and has found applications in various fields of physics, including the exploration of the properties of QGP [18; 19; 20; 21; 22] and the improvement of the Boltzmann transport equation.
Boltzmann-Gibbs blast- wave (BGBW) model [23; 24; 25] has long served as a fundamental pillar in this field. The BGBW model, a stalwart in this domain, postulates a critical assumption: the system reaches a local thermal equilibrium at a specific moment in time before embarking on a hydrodynamic evolution [26]. It has successfully described observables such as transverse momentum distributions of identified particles, offering valuable insights into the transverse expansion and the temperature at the moment when hadrons decouple from the system. However, the BGBW model faces a significant challenge, the inherent fluctuations in initial conditions [27], which fluctuate unpredictably from one collision event to another. These fluctuations can profoundly influence particle spectra, particularly in the low and intermediate transverse momentum (\(p_{T}\)) range. To account for the influence of fluctuations, the authors [28] opted to replace the Boltzmann distribution with the Tsallis distribution [17] for modelling the particle emission source, thereby adapting the statistical framework to better accommodate fluctuation-related phenomena. This fundamental shift has given rise to the Tsallis blast-wave (TBW) model, uniquely equipped to explore particle spectra in details. TBW model has been employed to scrutinize the spectra of a variety of particles, encompassing \(\pi^{\pm}\), \(k^{\pm}\), \(p(\bar{p})\), \(\phi\), \(\Lambda(\bar{\Lambda})\), and \(\Xi^{-}(\Xi^{+})\) for Au-Au collisions at \(\sqrt{s_{NN}}\) = 200 GeV [28]. In ref. [29], TBW was undertaken to encompass the spectra of both strange and non-strange hadrons. The results revealed a notable distinction in the central collisions, where strange hadrons exhibited smaller non-extensive parameter and average transverse flow velocity values alongside higher temperatures compared to the non-strange hadrons. This observation implies a potential earlier decoupling of strange hadrons relative to non-strange ones. In ref. [26], the authors delved into the particle spectra of Pb-Pb, Xe-Xe, and p-Pb collisions at energies of 2.76 TeV (for Pb-Pb), 5.02 TeV (for Pb-Pb and p-Pb) and 5.44 TeV (for Xe-Xe) using the TBW model, incorporating both linear and constant velocity profiles. Generally, the model successfully captures the spectra upto \(p_{T}\) = 3 GeV. Notably, they observed that as collisions transition from central to peripheral, average transverse flow velocity decreases, whereas temperature and non-extensive parameter exhibit the opposite trend. This suggests that in more central collisions, the system experiences a more rapid expansion and maintains a lower degree of off-equilibrium behaviour. In ref. [30], TBW is used to analyse the \(p_{T}\)-spectra of identified particles at \(\sqrt{s_{NN}}\) = 2.76 TeV and 5.02 TeV for Pb- Pb collisions, \(\sqrt{s_{NN}}\) = 5.02 TeV for p-Pb collisions as well as for Xe- Xe collisions at \(\sqrt{s_{NN}}\) = 5.44 TeV. They successfully describe the experimental data upto \(p_{T}\) = 3 GeV with a very good \(\chi^{2}/ndf\).
The manuscript is structured as follows: Section II presents the derivation of transverse momentum spectra and azimuthal anisotropies, utilizing the Boltzmann Transport Equation in the relaxation time approximation. In Section III, we delve into a comprehensive discussion of the results obtained. Lastly, in Section IV, we provide a succinct summary of the study along with potential future directions.
## II Formulation
The particle distribution in the four-momentum space can be written as a Fourier series [31],
\[E\frac{d^{3}N}{dp^{3}}=\frac{1}{2\pi}\frac{d^{2}N}{p_{T}dp_{T}dy}\left(1+2 \sum_{n=1}^{\infty}v_{n}\,\cos(n\phi)\right), \tag{1}\]
where \(E\) represents the energy of the emitted particles, which is a crucial parameter characterizing the particles' properties. \(y\) denotes the rapidity of these particles, a relativistic measure of their momentum along the beam direction, \(\phi\) is the azimuthal angle of a particle, and \(v_{n}\) is the \(n^{th}\) harmonic coefficient that decodes the patterns of its motion: \(v_{2}\) unveils elliptic flow, \(v_{3}\) captures triangular correlations, \(v_{4}\) reveals quadrangular patterns, and so on. These coefficients hold clues to early pressure gradients, medium viscosity, and particle interactions, unveiling the complexities of the collision process.
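To make the role of the harmonic coefficients concrete, the short Python snippet below (illustrative only, with arbitrarily chosen coefficient values rather than measured ones) builds an azimuthal distribution of the form of Eq. 1 and recovers \(v_{n}\) as the \(\cos(n\phi)\)-weighted average:

```python
# Illustrative check of Eq. (1): construct dN/dphi with known v2 and v3,
# then recover the harmonics as <cos(n*phi)> averages (values are arbitrary).
import numpy as np

phi = np.linspace(-np.pi, np.pi, 4001)
v2_true, v3_true = 0.08, 0.03
dn_dphi = 1 + 2*v2_true*np.cos(2*phi) + 2*v3_true*np.cos(3*phi)

for n in (2, 3, 4):
    vn = np.sum(dn_dphi * np.cos(n*phi)) / np.sum(dn_dphi)
    print(n, round(vn, 4))   # ~0.08, ~0.03, ~0.0
```

In measured data the coefficients are usually extracted with more sophisticated estimators, but the averaging principle is the same.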
Previously, various theoretical computations grounded in transport equations [32; 33] and phenomenological models [34; 35] had successfully addressed the explanation of \(v_{2}\). However, in this current study, we embark on an innovative endeavour, elucidating not only \(v_{2}\) but also \(v_{3}\) and \(v_{4}\) through the utilization of the Tsallis Blast wave description within the framework of the Boltzmann Transport Equation (BTE). Subsequent sections will intricately explore the application of BTE in tracking the dynamic evolution of the particle momentum distribution within a thermodynamic milieu, leading to a comprehensive exposition of the outcomes of this article.
### Anisotropic flows in Boltzmann transport equation (BTE) using Relaxation time approximation (RTA)
In the arena of high-energy heavy ion collisions, the anisotropic flow's enigmatic dance offers insights into the complex interplay of particles amidst extreme conditions as described in the introductory section. These asymmetric flow patterns, mirroring the collision's initial geometry and subsequent dynamics, hold keys to understanding the transport properties of the medium created in the heavy-ion collision. Navigating this intricate landscape, the Relaxation Time Approximation (RTA) within the venerable Boltzmann Transport Equation (BTE) emerges as a powerful tool.
The RTA refines this equation, simplifying intricate interactions through a relaxation time parameter. This partnership unveils how particles approach equilibrium after interactions. The BTE-RTA combination sheds light on the interplay of scattering, relaxation, and viscosity. This mathematical union not only aids the interpretation of experimental results but also unravels the significance of the transport coefficients. This endeavour delves into RTA within BTE, focusing on anisotropic flow. We decipher the intricate relationships among geometry, relaxation, and flow emergence through theoretical discourse and analytical tools. Our aim is to deepen the understanding of the medium formed in heavy-ion collisions, refine theory, and guide experimental inquiry.
The BTE in general can be written as:
\[\frac{df(x,p,t)}{dt}=\frac{\partial f}{\partial t}+\vec{v}.\nabla_{x}f+\vec{ F}.\nabla_{p}f=C[f] \tag{2}\]
The distribution of particles, denoted as \(f(x,p,t)\), depends upon the position, momentum, and time. Here, \(\vec{v}\) represents velocity, while \(\vec{F}\) stands for the external force. The notations \(\nabla_{x}\) and \(\nabla_{p}\) denote partial derivatives with respect to position and momentum, respectively. The term \(C[f]\) embodies collision interactions between the probing particles and the medium. Previously, the Boltzmann Transport Equation (BTE) within the Relaxation Time Approximation (RTA) framework has been employed to investigate various phenomena. These include the temporal progression of temperature fluctuations in non-equilibrium systems [36], the analysis of elliptic flow of identified hadrons [35; 37], as well as the assessment of \(R_{AA}\) for diverse light and heavy flavours at energies pertinent to the Large Hadron Collider (LHC) [38].

Figure 1: The transverse momentum spectra of pion, kaon and proton for 0-5% centrality in Xe-Xe collisions.

Figure 2: (colour online) The comparison between the BGBW and TBW used as \(f_{eq}\) in BTE with RTA.
For the sake of simplification, assuming homogeneity of the system (\(\nabla_{x}f=0\)) and the absence of external forces (\(\vec{F}=0\)), the second and third terms of Eq. 2 vanish and it reduces to,
\[\frac{df(x,p,t)}{dt}=\frac{\partial f}{\partial t}=C[f]. \tag{3}\]
In RTA [39], the collision term is expressed as:
\[C[f]=-\frac{f-f_{eq}}{\tau} \tag{4}\]
where \(f_{eq}\) is the Boltzmann local equilibrium distribution characterized by a temperature \(T\). \(\tau\) is the relaxation time, the time taken by a non-equilibrium system to reach equilibrium. Using Eq. 4, Eq. 3 becomes
\[\frac{\partial f}{\partial t}=-\frac{f-f_{eq}}{\tau}. \tag{5}\]
Solving the above equation with the boundary conditions \(f=f_{in}\) at \(t=0\) and \(f=f_{fin}\) at \(t=t_{f}\), we get,
\[f_{fin}=f_{eq}+(f_{in}-f_{eq})e^{-\frac{t_{f}}{\tau}}, \tag{6}\]
where \(t_{f}\) is the freeze-out time. Using Eq. 6, the expression of the anisotropic flows (\(v_{n}\)) can be written as,
\[v_{n}(p_{T})=\frac{\int f_{fin}\times\cos(n\phi)\,d\phi}{\int f_{fin}\,d\phi}. \tag{7}\]
Eq. 7 gives the \(n^{th}\) azimuthal anisotropies after incorporating RTA in BTE. It involves the Tsallis non-extensive distribution function as the initial distribution of particles and TBW function as the equilibrium distribution. Continuing our discourse, we delve into the comprehensive derivation of the TBW model as done in [26]. In the TBW model, the invariant distribution function for identified particles is given by [40],
\[f_{eq}(x,p)=\frac{g}{(2\pi)^{3}}\Big{(}1\,+(q_{AA}-1)\frac{E-\mu}{T}\Big{)}^{\frac{-1}{q_{AA}-1}}. \tag{8}\]
Here, the temperature denoted by \(T\) is the kinetic freeze-out temperature, while \(g\) signifies the degeneracy factor. The energy of the emitted particles, described by \(E=p^{\nu}u_{\nu}\), originates from a source in motion with velocity \(u_{\nu}\) and momentum \(p^{\nu}\). The latter can be expressed as \(p^{\nu}=(m_{T}\cosh y,p_{T}\cos\phi_{p},p_{T}\sin\phi_{p},m_{T}\sinh y)\). The velocity of the source is denoted by \(u^{\mu}=\cosh\rho(\cosh y_{s},\tanh\rho\cos\phi_{b},\tanh\rho\sin\phi_{b},\sinh y_{s})\), where \(y\) and \(m_{T}\) symbolize the rapidity and transverse mass of the identified particles. \(y_{s}\) represents the rapidity of the emitting source, while \(\phi_{p}\) and \(\phi_{b}\) are the azimuthal angles of the emitted particle velocity and the flow velocity with respect to the x-axis in the reaction plane. The azimuthal direction of the boost, \(\phi_{b}\), is aligned with the azimuthal angle of the emitting source in coordinate space. The parameter \(q_{AA}\) encapsulates non-extensivity, quantifying the extent of deviation from equilibrium; its departure from unity is indicative of the non-equilibrium nature of the system. The parameter \(\rho\), known as the transverse expansion rapidity [26], is expressed as \(\rho=\tanh^{-1}\beta_{r}+\rho_{a}\cos(n\phi)\) [41], where \(\rho_{a}\) stands for the azimuthal modulation amplitude in the flow, and \(\beta_{r}=\beta_{s}\left(\xi\right)^{n}\), with \(\beta_{s}\) the maximum surface velocity and \(\xi=\Big{(}r/R\Big{)}\), where \(r\) is the radial distance and \(n\) is the flow profile. In the Tsallis blast-wave (TBW) model, the particles closer to the center of the fireball move slower than the ones at the edges. The average transverse velocity can be evaluated as [42],
\[<\beta_{r}>=\frac{\int\beta_{s}\xi^{n}\xi\,d\xi}{\int\xi\,d\xi}=\Big{(}\frac{2 }{2+n}\Big{)}\beta_{s}. \tag{9}\]
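Equation 9 can be checked symbolically; the following SymPy snippet (not part of the original analysis) evaluates the two integrals over \(\xi\in(0,1)\) and reproduces the quoted result:

```python
# Symbolic check of Eq. (9): <beta_r> = 2/(2+n) * beta_s
import sympy as sp

xi, n, beta_s = sp.symbols('xi n beta_s', positive=True)

numerator = sp.integrate(beta_s * xi**n * xi, (xi, 0, 1))   # integral of beta_s * xi^n * xi
denominator = sp.integrate(xi, (xi, 0, 1))                  # integral of xi
avg_beta = sp.simplify(numerator / denominator)

print(avg_beta)   # 2*beta_s/(n + 2)
```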
In our calculations, we have varied the parameter \(n\) to explore a range of flow profiles within the Tsallis Blast-Wave model. Here, \(R\) is the maximum radius of the expanding source at freeze-out (\(0<\xi<1\)). For the LHC energy regime, the chemical potential (\(\mu\)) is set to 0 due to the near symmetry in particle-antiparticle production. Thus the invariant momentum spectrum for identified particles is written as,
\[E\frac{d^{3}N}{d^{3}\textbf{p}}=\frac{d^{3}N}{p_{T}dp_{T}dyd\phi_{p}}=\frac{g}{\Big{(}2\pi\Big{)}^{3}}\int_{\sum_{f}}\Bigg{[}1+(q_{AA}-1)\frac{p^{\nu}u_{\nu}}{T}\Bigg{]}^{\frac{-1}{q_{AA}-1}}p^{\lambda}d\sigma_{\lambda}, \tag{10}\]
here \(\sum_{f}\) is the decoupling hyper-surface, \(d\sigma_{\lambda}\) is the normal vector to the hyper-surface. Using the parametrization of the surface in the cylindrical coordinates, we write \(d\sigma_{\lambda}\) as [43],
\[d\sigma_{\lambda}=(rd\phi_{b}drdz,-\textbf{e}_{\textbf{r}}rd\phi_{b}dzdt,0,- \textbf{e}_{\textbf{z}}rd\phi_{b}drdt), \tag{11}\]
and \(p^{\lambda}d\sigma_{\lambda}\) can be expressed as,
\[p^{\lambda}d\sigma_{\lambda}=rd\phi_{b}dy_{s}[m_{T}\tau\cosh(y_{s}- y)dr+m_{T}\sinh(y_{s}-y)d\tau\] \[-p_{T}\tau\cos(\phi_{p})d\tau], \tag{12}\]
where \(y_{s}=\frac{1}{2}\ln\frac{t+z}{t-z}\), \(\tau=\sqrt{t^{2}-z^{2}}\) is the longitudinal proper time. When particles decouple at a constant time, \(\tau=\tau_{0}\), the above equation is written as,
\[p^{\lambda}d\sigma_{\lambda}=\tau_{0}m_{T}\cosh(y_{s}-y)rdrd\phi_{b}dy_{s}. \tag{13}\]
With the expression,
\[p^{\nu}u_{\nu}=m_{T}\cosh\rho\cosh(y_{s}-y)-p_{T}\sinh\rho\cos(\phi_{p}-\phi_{b }), \tag{14}\]
the spectrum of the identified particle can be simplified as,
\[\frac{d^{3}N}{p_{T}dp_{T}dyd\phi_{p}}=\frac{g\tau_{0}}{\left(2\pi\right)^{3}}\int_{\sum_{f}}dy_{s}\,r\,dr\,d\phi_{b}\,m_{T}\cosh(y_{s}-y)\Big{[}1+\frac{q_{AA}-1}{T}[m_{T}\cosh\rho\cosh(y_{s}-y)-p_{T}\sinh\rho\cos(\phi_{p}-\phi_{b})]\Big{]}^{\frac{-1}{q_{AA}-1}}\,. \tag{15}\]
We have used the following assumptions in the analysis of the transverse momentum spectra and azimuthal anisotropies of identified hadrons [40]:
1. We assume Bjorken's longitudinal expansion, which means that the measured particle yield remains independent of rapidity because we integrate over the entire source's rapidity range [44]. This approximately holds at mid-rapidity for RHIC and LHC energies [2].
2. While we make the simplifying assumption of isotropic emission in azimuth for each local source, it is important to acknowledge that, in reality, the source's distribution may exhibit azimuthal variations or dependencies [45].
3. We make the assumption that the emission source maintains uniformity in both density and degree of non-equilibrium at the time of kinetic freeze-out. Nevertheless, this assumption does not hold for high-\(p_{T}\) particles (jets) as they often demonstrate emission patterns concentrated on the surface, deviating from the assumed uniformity [46; 47].
We have not included the contributions from resonance decays while analysing the \(p_{T}\)-spectra of stable particles, as they play a significant role only at very low \(p_{T}\). The detailed resonance decay kinematics and their effect on the spectra have been studied in Refs. [44] and [48]. Considering the above assumptions and taking \(\phi_{p}-\phi_{b}=\phi\), equation 15 becomes [28],
\[\frac{d^{3}N}{2\pi p_{T}dp_{T}}=\frac{g\tau_{0}m_{T}}{\left(2\pi\right)^{3}}\int_{-Y}^{+Y}\cosh(y)dy\int_{-\pi}^{+\pi}d\phi\int_{0}^{R}r\,dr\Big{[}1+\frac{(q_{AA}-1)}{T}[m_{T}\cosh(\rho)\cosh(y)-p_{T}\sinh(\rho)\cos(\phi)]\Big{]}^{\frac{-1}{q_{AA}-1}}\,. \tag{16}\]
Here, we have used the Jacobian for the transformation of the coordinates and integrated over \(d\phi_{p}\). \(Y\) is the rapidity of the emitting beam. At mid-rapidity, i.e. \(y\simeq 0\), the above equation becomes,
\[f_{eq}=\frac{d^{3}N}{2\pi p_{T}dp_{T}dy}=\frac{g\tau_{0}m_{T}}{\left(2\pi\right)^{3}}\int_{-\pi}^{+\pi}d\phi\int_{0}^{R}r\,dr\Big{[}1+\frac{(q_{AA}-1)}{T}[m_{T}\cosh(\rho)-p_{T}\sinh(\rho)\cos(\phi)]\Big{]}^{\frac{-1}{q_{AA}-1}}\,. \tag{17}\]
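For reference, Eq. 17 can be evaluated numerically with a standard double quadrature. The sketch below is illustrative only and is not the authors' code; the parameter values are placeholders rather than fitted values.

```python
# Minimal numerical evaluation of the mid-rapidity TBW spectrum, Eq. (17), for a pion.
import numpy as np
from scipy.integrate import dblquad

g, tau0, R = 1.0, 1.0, 1.0            # degeneracy, freeze-out proper time, source radius (arb. units)
T, q_AA = 0.110, 1.05                 # freeze-out temperature (GeV), non-extensivity (placeholders)
beta_s, rho_a, n_prof, n_harm = 0.80, 0.05, 1.0, 2
m = 0.139                             # pion mass (GeV)

def f_eq(pT):
    mT = np.sqrt(pT**2 + m**2)
    def integrand(r, phi):            # inner variable r, outer variable phi
        rho = np.arctanh(beta_s * (r / R)**n_prof) + rho_a * np.cos(n_harm * phi)
        arg = 1 + (q_AA - 1) / T * (mT*np.cosh(rho) - pT*np.sinh(rho)*np.cos(phi))
        return r * arg ** (-1.0 / (q_AA - 1))
    val, _ = dblquad(integrand, -np.pi, np.pi, 0.0, R)
    return g * tau0 * mT / (2*np.pi)**3 * val

print(f_eq(1.0))                      # invariant yield (arbitrary units) at pT = 1 GeV
```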
In this analysis, the initial distribution is parametrized by Tsallis distribution function [49],
\[f_{in}=D\left[1+(q_{pp}-1)\,\frac{m_{T}}{T_{ts}}\right]^{\frac{-q_{pp}}{q_{pp }-1}}. \tag{18}\]
Here, \(D=\frac{gVm_{T}}{(2\pi)^{2}}\), where \(V\) is the volume of the fireball formed in the heavy-ion collisions. Consequently, we have employed the Tsallis distribution to derive both the final particle distribution and the \(n^{th}\) anisotropic flow, \(v_{n}\).
This thermodynamically consistent Tsallis distribution has been utilized to investigate particle distributions arising from proton-proton collisions, as elaborated in the reference [49]. Using equations 17 and 18 in equation 6 and taking \(\tau_{0}\approx V\) as a constant parameter, the final distribution can be expressed as,
\[f_{fin}=D\Bigg{[}\frac{1}{2\pi}\int_{-\pi}^{+\pi}d\phi\int_{0}^{R}r\,dr\Big{[}1+\frac{(q_{AA}-1)}{T}[m_{T}\cosh(\rho)-p_{T}\sinh(\rho)\cos(\phi)]\Big{]}^{\frac{-1}{q_{AA}-1}}+\Bigg{(}\left[1+(q_{pp}-1)\,\frac{m_{T}}{T_{ts}}\right]^{\frac{-q_{pp}}{q_{pp}-1}}-\frac{1}{2\pi}\int_{-\pi}^{+\pi}d\phi\int_{0}^{R}r\,dr\Big{[}1+\frac{(q_{AA}-1)}{T}[m_{T}\cosh(\rho)-p_{T}\sinh(\rho)\cos(\phi)]\Big{]}^{\frac{-1}{q_{AA}-1}}\Bigg{)}e^{-t_{f}/\tau}\Bigg{]} \tag{19}\]
Using equation 18 and equation 19, we calculate \(v_{n}\) for the observed identified hadrons as follows:
\[v_{n}(p_{T})=\frac{P}{Q}, \tag{20}\]
where,
\[P=D\int d\phi\cos(n\phi)\Bigg{[}\frac{1}{2\pi}\int_{-\pi}^{+\pi}d\phi\int_{0}^{R}r\,dr\Big{[}1+\frac{(q_{AA}-1)}{T}[m_{T}\cosh(\rho)-p_{T}\sinh(\rho)\cos(\phi)]\Big{]}^{\frac{-1}{q_{AA}-1}}+\Bigg{(}\left[1+(q_{pp}-1)\,\frac{m_{T}}{T_{ts}}\right]^{\frac{-q_{pp}}{q_{pp}-1}}-\frac{1}{2\pi}\int_{-\pi}^{+\pi}d\phi\int_{0}^{R}r\,dr\Big{[}1+\frac{(q_{AA}-1)}{T}[m_{T}\cosh(\rho)-p_{T}\sinh(\rho)\cos(\phi)]\Big{]}^{\frac{-1}{q_{AA}-1}}\Bigg{)}e^{-t_{f}/\tau}\Bigg{]} \tag{21}\]
\[Q=D\int d\phi\Bigg{[}\frac{1}{2\pi}\int_{-\pi}^{+\pi}d\phi\int_{0}^{R}r\,dr\Big{[}1+\frac{(q_{AA}-1)}{T}[m_{T}\cosh(\rho)-p_{T}\sinh(\rho)\cos(\phi)]\Big{]}^{\frac{-1}{q_{AA}-1}}+\Bigg{(}\left[1+(q_{pp}-1)\,\frac{m_{T}}{T_{ts}}\right]^{\frac{-q_{pp}}{q_{pp}-1}}-\frac{1}{2\pi}\int_{-\pi}^{+\pi}d\phi\int_{0}^{R}r\,dr\Big{[}1+\frac{(q_{AA}-1)}{T}[m_{T}\cosh(\rho)-p_{T}\sinh(\rho)\cos(\phi)]\Big{]}^{\frac{-1}{q_{AA}-1}}\Bigg{)}e^{-t_{f}/\tau}\Bigg{]} \tag{22}\]
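Putting Eqs. 6, 7, 17, 18 and 19 together, \(v_{n}\) can be obtained from the ratio \(P/Q\) by direct numerical integration. The following sketch is one possible implementation and is not the authors' code: it interprets the inner azimuthal integral as running over the boost angle \(\phi_{b}\) of the emitting source (the standard blast-wave convention), drops the common prefactor \(D\), which cancels in the ratio, and uses placeholder parameter values.

```python
# Illustrative evaluation of v_n from Eqs. (6), (7) and (17)-(19); not the authors' code.
import numpy as np

T, q_AA = 0.110, 1.05          # TBW freeze-out temperature (GeV) and non-extensivity (placeholders)
T_ts, q_pp = 0.090, 1.10       # Tsallis parameters of the initial (pp-like) distribution (placeholders)
beta_s, rho_a, n_prof = 0.80, 0.06, 1.0
R, tf_over_tau, m = 1.0, 1.0, 0.139          # source radius, t_f/tau, pion mass (GeV)

r = np.linspace(1e-4, R, 60)
phi_b = np.linspace(-np.pi, np.pi, 121)
RR, PB = np.meshgrid(r, phi_b, indexing="ij")
dA = (r[1] - r[0]) * (phi_b[1] - phi_b[0])

def f_eq(pT, phi_p, n_harm):
    """TBW source term of Eq. (19): (1/2pi) times the integral over r and phi_b."""
    mT = np.sqrt(pT**2 + m**2)
    rho = np.arctanh(beta_s * (RR / R)**n_prof) + rho_a * np.cos(n_harm * PB)
    arg = 1 + (q_AA - 1) / T * (mT*np.cosh(rho) - pT*np.sinh(rho)*np.cos(phi_p - PB))
    return np.sum(RR * arg ** (-1.0 / (q_AA - 1))) * dA / (2*np.pi)

def f_fin(pT, phi_p, n_harm):
    mT = np.sqrt(pT**2 + m**2)
    f_in = (1 + (q_pp - 1) * mT / T_ts) ** (-q_pp / (q_pp - 1))   # Eq. (18) shape; D cancels in v_n
    feq = f_eq(pT, phi_p, n_harm)
    return feq + (f_in - feq) * np.exp(-tf_over_tau)              # Eq. (6)

def v_n(pT, n_harm):
    phi_p = np.linspace(-np.pi, np.pi, 73)
    f = np.array([f_fin(pT, p, n_harm) for p in phi_p])
    return np.sum(f * np.cos(n_harm * phi_p)) / np.sum(f)         # Eq. (7)

print(v_n(1.0, 2))   # illustrative elliptic flow of pions at pT = 1 GeV
```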
Next, we transition to the results and discussion section to assess the efficacy of our current formalism in accurately characterizing anisotropic flow phenomena at LHC energies.
## III Results and Discussions
Now, we proceed towards a more detailed analysis of the experimental data of transverse momentum (\(p_{T}\)) spectra, \(v_{2}\), \(v_{3}\) and \(v_{4}\) measured at LHC energies for various collision systems as well as centralities. First, we analyse the experimental data of the \(p_{T}\) spectra of identified hadrons such as \(\pi^{\pm}\), \(K^{\pm}\) and protons for Pb-Pb and Xe-Xe collisions at \(\sqrt{s_{NN}}=5.02\) TeV and \(\sqrt{s_{NN}}=5.44\) TeV, respectively. We have fitted the experimental data using equation 19. Here, we consider a single freeze-out hyper-surface for all the identified hadrons. Thus, the kinetic freeze-out temperature (\(T\)) is taken to be the same for all the particles, and we have observed a decreasing trend of \(T\) when moving towards peripheral centralities [40; 52]. \(T\) is treated as a fixed parameter, with values of 0.110 GeV and 0.106 GeV for Pb-Pb and Xe-Xe collisions, respectively, in the most central case. For peripheral collisions of Pb-Pb and Xe-Xe nuclei, the observed values of \(T\) are 0.096 GeV and 0.090 GeV, respectively. We have fitted the experimental data using the TF1 class [53] available in the ROOT library [54] to obtain a convergent solution. The convergent solution is obtained by the \(\chi^{2}\)-minimization technique, which is also used in Ref. [55].
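The fitting procedure itself can be illustrated schematically. The published fits were performed with ROOT's TF1 class; the SciPy-based sketch below only illustrates the \(\chi^{2}\)-minimization step, fitting a simple Tsallis form (Eq. 18) to synthetic pseudo-data rather than to the measured spectra.

```python
# Schematic chi^2 fit; illustrative only (the actual analysis used ROOT TF1/Minuit).
import numpy as np
from scipy.optimize import minimize

m = 0.139                                   # pion mass (GeV)
pt = np.linspace(0.5, 8.0, 25)              # placeholder pT bins (GeV)

def tsallis(pt, norm, q, T):
    mt = np.sqrt(pt**2 + m**2)
    return norm * mt * (1 + (q - 1) * mt / T) ** (-q / (q - 1))   # Eq. (18) shape

rng = np.random.default_rng(0)
y = tsallis(pt, 50.0, 1.08, 0.12)
y_err = 0.08 * y                            # assumed 8% point-to-point uncertainty
y_obs = y + rng.normal(0.0, y_err)          # synthetic pseudo-data

def chi2(p):
    norm, q, T = p
    if norm <= 0 or q <= 1 or T <= 0:       # guard against unphysical trial values
        return np.inf
    return np.sum(((y_obs - tsallis(pt, norm, q, T)) / y_err) ** 2)

res = minimize(chi2, x0=[10.0, 1.05, 0.10], method="Nelder-Mead")
ndf = pt.size - len(res.x)
print("best fit:", res.x, " chi2/ndf =", res.fun / ndf)
```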
Figure 1 depicts the \(p_{T}\)-spectra of identified particles for the most central and peripheral Pb-Pb collisions at \(\sqrt{s_{NN}}=5.02\) TeV [56] and Xe-Xe collisions at \(\sqrt{s_{NN}}=5.44\) TeV [57], fitted with our proposed formulation. Fitting of the transverse momentum spectra plays a crucial role in extracting information about particle production and dynamics in heavy-ion collisions. The choice of the fitting range is essential for obtaining meaningful results. In the low-\(p_{T}\) region, the maximum of the spectra is pushed towards higher momenta while going from peripheral to central Pb-Pb events. This effect is mass-dependent and can be interpreted as a signature of radial flow [58]. At high \(p_{T}\), the spectra follow a power-law shape, as expected from perturbative QCD (pQCD) calculations [59]. In our earlier work [60], the BTE in RTA with the BGBW as the equilibrium distribution function was used to fit the \(p_{T}\)-spectra and could explain the data only up to \(p_{T}=5\) GeV. This motivates us to use the TBW as \(f_{eq}\) in our present formulation of the BTE with RTA. We notice that the present formulation explains the experimental data successfully up to \(p_{T}=8\) GeV with a very good \(\chi^{2}/ndf\) for all the considered identified hadrons. The value of \(\chi^{2}/ndf\) is found to be smaller than unity because the point-to-point systematic errors, which are included in the fit and dominate over the statistical ones, are estimated on the conservative side and might not be completely random [48]. The extracted parameters are shown in Table 1. The average transverse flow velocity, \(<\beta_{r}>\), decreases with the mass and also shows a decreasing trend when moving from the most central towards peripheral collisions for both the Pb-Pb and Xe-Xe collisions. These findings are in line with the well-established hydrodynamical behaviour. We have also noticed that \(<\beta_{r}>\) is higher for Xe-Xe collisions in comparison to the Pb-Pb collisions, which suggests that higher collision energies lead to increased particle production and multiplicity [48]. The collective motion and interactions of these particles can lead to a higher effective temperature within the system. We have found that the kinetic freeze-out temperature (\(T\)) decreases with increasing collision energies. These findings suggest that a higher initial energy density results in a larger multiplicity and a longer expansion time for the system, resulting in a larger flow velocity and a lower kinetic freeze-out temperature [40; 48]. The kinetic freeze-out temperature (\(T\)) decreases from central to peripheral collisions in the present analysis, which is in contrast to the findings of the conventional BGBW results [48].

Figure 3: Elliptic flow of pions, kaons and protons for Pb-Pb collisions at \(\sqrt{s_{NN}}=5.02\) TeV [50].
The value of the parameter \(n\) increases from the most central collisions towards peripheral collisions. The large values found in peripheral collisions may be due to the spectrum not being thermal over the full range, with \(n\) increasing to reproduce the power-law tail [58]. We have observed that \(t_{f}/\tau\) increases with the mass in both the Pb-Pb and Xe-Xe collisions and does not show any centrality-dependent trend, which needs further investigation and will be discussed in our future work. The relationship between freeze-out time and relaxation time can vary depending on the specific details of the collision system and the assumptions made in the modelling. However, in many cases, if the freeze-out happens significantly earlier than the relaxation time, it implies that the system has not fully reached local thermal equilibrium before particles start escaping. This can happen in some high-energy or early-stage heavy-ion collisions, where particles escape quickly due to the high initial energies involved. If the freeze-out time is comparable to or later than the relaxation time, it suggests that the system has had enough time to reach local thermal equilibrium before the freeze-out occurs. In this case, the observed particle spectra and properties may reflect a system that has experienced substantial equilibration and thermalization. The elevated chi-squared per degree of freedom (\(\chi^{2}/ndf\)) values suggest that the chosen range ensures a better convergence of the fitting procedure. In the high-\(p_{T}\) region (\(>10\) GeV), the hadron production is dominated by surface emission [47], resulting in the inability of the Tsallis blast-wave model to accurately describe the spectra.
Further, we have fitted the \(p_{T}\) spectra of pions starting from \(p_{T}=0.5\) GeV, as the formulation could not explain the data below this \(p_{T}\). Pions, being among the lightest hadrons, exhibit distinct resonance effects due to their relatively small mass. We have not incorporated the contribution of pion yields from resonance decay, which significantly influences the spectral shape at very low momenta [44; 30].
The anisotropic flow analysis conducted in both the Pb-Pb and Xe-Xe collisions sheds light on the intricate interplay of particle dynamics, collective effects, and collision system characteristics. By employing the Boltzmann transport equation with the Tsallis distribution as the initial function and the TBW as the equilibrium distribution function, a versatile framework is established to explore anisotropic flow phenomena in distinct collision systems [61]. A pivotal accomplishment emerges as we have compared our proposed Boltzmann transport equation with the TBW as an equilibrium function to the traditional Boltzmann-Gibbs blast-wave model. The former exhibits superior success in fitting anisotropic flow data, as evident from the results presented in Figure 2, indicating its remarkable flexibility in accommodating non-equilibrium effects. Here, the fitting has been done in both the TBW and BGBW models to obtain the minimum value of \(\chi^{2}/ndf\). In contrast, the limitations of the Boltzmann-Gibbs blast-wave model arise from its assumption of complete thermal equilibrium, potentially inhibiting its capacity to accurately represent non-equilibrium systems [62].

Figure 4: Elliptic flow of pions, kaons and protons for Xe-Xe collisions at \(\sqrt{s_{NN}}\) = 5.44 TeV [51].
In Figures 3 and 4, we have presented the fitting of the experimental data for the elliptic flow (\(v_{2}\)) of \(\pi^{\pm}\), \(K^{\pm}\) and \(p+\bar{p}\) at \(\sqrt{s_{NN}}\) = 5.02 TeV for Pb-Pb [50] and at \(\sqrt{s_{NN}}\) = 5.44 TeV for Xe-Xe collisions [51], respectively, for the most central as well as peripheral collisions. The present formulation explains the experimental data up to \(p_{T}\) = 10 GeV for pions and up to \(p_{T}\) = 8 GeV for protons. For kaons, the fitting is only up to \(p_{T}\) = 4 GeV due to the unavailability of experimental data at higher transverse momentum. In Xe-Xe collisions, we have taken the experimental data up to \(p_{T}\) = 6 GeV for pions and protons, as values beyond this threshold manifest considerable error bars. For kaons, the range remains at 4 GeV due to data point limitations. In order to emphasize the effect of the azimuthal modulation amplitude on the azimuthal flows, here we have considered \(<\beta_{r}>\) and \(T\) as fixed parameters extracted from the fitting of the \(p_{T}\)-spectra. We notice that, in both the Pb-Pb as well as Xe-Xe collisions, \(\rho_{a}\) increases as we go from the most central to peripheral collisions.
Figures 5 and 6 illustrate the fitting of the triangular flow (\(v_{3}\)) for Pb-Pb collisions at \(\sqrt{s_{NN}}\) = 5.02 TeV [50] and for Xe-Xe collisions at \(\sqrt{s_{NN}}\) = 5.44 TeV [51], respectively. In Pb-Pb collisions, we have fitted the experimental results up to \(p_{T}\) = 10 GeV for both pions and protons. For \(p_{T}\) > 10 GeV, we get a higher value of \(\chi^{2}/ndf\). For kaons, the experimental data are available only up to 4 GeV, so we have not fitted beyond this point. We observe that, in Pb-Pb collisions, \(\rho_{a}\) increases when going towards peripheral collisions. In Xe-Xe collisions, we have fitted the experimental results up to \(p_{T}\) = 5 GeV for pions, while for kaons it is up to \(p_{T}\) = 4 GeV, a limit imposed by the available data points within the given dataset. For protons, we have considered the experimental data up to \(p_{T}\) = 4 GeV, as beyond this we get higher \(\chi^{2}/ndf\) values. We again notice that \(\rho_{a}\) increases with centrality for all the particles.
In Figure 7, we have displayed the fitting of the experimental data for \(v_{4}\) as a function of \(p_{T}\) for Pb-Pb collisions at \(\sqrt{s_{NN}}\) = 5.02 TeV [50] for the various centralities with a \(p_{T}\) range up to 5 GeV for pions and protons. This limitation is attributed to the presence of large error bars in the experimental data, which resulted in a large value of \(\chi^{2}/ndf\). For kaons, the range is also limited to 4 GeV, in accordance with the available data. Again, we find that \(\rho_{a}\) increases when one goes from the most central to peripheral collisions.
The azimuthal modulation amplitude (\(\rho_{a}\)) is defined as the ratio of the average momentum anisotropy in the transverse plane to the initial spatial anisotropy in the collision zone. In simpler terms, it quantifies the preference of the emitted particles to move in a direction perpendicular to the collision's symmetry plane. As shown in Figures 8 and 9, \(\rho_{a}\) increases as we go from the most central to peripheral collisions. The possible reasons behind this observation are summarized as follows:
1. In the peripheral collisions, the medium created in collision experiments may have a shorter lifespan due to the lower energy densities. This shorter duration means that the medium has less time to evolve and can retain the initial anisotropies present in the initial conditions, contributing to a larger \(\rho_{a}\).
2. There are fewer final-state interactions among particles during the late stages of the collision in the peripheral collisions. This reduced number of interactions allows the initial anisotropies to be better preserved in the final particle distributions, resulting in a larger \(\rho_{a}\).
## IV Summary and Outlook
In summary, we have fitted the \(p_{T}\)-spectra, higher flow harmonics (\(v_{2}\), \(v_{3}\), \(v_{4}\)) of identified hadrons such as pions, kaons and protons for Pb-Pb and Xe-Xe collisions for LHC energies at various centralities using the BTE in RTA with Tsallis Blast Wave (TBW) function as an equilibrium distribution. The main findings of this analysis are summarized as follows:
1. The value of \(\chi^{2}/ndf\) is found to be smaller than unity because the point-to-point systematic errors, which are included in the fit and dominate over statistical ones, are estimated on the conservative side and might not be completely random.
2. The value of the parameter \(n\) increases from the most central collisions towards peripheral collisions, except for protons in Xe-Xe collisions. The large values found in peripheral collisions may be due to the spectrum not being thermal over the full range, with \(n\) increasing to reproduce the power-law tail.
3. The non-extensive parameters \(q_{pp}\) and \(q_{AA}\) increase as we go from the most central towards the peripheral collisions, which is expected as they are larger for systems further away from equilibrium.
4. The average transverse flow velocity decreases with the mass for both the Pb-Pb and Xe-Xe collisions and decreases from the most central to peripheral collisions. Xe-Xe collisions at \(\sqrt{s_{NN}}\) = 5.44 TeV have a higher average transverse flow velocity compared to the Pb-Pb collisions at \(\sqrt{s_{NN}}\) = 5.02 TeV. Further, the extracted kinetic freeze-out temperature increases from peripheral to the most central collisions and decreases with the collision energies. These findings suggest that the higher initial energy density is responsible for the longer expansion time of the system, which results in a larger flow velocity and a lower kinetic freeze-out temperature.

Figure 7: The quadrangular flow (\(v_{4}\)) of pions, kaons and protons at \(\sqrt{s_{NN}}\) = 5.02 TeV for Pb-Pb collisions [50].
5. The parameter \(t_{f}/\tau\) increases with the mass in both Pb-Pb and Xe-Xe collisions, while it does not show any centrality-wise trend, which needs further investigation and will be discussed in our future work. It suggests that the system has had enough time to reach local thermal equilibrium before the freeze-out occurs. In this case, the observed particle spectra and properties may reflect a system that has experienced substantial equilibration and thermalization.
6. In Pb-Pb and Xe-Xe collisions, the centrality-dependent behaviour of \(\rho_{a}\) for \(v_{2}\), \(v_{3}\), and \(v_{4}\) exhibits an increase when moving towards peripheral collisions. This trend is consistent with the fact that there are fewer final-state interactions among particles during the late stages of the collision in the peripheral collisions. This reduced number of interactions allows the initial anisotropies to be better preserved in the final particle distributions, resulting in a larger \(\rho_{a}\).
The observation of the centrality-dependent behaviour of \(\rho_{a}\) across both Pb-Pb and Xe-Xe collisions highlights the common underlying physics governing anisotropic flow phenomena. These trends are rooted in the interplay of particle interactions, momentum conservation, and the collective expansion dynamics of the collision systems. The systematic analysis of \(\rho_{a}\) for different flow harmonics enriches our understanding of the collision processes and the role of various particle species in heavy-ion collisions.
## Acknowledgements
SKT acknowledges the financial support of the seed money grant provided by the University of Allahabad, Prayagraj.

Figure 8: Variation of azimuthal modulation amplitude \(\rho_{a}\) with the mass of the particles in Pb-Pb collisions at \(\sqrt{s_{NN}}\)= 5.02 TeV for the most central and peripheral collisions.
|
2309.11454 | NeighViz: Towards Better Understanding of Neighborhood Effects on Social
Groups with Spatial Data | Understanding how local environments influence individual behaviors, such as
voting patterns or suicidal tendencies, is crucial in social science to reveal
and reduce spatial disparities and promote social well-being. With the
increasing availability of large-scale individual-level census data, new
analytical opportunities arise for social scientists to explore human behaviors
(e.g., political engagement) among social groups at a fine-grained level.
However, traditional statistical methods mostly focus on global, aggregated
spatial correlations, which are limited to understanding and comparing the
impact of local environments (e.g., neighborhoods) on human behaviors among
social groups. In this study, we introduce a new analytical framework for
analyzing multi-variate neighborhood effects between social groups. We then
propose NeighViz, an interactive visual analytics system that helps social
scientists explore, understand, and verify the influence of neighborhood
effects on human behaviors. Finally, we use a case study to illustrate the
effectiveness and usability of our system. | Yue Yu, Yifang Wang, Qisen Yang, Di Weng, Yongjun Zhang, Xiaogang Wu, Yingcai Wu, Huamin Qu | 2023-09-20T16:40:17Z | http://arxiv.org/abs/2309.11454v1 | # _NeighViz_: Towards Better Understanding of Neighborhood Effects on Social Groups with Spatial Data
###### Abstract
Understanding how local environments influence individual behaviors, such as voting patterns or suicidal tendencies, is crucial in social science to reveal and reduce spatial disparities and promote social well-being. With the increasing availability of large-scale individual-level census data, new analytical opportunities arise for social scientists to explore human behaviors (e.g., political engagement) among social groups at a fine-grained level. However, traditional statistical methods mostly focus on global, aggregated spatial correlations, which are limited to understanding and comparing the impact of local environments (e.g., neighborhoods) on human behaviors among social groups. In this study, we introduce a new analytical framework for analyzing multi-variate neighborhood effects between social groups. We then propose _NeighViz_, an interactive visual analytics system that helps social scientists explore, understand, and verify the influence of neighborhood effects on human behaviors. Finally, we use a case study to illustrate the effectiveness and usability of our system.
Neighborhood Effects, Social Groups, Spatial Data, Visual Analytics
## 1 Introduction
Spatial data is prevalent in various social science disciplines, such as political science, sociology, and public health. The spatial differences in the correlations among variables (e.g., demographic and socioeconomic variables) have raised numerous research questions, particularly in the studies of neighborhood effects [25] and social group comparisons [5, 9]. For example, poor Americans exposed to neighbors from a broader range of socioeconomic classes tend to have better financial outcomes [10, 11]. Likewise, neighborhood centers promoting social interaction among the elderly are associated with reducing depressive symptoms, especially in low socioeconomic neighborhoods [22].
Traditional neighborhood effect analysis in social science is primarily hypothesis-driven with a focus on a broad social group at a coarse-grained level (e.g., the elderly in one particular city) due to limited high-granular datasets [33]. However, the recent availability of large-scale individual-level geospatial datasets (e.g., L2 Voter and Consumer Data [19]) has provided experts with new opportunities for analyzing detailed neighborhood effects across social groups, such as partisan segregation in activity space [34] and the adoption of prosocial behavior in different partisan areas [3]. Nevertheless, the expansion in data volume and the high diversity of variables introduce new data-driven analytical demands for variable selection, spatial modeling, and comparative analysis between groups. Social scientists who adopt such datasets often face challenges in exploring, interpreting, and comparing the modeled neighborhood effects among diverse social groups in an effective approach. For instance, scholars may observe a negative association between neighborhood socioeconomic status and voting participation at the aggregated level in the model results, but they cannot easily understand the variations across neighborhoods and social groups from numbers. Better tools are desired to help social scientists dive into specific contexts and examine how voting participation differs across neighborhoods.
Visual analytics thus offers a promising solution to overcome the aforementioned analytical demands by utilizing intuitive visual representations and interactions. However, developing a visualization system for analyzing neighborhood effects over various social groups still poses three challenges. First, conventional social science methodologies lack a coherent workflow for multivariate spatial analysis that effectively surfaces neighborhood effects on social groups. Most approaches involve multiple separate models and tools (e.g., statistical and geographic information systems (GIS) software), which require much effort to go back and forth between different analytical steps. Second, visually presenting the complex spatial and social relationships among different neighborhoods and social groups is difficult. Previous work has focused on visualizing either spatial patterns [13, 31] or multivariate social groups [29]. It is essential to bridge the gap between these approaches with unified representations that support the effective analysis of both spatial and multivariate social group data. Third, designing a visualization system to support social scientists in exploring, analyzing, and verifying insights in an interactive and rigorous manner is a non-trivial task. Multiple coordinated views are required to support data-driven and intuitive exploration. Moreover, providing the experts with contextual details is also necessary to verify their findings.
To address the first challenge, we formulate a data abstraction (Section A.1) and characterize the problem domain of neighborhood effects on social groups with our experts. We then propose an analytical framework combining data-driven techniques with domain-specific models. For the second challenge, we apply visualization techniques (e.g., 1D Map Projection and Parallel Sets) to reveal neighborhood effects and inter-group differences, enhancing comprehension of complex spatial and social relationships. For the third challenge, we present _NeighViz_, a visual analysis system that aids social scientists in modeling, exploring, and verifying neighborhood effects on social groups across social science issues. We evaluate _NeighViz_ through a case study with a domain expert to showcase its effectiveness and usability.
## 2 Requirement Analysis
Incorporating both neighborhood effects and social group analysis is essential for a holistic understanding of social dynamics and the complex relationships between people and their environments. To facilitate the generation of hypotheses about the neighborhood effect on social groups with the power of a visual analytic system, we have collaborated with two social scientists (\(E_{A}\) and \(E_{B}\)) in sociology over the past year. Though our system primarily targets social scientists who lack proficiency in geospatial analysis, we also sought consultation with
a GIS expert (\(E_{C}\)) about spatial modeling. Based on their traditional analytical approaches, we have summarized a three-stage workflow for the modeling, exploration, and verification of the neighborhood effect on social groups as follows.
**Model Generation.** In the first stage, experts aim to get an overview of potential factors for the specific social problem and identify a target social group for in-depth analysis:
**T1: Selecting variables for the spatial model.** Experts usually start from the variable selection process with a global model (e.g., Ordinary Least Square (OLS)). The system should support both univariate and multivariate exploration via intuitive visualization. Additionally, it should also help detect issues lying in model robustness, such as multicollinearity and spatial autocorrelation.
**T2: Identifying social groups for further exploration.** Selecting a social group of interest from various possibilities (e.g., based on multiple demographic attributes, such as Whites with college education) based solely on domain knowledge can be challenging. A data-driven approach is thus essential to help experts identify groups based on the spatial heterogeneity of the spatial global model, and focus on it in the subsequent local-level analysis.
**Geographical Exploration.** After locating factors and social groups of interest, the experts will conduct local spatial analysis to identify and understand neighborhood effects:
**T3: Exploring geographical distribution.** Scalable visualization should be designed to reveal the various spatial attributes (e.g., multi-dimensional raw data and outputs of the spatial model) and facilitate comparisons through geo-distribution. Conventional 2D maps are difficult to compare multiple variables intuitively.
**T4: Detecting areas with potential neighborhood effect.** One core task for experts is to detect and explore neighborhood effects. Thus, the system should cluster spatial units into neighborhoods with shared characteristics and internal cohesion based on expert-focused attributes. Moreover, novel visualization is required to reveal contextual information about each neighborhood, such as the statistical summary and spatial spillover [26], to help understand and interpret such effects.
**Comparison and Verification.** Based on spatial patterns, the next step is to drill down to additional contexts beyond the spatial model and statistics to interpret specific neighborhood effects. However, experts usually need to switch between multiple data sources and software to gain such insights, which is laborious and inefficient:
**T5: Comparing across social groups.** To comprehend the driving factors of the neighborhood effects on a specific social group, experts want the tools to support the comparison of different social groups. The system should use visual comparison techniques to help compare groups with multiple attributes.
**T6: Verifying and explaining neighborhood effects.** In addition to group comparison, qualitative geo-information is also important for the expert to learn the neighborhood environments and interpret the factors affecting the residents' behaviors.
## 3 Neighborhood Effect Analysis Framework
We propose a web-based application consisting of a backend and a frontend (see Fig. 1) for analyzing neighborhood effects on social groups (more implementation details available in Section A.2). This section focuses on the data analysis pipeline in the backend (Fig.1-A3 to A7), which combines conventional and advanced social science analysis methods. We developed the pipeline based on the social science literature [36, 35, 8, 30] with our domain experts.
**Linear Model.** We initially use ordinary least-square (OLS) as the baseline model to explore variable relationships (Fig. 1-A3), where \(y\) represents the dependent variable and \(X\) represents the independent variables selected by users (Fig. 1-1-2).
\[y=XB+\epsilon \tag{1}\]
However, OLS treats spatial units as independent observations, neglecting the spatial dependencies introduced by neighborhood effects. To measure spatial dependencies in the model, we compute Moran's Index [23]. A high Moran's Index indicates unaccounted spatial dependencies.
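A minimal sketch of this stage is given below. It assumes a Python/PySAL toolchain (libpysal, esda, spreg), which is not necessarily the implementation used in _NeighViz_; the input file and column names are hypothetical.

```python
# Baseline OLS plus a residual Moran's I test (assumed PySAL tooling; hypothetical data).
import geopandas as gpd
from libpysal.weights import Queen
from esda.moran import Moran
from spreg import OLS

gdf = gpd.read_file("cbg_aggregates.geojson")      # hypothetical input: one row per census block group
y = gdf[["turnout_rate"]].values                   # dependent variable (hypothetical column name)
X = gdf[["white_ratio", "asian_ratio", "edu_index", "median_age"]].values  # hypothetical regressors

w = Queen.from_dataframe(gdf)                      # contiguity-based spatial weights
w.transform = "r"                                  # row-standardise

ols = OLS(y, X, w=w, spat_diag=True, name_y="turnout_rate")
print(ols.summary)

moran_resid = Moran(ols.u.flatten(), w)            # Moran's I of the OLS residuals
print(moran_resid.I, moran_resid.p_sim)            # a high, significant I flags unmodelled spatial dependence
```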
**Spatial Global Model.** To account for spatial dependencies, we transition from the OLS to Spatial Durbin Model (SDM) [20]. The SDM equation includes additional terms for the spatial lag in dependent (\(\rho Wy\)) and independent variables (\(\gamma WXB\)), reflecting the influence from neighboring spatial units. A spatial weight matrix (\(W\)) determines the extent of this influence, which is the Gaussian kernel by default but also supports various kernel-based and contiguity-based weighting methods.
\[y=XB+\rho Wy+\gamma WXB+\epsilon \tag{2}\]
However, as a global model, SDM cannot effectively address spatial heterogeneity due to location-specific factors (e.g., culture and history). Consequently, Moran's Index is utilized again to detect spatial heterogeneity in SDM results. The Moran's Indices for both OLS and SDM (Fig.1-2-1) of all the social groups will be computed and displayed in the frontend (Fig.1-B2). Users can rank groups based on these indices and choose a group of interest for further local analysis (Fig.1-2-2).
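Continuing the sketch above (and reusing `y`, `X` and `w` from it), one hedged way to estimate the SDM of Eq. 2 with spreg is to append the spatially lagged regressors \(WX\) by hand and fit a maximum-likelihood spatial lag model; this is an illustrative approach, not necessarily the system's implementation.

```python
# SDM sketch: spatial lag of y via ML_Lag, lagged regressors WX added manually.
import numpy as np
from libpysal.weights import lag_spatial
from spreg import ML_Lag
from esda.moran import Moran

WX = np.column_stack([lag_spatial(w, X[:, j]) for j in range(X.shape[1])])
X_durbin = np.hstack([X, WX])                      # Durbin specification: [X, WX]

sdm = ML_Lag(y, X_durbin, w=w, name_y="turnout_rate")
print(sdm.rho)                                     # coefficient of the spatial lag of y
print(sdm.betas[: X.shape[1] + 1].ravel())         # intercept and direct coefficients of X

print(Moran(sdm.u.flatten(), w).I)                 # residual Moran's I, to compare with the OLS value
```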
**Spatial Local Model.** To explore spatially varying neighborhood effects over places, we extend the SDM to the geographically Weighted Regression (GWR) model [7] (Fig. 1-A5). GWR, a local regression technique, estimates a set of regression coefficients for each spatial unit,
Fig. 1: _Neigh/Viz_ comprises the backend (A1-A7) and the frontend modules (B1-B6). Data is initially preprocessed and stored in the database (A1) and then aggregated via a Data Query Engine (A2) based on user-selected demographic attributes (1-1) for social group analysis. The filtered data is then sent to the data analysis pipeline (A3-A7, Section 3). The six visualization views (B1-B6, Section 4) interact with backend components through HTTP requests, facilitating interactive data analysis (1-1 to 4).
considering spatial heterogeneity and Tobler's First Law of Geography [27] with the following equation.
\[y_{i}=X_{i}\beta_{i}(u_{i},v_{i})+\rho W_{ij}y_{j}+\gamma_{i}W_{i}X_{i}\beta_{i}(u_{i},v_{i})+\epsilon_{i} \tag{3}\]
The GWR model is similar to OLS and SDM while including \(\beta_{i}(u_{i},v_{i})\), a function that depends on the spatial coordinates of spatial unit \(i\) (\(u_{i}\) and \(v_{i}\)). It estimates the spatially varying coefficients at each location by giving more weight to nearby observations and less weight to more distant observations. The GWR model generates unique coefficients per variable per spatial unit. These coefficients are used in the subsequent steps of Regionalization and Spillover Effect Analysis.
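The local stage can be sketched with the `mgwr` package (again an assumed toolchain, reusing `gdf`, `y` and `X` from the sketches above). Note that standard GWR omits the spatial-lag terms of Eq. 3, so this is only an approximation used to show how location-specific coefficients are obtained.

```python
# GWR sketch with mgwr: one coefficient vector per spatial unit.
import numpy as np
from mgwr.sel_bw import Sel_BW
from mgwr.gwr import GWR

coords = np.column_stack([gdf.geometry.centroid.x, gdf.geometry.centroid.y])

bw = Sel_BW(coords, y, X, kernel="gaussian").search()     # data-driven bandwidth selection
gwr_results = GWR(coords, y, X, bw, kernel="gaussian").fit()

local_coefs = gwr_results.params          # shape: (n_units, n_variables + 1), intercept included
print(local_coefs.shape)
```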
**Regionalization.** To assist in the identification of areas that demonstrate potential neighborhood effects, we apply regionalization to process outputs of the spatial local model, where each spatial unit is characterized by a model coefficient for each variable. Regionalization groups spatial units into regions based on similar attribute values and model coefficients, thereby facilitating the discovery of latent neighborhoods with similar characteristics (Fig. 1-A6). We implement spatially constrained hierarchical clustering, which combines elements of hierarchical clustering with spatial constraints, ensuring that the resulting clusters are both internally homogeneous and spatially contiguous. The number of clusters is set to 5 by default and can be adjusted by users.
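One common way to implement spatially constrained hierarchical clustering is scikit-learn's agglomerative clustering with a contiguity matrix as the connectivity constraint; the sketch below (reusing `gdf`, `w` and the GWR coefficients `local_coefs` from above) is illustrative rather than the system's actual implementation.

```python
# Regionalization sketch: Ward clustering constrained by spatial contiguity.
from sklearn.cluster import AgglomerativeClustering
from sklearn.preprocessing import StandardScaler

features = StandardScaler().fit_transform(local_coefs)   # per-unit GWR coefficients (attribute values could be appended too)

clusterer = AgglomerativeClustering(
    n_clusters=5,                  # default number of regions, user-adjustable in the interface
    connectivity=w.sparse,         # contiguity constraint keeps each cluster spatially connected
    linkage="ward",
)
gdf["cluster"] = clusterer.fit_predict(features)
```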
**Spillover Effect Analysis.** To explore the dynamics of the spillover effect in different regions, a local spillover effect algorithm has been developed. This algorithm utilizes the coefficients of the spatially lagged variables in the spatial local model as its input. It computes the magnitude and direction of the spillover effect from each spatial unit to its neighboring units. (Fig. 1-A7). For each neighboring unit \(j\) of the focal unit \(i\), we multiply the coefficient of spatially lagged variables, \(\gamma_{j}(u_{j},v_{j})\), with the weight \(W_{ij}\):
\[S_{ij}=\gamma_{j}(u_{j},v_{j})\cdot W_{ij} \tag{4}\]
We then calculate the relative direction of \(j\) to \(i\) and categorize the direction into one of the 16 cardinal directions. We finally aggregate the strength of the spillover effect of variables in 16 cardinal directions for the focal spatial unit.
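A hypothetical implementation of this directional aggregation is sketched below; it assumes that unit identifiers in the weights object are positional indices into the coordinate array.

```python
# Local spillover profile of Eq. (4), aggregated into 16 cardinal sectors around a focal unit.
import numpy as np

def spillover_profile(i, gamma, w, coords, n_sectors=16):
    """Directional spillover strengths felt around focal unit i.

    gamma  : local coefficients of the spatially lagged variable, one per unit
    w      : libpysal weights object; w[i] maps neighbour id -> weight W_ij
             (unit ids are assumed to be positional indices into coords)
    coords : (n, 2) array of spatial-unit centroids
    """
    sectors = np.zeros(n_sectors)
    for j, w_ij in w[i].items():
        s_ij = gamma[j] * w_ij                              # S_ij of Eq. (4)
        dx, dy = coords[j] - coords[i]                      # direction of j as seen from i
        angle = np.arctan2(dy, dx) % (2 * np.pi)
        sectors[int(angle / (2 * np.pi) * n_sectors) % n_sectors] += s_ij
    return sectors                                          # aggregated strength per cardinal sector
```

The cluster glyphs then average these per-unit profiles over all spatial units within a cluster.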
All the analysis results in the pipeline are fed into the six visualization views (Fig. 1-B1 to B6) for interactive analysis.
## 4 Visual Design
_NeighViz_ features six views to support the analytical tasks in Section 2. The expert can start from the _Variable View_ (Fig. 2-A) to select variables and generate spatial models based on the multivariate analysis using the _Correlation Matrix_ (Fig. 2-A2) (**T1**). Next, the expert can compare the model results of different social groups in the _LineUp Table_[15] in the _Social Group View_ (Fig. 2-B), and select an interesting group for further analysis (**T2**). Through _Projection View_ (Fig. 2-C) and _Map View_ (Fig. 2-D), the expert can explore the geographical information on the clusters that are regionalized based on the model's coefficients (**T3, T4**). Meanwhile, the _Cluster View_ (Fig. 2-E) is provided to investigate the detailed distributions of variables with multiple density histogram charts. Finally, the expert can zoom into a cluster of interest and utilize the _Detail View_ (Fig. 2-F) for in-depth details. _Detail View_ supports social group comparisons through randomly sampled individual data displayed on _Parallel Sets Chart_[18] (Fig. 2-F1), and neighborhood context exploration via Google Street View [14] (Fig. 2-F2) (**T5, T6**). In the following section, we introduce two main views, _Projection View_ and _Map View_, in detail.
### _Projection View_
The _Projection View_ (Fig. 2-C) uses 1D _Projection Bars_ to show the spatial distribution of attributes in regionalized clusters (**T3**).
_Description:_ each _Projection Bar_ (Fig. 2-C1) corresponds to a variable and is partitioned based on regionalization, with cluster segment bars showing the scope of clusters at the top (Fig. 2-C2). We apply binary tree traversal to generate _Projection Bars_ as a dimensionality reduction approach [12]. Specifically, we use agglomerative hierarchical clustering for the regionalization (Section 3). Then, we use the leaf order of the dendrogram to project the variables from the 2D map to 1D _Projection Bars_. The normalized value of each spatial unit is encoded using a color scale from light yellow (lowest) to dark red (highest). Users can customize regionalization parameters to fine-tune the results (Fig. 2-C3). They can also click a _Projection Bar_ to show the detailed distribution of a variable in the _Map View_ with the same color scheme. We first tried small-multiple maps to show variable distributions [17, 28].
Fig. 2: The system interface of _NeighViz_. (A) The _Variable View_ supports the variable selection for the spatial model using the _Correlation Matrix_. (B) The _Social Group View_ summarizes global model results and Moran’s indices for social groups to help group selection. (C) The _Projection View_ offers a regionalized overview of spatial distributions of attribute values and spatial local model coefficients in 1D _Projection Bars_. (D) The _Map View_ uses a choropleth map to show the spatial distribution of a selected variable, with cluster glyphs summarizing variable statistics and spillover effects of neighborhood clusters. (E) The _Cluster View_ lists variable distributions of different neighborhood clusters. (F) The _Detail View_ provides detailed information of a specific cluster via the _Parallel Sets Chart_ and the street view.
However, they became unscalable and hard to compare as the number of variables increased. Thus, we chose the projection method to show multivariate spatial distributions in a compact way.
### Map View
Although the _Projection View_ shows variables compactly, it loses part of the spatial relationships. The _Map View_ thus provides more details with a 2D map (Fig. 2-D) **(T3)**, which includes cluster glyphs showing aggregated statistics and spillover effect in clusters **(T4)**.
_Description:_ our design uses a choropleth map to show the spatial distribution of a chosen variable, with the same color encoding in the _Projection View_. We specifically designed a cluster glyph (Fig. 2-D2, D3) for each spatial cluster to show the statistics and illustrate the within-cluster spillover effect. The inner layer is a radar chart showing the mean values of selected variables across all spatial units of a cluster, with each axis representing the normalized mean of a variable. The outer layer reveals the spillover effect of the spatially lagged variables along 16 cardinal directions as described in Section 3, presented as a closed cardinal curve. The radius of the curve along each direction represents the mean magnitude of the spillover effect of the spatial units within that cluster, and a larger radius indicates a stronger influence of the cluster along that direction. Users can click on a glyph to highlight other information about this cluster in multiple views (Fig.1-4).
## 5 Case Study
We applied _NeighViz_ to study relations between race and political engagement in the US. Specifically, we used data from the L2 Voter and Consumer Data [19] that contains 180 million registered voters in the US. We focused on New York City as a demonstration. To study neighborhood influence on political engagement, we used the voter turnout rate (\(\frac{Number\ of\ votes\ cast}{Total\ number\ of\ eligible\ voters}\)) in general elections as the dependent variable. The data were aggregated to the census block group (CBG) level as described in Section A.1. We invited the political scientist \(E_{A}\) (Section 2) to freely explore the system and finally organized his observations into a case as follows.
**Model Generation.**\(E_{A}\) began his study in Queens County for its cultural diversity, starting with the 2016 election data. Intrigued by the potential impact of racial segregation on the turnout rate, he selected the race ratios as independent variables, together with other potential factors, including education, income, and age. Then he noticed a high correlation between the _Education Index (EI)_ and _Income Index (II)_ in the _Correlation Matrix_ (Fig. 2-A2). To prevent multicollinearity, he excluded _II_. He added a spatially lagged variable of the _turnout rate_ to study the voting behavior spillover effect. For social groups, he selected _edu_ and _race_ attributes to examine social group voting patterns. The final model and the resulting social group analysis are presented in the _Social Group View_ (**T1**). The _LineUp Table_ in the _Social Group View_ shows lower Moran's Indices for the Spatial Durbin Model (SDM) than for the Ordinary Least Square (OLS), indicating that spatial dependence is better captured by the SDM. Noticing that the "no college" group had the highest Moran's I for the SDM (Fig. 2-B1), \(E_{A}\) inferred that its spatial heterogeneity was not fully captured. Consequently, he employed the Geographically Weighted Regression (GWR) on this group to delve deeper into its local-level neighborhood effect (**T2**).
**Geographical Exploration.** The GWR model coefficients were then automatically regionalized into 5 clusters, showing distinct correlations between independent and dependent variables. \(E_{A}\) used glyphs in the _Map View_ (Fig. 2-D2, D3) to compare cluster statistics and quickly found that clusters 2 and 5 were intriguing. _"Two adjacent regions: cluster 5 has an extremely high Asian Ratio, but cluster 2 looks more diverse. And why is the spillover effect of cluster 2 much stronger than that of cluster 5?"_ He then clicked the two glyphs to highlight the clusters in the _Projection View_, _Map View_, and _Cluster View_ (**T3, T4**) for more details. The _Projection View_ (Fig. 2-C1) confirmed the racial diversity in cluster 2 and the racial segregation in cluster 5. Several model coefficients varied between clusters, leading him to conduct a closer comparison using the _Cluster View_. The density histograms (Fig. 2-E1) showed a negative correlation between Asian concentration and less-educated voter turnout in cluster 5, while cluster 2 exhibited the reverse. \(E_{A}\) commented that it might illustrate the dynamics of the ethnic enclave theory [16] and that community diversity could enhance political participation among less-educated minority populations, accounting for the stronger voting spillover in cluster 2 (**T3, T4**).
**Comparison and Verification.** To delve deeper, \(E_{A}\) compared social groups, selecting college graduates for comparison **(T5)**. The right bars of the _Parallel Sets Chart_ confirmed that cluster 2 had an evenly distributed racial mix (Fig. 3-A1), whereas cluster 5 was predominantly White and Asian (Fig. 3-A2). Strand widths showed higher voter turnout among White college graduates in both clusters, but Asian voter turnout was inconsistent. This emphasized the spatial heterogeneity of political landscapes and the varying Asian political participation, leading \(E_{A}\) to wonder about the exact locations of these clusters. \(E_{A}\) virtually toured clusters 2 and 5 via the _Street View_ (**T6**) and identified cluster 2 as Forest Hills (Fig. 3-B1, C1), a diverse, family-friendly neighborhood. He noted the northwest-southeast orientation of the boundary and inferred a strong localized spillover of voting. Cluster 5 overlapped with Flushing (Fig. 3-B2, C2), known for its Chinatown and high renter population, and he suggested weaker social ties could limit voting spillover. Finally, he reflected, _"An ethnic enclave leads to lower voter turnout among the Asian population, but a diverse community can boost political participation. I'll examine more U.S. regions for this pattern."_
**Expert Feedback.** We gathered feedback from our experts in Section 2 (\(E_{A}\)-\(E_{C}\)). All experts thought _NeighViz_ provided a streamlined workflow for effectively exploring the neighborhood effect on the political engagement of minority social groups, offering valuable insights for future studies. Specifically, \(E_{B}\) suggested that the analysis framework could be generalized to study the neighborhood effect of income inequality. \(E_{A}\) appreciated the projection bars and spillover glyphs that make the model results intuitive to explore and understand. \(E_{C}\), as a GIS expert in econometrics, suggested including rigorous statistical testing in specific steps of the framework, yet she still appreciated the usefulness of _NeighViz_ to generate hypotheses for further studies.
## 6 Conclusion
In this study, we present _NeighViz_, an interactive visual analytics system to help social scientists model, explore, and verify neighborhood effects on different social groups. Future research using _NeighViz_ will include expanding options for statistical testing and visual encoding, incorporating diverse datasets for broader social science research, and enhancing the system's ability to handle time-varying data.
## Acknowledgments
This work was partially supported by RGC GRF grant 16210321.
Figure 3: The neighborhood effect was compared and verified in detail. (A1) The _Parallel Sets Chart_ revealed high voter turnout among Asians in cluster 2, especially Asians with college degrees. (A2) The _Parallel Sets Chart_ indicated a low voting rate among Asians in cluster 5, regardless of their college attendance. The maps and street views of cluster 2 (B1, C1) and cluster 5 (B2, C2) showed that they are located near Forest Hills and Flushing, respectively. |
2309.08105 | Libriheavy: a 50,000 hours ASR corpus with punctuation casing and
context | In this paper, we introduce Libriheavy, a large-scale ASR corpus consisting
of 50,000 hours of read English speech derived from LibriVox. To the best of
our knowledge, Libriheavy is the largest freely-available corpus of speech with
supervisions. Different from other open-sourced datasets that only provide
normalized transcriptions, Libriheavy contains richer information such as
punctuation, casing and text context, which brings more flexibility for system
building. Specifically, we propose a general and efficient pipeline to locate,
align and segment the audios in previously published Librilight to its
corresponding texts. The same as Librilight, Libriheavy also has three training
subsets small, medium, large of the sizes 500h, 5000h, 50000h respectively. We
also extract the dev and test evaluation sets from the aligned audios and
guarantee there is no overlapping speakers and books in training sets. Baseline
systems are built on the popular CTC-Attention and transducer models.
Additionally, we open-source our dataset creation pipeline which can also be
used for other audio alignment tasks. | Wei Kang, Xiaoyu Yang, Zengwei Yao, Fangjun Kuang, Yifan Yang, Liyong Guo, Long Lin, Daniel Povey | 2023-09-15T01:59:21Z | http://arxiv.org/abs/2309.08105v2 | # Libriheavy: A 50,000 hours ASR corpus with punctuation casing and context
###### Abstract
In this paper, we introduce Libriheavy, a large-scale ASR corpus consisting of 50,000 hours of read English speech derived from LibriVox. To the best of our knowledge, Libriheavy is the largest freely-available corpus of speech with supervisions. Different from other open-sourced datasets that only provide normalized transcriptions, Libriheavy contains richer information such as punctuation, casing and text context, which brings more flexibility for system building. Specifically, we propose a general and efficient pipeline to locate, align and segment the audios in the previously published Librilight to their corresponding texts. Like Librilight, Libriheavy has three training subsets small, medium, large of the sizes 500h, 5000h, 50000h respectively. We also extract the dev and test evaluation sets from the aligned audios and guarantee that there are no overlapping speakers or books with the training sets. Baseline systems are built on the popular CTC-Attention and transducer models. Additionally, we open-source our dataset creation pipeline, which can also be used for other audio alignment tasks.
Wei Kang, Xiaoyu Yang, Zengwei Yao, Fangjun Kuang, Yifan Yang, Liyong Guo, Long Lin, Daniel Povey Xiaomi Corp., Beijing, China
{kangwei1, dpovey}@xiaomi.com

**Index Terms**: Speech recognition, Corpus, Audio alignment, LibriVox
## 1 Introduction
In the past decade, various system architectures, like Connectionist Temporal Classification (CTC) [1], RNN-T [2] and encoder-decoder based model [3], have been proposed, pushing the dominant framework from the hybrid Hidden Markov Models (HMM) [4] to end-to-end models. In general, the neural network models are supposed to be more data hungry than traditional systems.
A lot of work has been done on publishing open-source datasets, for example, the Wall Street Journal corpus [5], SwitchBoard [6], Fisher [7] and the famous LibriSpeech corpus [8]. However, these are all small or medium-sized datasets with less than 2,000 hours of audio, which is too little to train a good end-to-end model. In recent years, there have also been large-scale corpora like GigaSpeech [9], People's Speech [10] and MLS [11]. One drawback of these datasets is that they only provide normalized transcriptions, making it impossible to train a model that needs full-format texts, such as punctuation prediction.
Typical ASR corpora aim at training ASR systems to recognize independent utterances. However, the preceding context of the current utterance may convey useful information. Contextualized speech recognition utilizes the cross-utterance context to improve the accuracy of ASR systems and yields promising results [12, 13]. However, training such systems usually requires utterance-level context for each training utterance, which is not available in most existing ASR corpora. Therefore, such a dataset with textual context information is highly desirable.
Motivated by the aforementioned points, we introduce Libriheavy, a large-scale (50,000 hours) corpus containing not only fully formatted transcripts but also textual context, which is suitable for various speech recognition related tasks. In addition, unlike other open-source datasets that have their own creating pipelines, we propose a general audio alignment method and release it as a standard package. Our contributions are as follows:
* We release 50,000 hours of labeled audio containing punctuation, casing and preceding text;
* We propose and open-source a general audio alignment pipeline, which makes it easier to construct ASR corpora;
* We provide solid evaluation results on Libriheavy, which demonstrate the high quality of the corpus and the robustness of our pipeline.
## 2 Libriheavy corpus
In this section, we provide a detailed description of the Libriheavy corpus, including audio files, metadata, data partitions, text styles, and other aspects. Instructions and scripts are available in the Libriheavy GitHub repository1.
Footnote 1: [https://github.com/k2-fsa/libriheavy](https://github.com/k2-fsa/libriheavy)
### Librilight
Librilight [14] is a collection of unlabeled spoken English audio derived from open-source audio books from the LibriVox project 2. It contains over 60,000 hours of audio and aims for training speech recognition systems under limited or no supervision. The corpus is free and publicly available 3.
Footnote 2: [https://librivox.org](https://librivox.org)
Footnote 3: [https://github.com/facebookresearch/libri-light](https://github.com/facebookresearch/libri-light)
### Libriheavy
Libriheavy is a labeled version of Librilight. We align the audio files in Librilight to their corresponding text in the original book and segment them into smaller pieces with durations ranging from 2 to 30 seconds. We maintain the original dataset splits of Librilight and have three training subsets (small, medium, large). In addition, we further extract evaluation subsets (dev, test-clean, test-other) for validation and testing. Table 1 shows the statistics of these subsets.
#### 2.2.1 Metadata
We save the metadata of the dataset as Lhotse [15] cuts in JSON lines. Each line is a self-contained segment, including the transcript and its audio source. Users can clip the corresponding audio segment with the given _start_ and _duration_ attributes. Unlike other publicly available corpora that only provide normalized transcripts, Libriheavy includes richer information such as punctuation, casing, and text context. The text context is the transcription of the preceding utterances, located in the _pre_texts_ entry, with a default length of 1000 bytes. There are also _begin_byte_ and _end_byte_ attributes, which allow users to easily slice any length of text context from the original book pointed to by the _text_path_ attribute. Of course, there are other supplementary entries that might be useful for other tasks, such as _id_, _speaker_, etc.
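For illustration, the snippet below sketches how such a cut record could be consumed; the JSON fields mirror the attributes described above, but the record itself and the file path are hypothetical, and the exact schema of the released manifests may differ.

```python
# Illustrative only: a hypothetical cut record (one JSON line) and how the byte
# offsets could be used to slice an arbitrary amount of preceding context from
# the original book.  Field names follow the description above.
import json

line = ('{"id": "small/example_cut_0001", "speaker": "100", '
        '"start": 243.92, "duration": 7.84, '
        '"begin_byte": 102400, "end_byte": 102431, '
        '"pre_texts": "...default 1000 bytes of preceding text...", '
        '"text_path": "books/example_book.txt"}')
cut = json.loads(line)

context_len = 2000                                  # request a longer context than the default
book = open(cut["text_path"], "rb").read()          # hypothetical path to the original book
context = book[max(0, cut["begin_byte"] - context_len):cut["begin_byte"]]
transcript = book[cut["begin_byte"]:cut["end_byte"]]
print(context.decode("utf-8", "ignore"), transcript.decode("utf-8", "ignore"))
```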
#### 2.2.2 Evaluation Sets
As mentioned above, we have three evaluation sets in Libriheavy, namely dev, test-clean, test-other. We ensure that the evaluation sets have no overlapping speakers and books with the training sets. To make the evaluation sets contain as many speakers and books as possible while not dropping too much training data, we filtered out speakers and books with shorter durations as candidates. We then determine the _clean_ speakers and _other_ speakers using the same method as in [8] and divide the candidates into _clean_ and _other_ pools. We randomly select 20 hours of audio from the _clean_ pool, half of which forms the test-clean set and the other half is appended to the dev set. We follow the same procedure for the _other_ pool. Librilight ensures that audio files from the LibriSpeech evaluation sets are not present in the corpus; therefore, the LibriSpeech evaluation sets can also be used as our evaluation sets.
## 3 Audio Alignment
This section describes the creation pipeline of the Libriheavy corpus. The key task of audio alignment is to align the audio files to the corresponding text and split them into short segments, while also excluding segments of audio that do not correspond exactly with the aligned text. Our solution presented here is a general pipeline that can be applied to other data generation tasks as well. The implementation of all the following algorithms and corresponding scripts are publicly available4.
Footnote 4: [https://github.com/k2-fsa/text_search](https://github.com/k2-fsa/text_search)
### Downloading text
To align the audio derived from audiobooks, we require the original text from which the speaker read the audiobook. From the metadata provided by Librilight, we can obtain the URL of the textbook for each audio file. We have written scripts to automatically extract the text and download the sources for all audiobooks. We then apply simple clean-up procedures such as removing redundant spaces and lines to the text sources.
### First alignment stage
The goal of this stage is to locate the audio to its corresponding text segment (e.g. chapter) in the original book. First, we obtain the automatic transcript of the audio file. Then we treat the automatic transcript as **query** and the text in the original book as **target**5, and find the close matches (Sec 3.2.2) for each element in the query over the target. Finally, we determine the text segment of the audio by finding the longest increasing pairs (Sec 3.2.3) of query elements and their close matches. Note that we did not use the VAD tool provided by Librilight for audio segmentation, as our algorithm requires a relatively long text to guarantee its accuracy.
Footnote 5: We will normalize the text to upper case and remove the punctuation, but keep the index into original text.
#### 3.2.1 Transcribe audios
The audios in Librilight have a large variance in duration, from a few minutes to hours. To avoid excessive computation on long audio files, we first split the long audio into 30-second segments with 2 seconds of overlap at each side, and then recognize these segments with an ASR model trained on Librispeech. Finally, we combine the transcripts that belong to the same audio by leveraging the timestamps of the recognized words.
#### 3.2.2 Close matches
Now we have the automatic transcript and the original book for each audio. To roughly locate the text segment in the original book that is most similar to the automatic transcript, we propose the _close matches_. First, we concatenate query and target into a long sequence (target follows query), then a suffix array is constructed on the sequence using the algorithm in [16]. The _close matches_ of the element at query position \(i\) are defined as two positions in the original sequence that are within the target portion, and which immediately follow and precede, in the suffix array, query position \(i\). This means that the suffixes ending at those positions are reverse-lexicographically close to the suffix ending at position \(i\). Figure 1 shows a simple
\begin{table}
\begin{tabular}{l l l l l} \hline \hline subset & hours & books & per-spk hrs & total spks \\ \hline small & 509 & 173 & 1.22 & 417 \\ medium & 5042 & 960 & 3.29 & 1531 \\ large & 50794 & 8592 & 7.54 & 6736 \\ \hline dev & 22.3 & 180 & 0.16 & 141 \\ test-clean & 10.5 & 87 & 0.15 & 70 \\ test-other & 11.5 & 112 & 0.16 & 72 \\ \hline \hline \end{tabular}
\end{table}
Table 1: The dataset statistics of Libriheavy.
example of finding the close matches of query _"LOVE"_ over target _"ILOVEQU"_.
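The following Python sketch illustrates the idea on a small example; it is a simplification that uses a naive suffix array and plain lexicographic order on ordinary suffixes, rather than the reverse-lexicographic order and the linear-time construction of [16].

```python
# Simplified sketch of the close-matches step (not the released implementation).
def close_matches(query, target):
    s = query + target
    nq = len(query)
    sa = sorted(range(len(s)), key=lambda i: s[i:])      # naive O(n^2 log n) suffix array
    pos_in_sa = {start: rank for rank, start in enumerate(sa)}
    matches = []
    for i in range(nq):
        r = pos_in_sa[i]
        # nearest neighbors above and below in the suffix array that lie in the target portion
        for step in (-1, +1):
            j = r + step
            while 0 <= j < len(sa) and sa[j] < nq:
                j += step
            if 0 <= j < len(sa):
                matches.append((i, sa[j] - nq))          # (query index, target index)
    return matches

print(close_matches("LOVE", "ILOVEQU"))
```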
#### 3.2.3 Longest increasing pairs
Let us think of those _close matches_ which we obtained above as a set of \((i,j)\) pairs, where \(i\) is an index into the query sequence and \(j\) is an index into the target sequence. The query and its corresponding segment in the target should be monotonically aligned, so we can get the approximate alignment between the two sequences by finding the longest chain of pairs: \(\left(i_{1},j_{1}\right),\left(i_{2},j_{2}\right),...\left(i_{N},j_{N}\right)\), such that \(i_{1}\leq i_{2}\leq...\leq i_{N}\), and \(j_{1}\leq j_{2}\leq...\leq j_{N}\).
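Since the chain must be non-decreasing in both coordinates, sorting the pairs by \((i,j)\) reduces the problem to finding a longest non-decreasing subsequence of the \(j\) values, as in the following sketch (not the exact implementation used in our toolkit):

```python
# Sketch: longest chain of (i, j) pairs, non-decreasing in both coordinates,
# via patience sorting in O(N log N).
from bisect import bisect_right

def longest_increasing_pairs(pairs):
    pairs = sorted(pairs)
    tails, tails_idx, parent = [], [], [None] * len(pairs)
    for idx, (_, j) in enumerate(pairs):
        p = bisect_right(tails, j)               # length of the chain this pair extends
        parent[idx] = tails_idx[p - 1] if p > 0 else None
        if p == len(tails):
            tails.append(j)
            tails_idx.append(idx)
        else:
            tails[p] = j
            tails_idx[p] = idx
    chain, idx = [], tails_idx[-1] if tails_idx else None
    while idx is not None:                       # backtrack one optimal chain
        chain.append(pairs[idx])
        idx = parent[idx]
    return chain[::-1]
```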
### Second alignment stage
From the longest chain obtained in the previous step, we can roughly locate the region in the target sequence relative to the query. At this stage, we use the Levenshtein alignment [17] to find the best single region of alignment between the recognized audio (query) and the text segment (obtained by the longest chain pairs). Since Levenshtein alignment has quadratic time complexity, it would be very inefficient for long sequences. We therefore use the traceback through the pairs in the longest chain as the backbone for the Levenshtein alignment, so that we limit the Levenshtein alignment to blocks defined by the \((i,j)\) positions in this traceback. By concatenating the Levenshtein alignments of all the blocks along the query index, we obtain the Levenshtein alignment of the whole query.
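A minimal sketch of this block-wise scheme is given below; for brevity it only accumulates edit distances per block, whereas the actual pipeline also keeps the tracebacks.

```python
# Sketch of the second stage: Levenshtein alignment restricted to blocks
# delimited by consecutive backbone pairs, then accumulated over blocks.
def levenshtein(a, b):
    prev = list(range(len(b) + 1))
    for x in range(1, len(a) + 1):
        cur = [x] + [0] * len(b)
        for y in range(1, len(b) + 1):
            cur[y] = min(prev[y] + 1,                          # deletion
                         cur[y - 1] + 1,                       # insertion
                         prev[y - 1] + (a[x - 1] != b[y - 1])) # substitution / match
        prev = cur
    return prev[-1]

def blockwise_distance(query, target, backbone):
    # backbone: increasing (i, j) pairs; pad with the two sequence ends
    backbone = [(0, 0)] + list(backbone) + [(len(query), len(target))]
    return sum(levenshtein(query[i1:i2], target[j1:j2])
               for (i1, j1), (i2, j2) in zip(backbone, backbone[1:]))
```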
### Audio segmentation
The goal of audio segmentation is to break long audio into shorter segments, ranging from 2 seconds to 30 seconds, which are more suitable for ASR training. We use a two-stage scoring method to search for good segmentations 6. All books in LibriVox have punctuation, so we decided to split only at punctuation marks indicating the end of a sentence, namely, ".", "?" and "!" 7. We select the positions in the alignment that follow chosen punctuations as Begin Of a Segment (BOS) and the positions followed by chosen punctuations as End Of a Segment (EOS), then we compute scores for these positions:
Footnote 6: The scores mentioned below will be normalized to the same scale, so none of the scores would dominate the final score.
* The number of silence seconds this position follows or is followed by, up to 3 seconds.
* The score corresponding to the number of insertions, deletions and substitutions within a certain region of this position.
Each pair of BOS and EOS forms a segment. The following rule is applied to assign scores to potential segments:
* The score of BOS plus the score of EOS.
* A score related to the duration of the segment, which guarantees the duration is in the range of 2 to 30 seconds and encourages a duration between 5 to 20 seconds.
* A bonus for the number of matches in the alignment.
* A penalty for the number of errors in the alignment.
For each BOS, we find the 4 best-scoring EOS and vice versa. We then append the two preceding sets of segments to get a list of candidate segments. We determine the best segmentation by selecting the highest-scoring set of segments that do not overlap. In practice, to avoid dropping too much audio, we allow a small overlap if the overlapping length is less than a quarter of the segment.
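Ignoring the small tolerated overlap, selecting the highest-scoring set of non-overlapping candidates is an instance of weighted interval scheduling, sketched below (a hypothetical helper, not the released code):

```python
# Sketch: highest-scoring set of non-overlapping segments (weighted interval scheduling).
from bisect import bisect_right

def best_segmentation(segments):
    """segments: list of (begin, end, score) with begin < end."""
    segments = sorted(segments, key=lambda s: s[1])
    ends = [s[1] for s in segments]
    best = [0.0] * (len(segments) + 1)        # best[i]: best score using the first i segments
    choice = [None] * (len(segments) + 1)
    for i, (b, e, score) in enumerate(segments, start=1):
        p = bisect_right(ends, b, 0, i - 1)   # last earlier segment ending at or before b
        take = best[p] + score
        if take > best[i - 1]:
            best[i], choice[i] = take, (i - 1, p)
        else:
            best[i], choice[i] = best[i - 1], None
    picked, i = [], len(segments)
    while i > 0:                              # reconstruct the chosen segments
        if choice[i] is None:
            i -= 1
        else:
            idx, p = choice[i]
            picked.append(segments[idx])
            i = p
    return best[-1], picked[::-1]
```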
## 4 Experiments
In this section, we present the baseline systems and experimental results for two popular models, namely CTC-Attention [18] and neural transducer [2]. We then compare the performance between the models trained on normalized text and texts with punctuation and casing.
### CTC-Attention baseline system
We build the CTC-Attention baseline using the Wenet [19] framework. We use the classic setup of Wenet toolkit which consists of a 12-layer Conformer [20] encoder and a 6-layer Transformer decoder. The embedding dimension is set to 512. The kernel size of the convolution layers is set to 31. The feedforward dimension is set to 2048. The modeling units are 500-class Byte Pair Encoding (BPE) [21] word pieces. The loss function is a logarithmic linear combination of the CTC loss (weight = 0.3) and attention loss with label smoothing (weight = 0.1). The input features are 80-channel Fbank extracted on 25 ms windows shifted by 10 ms with dither equal 0.1. SpecAugment [22] and on-the-fly Speed perturbation [23] are also applied to augment the training data. During training, we use the Adam optimizer [24] with the maximum learning rate of 0.002. We use the Noam [25] learning rate scheduler with 25k warm-up steps.
The model is trained for 90, 60 and 15 epochs on the small, medium and large subsets, respectively. Table 2 shows the Word Error Rate (WER) of the models on Libriheavy test
Figure 1: Example of finding close matches for a query (_LOVE_) over the target (_ILOVEQU_). The dash arrows point from the query elements to their close matches.
sets. As a reference, we also show the WER on the LibriSpeech test sets. The N-Best hypotheses are first generated by the CTC branch and then rescored by the attention branch. Note that for the LibriSpeech results, we apply some simple text normalization, such as converting numbers to their corresponding text and converting abbreviations (e.g "Mr." to "Mister") on the hypotheses to make it compatible with the LibriSpeech transcripts. We also apply these normalization procedures in the following experiments.
### Transducer baseline system
We build the transducer baseline system using icefall 8, which is one of the projects in the Next-gen Kaldi toolkit. Icefall implements a transformer-like transducer system, which consists of an encoder and a stateless decoder [26]. Different from the setting in [26], which only has an embedding layer, an extra 1-D convolution layer with a kernel size of 2 is added on top of it. The encoder used in this baseline is a newly proposed model called Zipformer. The Zipformer paper has not been released yet, but the implementation details and training pipeline can be found in icefall 9. We use the default setting of the Zipformer LibriSpeech recipe in icefall for all the following experiments.
Footnote 8: [https://github.com/k2-fsa/icefall](https://github.com/k2-fsa/icefall)
Footnote 9: [https://github.com/k2-fsa/icefall/blob/master/egs/librspeech/ASR/zipformer/zipformer.py](https://github.com/k2-fsa/icefall/blob/master/egs/librspeech/ASR/zipformer/zipformer.py)
The same as CTC-Attention baseline system, we train the model for 90, 60 and 15 epochs for the small, medium and large subsets, respectively. Table 3 shows the decoding results of the models trained on different training subsets, and the WERs on LibriSpeech and Libriheavy test sets are presented. We use the beam search method proposed in [27] which limits the maximum symbol per frame to one to accelerate the decoding.
### Training with punctuation and casing
This section benchmarks the performance of models trained on texts with punctuation and casing, and compares them with models trained on normalized texts. The system setting is almost the same as the transducer baseline system mentioned above. The only difference is that we adopt 756-class BPE word pieces rather than 500 for modeling, because we enable the _fallback_bytes_ flag when training the BPE model to handle rare characters, so we need an additional 256 positions for bytes. Table 4 shows the WERs and Character Error Rates (CER) of models trained on texts with punctuation and casing.
Table 5 compares the results of systems trained on normalized texts (upper case without punctuation) and unnormalized texts (casing with punctuation). In this experiment, we normalized both the transcripts and the decoding results to upper case and removed the punctuation when calculating the WERs. From the results, the performance gap between the two types of training texts is large when the training set is small, but as the training set grows, the gap becomes negligible. This indicates that when the training set is large enough, the style of the training texts will not make much difference in performance, while training with texts with punctuation and casing brings more information and flexibility.
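For reference, a minimal sketch of the normalization assumed when scoring (upper-casing and stripping punctuation) is given below; the exact scoring script may differ.

```python
# Minimal sketch (an assumption, not the exact scoring script) of normalizing
# hypotheses and references to upper case without punctuation before WER scoring.
import string

def normalize(text: str) -> str:
    table = str.maketrans("", "", string.punctuation)
    return " ".join(text.upper().translate(table).split())

print(normalize("Mister Brown said, 'Hello!'"))  # -> "MISTER BROWN SAID HELLO"
```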
## 5 Conclusion
We release a large-scale (50,000 hours) corpus containing punctuation, casing and text context, which can be used in various ASR tasks. We also propose and open-source a general and efficient audio alignment toolkit, which makes constructing speech corpora much easier. Finally, we conduct solid experiments on the released corpus, and the results show that our corpus is of high quality and demonstrate the effectiveness of our creation pipeline.
\begin{table}
\begin{tabular}{l c c c c} \hline \hline subset & \multicolumn{2}{c}{WER} & \multicolumn{2}{c}{CER} \\ \cline{2-5} & lh-clean & lh-other & lh-clean & lh-other \\ \hline small & 13.04 & 19.54 & 4.51 & 7.90 \\ medium & 9.84 & 13.39 & 3.02 & 5.10 \\ large & 7.76 & 11.32 & 2.41 & 4.22 \\ \hline \hline \end{tabular}
\end{table}
Table 4: The Libriheavy WERs and CERs on transducer system trained on texts with punctuation and casing.
\begin{table}
\begin{tabular}{l c c c c} \hline \hline subset & ls-clean & ls-other & lh-clean & lh-other \\ \hline small & 5.76 & 15.60 & 6.94 & 15.17 \\ medium & 3.15 & 7.88 & 3.80 & 8.80 \\ large & 2.02 & 5.22 & 2.74 & 6.68 \\ \hline \hline \end{tabular}
\end{table}
Table 2: The WERs of LibriSpeech (ls) and Libriheavy (lh) test sets on CTC-Attention system.
\begin{table}
\begin{tabular}{l c c c c} \hline \hline subset & ls-clean & ls-other & lh-clean & lh-other \\ \hline small & 4.05 & 9.89 & 4.68 & 10.01 \\ medium & 2.35 & 4.82 & 2.90 & 6.57 \\ large & 1.62 & 3.36 & 2.20 & 5.57 \\ \hline \hline \end{tabular}
\end{table}
Table 3: The WERs of LibriSpeech (ls) and Libriheavy (lh) test sets on transducer system.
\begin{table}
\begin{tabular}{l c c c c c} \hline \hline subset & text & ls-clean & ls-other & lh-clean & lh-other \\ \hline \multirow{2}{*}{small} & UNP & 4.05 & 9.89 & 4.68 & 10.01 \\ & C\&P & 4.51 & 10.84 & 5.16 & 11.12 \\ \hline \multirow{2}{*}{medium} & UNP & 2.35 & 4.82 & 2.90 & 6.57 \\ & C\&P & 2.45 & 5.03 & 3.05 & 6.78 \\ \hline \multirow{2}{*}{large} & UNP & 1.62 & 3.36 & 2.20 & 5.57 \\ & C\&P & 1.72 & 3.52 & 2.28 & 5.68 \\ \hline \hline \end{tabular}
\end{table}
Table 5: The comparison of WERs between models trained on Upper case No Punctuation (UNP) and Casing with Punctuation (C&P). |
2309.11440 | Flat band superconductivity in a system with a tunable quantum metric :
the stub lattice | Over the past years, one witnesses a growing interest in flat band (FB)
physics which has become a playground for exotic phenomena. In this study, we
address the FB superconductivity in the one-dimensional stub chain. In contrast to
the sawtooth chain or the Creutz ladder, for a given strength of the attractive
electron-electron interaction, the stub chain allows the tuning of the real
space spreading of the FB eigenstates (quantum metric or QM). We study in
detail the interplay between the interaction strength and the mean value of the
QM \langle g \rangle on the pairings and on the superfluid weight D_s. Our
calculations reveal several interesting and intriguing features. For instance,
in the weak coupling regime, D_s with respect to \langle g \rangle exhibits two
different types of behaviour. Despite the fact that the pairings differs
drastically, D_s scales linearly with the QM only when its \langle g \rangle is
large enough (small gap limit). On the other hand, when the QM is of small
amplitude an unusual power law is found, more precisely D_s \propto \langle g
\rangle^\nu where \nu \longrightarrow 2 in the limit of large single particle
gap. In addition to the numerical calculations, we have provided several
analytical results which shed light on the physics in both the weak and strong
coupling regime. Finally, we have addressed the impact of the thermal
fluctuations on the superfluid weight. | Maxime Thumin, Georges Bouzerar | 2023-09-20T16:14:17Z | http://arxiv.org/abs/2309.11440v1 | # Flat band superconductivity in a system with a tunable quantum metric: the stub lattice
###### Abstract
Over the past years, one witnesses a growing interest in flat band (FB) physics which has become a playground for exotic phenomena. In this study, we address the FB superconductivity in the one-dimensional stub chain. In contrast to the sawtooth chain or the Creutz ladder, for a given strength of the attractive electron-electron interaction, the stub chain allows the tuning of the real space spreading of the FB eigenstates (quantum metric or QM). We study in detail the interplay between the interaction strength and the mean value of the QM \(\langle g\rangle\) on the pairings and on the superfluid weight \(D_{s}\). Our calculations reveal several interesting and intriguing features. For instance, in the weak coupling regime, \(D_{s}\) with respect to \(\langle g\rangle\) exhibits two different types of behaviour. Despite the fact that the pairings differ drastically, \(D_{s}\) scales linearly with the QM only when \(\langle g\rangle\) is large enough (small gap limit). On the other hand, when the QM is of small amplitude an unusual power law is found, more precisely \(D_{s}\propto\langle g\rangle^{\nu}\) where \(\nu\to 2\) in the limit of large single particle gap. In addition to the numerical calculations, we have provided several analytical results which shed light on the physics in both the weak and strong coupling regime. Finally, we have addressed the impact of the thermal fluctuations on the superfluid weight.
## I Introduction
For the last two decades, the interest in flat-band (FB) materials has been growing considerably, placing this emerging family of compounds at the heart of the physics of strongly correlated systems [1; 2; 3; 4; 5; 6]. Due to destructive quantum interference, the eigenstates can be localized [7], leading to a constant energy band over the whole Brillouin zone (BZ). The kinetic energy being quenched, the interaction energy becomes the unique relevant energy scale, and exotic phases of quantum matter can emerge in such materials. Foremost among these quantum phases stands superconductivity, which has been intensively studied lately. Experimentally, superconducting phases which are very likely of FB origin have been reported in graphene-based materials such as the twisted bilayer graphene (TBG) [8; 9; 10] as well as in graphite [11; 12], while theoretical studies have covered a wide range of low-dimensional systems. Despite the Mermin-Wagner theorem [13; 14], two-dimensional systems such as the TBG [15], the Lieb lattice [16; 17] or the dice lattice [18] are often considered as systems in which a superconducting phase transition of topological nature can occur without spontaneous continuous symmetry breaking. It corresponds to the Berezinsky-Kosterlitz-Thouless (BKT) transition [19; 20; 21]. More recently, one-dimensional systems have come under the spotlight [22; 23; 24; 25]. Indeed, one- and quasi-one-dimensional systems are good candidates to facilitate the understanding of the underlying physics and may as well be relevant for the superconductivity in anisotropic systems [26; 27; 28]. In FB superconductors, the superfluid weight has two kinds of contributions: a conventional intraband component (vanishing in the strictly FB limit), and an interband term of geometric nature. In the weak coupling regime, the geometric contribution varies linearly with the quantum metric (QM) tensor as defined in Ref. [29] and with the interaction strength [30].
The purpose of the present work is to consider a FB system where the QM is tunable. The lattice chosen to pursue this study is a bipartite chain with 3 atoms per unit cell, the so-called stub lattice, as illustrated in Fig. 1a. Unlike the sawtooth chain or the Creutz ladder, the stub chain is bipartite and hosts a FB for any value of the out-of-chain hopping, \(\alpha t\) in Fig. 1a, which provides the freedom to tune the QM. Despite the absence of a natural realization, one should mention that the artificial stub lattice can be experimentally engineered, for instance, within the optical lattice framework [31], or even by the realization of micro-pillar optical cavities [32].
## II Model and methods
Electrons in the stub lattice, in the presence of attractive electron-electron interaction, are described by the Hubbard model,
\[\hat{H}=\sum_{\langle i\lambda,j\eta\rangle,\sigma}t_{ij}^{\lambda\eta}\;\hat {c}_{i\lambda,\sigma}^{\dagger}\hat{c}_{j\eta,\sigma}-\mu\hat{N}-|U|\sum_{i \lambda}\hat{n}_{i\lambda\uparrow}\hat{n}_{i\lambda\downarrow}, \tag{1}\]
where the operator \(\hat{c}_{i\lambda\sigma}^{\dagger}\) creates an electron of spin \(\sigma\) at site \(\mathbf{r}_{i\lambda}\), \(i\) being the cell index and \(\lambda=\)A,B and C. The sums run over the lattice, \(\langle i\lambda,j\eta\rangle\) refers to nearest-neighbor pairs for which the hopping integral \(t_{ij}^{\lambda\eta}\) is \(t\) for (AB) pairs and \(\alpha t\) for (AC) pairs. \(\hat{N}=\sum_{i\lambda,\sigma}\hat{n}_{i\lambda,\sigma}\) is the particle number operator, \(\mu\) is the chemical potential, and finally \(|U|\) is the strength of the on-site attractive electron-electron interaction. In what follows, the lattice spacing \(a\) will be set to 1.
To address the FB superconductivity in the stub chain, we propose to handle the electron-electron inter
action term within the mean-field Bogoliubov-De-Gennes (BdG) approach. Before we proceed, it is crucial to justify the relevance and accuracy of BdG as compared to methods such as exact diagonalization (ED), density matrix renormalisation group (DMRG), Quantum-Monte-Carlo (QMC) and dynamical mean field theory (DMFT). In the case of the Lieb lattice, a good agreement was found between BdG and ED calculations of the superfluid weight \(D_{s}\)[16]. Similarly, in the case of FB superconductivity in CuO\({}_{2}\) layers, BdG has fairly reproduced the pair structure factor as obtained in the QMC simulations [17]. Moreover, in one-dimensional systems, such as the sawtooth chain, the creutz ladder and other quasi-one dimensional FB systems, the calculation of \(D_{s}\) by BdG and DMRG has revealed an impressive quantitative agreement [23; 24]. Furthermore, We should mention that the BCS wavefunction is the exact ground-state for any bipartite lattice hosting FB while the FB is gapped and \(|U|\) is smaller than the gap [16; 30]. In addition, we quote as well the fact that the mean field unrestricted Hartree-Fock theory has been shown to be very accurate to describe the magnetic phases of strongly correlated electrons in two-dimensional decorated lattices which exhibit quasi-FB in the vicinity of the Fermi energy [33]. Thus, one can confidently and safely consider that the BdG approach is a suitable and reliable tool to address quantitatively the FB superconductivity in the stub lattice.
Before discussing our results, we recall briefly the BdG theory. The U-term is decoupled as follows, \(\hat{n}_{i\lambda,\uparrow}\hat{n}_{i\lambda,\downarrow}\longrightarrow\langle\hat{n}_{i\lambda,\downarrow}\rangle_{th}\hat{n}_{i\lambda,\uparrow}+\langle\hat{n}_{i\lambda,\uparrow}\rangle_{th}\hat{n}_{i\lambda,\downarrow}-\frac{\Delta_{i\lambda}}{|U|}\hat{c}^{\dagger}_{i\lambda,\uparrow}\hat{c}^{\dagger}_{i\lambda,\downarrow}-\frac{\Delta_{i\lambda}}{|U|}\hat{c}_{i\lambda,\downarrow}\hat{c}_{i\lambda,\uparrow}-C_{i\lambda}\), where \(\Delta_{i\lambda}=-|U|\langle\hat{c}_{i\lambda,\downarrow}\hat{c}_{i\lambda,\uparrow}\rangle_{th}\) are the pairing order parameters, and \(C_{i\lambda}=\langle\hat{n}_{i\lambda,\uparrow}\rangle_{th}\langle\hat{n}_{i\lambda,\downarrow}\rangle_{th}+\langle\hat{c}_{i\lambda,\downarrow}\hat{c}_{i\lambda,\uparrow}\rangle_{th}\langle\hat{c}^{\dagger}_{i\lambda,\uparrow}\hat{c}^{\dagger}_{i\lambda,\downarrow}\rangle_{th}\). For a fixed temperature and a given density of electrons, the pairings and the occupations are calculated self-consistently. Notice that translation invariance implies that the thermal average \(\langle\ldots\rangle_{th}\) of a local operator is cell-independent. Thus, we drop the cell index. We also consider a paramagnetic ground state, \(\langle\hat{n}_{\lambda,\uparrow}\rangle_{th}=\langle\hat{n}_{\lambda,\downarrow}\rangle_{th}=\langle\hat{n}_{\lambda}\rangle_{th}/2\). Eq. (1) becomes,
\[\hat{H}_{BdG}=\sum_{k}\left[\hat{c}^{\dagger}_{k\uparrow}\quad\hat{c}_{-k\downarrow}\right]\begin{bmatrix}h^{\uparrow}(k)&\hat{\Delta}\\ \hat{\Delta}^{\dagger}&-h^{\downarrow*}(-k)\end{bmatrix}\begin{bmatrix}\hat{c}_{k\uparrow}\\ \hat{c}^{\dagger}_{-k\downarrow}\end{bmatrix}, \tag{2}\]
where \(\hat{c}^{\dagger}_{k\sigma}=\left(\hat{c}^{\dagger}_{kA,\sigma},\hat{c}^{\dagger}_{kB,\sigma},\hat{c}^{\dagger}_{kC,\sigma}\right)\), \(c^{\dagger}_{k\lambda,\sigma}\) is the Fourier transform (FT) of \(c^{\dagger}_{i\lambda,\sigma}\). \(\hat{h}^{\sigma}(k)=\hat{h}_{0}(k)-\mu-\hat{V}_{\sigma}\), where \(\hat{h}_{0}\) is the FT of the tight-binding term in Eq. (1), \(\hat{V}_{\sigma}=\frac{|U|}{2}\operatorname{diag}(\langle\hat{n}_{A}\rangle_{th},\langle\hat{n}_{B}\rangle_{th},\langle\hat{n}_{C}\rangle_{th})\) and \(\hat{\Delta}=\operatorname{diag}(\Delta_{A},\Delta_{B},\Delta_{C})\).
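For illustration, the following minimal Python sketch (not the code used for the results below) solves the BdG self-consistency at half filling, where the Hartree shift cancels the chemical potential so that only \(\hat{h}_{0}(k)\) enters the BdG matrix (see Sec. III and Appendix A); the gauge choice for \(\hat{h}_{0}(k)\) and the plain fixed-point iteration are our own assumptions.

```python
# Minimal sketch of the half-filled BdG self-consistency for the stub chain (t = 1).
import numpy as np

def h0(k, alpha, t=1.0):
    fx = -2.0 * t * np.cos(k / 2.0)           # A-B element in the symmetric gauge
    return np.array([[0.0, fx, alpha * t],
                     [fx, 0.0, 0.0],
                     [alpha * t, 0.0, 0.0]])

def bdg_pairings(U, alpha, nk=512, beta=1e6, n_iter=500, tol=1e-10):
    """Self-consistent (Delta_A, Delta_B, Delta_C) at half filling, T ~ 0."""
    ks = 2.0 * np.pi * (np.arange(nk) + 0.5) / nk - np.pi
    delta = np.array([0.1, 0.1, 0.1])          # initial guess
    for _ in range(n_iter):
        anom = np.zeros(3)
        for k in ks:
            H = np.zeros((6, 6))
            H[:3, :3] = h0(k, alpha)
            H[3:, 3:] = -h0(-k, alpha)
            H[:3, 3:] = np.diag(delta)
            H[3:, :3] = np.diag(delta)
            E, W = np.linalg.eigh(H)
            occ = 1.0 / (1.0 + np.exp(np.clip(beta * E, -700, 700)))  # Fermi-Dirac
            # <c_{-k,lam,down} c_{k,lam,up}> = sum_n conj(W[3+lam,n]) W[lam,n] f(E_n)
            anom += np.real(np.sum(np.conj(W[3:, :]) * W[:3, :] * occ, axis=1))
        new_delta = -U * anom / nk             # Delta_lam = -|U| <c_down c_up>
        if np.max(np.abs(new_delta - delta)) < tol:
            delta = new_delta
            break
        delta = new_delta
    return delta

if __name__ == "__main__":
    print(bdg_pairings(U=1.0, alpha=1.0))      # expect Delta_B + Delta_C close to |U|/2
```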
## III Results and Discussions
### Quasi-particles dispersions
In the present study we focus our attention on the half-filled case for which \(\mu=-|U|/2\) and \(n_{\lambda}=1\), as predicted by the uniform density theorem in bipartite lattices [34]. In Fig. 1**b,c** are plotted the quasi-particle (QP) dispersions for \(|U|/t=0\) and \(|U|/t=1\), with respectively \(\alpha=0.1\) and \(\alpha=1\). First, for \(U=0\), a gap \(\delta_{0}\) of amplitude \(|\alpha|t\) opens up in the one-particle spectrum between the FB and the dispersive bands at \(k=\pi\) (bands are degenerate). When \(U\) is switched on, the degeneracy of each band is lifted. For small values of \(\alpha\) (\(\alpha=0.1\)), we observe pronounced differences between the vicinity of \(k=0\) and \(k=\pi\). The splitting of the high-energy bands is significant in the vicinity of \(k=\pi\), whilst in the rest of the Brillouin zone (BZ) it is negligible. On the other hand, the former FB remains flat except near the BZ boundary where it behaves as a massive Dirac excitation, with a small QP gap \(\Delta_{QP}\) of the order of \(0.025\,t\) (\(\alpha=0.1\)) for \(|U|/t=1\). Notice that the splitting between the quasi-FBs is of the order of \(|U|\) at the zone center. In contrast, for larger values of \(\alpha\), the splitting of the high-energy bands is almost k-independent and the former FBs are quasi-flat in the whole BZ. Notice that the splitting of the quasi-FB at \(k=0\) is smaller than for \(\alpha=0.1\), i.e. \(0.29\ |U|\).
### Pairings and quasi-particle gap
Fig. 2**(a)** depicts the pairings and \(\Delta_{QP}\) as a function of \(|U|\) for \(\alpha=1\). Note that the pairings are taken real, since they all have the same phase, which can be removed by a global gauge transformation. For small \(|U|\), both \(\Delta_{B}\) and \(\Delta_{C}\) scale linearly with \(|U|\) and \(\Delta_{A}\propto|U|^{2}\). Such
Figure 1: **(a)** Schematic representation of the attractive Hubbard Hamiltonian (Eq. (1) in the main text) for electrons in the stub lattice. Quasi-particle dispersions for \(|U|/t=0\) (dashed lines) and \(|U|/t=1\) (continuous lines) with respectively \(\alpha=0.1\)**(b)** and \(\alpha=1\)**(c)**. The symbol ‘\(\times\)2’ means that the eigenvalues are twofold degenerate.
a behavior is consistent with what has been reported in recent studies [22; 30] and with what has been pointed out in former studies [35; 36; 37]. It will be discussed in more detail in what follows. This scaling contrasts with the conventional BCS theory which predicts \(\Delta_{BCS}\propto t\,e^{-1/(|U|\rho(E_{F}))}\) for the half-filled one-dimensional chain [38]. As anticipated, in the strong coupling regime (\(|U|\gg t\)), the pairing increases linearly with \(|U|\).
In addition, \(\Delta_{\lambda}\) is found to be orbital-independent and \(\Delta_{\lambda}\simeq\frac{|U|}{2}\), as expected for the half-filled system when the charge density is uniform. In Fig. 2**(b)** is plotted \(\Delta_{\lambda}/|U|\) as a function of \(\alpha\) for \(|U|\leqslant t\). The numerical data are obtained for both \(|U|=0.1\,t\) and \(|U|=t\). For small values of \(\alpha\), we find \(\Delta_{B}\propto\ |U|\alpha\) and \(\Delta_{C}\approx\frac{|U|}{2}\), which can be understood by considering the expression of the FB compact localized eigenstate (CLS) that reads \(|\text{CLS}_{i}\rangle=\frac{1}{\sqrt{2+\alpha^{2}}}(|C_{i}\rangle+|C_{i+1}\rangle-\alpha\,|B_{i}\rangle)\). In this regime, the weight is roughly constant on C-sites and varies linearly with \(\alpha\) on B-sites. Thus, as \(\alpha\) increases, \(\Delta_{C}\) decays, and simultaneously \(\Delta_{B}\) rises until they finally cross at \(\alpha_{c}\approx 1.2\pm 0.1\), where the CLS weight is comparable on both B and C sites. In addition, Fig. 2**b** reveals two distinct regimes for the QP gap. More specifically, for \(\alpha\leqslant\alpha_{c}\), the gap is located at \(k=\pi\) and \(\Delta_{QP}=\Delta_{B}\). On the other hand, for \(\alpha\geqslant\alpha_{c}\), it moves to \(k=0\), is weakly \(\alpha\)-dependent and lies between \(\Delta_{B}\) and \(\Delta_{C}\). Within a first-order perturbation calculation with respect to \(|U|\) (see Appendix A), one gets the following set of analytical expressions,
\[\begin{split}\Delta_{B}=\frac{|U|\alpha}{2\sqrt{4+\alpha^{2}}}, \\ \Delta_{C}+\Delta_{B}=\frac{|U|}{2}.\end{split} \tag{3}\]
Thus, the sum \(\Delta_{C}+\Delta_{B}\) is \(\alpha\)-independent. We find as well for the QP gap,
\[\Delta_{QP}=\left\{\begin{array}{ccc}\Delta_{B}&\text{for}&\alpha \leqslant\alpha_{c}\\ \frac{\alpha^{2}\Delta_{B}+4\Delta_{C}}{\alpha^{2}+4}&\text{for}&\alpha \geqslant\alpha_{c},\end{array}\right. \tag{4}\]
where \(\alpha_{c}=\frac{2}{\sqrt{3}}=1.155\). As it can be seen, Fig. 2**b** nicely illustrates the excellent (resp. good) quantitative agreement between the numerical calculations for \(|U|=0.1\)\(t\) (resp. \(|U|=t\)) and the analytical calculations.
### Superfluid weight and quantum metric
The SC phase is characterized by the superfluid weight [39; 40; 41] defined as
\[D_{s}=\frac{1}{N_{c}}\frac{\partial^{2}\Omega(q)}{\partial q^{2}}\Big{|}_{q=0}, \tag{5}\]
where \(N_{c}\) is the number of unit cells of the lattice, \(\Omega(q)\) is the grand potential and \(q\) mimics the effect of a vector potential, introduced by a standard Peierls substitution \(t_{ij}^{\lambda\eta}\longrightarrow t_{ij}^{\lambda\eta}e^{iq(x_{i\lambda}-x_{j\eta})}\).
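A rough finite-difference evaluation of Eq. (5) can be sketched as follows, under the assumption that the pairings are kept fixed at their \(q=0\) self-consistent values (e.g. those obtained with the sketch of Sec. II); with the intracell positions of the stub chain (A–B bonds displaced by \(\pm 1/2\) along the chain, A–C bonds not displaced), the Peierls phases simply shift the momenta of the particle and hole blocks.

```python
# Rough finite-difference sketch of Eq. (5); pairings (delta) are held fixed at q = 0.
import numpy as np

def h0(k, alpha, t=1.0):
    fx = -2.0 * t * np.cos(k / 2.0)
    return np.array([[0.0, fx, alpha * t],
                     [fx, 0.0, 0.0],
                     [alpha * t, 0.0, 0.0]])

def grand_potential(q, delta, alpha, nk=1024):
    ks = 2.0 * np.pi * (np.arange(nk) + 0.5) / nk - np.pi
    omega = 0.0
    for k in ks:
        H = np.zeros((6, 6))
        H[:3, :3] = h0(k + q, alpha)           # particle block with Peierls shift
        H[3:, 3:] = -h0(-k + q, alpha)         # hole block with Peierls shift
        H[:3, 3:] = np.diag(delta)
        H[3:, :3] = np.diag(delta)
        E = np.linalg.eigvalsh(H)
        omega += E[E < 0.0].sum()              # T = 0: sum over the negative branches
    return omega

def superfluid_weight(delta, alpha, nk=1024, dq=1e-3):
    om = lambda q: grand_potential(q, delta, alpha, nk)
    return (om(dq) - 2.0 * om(0.0) + om(-dq)) / (nk * dq * dq)
```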
Fig. 3**a** depicts \(D_{s}\) as a function of \(|U|\) for different values of \(\alpha\). We first consider the low-\(U\) region where one observes that \(D_{s}\propto|U|\), as established for isolated FB [30]. Starting from \(\alpha=1\) and as we reduce it, the slope increases very rapidly. We find \(\frac{\partial D_{s}}{\partial|U|}=0.23\), \(0.61\) and \(3.4\) for respectively \(\alpha=1\), \(0.5\) and \(0.1\). Simultaneously, the region where \(D_{s}\propto|U|\) shrinks significantly as \(\alpha\) decreases. Additionally, as \(\alpha\) increases beyond \(\alpha=1\), the slope is drastically suppressed, e.g. for \(\alpha=2\) it drops to \(0.06\). Fig. 3**b** illustrates the connection between \(D_{s}\) and the mean value of the quantum metric (QM) of the FB eigenstates defined as \(\langle g\rangle=\frac{1}{2\pi}\int_{-\pi}^{\pi}dk\,g(k)\), where we recall the definition of the QM [29],
\[g(k)=\left\langle\partial_{k}\psi_{k}^{FB}\big{|}\partial_{k}\psi_{k}^{FB} \right\rangle-\big{|}\left\langle\psi_{k}^{FB}\big{|}\partial_{k}\psi_{k}^{ FB}\right\rangle|^{2} \tag{6}\]
\(\big{|}\psi_{k}^{FB}\big{\rangle}\) is the FB eigenstate of the non-interacting Hamiltonian. For the stub lattice, one finds \(g(k)=\frac{\alpha^{2}\sin^{2}\left(\frac{k}{2}\right)}{\left(\alpha^{2}+4\cos^{2}\left(\frac{k}{2}\right)\right)^{2}}\), which leads to \(\langle g\rangle=\frac{1}{2|\alpha|\sqrt{4+\alpha^{2}}}\).
For isolated half-filled FBs, within the BdG approach it has been shown analytically that \(D_{s}=2|U|n_{\phi}\langle g\rangle\), \(n_{\phi}^{-1}\) being the number of orbitals on which the FB wave function is finite [25]. The validity and accuracy of this result have been confirmed numerically, for instance, by DMRG for the Creutz ladder [22; 42].
Figure 2: **(a)** Pairings (\(\lambda=A,B,C\)) and quasi-particle gap (\(\lambda=QP\)) as a function of \(|U|\) for \(\alpha=1\). The weak interaction region is magnified in the inset. **(b)**\(\frac{\Delta_{\lambda}}{|U|}\) as a function of \(\alpha\) where the symbols represent the numerical data. The filled (resp. empty) symbols correspond to \(|U|=0.1\,t\) (resp. \(|U|=t\)) and the continuous lines are the analytical calculations.
In the limit of vanishing U, we find two distinct types of behaviour. For \(\alpha\leqslant\alpha_{c}\) the SF weight scales linearly with the QM and a fit of the plotted data gives a ratio \(R=\frac{D_{s}}{|U|\langle g\rangle}\approx 1.38\). Notice that according to Ref. [30] and with \(n_{\phi}^{-1}=2\), one would find \(R=1\). From our analytical calculations, available in the Appendix C, in the regime \(\alpha\ll\alpha_{c}\) a ratio \(R=3/2\) has been found. On closer inspection, this is intriguing. Recall that obtaining \(\frac{D_{s}}{|U|}\propto\langle g\rangle\) requires (i) a uniform pairing on the sites where the CLS weight is finite and (ii) a large gap (\(\delta_{0}\gg|U|\)) between the dispersive bands and the FB. While condition (ii) is fulfilled, the first one is not. Indeed, in the limit \(\alpha\ll 1\), the ratio \(\Delta_{B}/\Delta_{C}\) is of the order of \(\alpha\) (see Fig. 2**b**), which means as well that the pairing occurs essentially on C-sites. Hence, for a finite \(|U|\), one would expect instead a vanishing superfluid weight as \(\alpha\) goes to zero.
This raises the crucial question of how to resolve these contradictions. To do so, notice that the square root of the mean value of the QM provides a measure of the mean spread of the FB eigenstates [43; 44; 45]. More precisely, the QM can be re-expressed as
\[g(k)=\left\langle\psi_{k}^{FB}\right|(\hat{x}^{2}-\langle\hat{x}\rangle_{k}^{ 2})\left|\psi_{k}^{FB}\right\rangle, \tag{7}\]
where \(\hat{x}\) is the position operator and \(\langle\hat{x}\rangle_{k}=\left\langle\psi_{k}^{FB}\right|\hat{x}\left|\psi_{k}^{FB}\right\rangle\). This leads, for \(\alpha\ll 1\), to a mean spread of the FB eigenstates \(\bar{L}=\sqrt{\langle g\rangle}=\frac{1}{2\sqrt{\alpha}}\). From a dimensional point of view, the SF weight is a typical energy scale \(\delta E\) of the quasi-FB (QFB) divided by the square of a typical momentum \(q_{typ}\). For small U, the bandwidth \(\delta W\) of the QFB is the relevant energy scale. As shown in Appendix A, in the limit of vanishing \(\alpha\), this bandwidth is \(\alpha\)-independent and \(\delta W=\frac{|U|}{2}\). On the other hand, the natural choice for \(q_{typ}\) is \(\frac{1}{\bar{L}}\), since there is no Fermi wave vector for flat bands. Thus, the SF weight should scale as,
\[D_{s}\sim\delta W\times\bar{L}^{2}, \tag{8}\]
which can be as well rewritten \(D_{s}\sim|U|\langle g\rangle\).
On the other hand, for \(\alpha\geqslant\alpha_{c}\), the data show that \(\frac{D_{s}}{|U|}\) is inconsistent with a linear dependence on the QM. A fit of the numerical data suggests an unusual scaling, \(D_{s}=3.1|U|\langle g\rangle^{\nu}\), where \(\nu\approx 1.7\). However, it is found that the power law is sensitive to the region chosen for the fit. Moreover, the convergence becomes more difficult as \(\alpha\) becomes too large. Based on our numerical data, for \(\alpha\gg 1\), \(\nu\) seems to converge to \(2\). Using arguments similar to those discussed above, one can also explain this change of behavior. For large \(\alpha\), the bandwidth of the QFB is now \(\alpha\)-dependent and falls off rapidly as \(\alpha\) increases. More precisely, it is found that \(\delta W=\frac{2|U|}{\alpha^{2}}\) as shown in the Appendix A, and from the QM expression one has \(\bar{L}^{2}=\frac{1}{2\alpha^{2}}\), yielding \(D_{s}\sim\frac{|U|}{\alpha^{4}}\propto|U|\langle g\rangle^{2}\). This scaling is confirmed by the detailed analytical calculations available in the Appendix C.
We now propose to discuss the intermediate and strong coupling regimes. For any \(\alpha\), the shape of \(D_{s}\) as a function of \(|U|\) is similar: after a linear increase with respect to \(|U|\), the SF weight reaches a maximum \(D_{s}^{max}\) and then decays monotonically as \(|U|\) gets larger. The location of the maximum \(U_{max}\) strongly depends on \(\alpha\) and \(D_{s}^{max}\) decreases monotonically with \(\alpha\). More precisely, it is found that \(D_{s}^{max}/t\approx 0.21\), \(0.40\), \(0.52\) and \(0.63\) for respectively \(\alpha=2\), \(1\), \(0.5\) and \(0.1\), where \(U_{max}/t\approx 5.5\), \(3.1\), \(2.2\) and \(1.4\) respectively. In the limit of large \(U\), \(D_{s}\) is found to scale as \(1/|U|\). This is expected since the physics is that of repulsive hardcore bosons whose effective mass is proportional to \(|U|\)[46]. Here, for \(|U|\gg t\), it can be shown analytically that \(D_{s}=\frac{2t^{2}}{|U|}\) (see Appendix B for details); it corresponds to the dashed line in Fig. 3**a**. Let us finally discuss the specific case of large values of \(\alpha\), e.g. \(\alpha=5\), as plotted in the inset of Fig. 3**a**. The shape of \(D_{s}(U)\) has notably changed with respect to the cases discussed previously. The SF weight increases slowly as
Figure 3: **(a)** Superfluid weight \(D_{s}\) at \(T=0\) as a function of \(|U|\) for \(\alpha=0.1,0.5,1\) and \(2\). The inset represents \(D_{s}\) for a large value of \(\alpha\). The black dashed line is the analytical expression in the limit \(|U|\gg t\) (see text). **(b)**\(\frac{\partial D}{\partial U}|_{U=0}\) as a function of \(\langle g\rangle\), the mean value of the quantum metric (square symbols). The corresponding values of \(\alpha\) are depicted on the upper \(x\)-axis. The dashed lines are data fits discussed in the main text.
\(|U|\) increases up to \(|U|/t\approx 10\), after which it now exhibits a sudden jump before reaching its maximum at \(|U|/t\simeq 14\); beyond \(|U|/t\approx 25\) it finally scales as \(D_{s}=2t^{2}/|U|\).
### Thermal fluctuation effects
In this section, the temperature is introduced. Our main purpose is to understand how thermal fluctuations affect the superfluid weight and, by extension, to characterize the crossover temperature \(T^{*}\) between the metallic (or insulating) and superconducting phases.
In Fig. 4 is shown \(D_{s}\) (rescaled with respect to its value at \(T=0\)) as a function of \(T\) for \(|U|/t=1\) and different values of \(\alpha\). Since \(D_{s}(0)\) has already been discussed in Fig. 3**a**, the discussion here focuses on the evolution of the shape and concavity of \(D_{s}(T)\). Two different regimes are observed. First, for the largest values of \(\alpha\) (\(\alpha=1,2\)), \(D_{s}(T)\) is concave and similar to a conventional BCS curve. As \(\alpha\) decreases, an inflection point appears for \(\alpha\leqslant 0.5\), and \(D_{s}(T)\) is now convex at higher temperature. Furthermore, the region where \(D_{s}(T)\approx D_{s}(0)\) is found to shrink drastically as \(\alpha\) decreases. For instance, it decays by a factor of 6 when \(\alpha\) varies from 2 to 0.1. More importantly, after the inflection point, \(D_{s}(T)\) exhibits a long tail before it finally vanishes. This means that the characteristic temperature for estimating the magnitude of thermal fluctuations is much lower than the BCS critical temperature.
According to the Mermin-Wagner theorem, a continuous symmetry cannot be spontaneously broken at finite temperature in both one- and two-dimensional systems. However, in 2D systems, a transition of topological nature can occur at finite temperature; it is known as the Berezinsky-Kosterlitz-Thouless transition [19; 20; 21]. In this case, no continuous symmetry is broken and quasi-long-range order is established below \(T_{BKT}\). Above \(T_{BKT}\) the pair-pair correlation functions decay exponentially, while below \(T_{BKT}\) they exhibit a power-law decay with a T-dependent exponent. In two-dimensional superconducting systems, \(T_{BKT}\) is defined as follows [30; 47],
\[D_{s}(T_{BKT})=\frac{8}{\pi}k_{B}T_{BKT}. \tag{9}\]
In order to define, for our one-dimensional chain, a characteristic temperature \(T^{*}\) above which the SF weight is strongly reduced, we propose to use Eq. (9) as a criterion. Instead, we could have chosen a different criterion such as \(D_{s}(T^{*})=0.3D_{s}(0)\), but that would only have minor effects on the following discussion.
In Fig. 5 is depicted \(T^{*}\) as a function of the interaction strength for different values of \(\alpha\). First, at weak coupling, \(T^{*}\) scales linearly with \(|U|\)[48; 49; 50] for any value of \(\alpha\). However, as \(\alpha\) increases, the slopes decrease drastically from 0.2 for \(\alpha=0.1\) down to 0.02 for \(\alpha=2\). In the meantime, the region where \(T^{*}\propto|U|\) has tripled between \(\alpha=0.1\) and \(\alpha=1\), before it finally drops as \(\alpha\) increases further. In the intermediate regime, when \(\alpha\leqslant 1\), one observes an \(\alpha\)-independent maximum for \(T^{*}\) located at \(|U|/t\approx 3\). On the other hand, for \(\alpha\geqslant 1\), it varies strongly with \(\alpha\), e.g. for \(\alpha=2\), the maximum location is \(|U|/t\approx 5.8\). In contrast, Fig. 3**a** has shown that the value of \(|U|\) for which \(D_{s}(0)\) is maximum varies with \(\alpha\) even when \(\alpha\leqslant 1\). Note that \(T^{*}\) reaches its maximum after the linear region discussed above, except for the peculiar case of \(\alpha=0.1\) where a clear quasi-plateau is observed for \(|U|/t\in[0.2,1]\). After the plateau, \(T^{*}\) inflates up to its maximum. Beyond the maximum, \(T^{*}\) decays and converges towards an \(\alpha\)-independent behavior. To find out how \(T^{*}\) scales with the interaction strength, we have plotted in the inset the ratio \(r=T^{*}/D_{s}(0)\) as a function of \(|U|/t\). We clearly find a constant ratio \(r=r_{\infty}=0.39\), independent of \(\alpha\), when \(|U|/t\geqslant 4\). However, the larger \(\alpha\) is, the faster \(r\) reaches \(r_{\infty}\). Indeed, for \(\alpha=2\), the limit is already reached for \(|U|/t=1\) while for \(\alpha=0.1\)
Figure 4: Superfluid weight (rescaled by its value at \(T=0\)) as a function of temperature for \(|U|/t=1\) and \(\alpha=0.1,0.25,0.5,1\) and \(2\).
Figure 5: Crossover temperature as a function of \(|U|/t\) for \(\alpha=0.1,0.5,1\) and \(2\). The inset pictures the ratio \(r=T^{*}/D_{s}(0)\).
\(|U|/t\) must be larger than 4. In addition, the smaller \(\alpha\) is, the more strongly \(r\) depends on \(U\) before reaching \(r_{\infty}\). From the asymptotic scaling \(D_{s}(0)=\frac{2t^{2}}{|U|}\) (Fig. 3**a**), in the large \(|U|\) limit, one finds \(T^{*}=0.39\,D_{s}(0)\simeq 0.8t^{2}/|U|\), which corresponds, in Fig. 5, to the black dashed line.
## Conclusion
To conclude, we have addressed the FB superconductivity in the one-dimensional stub chain, which allows the independent tuning of the QM and of the electron-electron interaction strength \(|U|\). For that purpose, within the Bogoliubov-de Gennes approach, we have studied in detail the competition between \(|U|\) and the QM \(\langle g\rangle\) on the pairings and on the superfluid weight. In addition to the numerical calculations, we have provided several analytical results in both the weak and strong coupling regimes. In the weak coupling regime, it is shown that the SF weight \(D_{s}\) scales linearly with \(|U|\) and exhibits two different types of behavior with respect to \(\langle g\rangle\). First, when \(\delta_{0}\), the single particle gap, is smaller than the in-chain hopping (\(t\)), then \(D_{s}\propto\langle g\rangle\). On the other hand, for \(\delta_{0}\geqslant t\), it has been revealed that \(D_{s}\propto\langle g\rangle^{\nu}\), where \(\nu\to 2\) in the limit of large gap. We have also considered the effects of thermal fluctuations on \(D_{s}\). In particular, it is found that the shape of the SF weight depends strongly on the QM. Finally, the crossover temperature \(T^{*}\) has been studied as a function of the interaction strength and the out-of-chain coupling. It is found that the \(|U|\) dependence of \(T^{*}\) exhibits different behaviors which strongly depend on the mean value of the QM.
## Appendix A: Pairings in the weak coupling regime.
Within a first order perturbation theory, the purpose of this appendix is to derive analytically the expression of the pairings as a function of U and \(\alpha\) for the half-filled stub chain. For \(q=0\), the BdG Hamiltonian reads,
\[\hat{H}_{BdG}=\sum_{k}\left[\hat{c}_{k\uparrow}^{\dagger}\,\,\,\hat{c}_{-k \downarrow}\right]\begin{bmatrix}h_{0}(k)&\hat{\Delta}\\ \hat{\Delta}^{\dagger}&-h_{0}(-k)\end{bmatrix}\begin{bmatrix}\hat{c}_{k \uparrow}\\ \hat{c}_{-k\downarrow}^{\dagger}\end{bmatrix}, \tag{10}\]
where \(\hat{\Delta}=\text{diag}(\Delta_{A},\Delta_{B},\Delta_{C})\) is the perturbation and \(\hat{h}_{0}\) is the single-particle Hamiltonian. At half-filling, we recall that \(\mu=-|U|/2\) and \(n_{\lambda}=1\) (uniform occupation of A, B and C sites).
At \(U=0\), the eigenstates of \(\hat{H}_{BdG}\) are,
\[\ket{\Psi_{n}^{p}}=\begin{bmatrix}\ket{\phi_{n}}\\ 0\end{bmatrix}\qquad\ket{\Psi_{n}^{h}}=\begin{bmatrix}0\\ \ket{\phi_{n}}\end{bmatrix}, \tag{11}\]
where \(\ket{\phi_{n}}\) (\(n=1\), \(2\) and \(3\)) are the eigenvectors of \(\hat{h}_{0}(k)\), with energy \(\epsilon_{n}(k):\epsilon_{1}(k)=-\epsilon_{3}(k)=td(k)\) and \(\epsilon_{2}(k)=0\). In addition, the corresponding eigenvectors are \(\bra{\phi_{1,3}}=\frac{1}{\sqrt{2}}(\pm 1,-f_{x}/d,-\alpha/d)\) and \(\bra{\phi_{2}}=(0,\alpha/d,-f_{x}/d)\), where \(f_{x}=-2\cos(k/2)\) and \(d(k)=\sqrt{f_{x}^{2}+\alpha^{2}}\).
Thus, the respective eigenvalues of \(\ket{\Psi_{n}^{p}}\) and \(\ket{\Psi_{n}^{h}}\) are \(E_{n}^{p}=+\epsilon_{n}(k)\) and \(E_{n}^{h}=-\epsilon_{n}(k)\). The particle-hole symmetry of \(\hat{h}_{0}\) implies that \(E_{n}^{p}=E_{4-n}^{h}\) (\(n=1,2\), and \(3\)). For each pair of degenerate eigenstates (\(\ket{\Psi_{n}^{p}}\),\(\ket{\Psi_{4-n}^{h}}\)), one can perform a first order perturbation calculation with respect to \(\hat{\Delta}\). With the definition \(\delta_{n}=\bra{\Psi_{4-n}^{h}}\hat{\Delta}\ket{\Psi_{n}^{p}}\), one easily finds,
\[\begin{split}\delta_{1}=\delta_{3}&=\frac{1}{2d^{2 }}(d^{2}\Delta_{A}-f_{x}^{2}\Delta_{B}-\alpha^{2}\Delta_{C}),\\ \delta_{2}&=-\frac{1}{d^{2}}(\alpha^{2}\Delta_{B}+f_{ x}^{2}\Delta_{C}).\end{split} \tag{12}\]
At the lowest order in \(\Delta_{\lambda}\), the eigenstates of \(\hat{H}_{BdG}\) are,
\[\ket{\Psi_{n}^{\pm}}=\frac{1}{\sqrt{2}}(\pm\ket{\Psi_{n}^{p}}+\ket{\Psi_{4-n}^ {h}}), \tag{13}\]
with energy \(E_{n}^{\pm}=\epsilon_{n}(k)\pm|\delta_{n}|\) (\(n=1,2\), and \(3\)).
The quasi-flat band eigenstates correspond to \(n=2\). At the lowest order in \(\Delta_{\lambda}\) their dispersion is,
\[E_{2}^{\pm}=\pm\Big{(}\Delta_{B}\frac{\alpha^{2}}{d^{2}(k)}+\Delta_{C}\frac{f_ {x}^{2}(k)}{d^{2}(k)}\Big{)}. \tag{14}\]
This allows the determination of the quasi-particle gap \(\Delta_{QP}\). Indeed, one immediately finds, \(E_{2}^{\pm}(k=0)=\pm\frac{1}{4+\alpha^{2}}(\alpha^{2}\Delta_{B}+4\Delta_{C})\) and \(E_{2}^{\pm}(k=\pi)=\pm\Delta_{B}\). As a consequence, the quasi-particle gap is located at \(k=\pi\) when \(\Delta_{B}\leqslant\Delta_{C}\) and,
\[\Delta_{QP}=\Delta_{B}. \tag{15}\]
On the other hand, when \(\Delta_{B}\geqslant\Delta_{C}\), the gap is located at \(k=0\) and,
\[\Delta_{QP}=\frac{1}{4+\alpha^{2}}(\alpha^{2}\Delta_{B}+4\Delta_{C}). \tag{16}\]
In order to derive the expression of the pairings one has to recall their definition,
\[\Delta_{\lambda}=|U|\frac{1}{N_{c}}\sum_{k,n,s=\pm}\bra{\Psi_{n}^{s}}\hat{O}_ {\lambda}\ket{\Psi_{n}^{s}}f_{FD}(E_{n}^{s}), \tag{17}\]
with \(\hat{O}_{\lambda}=\hat{c}_{k\lambda,\uparrow}\hat{c}_{-k\lambda,\downarrow}\) (\(\lambda=A\), B and C), and \(f_{FD}(E)=(1+e^{\beta E})^{-1}\) is the Fermi-Dirac function. Using the expressions of \(\ket{\Psi_{n}^{\pm}}\) as given in Eq. (13), one gets for the matrix elements the following results: \(\bra{\Psi_{1}^{\pm}}\hat{O}_{A}\ket{\Psi_{1}^{\pm}}=\bra{\Psi_{3}^{\pm}}\hat{O}_{A}\ket{\Psi_{3}^{\pm}}=\pm\frac{1}{4}\) and \(\bra{\Psi_{2}^{\pm}}\hat{O}_{A}\ket{\Psi_{2}^{\pm}}=0\) because of the vanishing weight on A sites for the FB eigenstates. At \(T=0\), the only eigenstates which contribute to \(\Delta_{\lambda}\) are \(\ket{\Psi_{3}^{\pm}}\) and \(\ket{\Psi_{2}^{-}}\). Hence, at the lowest order in \(|U|\) one finds,
\[\Delta_{A}=0+O(|U|^{2}). \tag{18}\]
Similarly one obtains \(\left\langle\Psi_{1}^{\pm}\right|\hat{O}_{B}\left|\Psi_{1}^{\pm}\right\rangle=\left\langle\Psi_{3}^{\pm}\right|\hat{O}_{B}\left|\Psi_{3}^{\pm}\right\rangle=\mp\frac{1}{4}\frac{f_{x}^{2}}{f_{x}^{2}+\alpha^{2}}\) and \(\left\langle\Psi_{2}^{\pm}\right|\hat{O}_{B}\left|\Psi_{2}^{\pm}\right\rangle=\pm\frac{1}{2}\frac{\alpha^{2}}{f_{x}^{2}+\alpha^{2}}\). The contributions from \(\left|\Psi_{3}^{+}\right\rangle\) and \(\left|\Psi_{3}^{-}\right\rangle\) cancel out and, as expected, the only non-vanishing contribution comes from the quasi-FB eigenstate \(\left|\Psi_{2}^{-}\right\rangle\). From Eq. (17), we end up with,
\[\Delta_{B}=\frac{\left|U\right|}{2}\frac{1}{N_{c}}\sum_{k}\frac{\alpha^{2}}{f _{x}^{2}+\alpha^{2}}, \tag{101}\]
Additionally, with the same arguments it follows,
\[\Delta_{C}=\frac{\left|U\right|}{2}\frac{1}{N_{c}}\sum_{k}\frac{f_{x}^{2}}{f _{x}^{2}+\alpha^{2}}, \tag{102}\]
For any value of \(\alpha\), it implies that,
\[\Delta_{B}+\Delta_{C}=\frac{\left|U\right|}{2}. \tag{103}\]
Finally, the sum in Eq. (101) can be calculated analytically leading to,
\[\Delta_{B}=\frac{\left|U\right|}{2}\frac{\left|\alpha\right|}{\sqrt{\alpha^{2 }+4}}. \tag{104}\]
These expressions of \(\Delta_{A}\) and \(\Delta_{B}\), obtained in the limit of vanishing \(\left|U\right|\), are plotted in Fig. 2b.
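The closed form above can be checked against the Brillouin-zone sums defining \(\Delta_{B}\) and \(\Delta_{C}\). The short sketch below is our own (arbitrary \(|U|\) and \(\alpha\); the sums are approximated by averages on a dense \(k\)-grid) and verifies both the closed form and the sum rule \(\Delta_{B}+\Delta_{C}=|U|/2\).

```python
import numpy as np

U, alpha = 1.0, 0.8
k = np.linspace(-np.pi, np.pi, 200001, endpoint=False)  # Brillouin-zone grid
fx2 = 4.0 * np.cos(k / 2.0) ** 2

dB_sum = 0.5 * U * np.mean(alpha**2 / (fx2 + alpha**2))   # momentum sum for Delta_B
dC_sum = 0.5 * U * np.mean(fx2 / (fx2 + alpha**2))        # momentum sum for Delta_C
dB_closed = 0.5 * U * abs(alpha) / np.sqrt(alpha**2 + 4.0)

print(dB_sum, dB_closed)            # closed form of Delta_B
print(dB_sum + dC_sum, 0.5 * U)     # sum rule Delta_B + Delta_C = |U|/2
```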
## Appendix B Superfluid weight in the strong coupling regime.
In this appendix, our goal is to calculate analytically the expression of the superfluid weight \(D_{s}\) as a function of \(U\) in the strong coupling regime. We recall that the SF weight is defined as
\[D_{s}=\frac{1}{N_{c}}\frac{\partial\Omega(q)}{\partial q^{2}}\Big{|}_{q=0}, \tag{105}\]
where \(\Omega(q)\) is the grand potential that reads, at T=0,
\[\Omega(q)=\sum_{k,n}E_{n}^{-}(k,q). \tag{106}\]
In Eq. (106), \(E_{n}^{\pm}(k,q)\) refers to the energies of the eigenstates \(\left|\Psi_{n}^{\pm}\right\rangle\) of \(H_{BdG}\) after the Peierls substitution. From this expression of \(\Omega(q)\), following Refs. [30; 18], one finds the following exact expression for the SF weight,
\[\begin{split} D_{s}=\frac{2}{N_{c}}\sum_{k,mn}&\frac {|\left\langle\Psi_{n}^{-}\right|\frac{\partial\hat{H}_{BdG}}{\partial q} \left|\Psi_{m}^{+}\right\rangle|^{2}}{E_{n}^{-}-E_{m}^{+}}-\\ &\frac{|\left\langle\Psi_{n}^{-}\right|\frac{\partial\hat{H}_{ BdG}}{\partial k}\left|\Psi_{m}^{+}\right\rangle|^{2}}{E_{n}^{-}-E_{m}^{+}}\Big{|}_{q=0}, \end{split} \tag{107}\]
Let us now focus on the half-filled case. According to Lieb's theorem [34], the occupation is uniform \(n_{\lambda}=1\) yielding \(\hat{h}^{\sigma}(k)=\hat{h}_{0}(k)\). The only dependence on the coupling being in \(\Delta_{\lambda}\), one can express \(D_{s}\) as a function of the one particle velocity operator \(\hat{v}_{0}(k)=\frac{\partial\hat{h}_{0}(k)}{\partial k}\) as follows,
\[D_{s}=\frac{2}{N_{c}}\sum_{k,mn}\frac{|\left\langle\Psi_{n}^{-}\right|\hat{\Gamma}\hat{V}\left|\Psi_{m}^{+}\right\rangle|^{2}-|\left\langle\Psi_{n}^{-}\right|\hat{V}\left|\Psi_{m}^{+}\right\rangle|^{2}}{E_{n}^{-}-E_{m}^{+}}, \tag{108}\]
where we have introduced the \(6\times 6\) matrices \(\hat{\Gamma}=\text{diag}(\hat{\mathbb{I}}_{3\times 3},-\hat{\mathbb{I}}_{3\times 3})\) and \(\hat{V}=\text{diag}(\hat{v}_{0},\,\hat{v}_{0})\). In the strong coupling regime, all pairings are uniform, i.e. \(\Delta_{\lambda}=\Delta=\frac{\left|U\right|}{2}\), and the diagonalization of \(\hat{H}_{BdG}\) gives
\[\begin{split} E_{n}^{\pm}&=\pm\sqrt{\epsilon_{n}^{ 2}+\left|\Delta\right|^{2}},\\ \left|\Psi_{n}^{+}\right\rangle&=u_{n}\left|\Psi_{n}^{ p}\right\rangle+v_{n}\left|\Psi_{n}^{h}\right\rangle,\\ \left|\Psi_{n}^{-}\right\rangle&=-v_{n}^{*}\left|\Psi_ {n}^{p}\right\rangle+u_{n}^{*}\left|\Psi_{n}^{h}\right\rangle,\end{split} \tag{109}\]
with \(|u_{n}|^{2}=\frac{1}{2}\Big{(}1+\frac{\epsilon_{n}}{E_{n}^{+}}\Big{)}\) and \(|u_{n}|^{2}+|v_{n}|^{2}=1\). \(|\Psi_{n}^{p}\rangle\) and \(\left|\Psi_{n}^{h}\right\rangle\) are defined in Eq. (11). In the limit \(|U|\gg t\) one finds \(|u_{n}|^{2}=|v_{n}|^{2}=\frac{1}{2}\) and \(E_{n}^{\pm}=\pm|\Delta|\). From this, we can give the expression of the matrix elements of Eq. (108) as a function of the one-particle Hamiltonian (\(\hat{h}_{0}\)) eigenstates \(\left|\phi_{n}\right\rangle\),
\[\begin{split}\left\langle\Psi_{n}^{-}\right|\hat{\Gamma}\hat{V} \left|\Psi_{m}^{+}\right\rangle&=-\left\langle\phi_{n}\right| \hat{v}_{0}\left|\phi_{m}\right\rangle\\ \left\langle\Psi_{n}^{-}\right|\hat{V}\left|\Psi_{m}^{+}\right\rangle& =0.\end{split} \tag{110}\]
Thus, the SF weight becomes,
\[D_{s}=\frac{2}{|U|N_{c}}\sum_{k,nm}|\left\langle\phi_{n}\right|\hat{v}_{0} \left|\phi_{m}\right\rangle|^{2}. \tag{111}\]
The sum coincides with \(\text{Tr}[\hat{v}_{0}^{2}]\) whose value is,
\[\frac{1}{N_{c}}\text{Tr}[\hat{v}_{0}^{2}]=t^{2}. \tag{112}\]
This finally leads to,
\[D_{s}=\frac{2t^{2}}{|U|}. \tag{113}\]
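Equations (112) and (113) can be verified numerically once an explicit Bloch Hamiltonian is chosen. The \(3\times 3\) matrix used below is an assumed form (it is not written explicitly in this appendix) whose spectrum is \(0,\pm t\sqrt{f_{x}^{2}+\alpha^{2}}\) with \(f_{x}=-2\cos(k/2)\), which is all that enters the trace; \(t\), \(\alpha\) and \(U\) are arbitrary test values.

```python
import numpy as np

t, alpha = 1.0, 0.9
k_grid = np.linspace(-np.pi, np.pi, 4001, endpoint=False)

def h0(k):
    # assumed 3x3 stub-lattice Bloch Hamiltonian; spectrum is 0, +-t*sqrt(fx^2+alpha^2)
    fx = -2.0 * np.cos(k / 2.0)
    return t * np.array([[0.0, fx, alpha],
                         [fx, 0.0, 0.0],
                         [alpha, 0.0, 0.0]])

dk = 1e-6
tr_v2 = []
for k in k_grid:
    v0 = (h0(k + dk) - h0(k - dk)) / (2.0 * dk)   # velocity operator dh0/dk
    tr_v2.append(np.trace(v0 @ v0))

print(np.mean(tr_v2))              # (1/N_c) Tr[v0^2], expected ~ t^2, Eq. (112)
U = 8.0 * t
print(2.0 * np.mean(tr_v2) / U)    # strong-coupling SF weight, Eq. (113): 2 t^2 / |U|
```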
## Appendix C Superfluid weight in the weak coupling regime
In this appendix our purpose is to derive analytically the expression of the superfluid weight \(D_{s}\) as a function of \(\alpha\) in the limit of small \(|U|\). The calculations are done for the half-filled stub lattice and at \(T=0\). Starting with the definition as given in Eq. (105) one can write,
\[D_{s}=\frac{1}{N_{c}}\sum_{k,n}\frac{\partial^{2}E_{n}^{-}(k,q)}{\partial q^{2}} \Big{|}_{q=0}, \tag{114}\]
where \(E_{n}^{-}(k,q)\) (\(n=1\),\(2\) and \(3\)) are the negative eigenvalues of the filled QP. With the same notation as those used in the Appendix A, the eigenstates of \(\hat{H}_{BdG}\) for \(q\neq 0\) and \(U=0\) are,
\[\ket{\Psi_{n}^{p}}=\begin{bmatrix}\ket{\phi_{n}^{q}}\\ 0\end{bmatrix}\qquad\ket{\Psi_{n}^{h}}=\begin{bmatrix}0\\ \ket{\phi_{n}^{-q}}\end{bmatrix} \tag{10}\]
with respective eigenvalues \(E_{n}^{p}=\epsilon_{n}(k+q)\) and \(E_{n}^{h}=-\epsilon_{n}(k-q)\), where \(\ket{\phi_{n}^{\pm q}}=\ket{\phi_{n}(k\pm q)}\) (\(n=1\), \(2\) and \(3\)). We recall that \(\ket{\phi_{n}}\) is the eigenvector of \(\hat{h}_{0}\), with energy \(\epsilon_{n}\). For a non vanishing \(q\), the dispersive bands (DB) are non degenerate and \(E_{3}^{p}\neq E_{1}^{h}\), whilst the FB energy is doubly degenerate \(E_{2}^{p}=E_{2}^{h}\). When the perturbation \(\hat{\Delta}\) is introduced, at first order the DB energy remains unchanged and the degeneracy of the FBs is lifted, leading to,
\[\ket{\Psi_{2}^{\pm}}=\frac{1}{\sqrt{2}}\begin{bmatrix}\ket{\phi_{2}^{q}}\\ \pm\ket{\phi_{2}^{-q}}\end{bmatrix}, \tag{11}\]
where the energy of these quasi-FB eigenstates is,
\[E_{2}^{\pm}(k,q)=\pm\frac{1}{d_{k+q}.d_{k-q}}\Big{(}\alpha^{2} \Delta_{B}+\Delta_{C}f_{x}^{k+q}f_{x}^{k-q}\Big{)}, \tag{12}\]
with the notations of the Appendix A, \(f_{x}^{k\pm q}=-2\cos\bigl{(}\frac{1}{2}(k\pm q)\bigr{)}\) and \(d_{k\pm q}=\sqrt{(f_{x}^{k\pm q})^{2}+\alpha^{2}}\).
Thus, the ground-state energy per unit cell is given by,
\[E^{GS}(q)/N_{c}=\frac{1}{N_{c}}\sum_{k}(E_{3}^{p}+E_{1}^{h}+E_{2}^{-}). \tag{13}\]
To get the expression of the SF weight, we are now left with the calculation of the second derivative of \(E^{GS}\) with respect to \(q\). First, notice that it can be easily shown that \(\sum_{k}\frac{\partial^{2}E_{3}^{p}}{\partial q^{2}}=\sum_{k}\frac{\partial^{2}E_{1}^{h}}{\partial q^{2}}=0\). Thus, as expected for the half-filled chain, \(D_{s}\) depends only on the second derivative of the energy of the occupied quasi-FB. A direct double differentiation with respect to \(q\) of Eq. (12) gives,
\[\begin{split}\frac{\partial^{2}E_{2}^{-}}{\partial q^{2}}\Big{|}_{q=0}=&-2\alpha^{2}\Delta_{B}\Big{(}\frac{c_{k}}{d_{k}^{4}}+2\frac{s_{k}^{2}}{d_{k}^{6}}\Big{)}\\ &+2\Delta_{C}\Big{(}\frac{1}{d_{k}^{2}}-2\frac{c_{k}(c_{k}+1)}{d_{k}^{4}}-4\frac{s_{k}^{2}(c_{k}+1)}{d_{k}^{6}}\Big{)}, \end{split} \tag{14}\]
where \(c_{k}=\cos(k)\), \(s_{k}=\sin(k)\) and \(d_{k}=d(k)\). To obtain the final expression of the SF weight, one has to calculate several integrals of the form,
\[C_{p}^{nm}=\int_{-\pi}^{+\pi}\frac{c_{k}^{n}s_{k}^{m}}{d_{k}^{p} }\frac{dk}{2\pi}. \tag{15}\]
The first term in Eq. (14) depends on \(C_{4}^{10}\) and \(C_{6}^{02}\), and the second one on \(C_{2}^{00}\), \(C_{4}^{10}\), \(C_{4}^{20}\), \(C_{6}^{12}\) and \(C_{6}^{02}\). To facilitate the calculation of this set of integrals, it is convenient to define the function \(F(u)=\frac{1}{2\pi}\int_{-\pi}^{+\pi}\frac{dk}{u+\cos(k)}\). It can be shown (by a standard residue calculation) that for \(u>1\), \(F(u)=\frac{1}{\sqrt{u^{2}-1}}\). Using \(F\) and its derivative \(F^{\prime}\), and after some lengthy but straightforward steps, one finds,
\[C_{2}^{00} = \frac{1}{2}F(\eta), \tag{16}\] \[C_{4}^{10} = \frac{1}{4}(F(\eta)+\eta F^{\prime}(\eta)),\] (17) \[C_{4}^{20} = \frac{1}{4}(-\eta F(\eta)-F^{\prime}(\eta)+1),\] (18) \[C_{6}^{02} = -\frac{1}{16}(F(\eta)+\eta F^{\prime}(\eta)),\] (19) \[C_{6}^{12} = \frac{1}{8}(\frac{1}{2}F^{\prime}(\eta)+\eta F(\eta)-1), \tag{20}\]
where for practical reasons the variable \(\eta=1+\frac{\alpha^{2}}{2}\) has been introduced.
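The integrals above are easy to check numerically against the quoted combinations of \(F(\eta)\) and \(F^{\prime}(\eta)\); the following sketch (ours, with an arbitrary \(\alpha\)) approximates each \(C_{p}^{nm}\) by an average over a dense \(k\)-grid.

```python
import numpy as np

alpha = 1.3
eta = 1.0 + alpha**2 / 2.0
F  = 1.0 / np.sqrt(eta**2 - 1.0)      # F(eta) for eta > 1
Fp = -eta / (eta**2 - 1.0)**1.5       # F'(eta)

k = np.linspace(-np.pi, np.pi, 400001, endpoint=False)
c, s = np.cos(k), np.sin(k)
d = np.sqrt(4.0 * np.cos(k / 2.0)**2 + alpha**2)

def C(p, n, m):
    # C_p^{nm} of Eq. (15): Brillouin-zone average of c^n s^m / d^p
    return np.mean(c**n * s**m / d**p)

checks = {
    "C_2^00": (C(2, 0, 0), 0.5 * F),
    "C_4^10": (C(4, 1, 0), 0.25 * (F + eta * Fp)),
    "C_4^20": (C(4, 2, 0), 0.25 * (-eta * F - Fp + 1.0)),
    "C_6^02": (C(6, 0, 2), -(F + eta * Fp) / 16.0),
    "C_6^12": (C(6, 1, 2), (0.5 * Fp + eta * F - 1.0) / 8.0),
}
for name, (num, ana) in checks.items():
    print(f"{name}: numeric={num:+.8f}  analytic={ana:+.8f}")
```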
After inserting in Eq. (14) the \(C_{p}^{nm}\)'s given above, we finally end up with the analytical expression of the SF weight,
\[\begin{split}D_{s}=&-\frac{1}{4}\alpha^{2}\Delta_{B}\Big{(}F(\eta)+\eta F^{\prime}(\eta)\Big{)}\\ &+\frac{1}{2}\Delta_{C}\Big{(}F(\eta)+F^{\prime}(\eta)-\eta F^{\prime}(\eta)\Big{)}. \end{split} \tag{21}\]
Using the expressions of \(\Delta_{B}\) and \(\Delta_{C}\) as given in the Appendix A, we have been able to compare this analytical expression of \(D_{s}\) with the numerical data. The result depicted in Fig. 6 reveals an excellent agreement between the numerical and analytical data for the whole range of values of \(\alpha\).
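The same comparison can be reproduced in a few lines: the sketch below (our own construction) evaluates \(D_{s}\) once from the closed form of Eq. (21), with \(\Delta_{B}\) and \(\Delta_{C}\) taken from Appendix A, and once by summing the numerical second derivative \(\partial^{2}E_{2}^{-}/\partial q^{2}\) over the Brillouin zone.

```python
import numpy as np

def pairings(U, alpha):
    # Delta_B from the closed form of Appendix A; Delta_C from Delta_B + Delta_C = |U|/2
    dB = 0.5 * U * abs(alpha) / np.sqrt(alpha**2 + 4.0)
    return dB, 0.5 * U - dB

def E2_minus(k, q, dB, dC, alpha):
    # quasi-flat-band energy E_2^-(k, q) of the q-dependent problem, Eq. (12)
    fp = -2.0 * np.cos(0.5 * (k + q))
    fm = -2.0 * np.cos(0.5 * (k - q))
    dp = np.sqrt(fp**2 + alpha**2)
    dm = np.sqrt(fm**2 + alpha**2)
    return -(alpha**2 * dB + dC * fp * fm) / (dp * dm)

def Ds_numeric(U, alpha, nk=20001, dq=1e-4):
    dB, dC = pairings(U, alpha)
    k = np.linspace(-np.pi, np.pi, nk, endpoint=False)
    d2E = (E2_minus(k, dq, dB, dC, alpha) - 2.0 * E2_minus(k, 0.0, dB, dC, alpha)
           + E2_minus(k, -dq, dB, dC, alpha)) / dq**2
    return np.mean(d2E)

def Ds_analytic(U, alpha):
    dB, dC = pairings(U, alpha)
    eta = 1.0 + alpha**2 / 2.0
    F = 1.0 / np.sqrt(eta**2 - 1.0)
    Fp = -eta / (eta**2 - 1.0)**1.5
    return -0.25 * alpha**2 * dB * (F + eta * Fp) + 0.5 * dC * (F + Fp - eta * Fp)

for alpha in (0.3, 1.0, 3.0):
    print(alpha, Ds_numeric(1.0, alpha), Ds_analytic(1.0, alpha))
```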
From Eq. (21) one can now extract the asymptotic behaviour of the SF weight in the two limits discussed below.
Figure 6: \(\frac{\partial D_{s}}{\partial U}\Big{|}_{U=0}\) as a function of \(\langle g\rangle\), the mean value of the quantum metric (square symbols). The corresponding values of \(\alpha\) are depicted on the upper \(x\)-axis. The symbols are the numerical data and the continuous line is the analytical result as given in Eq. (21).
In the first limit, \(\alpha\ll 1\), the QM is large and the one-particle gap is small; in the second, \(\alpha\gg 1\), the QM is small and the gap is large.
In the first case, one gets,
\[D_{s}=\frac{3}{8\alpha}|U|=\frac{3}{2}|U|\langle g\rangle \tag{14}\]
On the other hand, in the second one (\(\alpha\gg 1\)) one finds,
\[D_{s}=\frac{3}{\alpha^{4}}|U|=12|U|\langle g\rangle^{2} \tag{15}\]
The SF weight scales linearly with \(\langle g\rangle\) when \(\alpha\ll 1\) and as \(\langle g\rangle^{2}\) when \(\alpha\gg 1\).
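These two limits can be checked directly from Eq. (21); the sketch below (ours) prints the ratio of the closed form to each asymptotic expression, which should approach one.

```python
import numpy as np

def Ds(U, alpha):
    # closed-form SF weight, Eq. (21), with the weak-coupling pairings of Appendix A
    dB = 0.5 * U * abs(alpha) / np.sqrt(alpha**2 + 4.0)
    dC = 0.5 * U - dB
    eta = 1.0 + alpha**2 / 2.0
    F, Fp = 1.0 / np.sqrt(eta**2 - 1.0), -eta / (eta**2 - 1.0)**1.5
    return -0.25 * alpha**2 * dB * (F + eta * Fp) + 0.5 * dC * (F + Fp - eta * Fp)

U = 1.0
for a in (1e-3, 1e-2):      # alpha << 1: expect Ds -> 3|U|/(8 alpha)
    print(a, Ds(U, a) / (3.0 * U / (8.0 * a)))
for a in (30.0, 100.0):     # alpha >> 1: expect Ds -> 3|U|/alpha^4
    print(a, Ds(U, a) / (3.0 * U / a**4))
```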
|
2309.08111 | Control of Static Friction by Designing Grooves on Friction Surface | This study numerically investigated the friction of viscoelastic objects with
grooves. A 3D viscoelastic block with grooves on a rigid substrate is slowly
pushed from the lateral side under uniform pressure on the top surface. The
local friction force at the interface between the block and the substrate obeys
Amontons' law. Numerical results obtained using the finite element method
reveal that the static friction coefficient decreases with increasing groove
width and depth. The propagation of the precursor slip is observed before bulk
sliding. Furthermore, bulk sliding occurs when the area of slow precursor slip
reaches a critical value, which decreases with increasing groove size. A
theoretical analysis based on a simplified model reveals that the static
friction coefficient is related to the critical area of the precursor, which is
determined by the instability of the precursor. A scaling law for the critical
area is theoretically predicted, and it indicates that the decrease in the
effective viscosity due to the formation of the grooves leads to a decrease in
the static friction coefficient. The validity of the theoretical prediction is
numerically confirmed. | Wataru Iwashita, Hiroshi Matsukawa, Michio Otsuki | 2023-09-15T02:34:44Z | http://arxiv.org/abs/2309.08111v2 | # Control of Static Friction by Designing Grooves on Friction Surface
###### Abstract
This study numerically investigated the friction of viscoelastic objects with grooves. A 3D viscoelastic block with grooves on a rigid substrate is slowly pushed from the lateral side under uniform pressure on the top surface. The local friction force at the interface between the block and the substrate obeys Amontons' law. Numerical results obtained using the finite element method reveal that the static friction coefficient decreases with increasing groove width and depth. The propagation of the precursor slip is observed before bulk sliding. Furthermore, bulk sliding occurs when the area of slow precursor slip reaches a critical value, which decreases with increasing groove size. A theoretical analysis based on a simplified model reveals that the static friction coefficient is related to the critical area of the precursor, which is determined by the instability of the precursor. A scaling law for the critical area is theoretically predicted, and it indicates that the decrease in the effective viscosity due to the formation of the grooves leads to a decrease in the static friction coefficient. The validity of the theoretical prediction is numerically confirmed.
Static friction coefficient, Groove design, Precursor slip, Amontons' law, Viscoelastic object
## 1 Introduction
Friction forces occur in different situations, such as sliding parts of machines and contact surfaces between tires and the ground, and prevent the relative motion between two objects in contact. Friction forces are desirable in applications requiring low slippage, whereas they are undesirable in sliding parts of machines due to energy loss. Therefore, the control of friction forces is important in engineering [1; 2; 3; 4; 5; 6; 7]. One of the known methods to control friction forces is designing friction surfaces by forming grooves. Generally, grooves on the surfaces of tires and sliding parts of machines are formed to reduce undesirable lubrication in wet conditions, which leads to a decrease in the friction coefficient and results in accidental slippage [8; 9; 10; 11]. However, the dependence of friction force on grooves in dry conditions has not been established clearly.
Generally, for friction between solids in dry conditions, Amontons' law is expected to hold [1; 2; 3; 4; 5; 6; 7]. According to Amontons' law, the friction coefficient does not depend on the external pressure or size and shape of the object. However, the phenomenological explanation of Amontons' law is based on the adhesion of microscopic asperities at the friction interface [1; 2; 3; 4; 5; 6; 7; 12; 13], and implicitly assumes the uniformity of the stress field. For macroscopic objects associated with the non-uniform stress field, Amontons' law may not hold. Therefore, the friction coefficient may depend on the shape of the macroscopic objects in dry conditions.
In fact, recent studies on the friction of objects with flat friction surfaces have shown that Amontons' law is not satisfied when the non-uniformity of the stress field is significant [14; 15; 16; 17; 18]. In Refs. [16; 18], numerical simulations and analysis of a simplified model have revealed the mechanism of breakdown of Amontons' law in viscoelastic materials. The analysis clarified that the local precursor slip before bulk sliding due to the non-uniform stress field leads to the breakdown of Amontons' law, and that the static friction coefficient exhibits characteristic load dependence. The relationship between the precursor slip and breakdown of Amontons' law and the load dependence of the static friction coefficient have been verified in experiments on acrylic glass blocks [17]. Precursor slip relates to earthquake [19; 20; 21; 22; 23; 24] and fracture [25; 26; 27; 28; 29; 30; 31], and has been extensively studied in experiments [15; 17; 29; 20; 24; 25; 26; 27; 28; 29; 32; 33] and numerical simulations [18; 23; 30; 31; 33; 34; 35; 36; 37; 38; 39; 40; 41; 42; 43; 44; 45]. However, many of these studies have considered only flat friction surfaces, and the effects of grooves in the friction surface on frictional properties and precursor slip are yet to be discovered.
Recently, several studies have been conducted to reveal the dependence of the friction coefficient on the macroscopic shape of the friction surface. In Refs. [46; 47; 48; 49; 50; 51], shapes of friction surfaces were represented by a spatial dependence of the local friction coefficient in 1D or 2D spring-block models. The frictional properties of the models vary with the spatial pattern of the local friction coefficient [46; 47; 48; 49; 50; 51], and these results have been applied to experiments of macroscopic objects [52; 53]. However, it is unclear to what extent the results of the spring-block models with a spatial pattern of local friction coefficient reflect the effect of the actual surface shape. Experiments with rubber and gel blocks have also revealed the dependence of the friction coefficient on the macroscopic shape of the friction surface [54; 55]. However, it is unclear whether the results for relatively soft objects such as rubber and gel can be applied to harder materials where Amontons' law is locally satisfied.
In this study, using the finite element method (FEM), we numerically investigate the friction of a 3D viscoelastic material with grooves in a dry condition, where the friction force locally obeys Amontons' law. The dependence of the static friction coefficient on the groove shape is investigated. We find that the static friction coefficient is a decreasing function of the groove width and depth. We also observe that local precursor slip occurs before bulk sliding of the viscoelastic material. The bulk sliding occurs when the area of the precursor slip reaches a critical value. The static friction coefficient is scaled by the normalized critical area of the precursor slip. The propagation of the precursor slip is analytically studied based on a simplified model. We derive the conditions for the onset of bulk sliding and the dependence of the static friction coefficient on the groove shape. The results show that the static friction coefficient decreases due to the decrease in effective viscosity as the groove width and depth increase. Both pillars in the friction surface and main body supporting them play an important role. Our results aid in the improvement of sliding interface design by making grooves for both wet and dry conditions.
## 2 Model and Methods
We consider grooved viscoelastic blocks on a rigid substrate under a uniform external pressure \(P_{\text{ext}}\) with width \(W\), length \(L\), and height \(H\) along the \(x\), \(y\), and \(z\) axes, respectively, as shown in Fig. 1. The rigid substrate is at \(z=0\). A rigid plate with a width of \(W\) and a height of \(0.5H\) pushes the side of the block at \(y=0\) and \(0.5H\leq z\leq H\) with a slow constant velocity \(V\) along the \(y\) direction. This study considers a longitudinal groove parallel to the \(y\) direction, as shown in Fig. 1. The number of pillars in the friction surface of the block is denoted by \(n_{x}\). The pillars are equally spaced with width \(l_{\text{g}}\). The height and width of the pillar are denoted by \(d\) and \(W/n_{x}-l_{\text{g}}\), respectively. The cross-section perpendicular to the \(y\) direction is symmetrical, as shown in Fig. 1b. The pillar is in contact with the rigid substrate, as shown in Fig. 1c. The ratio \(\phi\) of the area of the non-contact surface to the area of the bottom surface is given by \(\phi=l_{\text{g}}n_{x}/W\), and the contact area of the friction surface is given by \(LW(1-\phi)\). Here, \(\phi=0\) corresponds to a rectangular block without grooves.
The equation of motion for the viscoelastic object is given by
\[\rho\ddot{u}_{i}=\sum_{j}\partial_{j}\sigma_{ij} \tag{1}\]
with density \(\rho\), displacement vector \(\mathbf{u}\), and stress tensor \(\mathbf{\sigma}\), where \(\sigma_{ij}\) is the \(ij\) component of \(\mathbf{\sigma}\), \(u_{i}\) is the \(i\) component of \(\mathbf{u}\), and \(\ddot{u}_{i}\) is its second-order time derivative. We adopt the Kelvin-Voigt model for \(\mathbf{\sigma}\), where \(\mathbf{\sigma}\) is given by \(\mathbf{\sigma}=\mathbf{\sigma}^{\text{(E)}}+\mathbf{\sigma}^{\text{(V)}}\) with the elastic stress \(\mathbf{\sigma}^{\text{(E)}}\) obeying Hooke's law and the viscous stress \(\mathbf{\sigma}^{\text{(V)}}\) proportional to the strain rate, which reduces the elastic waves caused by the deformation of the block. We assume that the viscoelastic material of the block is isotropic.
Figure 1: Schematic of the system. (a) Grooved viscoelastic block moving on a rigid substrate. (b) Cross-section perpendicular to the \(y\) direction indicated as **I** in (a). (c) The bottom of the block indicated as **I** in (a). The blue region represents the contact area between the rigid substrate and the block.
The \(ij\) component of the elastic stress tensor \(\sigma^{\rm(E)}_{ij}\) is given by
\[\sigma^{\rm(E)}_{ij}=\frac{E}{1+\nu}\epsilon_{ij}+\frac{\nu E}{(1+\nu)(1-2\nu)} \sum_{k}\epsilon_{kk}\delta_{ij} \tag{2}\]
with Young's modulus \(E\), Poisson's ratio \(\nu\), Kronecker's delta \(\delta_{ij}\), and strain tensor \(\epsilon_{ij}\). The \(ij\) component of the viscous stress tensor \(\sigma^{\rm(V)}_{ij}\) is given by
\[\sigma^{\rm(V)}_{ij}=\eta_{1}\dot{\epsilon}_{ij}+\eta_{2}\sum_{k}\dot{ \epsilon}_{kk}\delta_{ij} \tag{3}\]
with the two viscosity coefficients \(\eta_{1}\) and \(\eta_{2}\) and the strain rate tensor \(\dot{\epsilon}_{ij}\)[56].
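As an illustration (not code from the paper), the Kelvin-Voigt constitutive law of Eqs. (2) and (3) can be written as a small function; the material constants and the strain values below are placeholder numbers.

```python
import numpy as np

def kelvin_voigt_stress(eps, eps_dot, E, nu, eta1, eta2):
    """Total stress sigma = sigma^(E) + sigma^(V) for an isotropic Kelvin-Voigt solid,
    following Eqs. (2) and (3)."""
    I = np.eye(3)
    sig_e = E / (1 + nu) * eps + nu * E / ((1 + nu) * (1 - 2 * nu)) * np.trace(eps) * I
    sig_v = eta1 * eps_dot + eta2 * np.trace(eps_dot) * I
    return sig_e + sig_v

# example: uniaxial compression along z with a small compression rate
eps = np.diag([0.0, 0.0, -1e-3])
eps_dot = np.diag([0.0, 0.0, -1e-4])
print(kelvin_voigt_stress(eps, eps_dot, E=1.0, nu=0.34, eta1=2.83, eta2=2.83))
```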
The boundary conditions on the top surface of the block at \(z=H\) are given by \(\sigma_{zz}=-P_{\rm ext}\) and \(\sigma_{xz}=\sigma_{yz}=0\). At surfaces except the top and bottom of the block, free boundary conditions (\(\sum_{j}\sigma_{ij}n_{j}=0\)) are applied, where \(n_{j}\) is the \(j\) component of the normal vector \(\mathbf{n}\) to the surface. The boundary conditions at the contact surface with the rigid plate at \(y=0\) are given by \(\sigma_{xy}=\sigma_{zy}=0\) and \(\dot{u}_{y}=V\), where \(\dot{u}_{y}\) is the velocity along the \(y\) direction. We set \(V\) sufficiently small to push the block quasi-statically.
The friction between the block bottom and substrate obeys Amontons' law locally. Since the substrate is rigid, the \(z\)-direction displacement \(u_{z}\) satisfies \(u_{z}\geq 0\). At the bottom, the tangential stress vector \(\mathbf{t}(x,y)=(\sigma_{xz},\sigma_{yz})\) at the position \((x,y)\) is given by
\[\mathbf{t}=-\frac{\mathbf{v}}{v}\,\sigma^{\rm(fric)}\;, \tag{4}\]
\[\sigma^{\rm(fric)}(x,y)=\mu(v(x,y))\,p(x,y)\;, \tag{5}\]
where \(\sigma^{\rm(fric)}\) is the frictional stress, \(\mathbf{v}(x,y)=(\dot{u}_{x},\dot{u}_{y})\) is the slip velocity vector with velocities along the \(i\) direction \(\dot{u}_{i}\), and \(v(x,y)=|\mathbf{v}|\) is the slip velocity [57]. The bottom pressure \(p(x,y)=-\sigma_{zz}(x,y,z=0)\) is set to satisfy \(u_{z}\geq 0\), where \(p=0\) for \(u_{z}>0\). Here, \(\mu(v)\) is the local friction coefficient depending on \(v\). In the static region with \(v(x,y)=0\), \(\mu(v)\) is lower than \(\mu_{\rm S}\), and set to balance the local internal shear stress with the frictional stress. In the slip region with \(v(x,y)>0\), \(\mu(v)\) is given by
\[\mu(v)=\left\{\begin{array}{ll}\mu_{\rm S}-(\mu_{\rm S}-\mu_{\rm K})\,v/v_{ \rm c},&0<v<v_{\rm c}\\ \mu_{\rm K},&v\geq v_{\rm c}\end{array}\right.\;, \tag{6}\]
where \(\mu_{\rm S}\) and \(\mu_{\rm K}\) are the local static and dynamic friction coefficients, respectively. Here, \(v_{\rm c}\) is the characteristic velocity. The local Amontons' law is expected to hold when a local region considered in the interface contains a sufficiently large number of real contact points, and has a negligibly small spatial variation in internal stress [12; 13; 58]. Note that the rate and state-dependent friction law [7] might be more appropriate to represent the local friction, but it coincides with the velocity-weakening friction law in Eq. (6) for a sufficiently large slip length, which is satisfied in the poly methyl methacrylate (PMMA) experiments [17; 26; 32; 59]. Hence, we
have adopted Eq. (6). For blocks without grooves, the analysis using the velocity-weakening friction law [16] has been shown to reproduce the PMMA experimental results [17].
We numerically solve Eq. (1) using FEM. The viscoelastic block is divided into cubes with length \(\Delta x\), comprising six tetrahedrons. The displacements of its nodes are evolved based on Eq. (1), and the displacement and velocity within each element are approximated using linear interpolation. The local friction coefficient \(\mu(v)\) is approximately given as
\[\mu(v)=\begin{cases}\mu_{\mathrm{S}}\,v/v_{\mathrm{e}},&0\leq v\leq v_{ \mathrm{e}}\\ \mu_{\mathrm{S}}-(\mu_{\mathrm{S}}-\mu_{\mathrm{K}})\,v/v_{\mathrm{c}},&v_{ \mathrm{e}}<v<v_{\mathrm{c}}\\ \mu_{\mathrm{K}},&v\geq v_{\mathrm{c}}\end{cases} \tag{7}\]
with a sufficiently small velocity scale \(v_{\mathrm{e}}\). The state with \(0\leq v\leq v_{\mathrm{e}}\) corresponds to the static region, and the state with \(v>v_{\mathrm{e}}\) corresponds to the slip region. We set \(v_{\mathrm{e}}/V=2.5\times 10^{-2}\) to satisfy \(v_{\mathrm{e}}\ll V,v_{\mathrm{c}}\), and use \(\Delta x/H=1/48\), \(\Delta tv_{\mathrm{s}}/H\approx 10^{-6}\), and \(V/v_{\mathrm{s}}=2.83\times 10^{-5}\) with \(v_{\mathrm{s}}=\sqrt{E/\rho}\). Here, \(v_{\mathrm{s}}\) represents the elastic wave velocity. We set the driving speed \(V\) to satisfy the condition \(V<<v_{\mathrm{s}}\), where the elastic waves are sufficiently dissipated. We have confirmed that the dependence of the results on the driving velocity \(V\) is negligible under the condition of \(V\ll v_{\mathrm{c}}\ll v_{\mathrm{s}}\).
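For reference, a minimal sketch of the regularized friction law of Eq. (7), using the dimensionless values quoted above in units with \(v_{\mathrm{s}}=1\); this is our own illustration of the rule, not an excerpt of the FEM code.

```python
import numpy as np

# dimensionless units with v_s = sqrt(E/rho) = 1, as in the simulations
MU_S, MU_K = 0.38, 0.1
V_C = 4.81e-4
V_DRIVE = 2.83e-5
V_E = 2.5e-2 * V_DRIVE

def mu_local(v):
    """Regularized local friction coefficient of Eq. (7)."""
    v = np.asarray(v, dtype=float)
    return np.where(v <= V_E, MU_S * v / V_E,
           np.where(v < V_C, MU_S - (MU_S - MU_K) * v / V_C, MU_K))

print(mu_local([0.0, 0.5 * V_E, V_DRIVE, 0.5 * V_C, 2.0 * V_C]))
```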
In our simulation, we first apply a uniform pressure \(P_{\mathrm{ext}}\) to the top surface and relax the system to an equilibrium state. From the time \(t=0\) after the relaxation, the rigid plate pushes the side of the block with a constant velocity \(V\), and the calculation continues until a periodic stick-slip is observed.
We set the length and width of the block to \(L/H=4\) and \(W/H=1\), respectively. Qualitatively similar results are obtained for \(L/H=2\), as shown in Appendix A. We adopt \(\nu=0.34\), \(\eta_{1}v_{\mathrm{s}}/(HE)=2.83\), \(\eta_{2}/\eta_{1}=1\), \(\mu_{\mathrm{S}}=0.38\), \(\mu_{\mathrm{K}}=0.1\), and \(v_{\mathrm{c}}/v_{\mathrm{s}}=4.81\times 10^{-4}\) following previous simulations [16; 18]. We select the number of pillars in the friction surface as \(n_{x}=3\), and confirm that the dependence of the numerical results on \(n_{x}\) is small, as shown in Appendix B. In this study, we investigate the dependence on the external pressure \(P_{\mathrm{ext}}\), fraction of non-contact area \(\phi\), and groove depth \(d\).
## 3 Results
### Numerical Simulation
Figure 2 shows the friction force \(F_{\mathrm{T}}\) against the displacement of the rigid plate \(U=Vt\) at time \(t\) for \(P_{\mathrm{ext}}/E=0.003\) and \(d/H=0.5\). Here, \(F_{\mathrm{T}}\) is given by the force on the rigid plate in the \(y\) direction. In Fig. 2, \(F_{\mathrm{T}}\) is normalized by the normal load \(F_{\mathrm{N}}=P_{\mathrm{ext}}LW\) applied to the top surface of the block. The thin and thick solid lines represent the results for \(\phi=0\) and \(\phi=0.5\), respectively. For each \(\phi\), \(F_{\mathrm{T}}/F_{\mathrm{N}}\) increases approximately linearly with \(U\), and rapidly decreases after reaching a peak value. When the rapid decrease occurs, the entire system slides, and the block returns to a static state after reaching a minimum value close to the local dynamic friction coefficient \(\mu_{\mathrm{K}}\). The increase and decrease in \(F_{\mathrm{T}}/F_{\mathrm{N}}\) repeat periodically, which corresponds
to stick-slip motion. We define the maximum value of \(F_{\mathrm{T}}/F_{\mathrm{N}}\) in the periodic stick-slip region as the macroscopic static friction coefficient \(\mu_{\mathrm{M}}\), which is lower than the local static friction coefficient \(\mu_{\mathrm{S}}\). Figure 2 shows that \(\mu_{\mathrm{M}}\) for the block with grooves is lower than that for the flat block.
In Fig. 3, we plot the macroscopic static friction coefficient \(\mu_{\mathrm{M}}\) against the groove depth \(d\) for different values of \(\phi\) with \(P_{\mathrm{ext}}/E=0.003\) and \(0.006\). Note that the results for \(\phi=0\) are independent of \(d\). For each \(P_{\mathrm{ext}}\), \(\mu_{\mathrm{M}}\) is a decreasing function of \(d\). As \(d\) approaches \(0\), \(\mu_{\mathrm{M}}\) converges to that for \(\phi=0\). The macroscopic static friction coefficient \(\mu_{\mathrm{M}}\) is a decreasing function of \(\phi\). These results indicate that the static friction force decreases as the size of the groove increases. Comparing Fig. 3a and b, we find that \(\mu_{\mathrm{M}}\) is a decreasing function of \(P_{\mathrm{ext}}\), which is consistent with the results of previous studies on rectangular blocks without grooves [16; 17; 18].
In Fig. 4, we present the spatial distributions of the slip velocity \(v\) in the friction surface at \(z=0\) for the displacements \(U=U_{1},U_{2},U_{3}\), and \(U_{4}\) shown in Fig. 2 with \(P_{\mathrm{ext}}/E=0.003\), \(\phi=0.5\), and \(d/H=0.5\). Here, we select \(U_{1}/L=4.6\times 10^{-3}\), \(U_{2}/L=5\times 10^{-3}\), \(U_{3}/L=5.4\times 10^{-3}\), and \(U_{4}/L=5.57\times 10^{-3}\) in the periodic stick-slip region. In Fig. 4, the blue area represents the static region, and the yellow-green and yellow areas represent the sliding regions with \(v\leq V\) and \(v>V\), respectively.
Figure 3: Macroscopic static friction coefficient \(\mu_{\mathrm{M}}\) against \(d\) for different values of \(\phi\) with (a) \(P_{\mathrm{ext}}/E=0.003\) and (b) \(P_{\mathrm{ext}}/E=0.006\). The dotted and dashed lines represent \(\mu_{\mathrm{S}}\) and \(\mu_{\mathrm{K}}\), respectively.
Figure 2: Ratio of friction force \(F_{\mathrm{T}}\) to applied normal force \(F_{\mathrm{N}}\) against displacement of the rigid plate \(U\) for \(P_{\mathrm{ext}}/E=0.003\) and \(d/H=0.5\). The thin and thick solid lines represent the results for \(\phi=0\) and \(\phi=0.5\), respectively. The thin and thick horizontal solid lines represent macroscopic static friction coefficient \(\mu_{\mathrm{M}}\) for \(\phi=0\) and \(\phi=0.5\), respectively. The dotted and dashed lines represent \(\mu_{\mathrm{S}}\) and \(\mu_{\mathrm{K}}\), respectively.
The quasi-static precursor slip with \(v\leq V\) begins to propagate from the region near the rigid plate at \(y=0\) for \(U=U_{1}\), and the area of precursor slip expands quasi-statically as \(U\) increases to \(U_{2}\) and \(U_{3}\). After \(U_{3}\), the area of precursor slip develops rapidly, and the entire system begins to slide, leading to bulk sliding at \(U_{4}\) (see Supplementary Videos). During bulk sliding, the slip velocity \(v\) exceeds \(V\). We confirm that the displacement due to these slips is approximately along the \(y\) direction for all parameters.
Figure 5 shows the normalized slip area \(\tilde{S}=S/[LW(1-\phi)]\) against \(U/L\) for \(P_{\text{ext}}/E=0.003\) and \(d/H=0.5\). Here, the precursor slip area \(S\), defined by the sum of the yellow-green and yellow areas in Fig. 4, is normalized by the contact area in the friction surface \(LW(1-\phi)\). When \(\tilde{S}=0\), the entire friction surface is static, while \(\tilde{S}=1\) indicates bulk sliding where the entire friction surface is sliding. The thin and thick solid lines represent the results for \(\phi=0\) and \(\phi=0.5\), respectively. The normalized precursor slip area \(\tilde{S}\) increases gradually with \(U\) for small \(\tilde{S}\), but the oscillation of \(\tilde{S}\) appears as \(\tilde{S}\) becomes large. The slip associated with the oscillation in \(\tilde{S}\) is called bounded rapid precursor (BRP). In the BRP, the slip front propagates close to the elastic wave speed, but the slip quickly slows down and stops.
Figure 4: Spatial distributions of the slip velocity \(v\) in the friction surface at \(z=0\) for \(U=U_{1},U_{2},U_{3}\), and \(U_{4}\) shown in Fig. 2 for \(P_{\text{ext}}/E=0.003\), \(\phi=0.5\), and \(d/H=0.5\). The blue area represents the static region. The yellow-green and yellow areas represent the slip regions with \(v\leq V\) and \(v>V\), respectively. The rigid plate pushes the block at \(y=0\). The white area represents the groove region.
Figure 5: Normalized precursor slip area \(\tilde{S}\) against \(U/L\) for \(P_{\text{ext}}/E=0.003\) and \(d/H=0.5\). The thin and thick lines represent the results for \(\phi=0\) and \(\phi=0.5\), respectively. The thin and thick horizontal lines represent the normalized critical area of precursor slip \(\tilde{S}_{c}\) for \(\phi=0\) and \(\phi=0.5\), respectively. Crosses represent the peaks of the oscillation of \(\tilde{S}\) in the region of the periodic stick-slip motion for \(\phi=0\).
According to our analysis in Sect. 3.2, the BRP is caused by an oscillatory instability. The oscillation of \(\tilde{S}\) in Fig. 5 and the small drops of \(F_{\rm T}/F_{\rm N}\) in Fig. 2 before bulk sliding are caused by the sequence of BRPs [16]. Each BRP reduces the stress and \(\tilde{S}\), but both recover quickly due to a slight increase in the driving force. The BRP becomes significant depending on the values of the parameters. When \(\tilde{S}\) reaches a threshold value \(\tilde{S}_{\rm c}\), the propagation speed of \(\tilde{S}\) suddenly increases, and \(\tilde{S}\) reaches unity, which corresponds to bulk sliding.
We evaluate the critical slip \(\tilde{S}_{\rm c}\) for bulk sliding in the periodic stick-slip motion as the maximum value of \(\tilde{S}\) in the sequence of the BRP. For example, we have plotted the peaks of \(\tilde{S}\) due to BRP in the region of periodic stick-slip motion, \(U/L>3\), as crosses for \(\phi=0\) in Fig. 5. The last peaks before bulk sliding represent \(\tilde{S}_{\rm c}\), which are shown as horizontal lines in Fig. 5. Figure 5 shows that \(\tilde{S}_{\rm c}\) decreases with increasing \(\phi\).
In Fig. 6, we show the normalized critical area of the precursor slip \(\tilde{S}_{\rm c}\) against the groove depth \(d\) for different values of \(\phi\) with \(P_{\rm ext}/E=0.003\) and \(0.006\). We find that \(\tilde{S}_{\rm c}\) is a decreasing function of \(d\). As \(d\) approaches \(0\), \(\tilde{S}_{\rm c}\) approaches that for \(\phi=0\). We also find that \(\tilde{S}_{\rm c}\) decreases with increasing \(\phi\). Comparing Fig. 6a and b, we see that \(\tilde{S}_{\rm c}\) is a decreasing function of \(P_{\rm ext}\), which is consistent with the results of previous studies on blocks without grooves [16; 17; 18].
The dependence of the macroscopic static friction coefficient \(\mu_{\rm M}\) on \(\phi\) and \(d\) shown in Fig. 3 is similar to that of \(\tilde{S}_{\rm c}\) shown in Fig. 6. This similarity indicates a close relation between \(\mu_{\rm M}\) and \(\tilde{S}_{\rm c}\). In fact, as shown in Fig. 7, \(\mu_{\rm M}\) is an almost linear function of \(\tilde{S}_{\rm c}\) for different values of \(\phi\) and \(d\) with \(P_{\rm ext}/E=0.003\) and \(0.006\). Figure 7 also shows that \(\mu_{\rm M}\) lies between \(\mu_{\rm S}\) and \(\mu_{\rm K}\). This scaling of \(\mu_{\rm M}\) using \(\tilde{S}_{\rm c}\) is consistent with the results of previous studies on blocks without grooves [16; 17; 18].
Figure 8a shows the spatial distribution of the ratio of the frictional stress \(\sigma^{\rm(fric)}\) to the bottom pressure \(p\) in the friction surface at \(U=U_{3}\) for \(P_{\rm ext}/E=0.003\), \(\phi=0.5\) and \(d/H=0.5\). Here, \(U=U_{3}\) indicates the state just before bulk sliding, as shown in Figs. 2 and 5. Note that the local friction coefficient in the static state with \(v=0\) can take any value in \(0<\sigma^{\rm(fric)}/p<\mu_{\rm S}\). As shown in the Supplementary Videos and previous studies [16; 18], the ratio \(\sigma^{\rm(fric)}/p\) returns to a value near \(\mu_{\rm K}\) in the entire area just after bulk sliding. As the block is pushed, \(\sigma^{\rm(fric)}/p\) reaches the local static friction coefficient \(\mu_{\rm S}\) near the region pushed by the rigid plate.
Figure 6: Normalized critical area of precursor slip \(\tilde{S}_{\rm c}\) against \(d\) for different values of \(\phi\) with (a) \(P_{\rm ext}/E=0.003\) and (b) \(P_{\rm ext}/E=0.006\).
The area with \(\sigma^{\text{(fric)}}/p\approx\mu_{\text{S}}\) gradually increases as \(U\) increases. This region corresponds to the slip region at \(U=U_{3}\) in Fig. 4, while \(\sigma^{\text{(fric)}}/p\) remains near \(\mu_{\text{K}}\) in the static region.
Figure 8b shows the spatial distribution of the bottom pressure \(p\) at \(U=U_{3}\) for \(P_{\text{ext}}/E=0.003\), \(\phi=0.5\) and \(d/H=0.5\). Although a uniform pressure \(P_{\text{ext}}\) is applied at the top surface, the spatial average of \(p\) becomes \(P_{\text{ext}}/(1-\phi)\), because the contact area in the friction surface, \(LW(1-\phi)\), is smaller than the area of the top surface, \(LW\), due to the grooves. We confirm that the bottom pressure is \(p\approx P_{\text{ext}}/(1-\phi)\) in most areas except for the regions near \(y=0\) and \(L\). The spatial distribution of \(p\) is almost independent of the time \(t\), as shown in Supplementary Videos.
### Theoretical Analysis
We theoretically analyze the effect of the longitudinal grooves shown in Sect. 3.1 based on a simplified model [16; 18]. The precursor slip is approximately uniform in the \(x\) direction and propagates toward the \(y\) direction, as shown in Fig. 4.
Figure 8: Spatial distribution of stress in the friction surface at \(U=U_{3}\) for \(P_{\text{ext}}/E=0.003\), \(\phi=0.5\), and \(d/H=0.5\). (a) Spatial distribution of ratio of frictional stress \(\sigma^{\text{(fric)}}\) to bottom pressure \(p\). (b) Spatial distribution of \(p\). The rigid plate pushes the block at \(y=0\). The white area in (a) represents the region without contact. The white area in (b) represents the groove area.
Figure 7: Macroscopic static friction coefficient \(\mu_{\text{M}}\) against \(\tilde{S}_{\text{c}}\) for different values of \(\phi\) and \(d\). The filled and open symbols represent the results for \(P_{\text{ext}}/E=0.003\) and \(P_{\text{ext}}/E=0.006\), respectively. The solid line represents the analytical results given by Eq. (20). The dotted and dashed lines represent \(\mu_{\text{S}}\) and \(\mu_{\text{K}}\), respectively.
Therefore, we neglect displacements in the \(z\) and \(x\) directions and consider only the \(y\)-dependent displacement along the \(y\) direction. Since the bottom pressure \(p\) at \(z=0\) is approximately uniform, as shown in Fig. 8b, we assume \(p=P_{\rm ext}/(1-\phi)\). Additionally, since the deformation is significant in the region near the bottom before bulk sliding in our 3D simulations, as shown in Appendix C, we focus on the slip and deformation in the region \(0\leq z/H\leq\alpha\) with a constant \(\alpha\), as shown by the red shaded area in Fig. 9.
We consider the equation of motion for a thin element at \(y\) with small width \(\mathrm{d}y\) indicated by the dotted rectangle in Fig. 9a. The mass of the element is given by \(\rho A(\phi,d)\mathrm{d}y\), where \(A(\phi,d)\) is the cross-sectional area of the red region in Fig. 9b excluding the groove. In Fig. 9a, the forces acting on the left and right surfaces of that element are given by \(A(\phi,d)\sigma_{yy}(y,t)\) and \(A(\phi,d)\sigma_{yy}(y+\mathrm{d}y,t)\), respectively. Here, the normal stress in the \(y\) direction is denoted by \(\sigma_{yy}\). The friction force acting on the bottom is given by \(\mu P_{\rm ext}W\mathrm{d}y\). The equation of motion for the displacement \(q_{y}(y,t)\) of the thin element along the \(y\) direction is given by
\[\rho A(\phi,d)\mathrm{d}y\,\ddot{q}_{y}(y,t)=A(\phi,d)\left[\sigma_{yy}(y+ \mathrm{d}y,t)-\sigma_{yy}(y,t)\right]-\mu(\dot{q}_{y}(y,t))P_{\rm ext}W \mathrm{d}y\, \tag{8}\]
where \(\dot{q}_{y}(y,t)\) and \(\ddot{q}_{y}(y,t)\) are the first and second-order time derivatives of \(q_{y}(y,t)\), respectively. We assume a plane stress state, where the normal stress \(\sigma_{yy}(y,t)\) is given by
\[\sigma_{yy}(y,t)=E_{1}\frac{\partial q_{y}(y,t)}{\partial y}+\eta_{\rm t}\frac {\partial\dot{q}_{y}(y,t)}{\partial y} \tag{9}\]
with the elastic modulus \(E_{1}=E/[(1+\nu)(1-\nu)]\) and viscous modulus \(\eta_{\rm t}=\eta_{1}(\eta_{1}+2\eta_{2})/(\eta_{1}+\eta_{2})\).
The cross-sectional area \(A(\phi,d)\) is given by
\[A(\phi,d)=A_{0}[1-\kappa(\phi,d)] \tag{10}\]
with the cross-sectional area \(A_{0}=\alpha HW\) for \(\phi=0\). Here, \(\kappa(\phi,d)\) is the reduction rate of the cross-sectional area by the groove,
\[\kappa(\phi,d)=\left\{\begin{array}{ll}\phi\,d/(\alpha H),&0\leq d\leq \alpha H\\ \phi,&\alpha H<d\leq H\end{array}\right.. \tag{11}\]
Figure 9: (a) Schematic of the derivation of the simplified model for grooved viscoelastic block. (b) Cross-section perpendicular to the \(y\) direction indicated as **l** in (a). The red shaded areas represent the region from the bottom to the height \(z=\alpha H\). The dotted rectangle represents the element with infinitesimal width \(\mathrm{d}y\).
Substituting Eqs. (9) and (10) into Eq. (8) and taking the limit of \(\mathrm{d}y\to 0\), we obtain
\[\rho(1-\kappa)\ddot{q}_{y}(y,t)=(1-\kappa)\left[E_{1}\frac{\partial^{2}q_{y}(y,t)}{\partial y^{2}}+\eta_{\rm t}\frac{\partial^{2}\dot{q}_{y}(y,t)}{\partial y^{2}}\right]-\frac{\mu(\dot{q}_{y}(y,t))P_{\rm ext}}{\alpha H}\;. \tag{12}\]
The boundary conditions are given by \(\partial q_{y}(L,t)/\partial y=0\) for the free boundary at \(y=L\) and \(q_{y}(0,t)=U\) for the fixed boundary at \(y=0\).
We set \(t=0\) just after the bulk sliding, where the friction coefficient is given by \(\mu=\mu_{\rm K}\). When a precursor slip occurs with the normalized slip area \(\tilde{S}\) for \(U>0\), the friction coefficient is given by \(\mu=\mu_{\rm S}\) in the region \(0\leq y/L\leq\tilde{S}\), because the slip distances of the precursors are significantly smaller than that in bulk sliding. In the other regions, \(\mu\) remains \(\mu_{\rm K}\) due to the frictional stress drop after the bulk sliding. This is confirmed by direct numerical calculations of Eq. (12) and qualitatively consistent with the results in Sect. 3.1. For sufficiently slow driving with \(\ddot{q}_{y}\approx 0\) and \(\dot{q}_{y}\approx 0\), the quasi-static solution of \(q_{y}\) in Eq. (12) is analytically derived as described in Appendix D. In this quasi-static solution \(q_{\rm a}(y)\), \(\tilde{S}\) is given as an increasing function of \(U\).
We conduct a stability analysis based on Eq. (12) following the procedure in the previous studies [16; 18]. Substituting \(q_{y}(y,t)=q_{\rm a}(y)+\delta q(y,t)\) into Eq. (12) with the perturbation \(\delta q(y,t)\), we obtain the equation for \(\delta q(y,t)\) as
\[\rho(1-\kappa)\delta\ddot{q}(y,t)=(1-\kappa)\left[E_{1}\frac{\partial^{2} \delta q(y,t)}{\partial y^{2}}+\eta_{\rm t}\frac{\partial^{2}\delta\dot{q}(y, t)}{\partial y^{2}}\right]-\frac{(\mu_{\rm S}-\mu_{\rm K})P_{\rm ext}}{v_{\rm c} \alpha H}\delta\dot{q}(y,t)\;. \tag{13}\]
Note that \(\delta q(y,t)\) has a non-zero value in the region \(0<y/L<\tilde{S}\), and \(\delta q(y,t)\) remains zero in the other region due to static friction. Since the perturbation \(\delta q(y,t)\) is zero for \(y=0\) and \(\tilde{S}<y/L<1\), \(\delta q(y,t)\) is expressed as
\[\delta q(y,t)=\sum_{m}q_{m}e^{\lambda_{m}t}\sin k_{m}\xi\;, \tag{14}\]
where \(m\) is a positive integer, \(q_{m}\) is a constant, \(\lambda_{m}\) is the eigenvalue of the time evolution operator with \(k_{m}=m\pi\) and \(\xi=y/(\tilde{S}L)\). Substituting Eq. (14) into Eq. (13), multiplying by \(2\sin k_{n}\xi\) with positive integer \(n\), and integrating in \(0<y<\tilde{S}L\), we obtain
\[(1-\kappa)\rho L^{2}\lambda_{m}^{2}+(1-\kappa)E_{1}\frac{k_{m}^{2}}{\tilde{S} ^{2}}+(1-\kappa)\eta_{\rm t}\frac{k_{m}^{2}}{\tilde{S}^{2}}\lambda_{m}-\frac{( \mu_{\rm S}-\mu_{\rm K})P_{\rm ext}L^{2}}{v_{\rm c}\alpha H}\lambda_{m}=0\;. \tag{15}\]
The perturbation \(\delta q(y,t)\) is unstable in the case of \(\mathrm{Re}\,\lambda_{1}>0\) and \(\mathrm{Im}\,\lambda_{1}\neq 0\). The latter condition, \(\mathrm{Im}\,\lambda_{1}\neq 0\), induces the oscillatory motion. However, the backward motion of the oscillation reduces the frictional stress, and the local slip stops when it becomes smaller than the local maximum static frictional stress, which causes the reduction of the slip area \(\tilde{S}\). This oscillatory instability corresponds to the BRP. The BRP continues in a certain region of \(\tilde{S}\) because the frictional stress increases again
by the drive, and intermittent slip events are observed until the perturbation develops and causes bulk sliding in the case of \(\mathrm{Re}\,\lambda_{1}>0\) and \(\mathrm{Im}\,\lambda_{1}=0\). In Eq. (15), we find that the stability conditions of the system are generally determined by the competition between the viscosity represented by the third term and velocity-weakening friction represented by the fourth term on the left-hand side. These terms are considered as the stabilizing and destabilizing factors, respectively. The stabilizing factor decreases due to \(\tilde{S}^{-2}\) in the third term as the precursor slip area \(\tilde{S}\) increases. When \(\tilde{S}\) reaches the critical area \(\tilde{S}_{\mathrm{c}}\), the destabilizing factor overwhelms the stabilizing factor, and the perturbation \(\delta q(y,t)\) becomes unstable. Therefore, \(\tilde{S}\) increases rapidly just after reaching \(\tilde{S}_{\mathrm{c}}\), as shown in Fig. 5, and bulk sliding occurs. The viscous term is proportional to \(1-\kappa\). The velocity-weakening friction term is proportional to the load on the top of the block but independent of \(\kappa\). Thus, if \(\kappa(\phi,d)\) increases by increasing \(\phi\) and \(d\), the viscosity becomes effectively smaller, which leads to the decrease of \(\tilde{S}_{\mathrm{c}}\).
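The stability argument can be made concrete by solving the quadratic Eq. (15) for the \(m=1\) mode as a function of \(\tilde{S}\). The sketch below is our own illustration; it uses the dimensionless parameter values of Sect. 2 with \(\alpha=0.2\) and \(\kappa=0\) (in units where \(E=\rho=H=1\)) and classifies each \(\tilde{S}\) as stable, oscillatory (BRP), or non-oscillatory unstable (bulk sliding). With these numbers the non-oscillatory instability first appears close to \(\tilde{S}\approx 0.65\), consistent with Eq. (18).

```python
import numpy as np

# dimensionless units: E = rho = H = 1, so v_s = 1 (parameter values of Sect. 2)
E, rho, H, nu = 1.0, 1.0, 1.0, 0.34
eta1 = 2.83
eta2 = eta1
eta_t = eta1 * (eta1 + 2 * eta2) / (eta1 + eta2)
E1 = E / ((1 + nu) * (1 - nu))
mu_s, mu_k, v_c = 0.38, 0.1, 4.81e-4
P_ext, L, alpha, kappa = 0.003, 4.0, 0.2, 0.0

def lambda1(S):
    # coefficients of Eq. (15) for m = 1 (k_1 = pi): a*l^2 + b*l + c = 0
    a = (1 - kappa) * rho * L**2
    b = (1 - kappa) * eta_t * np.pi**2 / S**2 - (mu_s - mu_k) * P_ext * L**2 / (v_c * alpha * H)
    c = (1 - kappa) * E1 * np.pi**2 / S**2
    disc = b**2 - 4 * a * c + 0j
    roots = ((-b + np.sqrt(disc)) / (2 * a), (-b - np.sqrt(disc)) / (2 * a))
    return max(roots, key=lambda z: z.real)

for S in np.arange(0.1, 1.01, 0.1):
    lam = lambda1(S)
    regime = ("stable" if lam.real <= 0 else
              "oscillatory (BRP)" if abs(lam.imag) > 0 else "unstable (bulk sliding)")
    print(f"S={S:.1f}  Re(lam)={lam.real:+.3e}  Im(lam)={lam.imag:+.3e}  {regime}")
```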
As \(\tilde{S}\) increases, the mode with \(m=1\) in Eq. (14) becomes unstable first, which determines \(\tilde{S}_{\mathrm{c}}\). By definition, the maximum value of \(\tilde{S}_{\mathrm{c}}\) does not exceed 1, and for \(\tilde{S}_{\mathrm{c}}<1\), \(\tilde{S}_{\mathrm{c}}\) satisfies
\[\pi^{2}(1-\kappa)\eta_{\mathrm{t}}\tilde{S}_{\mathrm{c}}^{-2}+2\pi(1-\kappa)L \sqrt{\rho E_{1}}\tilde{S}_{\mathrm{c}}^{-1}=\frac{(\mu_{\mathrm{S}}-\mu_{ \mathrm{K}})\,P_{\mathrm{ext}}L^{2}}{v_{\mathrm{c}}\alpha H}\, \tag{16}\]
which is derived from Eq. (15). Therefore, \(\tilde{S}_{\mathrm{c}}\) is given by
\[\tilde{S}_{\mathrm{c}}=\min(\tilde{S}_{\mathrm{c}}^{*},1)\, \tag{17}\]
where \(\min(a,b)\) is a function that takes the smaller value between \(a\) and \(b\), and \(\tilde{S}_{\mathrm{c}}^{*}\) is the solution of Eq. (16) given by
\[\tilde{S}_{\mathrm{c}}^{*}=\frac{\pi\eta_{\mathrm{t}}}{L\left(-\sqrt{\rho E_{ 1}}+\sqrt{\rho E_{1}+\frac{\mu_{\mathrm{S}}-\mu_{\mathrm{K}}}{1-\kappa}\,\frac {P_{\mathrm{ext}}\eta_{\mathrm{t}}}{v_{\mathrm{c}}\alpha H}}\right)}. \tag{18}\]
For \(\tilde{S}_{\mathrm{c}}\ll 1\), \(\tilde{S}_{\mathrm{c}}\) is approximately given by
\[\tilde{S}_{\mathrm{c}}\simeq\pi\left\{\frac{[1-\kappa(\phi,d)]\alpha}{\mu_{ \mathrm{S}}-\mu_{\mathrm{K}}}\,\frac{\eta_{\mathrm{t}}v_{\mathrm{c}}}{P_{ \mathrm{ext}}H}\right\}^{\frac{1}{2}}\frac{H}{L}. \tag{19}\]
This result indicates that the normalized critical area of the precursor slip \(\tilde{S}_{\mathrm{c}}\) is a decreasing function of \(P_{\mathrm{ext}}\) and the size of grooves because \(\kappa(\phi,d)\) in Eq. (19) increases with \(\phi\) and \(d\), as described in Eq. (11). These analytical results are qualitatively consistent with those of the FEM simulations shown in Fig. 6.
The macroscopic static friction coefficient \(\mu_{\mathrm{M}}\) can be analytically derived in our simplified model [16; 18]. Since the ratio of the local frictional stress to bottom pressure is \(\mu_{\mathrm{S}}\) in the slip region and \(\mu_{\mathrm{K}}\) in the static region, \(\mu_{\mathrm{M}}\) just before the bulk sliding is given by
\[\mu_{\mathrm{M}}=\mu_{\mathrm{K}}+(\mu_{\mathrm{S}}-\mu_{\mathrm{K}})\tilde{S} _{\mathrm{c}}. \tag{20}\]
This result is qualitatively consistent with the FEM simulations shown in Fig. 7, where Eq. (20) is represented by the solid line.
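Equations (17), (18) and (20) can also be evaluated directly. The short sketch below (our own, in units where \(E=\rho=H=1\), with the parameter values of Sects. 2 and 3.2 and \(\alpha=0.2\)) gives \(\tilde{S}_{\mathrm{c}}\) and \(\mu_{\mathrm{M}}\) as functions of \(\kappa\); these are the combinations plotted as solid lines in Figs. 10 and 11 discussed below.

```python
import numpy as np

nu, eta1 = 0.34, 2.83
eta_t = eta1 * (eta1 + 2 * eta1) / (eta1 + eta1)   # eta2 = eta1
E1 = 1.0 / ((1 + nu) * (1 - nu))
mu_s, mu_k, v_c = 0.38, 0.1, 4.81e-4
P_ext, L, H, alpha, rho = 0.003, 4.0, 1.0, 0.2, 1.0

def S_c(kappa):
    # Eqs. (17)-(18)
    root = np.sqrt(rho * E1 + (mu_s - mu_k) / (1 - kappa) * P_ext * eta_t / (v_c * alpha * H))
    S_star = np.pi * eta_t / (L * (-np.sqrt(rho * E1) + root))
    return min(S_star, 1.0)

for kappa in (0.0, 0.1, 0.25, 0.5):
    Sc = S_c(kappa)
    mu_M = mu_k + (mu_s - mu_k) * Sc               # Eq. (20)
    print(f"kappa={kappa:.2f}  S_c={Sc:.3f}  mu_M={mu_M:.3f}")
```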
Substituting Eq. (20) into Eq. (19), we obtain
\[\mu_{\rm M}-\mu_{\rm K}\simeq\pi\left\{(\mu_{\rm S}-\mu_{\rm K})[1-\kappa(\phi,d)]\alpha\,\frac{\eta_{\rm t}v_{\rm c}}{P_{\rm ext}H}\right\}^{\frac{1}{2}}\frac{H}{L}\;. \tag{21}\]
This equation, together with Eq. (11), implies that the macroscopic static friction coefficient \(\mu_{\rm M}\) is a decreasing function of \(P_{\rm ext}\), \(\phi\) and \(d\). The analytical results are qualitatively consistent with the FEM simulations shown in Fig. 3.
Equations (19) and (21) indicate that \(\tilde{S}_{\rm c}\) and \(\mu_{\rm M}\) for different \(\phi\) and \(d\) are scaled by the reduction rate of cross-sectional area \(\kappa(\phi,d)\). Figures 10 and 11 respectively show \(\tilde{S}_{\rm c}\) and \(\mu_{\rm M}\) obtained from the FEM simulations against \(\kappa(\phi,d)\). Both \(\tilde{S}_{\rm c}\) and \(\mu_{\rm M}\) are scaled by \(\kappa(\phi,d)\) and decrease with increasing \(\kappa(\phi,d)\). The solid line in Fig. 10 represents the analytical result given by Eq. (17). The solid line in Fig. 11 represents the result given by Eqs. (17) and (20). Here, we set \(\alpha=0.2\) to semi-quantitatively reproduce the results of the FEM simulations in Sect. 3.1. The deformation before bulk sliding is significant only in the region \(z/H<0.5\), as shown in Appendix C.
Figure 11: Macroscopic static friction coefficient \(\mu_{\rm M}\) against \(\kappa(\phi,d)\) for different values of \(\phi\) and \(d\) with (a) \(P_{\rm ext}/E=0.003\) and (b) \(P_{\rm ext}/E=0.006\). The symbols represent the results of the FEM simulations. The solid lines represent the analytical results given by Eqs. (17) and (20) with \(\alpha=0.2\). The dotted and dashed lines indicate \(\mu_{\rm S}\) and \(\mu_{\rm K}\), respectively.
Figure 10: Normalized critical area of precursor slip \(\tilde{S}_{\rm c}\) against \(\kappa(\phi,d)\) for different values of \(\phi\) and \(d\) with (a) \(P_{\rm ext}/E=0.003\) and (b) \(P_{\rm ext}/E=0.006\). The symbols represent the results of the FEM simulations. The solid lines represent the analytical results given by Eq. (17) with \(\alpha=0.2\).
This result is consistent with the estimate of \(\alpha=0.2\). The numerical results of FEM are semi-quantitatively consistent with the theoretical analysis.
These results explain the decreases of \(\mu_{\mathrm{M}}\) and \(\tilde{S}_{\mathrm{c}}\) with the increases of \(\phi\) and \(d\) in Figs. 3 and 6. Here, \(\phi\) represents the decrease in the contact area of the pillars, and \(d\) represents the decrease in the size of the main body. These values determine the reduction rate of the cross-sectional area \(\kappa(\phi,d)\). Thus, it can be concluded that \(\mu_{\mathrm{M}}\) and \(\tilde{S}_{\mathrm{c}}\) decrease with increasing \(\phi\) or \(d\) because of the decrease in effective viscoelasticity due to the reduction of the cross-sectional area, leading to the decline of the stability and bulk sliding with a smaller size of the precursor slip.
## 4 Discussion
Generally, grooves on friction surfaces are designed to control lubrication properties in wet conditions. It is considered that the friction coefficient at the wet interface increases with the width and depth of the grooves, because they can eject more lubricant from the friction interface [8; 9; 10; 11]. However, this study reveals that the groove size also affects friction in dry conditions. The static friction coefficient in dry conditions decreases with increases in groove width and depth. This is opposite to the usual consideration for the friction in the wet case. Even in wet conditions, the friction force at the solid-solid interface determines the total friction force after the ejection of the lubricant. These results should aid in improving the design of sliding interfaces with grooves for both wet and dry conditions.
The influence of the groove shape on friction in dry conditions has recently been investigated based on different models, where the effect of grooves is represented by the spatial distribution of the local friction coefficient [46; 47; 48; 49; 50; 51; 52; 53]. These previous works report a decrease in the static friction coefficient by forming longitudinal grooves, which is consistent with our results. However, the effect of the depth of the longitudinal grooves is ignored in their models, while our study is based on a realistic 3D system and reveals its importance. Moreover, previous studies have investigated different patterns of grooves including transversal grooves. Their results have suggested that complex shapes in the friction surface used in various industrial products such as shoe soles and tires and adopted on the surface of living things such as snakes [48; 60; 61] affect the frictional properties. Therefore, the extension of our study to these complex shapes will lead to more efficient guiding principles for groove design.
In this study, the parameter values for virtual materials are adopted to reduce the computational load, which does not correspond to those for real materials. These are selected to compare the results with those in previous simulations [16; 18]. However, the mechanism of changes in the friction coefficient revealed by our theory in Sect. 3.2 is universal and independent of specific parameter values. The numerical results for flat friction surfaces in Ref. [16] have been reproduced in experiments on PMMA [17]. The occurrence of the quasi-static precursor (slow slip event, SSE) and the dependence of its destabilization on pressure are also confirmed in an experiment on PMMA [19]. Therefore, we expect our results will be experimentally verified in future work.
## 5 Conclusion
Friction surfaces of products such as shoe soles, tires, and sliding parts of machines have grooves. Several studies on grooves have focused on controlling lubrication properties via their design. However, it has been empirically known that grooves affect friction even in dry conditions, although no theoretical explanation exists. In this study, we have performed numerical simulations of viscoelastic objects using 3D FEM to clarify the effect of longitudinal grooves on static friction in a dry condition. We have revealed that the static friction coefficient is a decreasing function of the groove size, and that precursor slip occurs before bulk sliding. The static friction coefficient is scaled by the normalized critical area of the precursor slip. Based on the simplified model, we have theoretically derived the equation for the static friction coefficient depending on the groove size. The theoretical result indicates that the static friction coefficient decreases with the reduction rate of the cross-sectional area in the viscoelastic object. The decrease in the cross-sectional area reduces the effective viscosity, which enhances the instability of the precursor slip and decreases the static friction coefficient. Our results provide new guiding principles for groove design for static friction control beyond the empirical laws for both wet and dry conditions. Investigation of different types of grooves and their effect on dynamic friction will be the subject of future studies.
## Data Availability
The data that support the findings of this study are available from the corresponding author upon reasonable request.
|
2309.15626 | Models for irreducible representations of the symplectic algebra using
Dirac-type operators | In this paper we will study both the finite and infinite-dimensional
representations of the symplectic Lie algebra $\mathfrak{sp}(2n)$ and develop a
polynomial model for these representations. This means that we will associate a
certain space of homogeneous polynomials in a matrix variable, intersected with
the kernel of $\mathfrak{sp}(2n)$-invariant differential operators related to
the symplectic Dirac operator with every irreducible representation of
$\mathfrak{sp}(2n)$. We will show that the systems of symplectic Dirac
operators can be seen as generators of parafermion algebras. As an application
of these new models, we construct a symplectic analogue of the Rarita-Schwinger
operator using the theory of transvector algebras. | Guner Muarem | 2023-09-27T12:54:51Z | http://arxiv.org/abs/2309.15626v1 | # Models for irreducible representations of the symplectic algebra using Dirac-type operators
###### Abstract.
In this paper we will study both the finite and infinite-dimensional representations of the symplectic Lie algebra \(\mathfrak{sp}(2n)\) and develop a polynomial model for these representations. This means that we will associate a certain space of homogeneous polynomials in a matrix variable, intersected with the kernel of \(\mathfrak{sp}(2n)\)-invariant differential operators related to the symplectic Dirac operator with every irreducible representation of \(\mathfrak{sp}(2n)\). We will show that the systems of symplectic Dirac operators can be seen as generators of parafermion algebras. As an application of these new models, we construct a symplectic analogue of the Rarita-Schwinger operator using the theory of transvector algebras.
Key words and phrases:Clifford analysis, symplectic Dirac operators, representation theory, higher spin, completely pointed modules, parastatistics
## 1. Introduction
In the paper by Van Lancker, Sommen & Constales [32] the authors constructed polynomial models for the orthogonal algebra \(\mathfrak{so}(n)\). More specifically, they associated with every \(\mathfrak{so}(n)\)-irreducible representation \(\mathbb{V}_{\lambda}\), a highest weight (vector) which was realised in terms of homogeneous polynomials in a matrix variable. Of course, this idea can be generalised for any Lie algebra \(\mathfrak{g}\) and gives rise to the following situation: Given an irreducible \(\mathfrak{g}\)-representation of highest weight \((\lambda_{1},\dots,\lambda_{N})_{\mathfrak{g}}\) we want to associate a polynomial model in terms of multi-homogeneous polynomials of degree \((\lambda_{1},\dots,\lambda_{N})\) intersected with the kernel of some \(\mathfrak{g}\)-invariant differential operators \(G_{j}\) for \(j\in J\), where \(J\) is a finite index set. Schematically, we are looking for the following association:
\[\begin{array}{ccc}\text{Irreducible $\mathfrak{g}$-representation}&\longleftrightarrow&\text{Polynomial model}\\ (\lambda_{1},\dots,\lambda_{N})_{\mathfrak{g}}&\longleftrightarrow&\mathcal{P}_{\lambda_{1},\dots,\lambda_{N}}(\mathbb{R}^{2n\times N},\mathbb{C})\cap\ker(G_{j})\end{array}\]
Note that the case of \(\mathfrak{g}=\mathfrak{so}(n)\) was completely covered by the aforementioned paper. The setting for these polynomial models is the one of Clifford analysis: a hypercomplex function theory which revolves around the Dirac operator on \(\mathbb{R}^{n}\). This operator acts on functions with values in the finite-dimensional spinor space \(\mathbb{S}\) and is given by the expression \(\underline{\partial}_{x}=\sum_{j=1}^{n}e_{j}\partial_{x_{j}}\), where \(e_{j}\) are the so-called Clifford algebra generators satisfying \(\{e_{j},e_{k}\}=-2\delta_{jk}\). Note that this means that the elements of the Clifford algebra are anti-commuting complex units. We refer the reader to [12] for a thorough study of Clifford analysis. The key property of the Dirac operator is that it factorises the Laplacian \(\Delta=\sum_{j=1}^{n}\partial_{x_{j}}^{2}\) on \(\mathbb{R}^{n}\) in the sense that \(\underline{\partial}_{x}^{2}=-\Delta\). This leads to the observation that Clifford analysis is also a _refinement_ of harmonic analysis. It is a well-established fact in representation theory that the spaces \(\mathcal{H}_{k}(\mathbb{R}^{n},\mathbb{C})\) of harmonic polynomials, which are homogeneous of degree \(k\), form irreducible representations of the Lie algebra \(\mathfrak{so}(n)\) of highest weight \((k,0,\dots,0)_{\mathfrak{so}(n)}\). The spinor representations (for \(n\) even) of highest weight \((k+\frac{1}{2},\frac{1}{2},\dots,\pm\frac{1}{2})_{\mathfrak{so}(n)}\) can be modelled by the spaces \(\mathcal{M}_{k}(\mathbb{R}^{n},\mathbb{S})\) of \(k\)-homogeneous solutions of the Dirac operator known as _monogenics_. When working with (spinor-valued) functions depending on matrix variables one needs
to work with systems of Dirac and Laplace operators (and associated operators, see later) which are related to the theory of Howe dualities [24]. In [32] (and already started in [20]) this idea was used to associate with weights of the form \((\lambda_{1},\ldots,\lambda_{N})_{\mathfrak{so}(n)}\), with dominant weight condition \(\lambda_{1}\geq\cdots\geq\lambda_{N}\), certain spaces of simplicial harmonics and monogenics (we come back to this in Section 2), related to these aforementioned systems of differential operators (see also [8]). The goal of this paper is to focus on the case where \(\mathfrak{g}=\mathfrak{sp}(2n)\) in the setting of symplectic Clifford analysis, see [9]. The notion of the Dirac operator naturally generalises to the symplectic framework: the symplectic Dirac operator \(D_{s}\) is now obtained as a contraction between the Weyl algebra generators \(\{z_{1},\ldots,z_{n},\partial_{z_{1}},\ldots,\partial_{z_{n}}\}\) and the vector fields \(\{\partial_{x_{1}},\ldots,\partial_{y_{n}}\}\) using the canonical symplectic form \(\omega_{0}\) on \(\mathbb{R}^{2n}\). In other words, we can write \(D_{s}=\langle\underline{z},\underline{\partial}_{y}\rangle-\langle\underline{\partial}_{x},\underline{\partial}_{z}\rangle\), where \(\langle\cdot,\cdot\rangle\) is the usual Euclidean inner product on \(\mathbb{R}^{n}\). Note that this definition of the symplectic Dirac operator is slightly different from the one in [9] but was already used in [14, 15]. We first of all stress that obtaining polynomial models for \(\mathfrak{g}=\mathfrak{sp}(2n)\) will be slightly more intricate as the symplectic spinor space \(\mathbb{S}^{\infty}\) is infinite-dimensional. Therefore, we will work in two steps:
* We will first consider the finite-dimensional representations. The \(\mathfrak{sp}(2n)\)-invariant differential operators \(G_{j}\) (from the scheme above) then will lead to the notion of _symplectic simplicial harmonics_ (see Section 3). In some sense, these results are anticipated by the general theory of Howe dualities (see [24]).
* Afterwards, we will focus on an interesting class of infinite-dimensional representations of highest weight \((\lambda_{1},\ldots,\lambda_{N})_{\mathfrak{sp}(2n)}\boxtimes\mathbb{S}^{\infty}\), where \(\boxtimes\) denotes the Cartan product and \[\mathbb{S}^{\infty}=\left(-\frac{1}{2},\ldots,-\frac{1}{2}\right)_{\mathfrak{ sp}(2n)}\oplus\left(-\frac{1}{2},\ldots,-\frac{3}{2}\right)_{\mathfrak{sp}(2n)}.\] These types of representations are connected with the theory of the symplectic Dirac operator. As a consequence, this will lead to the novel notion of _symplectic simplicial monogenics_ (see Section 4) and a connection with parastatistics.
As an application, we will define 'higher spin' Dirac operators (the Rarita-Schwinger operator describing particles with spin-\(\frac{3}{2}\) is an example of such an operator) in the symplectic framework. We will call these _higher metaplectic Dirac operators_. By means of example, we will introduce the symplectic Rarita-Schwinger operator in Section 5.
## 2. Simplicial harmonics and monogenics
### The harmonic Fischer decomposition
Consider the regular action of the Lie group \(\mathsf{SO}(n)\) on the space of polynomials \(\mathcal{P}(\mathbb{R}^{n},\mathbb{C})\):
\[H:\mathsf{SO}(n)\to\mathsf{Aut}(\mathcal{P}(\mathbb{R}^{n},\mathbb{C})),\quad g \mapsto\left(H(g)[P](\underline{x}):=P(g^{-1}\underline{x})\right).\]
On the level of the Lie algebra \(\mathfrak{so}(n)\), the so-called _derived action_\(dH\) on the polynomial space \(\mathcal{P}(\mathbb{R}^{n},\mathbb{C})\) is given by the _angular momenta operators_ (known from quantum mechanics) which are defined by means of \(L_{ab}=x_{a}\partial_{x_{b}}-x_{b}\partial_{x_{a}}\) for \(1\leq a<b\leq n\). In order to decompose the space of polynomials \(\mathcal{P}(\mathbb{R}^{n},\mathbb{C})\) into irreducible representations for the orthogonal Lie algebra, we first note that the space can be written as a direct sum of \(k\)-homogeneous polynomials, i.e. \(\mathcal{P}(\mathbb{R}^{n},\mathbb{C})=\bigoplus_{k=0}^{\infty}\mathcal{P}_{k }(\mathbb{R}^{n},\mathbb{C}).\) The space of \(k\)-homogeneous polynomials \(\mathcal{P}_{k}(\mathbb{R}^{n},\mathbb{C})\) further decomposes under the action of \(\mathsf{SO}(n)\) in terms of solutions of the Laplacian on \(\mathbb{R}^{n}\). The space of _\(k\)-homogeneous harmonics_ is denoted by
\[\mathcal{H}_{k}(\mathbb{R}^{n},\mathbb{C}):=\mathcal{P}_{k}(\mathbb{R}^{n}, \mathbb{C})\cap\ker(\Delta).\]
Each of the spaces \(\mathcal{H}_{k}(\mathbb{R}^{n},\mathbb{C})\) (for \(n\geq 3\)) forms an irreducible \(\mathfrak{so}(n)\)-representation of highest weight \((k)_{\mathfrak{so}(n)}\) (see [20]). Using these \(\mathfrak{so}(n)\)-irreducible modules \(\mathcal{H}_{k}(\mathbb{R}^{n},\mathbb{C})\) we
can further decompose the space of \(k\)-homogeneous polynomials (see [18])
\[\mathcal{P}_{k}(\mathbb{R}^{n},\mathbb{C})=\bigoplus_{j=0}^{\lfloor\frac{k}{2} \rfloor}|\underline{x}|^{2j}\mathcal{H}_{k-2j}(\mathbb{R}^{n},\mathbb{C}). \tag{1}\]
The decomposition from above is not _multiplicity-free_: every irreducible representation \(\mathcal{H}_{j}(\mathbb{R}^{n},\mathbb{C})\) appears infinitely many times in the full decomposition of \(\mathcal{P}(\mathbb{R}^{n},\mathbb{C})\). In order to make this decomposition multiplicity-free, one can use a _hidden_ symmetry (apart from the obvious \(\mathsf{SO}(n)\)-invariance). The multiplication operator \(|\underline{x}|^{2}\) allows us to raise the degree by two. On the other hand, the Laplace operator \(\Delta\) reduces the degree by two. Lastly, the Euler operator \(\mathbb{E}=\sum_{j=1}^{n}x_{j}\partial_{x_{j}}\) measures the degree of homogeneity of the corresponding \(\mathcal{H}_{k}(\mathbb{R}^{n},\mathbb{C})\).
One can check that when these three operators are re-scaled as
\[X:=-\frac{1}{2}\Delta,\qquad Y:=\frac{1}{2}|\underline{x}|^{2},\qquad H:=[X,Y] =-\left(\mathbb{E}+\frac{n}{2}\right),\]
we have \(\operatorname{Alg}(X,Y,H)\cong\mathfrak{sl}(2)\). This algebra is the 'hidden symmetry' which was mentioned above. As a result, the space \(\mathcal{P}(\mathbb{R}^{n},\mathbb{C})\) has a multiplicity-free decomposition under the _joint action_ of the (harmonic) Howe dual pair \(\mathsf{SO}(n)\times\mathfrak{sl}(2)\).
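As an illustrative aside (not part of the original text), these relations can be verified symbolically; the following minimal sketch assumes sympy is available, and the choice \(n=3\) and the test polynomial are arbitrary.

```python
# A minimal check (assuming sympy) of the sl(2) relations
# [X,Y] = H = -(E + n/2), [H,X] = 2X, [H,Y] = -2Y on a test polynomial.
import sympy as sp

n = 3
xs = sp.symbols(f'x1:{n+1}')

def Delta(p):                                    # Laplacian on R^n
    return sum(sp.diff(p, xi, 2) for xi in xs)

def X(p): return -sp.Rational(1, 2) * Delta(p)                     # X = -Delta/2
def Y(p): return sp.Rational(1, 2) * sum(xi**2 for xi in xs) * p   # Y = |x|^2/2
def E(p): return sum(xi * sp.diff(p, xi) for xi in xs)             # Euler operator
def H(p): return X(Y(p)) - Y(X(p))                                 # H := [X, Y]

p = xs[0]**3*xs[1] - 2*xs[1]*xs[2]**2 + xs[0]*xs[1]*xs[2]          # arbitrary test polynomial

assert sp.expand(H(p) + E(p) + sp.Rational(n, 2)*p) == 0           # H = -(E + n/2)
assert sp.expand(H(X(p)) - X(H(p)) - 2*X(p)) == 0                  # [H, X] = 2X
assert sp.expand(H(Y(p)) - Y(H(p)) + 2*Y(p)) == 0                  # [H, Y] = -2Y
print("sl(2) relations hold on the test polynomial")
```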
### The refined monogenic Fischer decomposition
Let \(\{e_{1},\ldots,e_{n}\}\) be an orthonormal basis for \(\mathbb{R}^{n}\) and let \(\mathcal{F}(\mathbb{R}^{n},\mathbb{C})\) be a function space (e.g. the space of complex-valued polynomials). Then the _Dirac operator_ on \(\mathbb{R}^{n}\) is defined as \(\underline{\partial}_{x}=\sum_{j=1}^{n}e_{j}\partial_{x_{j}}\). The _multiplication operator_ is given by \(\underline{x}=\sum_{j=1}^{n}e_{j}x_{j}.\) A solution \(f\) of the Dirac operator \(\underline{\partial}_{x}\) is called a _monogenic_ function. The space of all monogenic polynomials of degree \(k\in\mathbb{N}\) is denoted
\[\mathcal{M}_{k}(\mathbb{R}^{n},\mathbb{S}):=\mathcal{P}_{k}(\mathbb{R}^{n}, \mathbb{S})\cap\ker\underline{\partial}_{x}.\]
We already mentioned that Clifford analysis is a _refinement_ of harmonic analysis. This refinement is also reflected in terms of the harmonic Howe duality \(\mathsf{SO}(n)\times\mathfrak{sl}(2)\) found in the previous section. The Dirac operator is a spin-invariant operator, so that the group part of the refined Howe duality is \(G=\mathsf{Spin}(n)\).
The 'hidden symmetry' is then determined by the algebra generated by the Dirac operator \(\underline{\partial}_{x}\) and its adjoint \(\underline{x}\). It is a well-known fact that this is no longer a Lie algebra, but a Lie superalgebra \(\mathfrak{osp}(1|2)\). This means that we obtain the _refined_ Howe duality \(\mathsf{Spin}(n)\times\mathfrak{osp}(1|2)\) which governs the decomposition of the space \(\mathcal{P}(\mathbb{R}^{n},\mathbb{C})\otimes\mathbb{S}\) of spinor-valued polynomials. However, we already know that the space of polynomials decomposes into the space of harmonics. This then leads to the following decomposition of the space of spinor-valued harmonics \(\mathcal{H}_{k}(\mathbb{R}^{n},\mathbb{S}):=\mathcal{H}_{k}(\mathbb{R}^{n},\mathbb{C})\otimes\mathbb{S}\):
**Theorem 2.1** (Monogenic refinement [2]).: _Let \(H_{k}(\underline{x})\in\mathcal{H}_{k}(\mathbb{R}^{n},\mathbb{S})\) be a \(k\)-homogeneous harmonic polynomial with spinor values. Then one has_
\[H_{k}(\underline{x})=M_{k}(\underline{x})+\underline{x}M_{k-1}(\underline{x}),\]
_where \(M_{j}(\underline{x})\in\mathcal{M}_{j}(\mathbb{R}^{n},\mathbb{S})\) (we ignore the parity of the spinors here)._
The definition of \(k\)-homogeneous monogenics can now be given an algebraic interpretation. The refinement theorem above can be formulated, purely in terms of \(\mathfrak{so}(n)\) weights, as the tensor product (for \(n\) even):
\[(k)_{\mathfrak{so}(n)}\otimes\left(\frac{1}{2},\ldots,\frac{1}{2},\pm\frac{1} {2}\right)_{\mathfrak{so}(n)}.\]
The tensor product rules for \(\mathfrak{so}(n)\) (see for example [19]) allow us to decompose this tensor product (up to isomorphism) as follows:
\[(k)_{\mathfrak{so}(n)}\otimes\left(\frac{1}{2},\ldots,\pm\frac{1}{2}\right)_{ \mathfrak{so}(n)}\cong\left(k+\frac{1}{2},\ldots,\pm\frac{1}{2}\right)_{ \mathfrak{so}(n)}\oplus\left(k-\frac{1}{2},\ldots,\mp\frac{1}{2}\right)_{ \mathfrak{so}(n)}.\]
In order to make the isomorphism from above into an equality, we need the so-called _embedding factors_. These are \(\mathfrak{so}(n)\)-invariant differential operators. The first component in the tensor product decomposition is trivially embedded using the identity operator, whereas the second component is embedded by the operator \(\underline{x}\). This now leads to the monogenic refinement of the Fischer decomposition as described in Theorem 2.1. In other words, \(\mathcal{M}_{k}(\mathbb{R}^{n},\mathbb{S})\) is exactly the Cartan product (for \(n\) even)
\[(k)_{\mathfrak{so}(n)}\boxtimes\left(\frac{1}{2},\ldots,\frac{1}{2},\pm\frac{ 1}{2}\right)_{\mathfrak{so}(n)}=\left(k+\frac{1}{2},\ldots,\frac{1}{2},\pm \frac{1}{2}\right)_{\mathfrak{so}(n)}=(k)^{\prime}_{\mathfrak{so}(n)}.\]
We now obtain the full Fischer decomposition of the space of spinor-valued polynomials:
**Theorem 2.2** (Complete monogenic FD).: _Under the joint action of \(\mathsf{Spin}(n)\times\mathfrak{osp}(1|2)\), the space of spinor valued polynomials decomposes as_
\[\mathcal{P}(\mathbb{R}^{n},\mathbb{S})=\bigoplus_{k=0}^{\infty}\bigoplus_{j=0 }^{k}\underline{x}^{j}\mathcal{M}_{k-j}(\mathbb{R}^{n},\mathbb{S}).\]
### Simplicial harmonics and monogenics
In the previous section we considered polynomials in the vector variable \(\underline{x}=(x_{1},\ldots,x_{n})\in\mathbb{R}^{n}\) and considered the multiplicity-free decomposition governed by the Howe duality \(\mathsf{SO}(n)\times\mathfrak{sl}(2)\). We will now work with \(N\) vector variables \((\underline{u}_{1},\ldots,\underline{u}_{N})\in\mathbb{R}^{n\times N}\). Note that this is also often called a _matrix variable_ (see Chapter 3, Section 4 in [20] for more details), we denote it by \(\underline{U}=(\underline{u}_{1},\ldots,\underline{u}_{N})\in\mathbb{R}^{n \times N}\). The Howe duality \(\mathsf{SO}(n)\times\mathfrak{sl}(2)\) which governs the multiplicity-free decomposition of the space of polynomials \(\mathcal{P}(\mathbb{R}^{n},\mathbb{C})\) is only the first example of the general Howe duality \(\mathsf{SO}(n)\times\mathfrak{sp}(2N)\) (see e.g. [21]) associated with the regular action
\[H[g][P](\underline{U}):=P(g^{-1}\underline{u}_{1},\ldots,g^{-1}\underline{u}_ {N})\]
of \(\mathsf{SO}(n)\) on the space of polynomials \(\mathcal{P}(\mathbb{R}^{n\times N},\mathbb{C})\) in a matrix variable.
Note that in the spinor-valued case, the action is naturally extended by the multiplicative action of spinor space. The space \(\mathcal{P}_{\lambda_{1},\ldots,\lambda_{N}}(\mathbb{R}^{n\times N},\mathbb{C})\) of polynomials which are \(\lambda_{j}\)-homogeneous in each variable \(\underline{u}_{j}\) also has a decomposition in terms of \(\mathsf{SO}(n)\)-irreducible representations under the regular action above. The irreducible modules appearing here, are in fact a generalisation of the spaces \(\mathcal{H}_{k}(\mathbb{R}^{n},\mathbb{C})\) from the previous section and are defined as follows (see [32]):
**Definition 2.3**.: Let \(N\) be an integer such that \(1\leq N\leq\lfloor\frac{n}{2}\rfloor\). A function \(f:\mathbb{R}^{n\times N}\to\mathbb{C},(\underline{u}_{1},\ldots,\underline{u }_{N})\mapsto f(\underline{u}_{1},\ldots,\underline{u}_{N})\) is called _simplicial harmonic_ if it satisfies the system of equations
\[\langle\underline{\partial}_{\underline{u}_{i}},\underline{ \partial}_{\underline{u}_{j}}\rangle f =0\quad\text{for all }1\leq i,j\leq N\] \[\langle\underline{u}_{i},\underline{\partial}_{\underline{u}_{j}} \rangle f =0\quad\text{for all }1\leq i<j\leq N.\]
The vector space of simplicial harmonic polynomials which are homogeneous of degree \(\lambda_{i}\) in the vector variable \(\underline{u}_{i}\) is denoted by \(\mathcal{H}_{\lambda_{1},\ldots,\lambda_{N}}(\mathbb{R}^{n\times N},\mathbb{C})\). Note that we must have \(\lambda_{1}\geq\cdots\geq\lambda_{N}\geq 0\) in order to satisfy the dominant weight condition.
**Definition 2.4**.: Let \(N\) be an integer such that \(1\leq N\leq\lfloor\frac{n}{2}\rfloor\). A spinor-valued function \(f:\mathbb{R}^{n\times N}\to\mathbb{S},(\underline{u}_{1},\ldots,\underline{u }_{N})\mapsto f(\underline{u}_{1},\ldots,\underline{u}_{N})\) is called _simplicial monogenic_ if
it satisfies the system of equations
\[\underline{\partial}_{u_{i}}f=0\quad\text{for all }1\leq i\leq N\] \[\langle\underline{u}_{i},\underline{\partial}_{u_{j}}\rangle f=0\quad\text{for all }1\leq i<j\leq N.\]
The vector space of simplicial monogenic polynomials which are homogeneous of degree \(\lambda_{i}\) in the vector variable \(\underline{u}_{i}\) is denoted by \(\mathcal{S}_{\lambda_{1},\dots,\lambda_{N}}(\mathbb{R}^{n\times N},\mathbb{S})\).
**Theorem 2.5** (Van Lancker, Sommen & Constales [32]).: _Let \(N\) be an integer such that \(1\leq N\leq\lfloor\frac{n}{2}\rfloor\) and suppose that the dominant weight condition is satisfied (for the Lie algebra \(\mathfrak{so}(n)\)). Then we have the following properties:_
* _The spaces_ \(\mathcal{H}_{\lambda_{1},\dots,\lambda_{N}}(\mathbb{R}^{n},\mathbb{C})\) _are irreducible of highest weight_ \((\lambda_{1},\dots,\lambda_{N})_{\mathfrak{so}(n)}\)_._
* _The spaces_ \(\mathcal{S}_{\lambda_{1},\dots,\lambda_{N}}(\mathbb{R}^{n},\mathbb{C})\) _are irreducible of highest weight_ \((\lambda_{1},\dots,\lambda_{N})^{\prime}_{\mathfrak{so}(n)}\)_._
In [3, 11] these models were proven to be useful to construct so-called higher spin Dirac operators. In the rest of this paper we will focus on the symplectic Lie algebra \(\mathfrak{sp}(2n)\) instead of \(\mathfrak{so}(n)\). This implies that the symmetric bilinear inner product on \(\mathbb{R}^{n}\) will be replaced by a skew-symmetric one on \(\mathbb{R}^{2n}\). More specifically, it has the following \((2n\times 2n)\)-block matrix representation
\[\Omega_{0}=\begin{pmatrix}0&I_{n}\\ -I_{n}&0\end{pmatrix}.\]
Later on, we will also use the notation
\[\langle\underline{v},\underline{w}\rangle_{s}:=\underline{v}^{T}\Omega_{0} \underline{w} \tag{2}\]
for all \(\underline{v},\underline{w}\in\mathbb{R}^{2n}\). As a consequence, the finite-dimensional Clifford algebra becomes an infinite-dimensional Weyl algebra and the Dirac operator is replaced by the symplectic Dirac operator. Due to the infinite-dimensionality of the symplectic spinors, this transition is rather intricate.
**Remark 2.6**.: We further mention that there exist generalisations of the Fischer decomposition from equation (1) and the refinement from Theorem 2.1 for matrix variables, see [26, 31].
## 3. The notion of symplectic simplicial harmonics
So far we discussed finite-dimensional irreducible representations of the Lie algebra \(\mathfrak{so}(n)\) and characterised these modules in terms of simplicial harmonics and simplicial monogenics (see Theorem 2.5). We will now do exactly the same thing for the symplectic Lie algebra \(\mathfrak{sp}(2n)\). We first focus on the finite-dimensional representations of \(\mathfrak{sp}(2n)\). Recall that the finite-dimensional irreducible representations of the symplectic algebra are uniquely characterised by their labels satisfying \(\lambda_{1}\geq\lambda_{2}\geq\dots\geq\lambda_{n}\geq 0\). In order to get some intuition with these finite-dimensional \(\mathfrak{sp}(2n)\)-irreducible representations, we start by calculating their dimension using the Weyl dimension formula (see [6] for more background).
**Proposition 3.1**.: _Let \(g_{i}:=n-i+1\) and \(m_{i}:=\lambda_{i}+g_{i}\) for \(1\leq i\leq n\) then_
\[\dim(\lambda_{1},\dots,\lambda_{n})_{\mathfrak{sp}(2n)}=\prod_{i}\left(\frac{ m_{i}}{g_{i}}\right)\prod_{i<j}\left(\frac{m_{i}-m_{j}}{g_{i}-g_{j}}\right) \prod_{i<j}\left(\frac{m_{i}+m_{j}}{g_{i}+g_{j}}\right).\]
Using this formula, one can check that the dimension of module \((k)_{\mathfrak{sp}(2n)}\) is equal to
\[\dim(k)_{\mathfrak{sp}(2n)}=\binom{k+2n-1}{2n-1}.\]
One might recognize this number as the dimension of the space of \(k\)-homogeneous polynomials \(\mathcal{P}_{k}(\mathbb{R}^{2n},\mathbb{C})\). This is no coincidence:
**Proposition 3.2**.: _The space of \(k\)-homogeneous polynomials \(\mathcal{P}_{k}(\mathbb{R}^{2n},\mathbb{C})\) is an irreducible \(\mathfrak{sp}(2n)\)-module of weight \((k)_{\mathfrak{sp}(2n)}\) and the corresponding highest weight vector is \(w_{k}(\underline{x})=x_{1}^{k}\)._
Proof.: The generators of Lie algebra \(\mathfrak{sp}(2n)\) can be realised on the space \(\mathcal{P}(\mathbb{R}^{2n},\mathbb{C})\) as the following differential operators:
\[\begin{cases}Y_{jj}=x_{j}\partial_{y_{j}}&j=1,\dots,n\qquad n\\ Z_{jj}=y_{j}\partial_{x_{j}}&j=1,\dots,n\qquad n\\ X_{jk}=x_{j}\partial_{x_{k}}-y_{k}\partial_{y_{j}}&j,k=1,\dots,n\qquad n^{2}\\ Y_{jk}=x_{j}\partial_{y_{k}}+x_{k}\partial_{y_{j}}&j<k=1,\dots,n\qquad\frac{n( n-1)}{2}\\ Z_{jk}=y_{j}\partial_{x_{k}}+y_{k}\partial_{x_{j}}&j<k=1,\dots,n\qquad\frac{n( n-1)}{2}\end{cases} \tag{3}\]
The positive roots are \(\Phi^{+}=\{X_{jk},Y_{jk},Y_{jj}\mid 1\leq j<k\leq n\}\) and the Cartan algebra is given by \(X_{jj}=x_{j}\partial_{x_{j}}-y_{j}\partial_{y_{j}}\) for \(1\leq j\leq n\). One easily checks that \(w_{k}=x_{1}^{k}\) is a solution of all the positive roots. Moreover, the action of the Cartan algebra \(\mathfrak{h}\) on \(w_{k}\) gives \(H_{1}x_{1}^{k}=kx_{1}^{k}\) and \(H_{2}x_{1}^{k}=\dots=H_{n}x_{1}^{k}=0\).
**Remark 3.3**.: At this point it is worthwhile to compare the orthogonal and symplectic framework. In the orthogonal setting, the space of \(k\)-homogeneous polynomials is _reducible_ as an \(\mathfrak{so}(n)\)-module and further decomposes into harmonic polynomials (the latter are irreducible modules). In other words, we have
\[(k)_{\mathfrak{so}(n)}\longleftrightarrow\mathcal{P}_{k}(\mathbb{R}^{n}, \mathbb{C})\cap\ker(\Delta).\]
In the symplectic framework, we have
\[(k)_{\mathfrak{sp}(2n)}\longleftrightarrow\mathcal{P}_{k}(\mathbb{R}^{2n}, \mathbb{C}).\]
In some sense, this is to be expected as the Laplace operator \(\Delta\) can be written as \(\langle\underline{\partial}_{x},\underline{\partial}_{x}\rangle\) whereas the natural operator in the symplectic setting would be \(\langle\underline{\partial}_{x},\underline{\partial}_{x}\rangle_{s}\) which is of course zero, so that no extra condition is to be imposed.
**Remark 3.4**.: Note that the remark above implies that there is no natural way to associate a second-order differential operator (the Laplacian) to the symplectic Dirac operator \(D_{s}\) on \(\mathbb{R}^{n}\). However, in [23] it was shown that if one allows a compatible complex structure \(\mathbb{J}\) on \(\mathbb{R}^{2n}\) one can define a second symplectic Dirac operator \(D_{t}\) (which is basically a rotation of the operators \(D_{s}\) using the complex structure) such that \([D_{s},D_{t}]=-i\Delta\). Recently, this led to a hermitian refinement of symplectic Clifford analysis in which the notion of complex harmonics was used to describe the solution space of the symplectic Dirac operators \(D_{s}\) and \(D_{t}\) using a Fischer decomposition, see [16].
### The case of \(N=2\)
We now focus on the case where \(N=2\) which allows us to take a look at the more general module \((\lambda_{1},\lambda_{2})_{\mathfrak{sp}(2n)}\) with \(\lambda_{1}\geq\lambda_{2}\geq 0\). We denote the two vector variables \((\underline{x},\underline{u})\in\mathbb{R}^{2n\times 2}\) by
\[\underline{x} =(x_{1},\dots,x_{n},y_{1},\dots,y_{n})\in\mathbb{R}^{2n}\] \[\underline{u} =(u_{1},\dots,u_{n},v_{1},\dots,v_{n})\in\mathbb{R}^{2n}.\]
At this point, one might expect that \((\lambda_{1},\lambda_{2})_{\mathfrak{sp}(2n)}\leftrightarrow\mathcal{P}_{ \lambda_{1},\lambda_{2}}(\mathbb{R}^{2n},\mathbb{C})\). This is however not the case as \(\mathcal{P}_{\lambda_{1},\lambda_{2}}(\mathbb{R}^{2n},\mathbb{C})\) is _not_ irreducible as \(\mathfrak{sp}(2n)\)-module, as illustrated in the following counterexample:
**Example 3.5**.: We start by calculating the dimension of the space \(\mathcal{P}_{\lambda_{1},\lambda_{2}}(\mathbb{R}^{2n},\mathbb{C})\):
\[\dim(\mathcal{P}_{\lambda_{1},\lambda_{2}}(\mathbb{R}^{2n},\mathbb{C}))={ \lambda_{1}+2n-1\choose 2n-1}{\lambda_{2}+2n-1\choose 2n-1}.\]
Let us now take \(n=4\) and consider by way of example the module \((2,1)_{\mathfrak{sp}(8)}\). The dimension of the polynomial space is given by
\[\dim(\mathcal{P}_{2,1}(\mathbb{R}^{8},\mathbb{C}))=\binom{9}{7}\binom{8}{7}=36\cdot 8=288.\]
However, one can check that \(\dim(2,1)_{\mathfrak{sp}(8)}=160\). This means that the suggested model \(\mathcal{P}_{\lambda_{1},\lambda_{2}}(\mathbb{R}^{2n},\mathbb{C})\) is too big and some kind of _reduction_ has to be done.
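As a computational aside (not part of the original text), the dimension counts above can be reproduced with a few lines of Python implementing the Weyl dimension formula of Proposition 3.1; the function name `dim_sp` is ours and merely illustrative.

```python
# Sketch (standard library only) of the Weyl dimension formula from
# Proposition 3.1, used to reproduce the numbers quoted in Example 3.5.
from fractions import Fraction
from math import comb

def dim_sp(lam, n):
    """Dimension of the irreducible sp(2n)-module with highest weight lam."""
    lam = list(lam) + [0]*(n - len(lam))
    g = [n - i for i in range(n)]                 # g_i = n - i + 1 (0-based)
    m = [lam[i] + g[i] for i in range(n)]
    d = Fraction(1)
    for i in range(n):
        d *= Fraction(m[i], g[i])
        for j in range(i + 1, n):
            d *= Fraction(m[i] - m[j], g[i] - g[j])
            d *= Fraction(m[i] + m[j], g[i] + g[j])
    return int(d)

assert dim_sp([2, 1], 4) == 160                               # dim (2,1)_{sp(8)}
assert comb(2 + 7, 7) * comb(1 + 7, 7) == 288                 # dim P_{2,1}(R^8)
assert all(dim_sp([k], n) == comb(k + 2*n - 1, 2*n - 1)       # dim (k)_{sp(2n)}
           for n in range(2, 5) for k in range(6))
print("dimension checks passed")
```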
The tool for this dimensional reduction is the Howe duality \(\mathsf{Sp}(2n)\times\mathfrak{so}(4)\).
**Proposition 3.6**.: _Let \(N=2\) and \((\underline{x},\underline{u})\in\mathbb{R}^{2n\times 2}\). Then the irreducible representation given by the weight \((\lambda_{1},\lambda_{2})_{\mathfrak{sp}(2n)}\) can be realised in terms of homogeneous polynomials as follows:_
\[\mathcal{P}_{\lambda_{1},\lambda_{2}}(\mathbb{R}^{2n\times 2},\mathbb{C}) \cap\ker(\langle\underline{x},\underline{\partial}_{u}\rangle,\langle \underline{\partial}_{x},\underline{\partial}_{u}\rangle_{s})\]
_where the operators \(\langle\underline{x},\underline{\partial}_{u}\rangle\) and \(\langle\underline{\partial}_{x},\underline{\partial}_{u}\rangle_{s}\) are \(\mathfrak{sp}(2n)\)-invariant operators and \(\langle\cdot,\cdot\rangle_{s}\) is the'symplectic inner product' (see equation 2)._
Proof.: We construct \(\mathfrak{sp}(2n)\)-invariant operators by contraction of the coordinates and corresponding derivatives by using \(\langle\cdot,\cdot\rangle\) or \(\langle\cdot,\cdot\rangle_{s}\). By a pairing argument, we expect \(\binom{4}{2}=6\) symplectic invariant operators. They are given by: \(\langle\underline{u},\underline{x}\rangle_{s}\), \(\langle\underline{u},\underline{\partial}_{x}\rangle\), \(\langle\underline{x},\underline{\partial}_{u}\rangle\), \(\langle\underline{\partial}_{x},\underline{\partial}_{u}\rangle_{s}\), \(\langle\underline{u},\underline{\partial}_{u}\rangle+n\) and \(\langle\underline{x},\underline{\partial}_{x}\rangle+n\). Note that for the mixed contractions we use the usual Euclidean inner product, because otherwise the resulting operator is not symplectic invariant. These six \(\mathfrak{sp}(2n)\)-invariant operators span a copy of the orthogonal algebra \(\mathfrak{so}(4)\) (see for example Wallach [21], Theorem 5.6.14). Recall that the Cartan algebra \(\mathfrak{h}\subseteq\mathfrak{so}(4)\) is given by
\[\mathfrak{h}=\begin{cases}H_{1}=\langle\underline{u},\underline{\partial}_{u} \rangle+n\\ H_{2}=\langle\underline{x},\underline{\partial}_{x}\rangle+n\end{cases}\]
We now claim that the highest weight vector is given by
\[w_{\lambda_{1},\lambda_{2}}(\underline{x},\underline{u})=x_{1}^{\lambda_{1}- \lambda_{2}}(x_{1}u_{2}-x_{2}u_{1})^{\lambda_{2}}.\]
Note that this highest weight vector is independent of \((\underline{v},\underline{y})\). To confirm this, we check the following:
1. The vector \(w_{\lambda_{1},\lambda_{2}}\) is annihilated by the positive roots of the \(\mathfrak{so}(4)\). We have \[x_{1}\partial_{u_{1}}\Big{(}x_{1}^{\lambda_{1}-\lambda_{2}}\left( u_{2}x_{1}-u_{1}x_{2}\right)^{\lambda_{2}}\Big{)} =-\lambda_{2}x_{2}x_{1}^{\lambda_{1}-\lambda_{2}+1}\left(u_{2}x_{ 1}-u_{1}x_{2}\right)^{\lambda_{2}-1}\] \[x_{2}\partial_{u_{2}}\Big{(}x_{1}^{\lambda_{1}-\lambda_{2}}\left( u_{2}x_{1}-u_{1}x_{2}\right)^{\lambda_{2}}\Big{)} =\lambda_{2}x_{2}x_{1}^{\lambda_{1}-\lambda_{2}+1}\left(u_{2}x_{1} -u_{1}x_{2}\right)^{\lambda_{2}-1}\] together with the fact that \(w_{\lambda_{1},\lambda_{2}}\) is independent of the variables \(u_{3},\ldots,u_{n}\) and \(v_{3},\ldots,v_{n}\) we can conclude that \(\langle\underline{x},\underline{\partial}_{u}\rangle w_{\lambda_{1},\lambda_{2 }}(\underline{x},\underline{u})=0\). One now similarly checks that \(\langle\underline{\partial}_{x},\underline{\partial}_{u}\rangle_{s}w_{ \lambda_{1},\lambda_{2}}(\underline{x},\underline{u})=0\).
2. Moreover, \(w_{\lambda_{1},\lambda_{2}}\) must be annihilated by the positive root vectors \(\Phi^{+}_{\mathfrak{sp}(2n)}\). A representation of the Lie algebra \(\mathfrak{sp}(2n)\) on the space \(\mathcal{P}(\mathbb{R}^{2n\times 2},\mathbb{C})\) is given by: \[\begin{cases}X_{jk}=x_{j}\partial_{x_{k}}-y_{k}\partial_{y_{j}}+u_{j}\partial_{u_{k}}-v_{k}\partial_{v_{j}}&j,k=1,\ldots,n\\ Y_{jk}=x_{j}\partial_{y_{k}}+x_{k}\partial_{y_{j}}+u_{j}\partial_{v_{k}}+u_{k}\partial_{v_{j}}&j<k=1,\ldots,n\\ Z_{jk}=y_{j}\partial_{x_{k}}+y_{k}\partial_{x_{j}}+v_{j}\partial_{u_{k}}+v_{k}\partial_{u_{j}}&j<k=1,\ldots,n\\ Y_{jj}=x_{j}\partial_{y_{j}}+u_{j}\partial_{v_{j}}&j=1,\ldots,n\\ Z_{jj}=y_{j}\partial_{x_{j}}+v_{j}\partial_{u_{j}}&j=1,\ldots,n\end{cases}\] Due to the fact that \(w_{\lambda_{1},\lambda_{2}}\) only depends on the variables \((x_{1},x_{2},u_{1},u_{2})\), we immediately see that \(w_{\lambda_{1},\lambda_{2}}\) is a solution of the differential operators \(Y_{jj}\) for
all \(1\leq j\leq n\). For the operators \(X_{jk}\) with \(1\leq j<k\leq n\) we have \[X_{12}w_{\lambda_{1},\lambda_{2}}(\underline{x},\underline{u})=(x_{1}\partial_{x_{2}}-y_{2}\partial_{y_{1}}+u_{1}\partial_{u_{2}}-v_{2}\partial_{v_{1}})w_{\lambda_{1},\lambda_{2}}(\underline{x},\underline{u})\] \[=(x_{1}\partial_{x_{2}}+u_{1}\partial_{u_{2}})w_{\lambda_{1},\lambda_{2}}(\underline{x},\underline{u})\] \[=\lambda_{2}x_{1}^{\lambda_{1}-\lambda_{2}}(x_{1}u_{2}-x_{2}u_{1})^{\lambda_{2}-1}(-u_{1}x_{1}+u_{1}x_{1})\] \[=0\] and completely similarly for the other operators \(X_{jk}\). Repeating the strategy from above, we also obtain \(Y_{jk}w_{\lambda_{1},\lambda_{2}}=0\) for all \(1\leq j<k\leq n\).
3. The action of the Cartan algebra \(\mathfrak{h}=\{X_{jj}:j=1,\dots,n\}\) of \(\mathfrak{sp}(2n)\) must give the eigenvalues \((\lambda_{1},\lambda_{2})_{\mathfrak{sp}(2n)}\) (recall that the redundant zeros are omitted here). For \(X_{11}\) we obtain \[x_{1}\left((\lambda_{1}-\lambda_{2})x_{1}^{\lambda_{1}-\lambda_{ 2}-1}(x_{1}u_{2}-x_{2}u_{1})^{\lambda_{2}}+x_{1}^{\lambda_{1}-\lambda_{2}} \lambda_{2}(x_{1}u_{2}-x_{2}u_{1})^{\lambda_{2}-1}u_{2}\right)\] \[+u_{1}\left(x_{1}^{\lambda_{1}-\lambda_{2}}\lambda_{2}(x_{1}u_{2}- x_{2}u_{1})^{\lambda_{2}-1}(-x_{2})\right)\] This can be reduced to \(\lambda_{1}w_{\lambda_{1},\lambda_{2}}(\underline{x},\underline{u})\) as wanted. By computing the action of \(X_{22}\) on \(w_{\lambda_{1},\lambda_{2}}(\underline{x},\underline{u})\) we obtain \[\lambda_{2}u_{2}x_{1}^{\lambda_{1}-\lambda_{2}+1}\left(u_{2}x_{1}-u_{1}x_{2} \right)^{\lambda_{2}-1}-\lambda_{2}u_{1}x_{2}x_{1}^{\lambda_{1}-\lambda_{2}} \left(u_{2}x_{1}-u_{1}x_{2}\right)^{\lambda_{2}-1}=\lambda_{2}w_{\lambda_{1}, \lambda_{2}}.\] Moreover, \(X_{jj}w_{\lambda_{1},\lambda_{2}}=0\) for \(j>2\).
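The computations in the proof above can also be checked by machine. The following sketch (ours, not part of the original proof) assumes sympy is available; the values \(n=3\), \(\lambda_{1}=3\), \(\lambda_{2}=2\) are arbitrary, and it verifies that \(w_{\lambda_{1},\lambda_{2}}\) is annihilated by \(\langle\underline{x},\underline{\partial}_{u}\rangle\), \(\langle\underline{\partial}_{x},\underline{\partial}_{u}\rangle_{s}\) and \(X_{12}\), and carries the Cartan eigenvalues \((\lambda_{1},\lambda_{2},0)\).

```python
# Sketch (assumes sympy): check the highest weight vector of Proposition 3.6.
import sympy as sp

n, l1, l2 = 3, 3, 2
x = sp.symbols(f'x1:{n+1}'); y = sp.symbols(f'y1:{n+1}')
u = sp.symbols(f'u1:{n+1}'); v = sp.symbols(f'v1:{n+1}')

w = x[0]**(l1 - l2) * (x[0]*u[1] - x[1]*u[0])**l2

xu   = sum(x[j]*sp.diff(w, u[j]) + y[j]*sp.diff(w, v[j]) for j in range(n))    # <x, d_u>
dxdu = sum(sp.diff(w, x[j], v[j]) - sp.diff(w, y[j], u[j]) for j in range(n))  # <d_x, d_u>_s
X12  = (x[0]*sp.diff(w, x[1]) - y[1]*sp.diff(w, y[0])
        + u[0]*sp.diff(w, u[1]) - v[1]*sp.diff(w, v[0]))                       # positive root X_12
assert sp.expand(xu) == 0 and sp.expand(dxdu) == 0 and sp.expand(X12) == 0

def Xjj(j, p):                                   # Cartan element X_jj
    return (x[j]*sp.diff(p, x[j]) - y[j]*sp.diff(p, y[j])
            + u[j]*sp.diff(p, u[j]) - v[j]*sp.diff(p, v[j]))

assert [sp.simplify(Xjj(j, w)/w) for j in range(n)] == [l1, l2, 0]
print("highest weight vector checks passed")
```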
We now come to the notion of simplicial harmonics in the symplectic framework for \(N=2\) (compare with Definition 2.3).
**Definition 3.7**.: For \(\lambda_{1}\geq\lambda_{2}\geq 0\), we define the space of _symplectic simplicial harmonics_ as
\[\mathcal{H}^{s}_{\lambda_{1},\lambda_{2}}(\mathbb{R}^{2n\times 2},\mathbb{C}):=\mathcal{P}_{\lambda_{1},\lambda_{2}}(\mathbb{R}^{2n\times 2},\mathbb{C})\cap\ker(\langle\underline{x},\underline{\partial}_{u}\rangle,\langle\underline{\partial}_{x},\underline{\partial}_{u}\rangle_{s}).\]
### The general case of \(N\) vector variables
In this subsection, we generalise Proposition 3.6 to the case of \(N\) vector variables. We start by determining the associated Howe dual partner:
**Lemma 3.8**.: _Let \(\underline{U}=(\underline{u}_{1},\dots,\underline{u}_{N})\in\mathbb{R}^{2n \times N}\). There are \(\binom{2N}{2}\) symplectic invariant operators which are obtained by contraction between_
1. _variables:_ \(\langle\underline{u}_{i},\underline{u}_{j}\rangle_{s}\) _with_ \(i,j\in\{1,\dots,N\}\) _and_ \(i\neq j\)_;_
2. _differential operators:_ \(\langle\underline{\partial}_{u_{i}},\underline{\partial}_{u_{j}}\rangle_{s}\) _with_ \(i,j\in\{1,\dots,N\}\) _and_ \(i\neq j\)_;_
3. _variables and differential operators:_ \(\langle\underline{u}_{i},\underline{\partial}_{u_{j}}\rangle\) _with_ \(i,j\in\{1,\dots,N\}\)_._
_These \(\mathfrak{sp}(2n)\)-invariant operators give rise to a copy of the Lie algebra \(\mathfrak{so}(2N)\)._
Proof.: This follows from straightforward computations of commutators. See for instance [21].
We now have the following generalisation of Definition 3.7:
**Definition 3.9**.: We define the space of _symplectic simplicial harmonics_ as
\[\mathcal{H}^{s}_{\lambda_{1},\dots,\lambda_{N}}(\mathbb{R}^{2n\times N},\mathbb{C}):=\mathcal{P}_{\lambda_{1},\dots,\lambda_{N}}(\mathbb{R}^{2n\times N},\mathbb{C})\cap\ker(\langle\underline{u}_{r},\underline{\partial}_{u_{s}}\rangle,\langle\underline{\partial}_{u_{p}},\underline{\partial}_{u_{q}}\rangle_{s})\]
for \(1\leq r<s\leq N\) and \(1\leq p\neq q\leq N\).
We now check that the space of symplectic simplicial harmonics provides a polynomial model for the irreducible representations of highest weight \((\lambda_{1},\dots,\lambda_{N})_{\mathfrak{sp}(2n)}\).
**Theorem 3.10** (Scalar polynomial model).: _The spaces \(\mathcal{H}^{s}_{\lambda_{1},\dots,\lambda_{N}}(\mathbb{R}^{2n\times N}, \mathbb{C})\) provide a model for the finite-dimensional representations of \(\mathfrak{sp}(2n)\) of highest weight \((\lambda_{1},\dots,\lambda_{N})_{\mathfrak{sp}(2n)}\)._
Proof.: The proof is completely analogous to the proof of Proposition 3.6. However, we first note that \(\mathfrak{sp}(2n)\) has the following representation (under the regular action) on the space of polynomials \(\mathcal{P}(\mathbb{R}^{2n\times N},\mathbb{C})\):
\[\begin{cases}Y_{ii}=\sum_{A=1}^{N}x_{A,i}\partial_{y_{A,i}}&i=1,\ldots,n\\ Z_{ii}=\sum_{A=1}^{N}y_{A,i}\partial_{x_{A,i}}&i=1,\ldots,n\\ X_{ij}=\sum_{A=1}^{N}x_{A,i}\partial_{x_{A,j}}-y_{A,j}\partial_{y_{A,i}}&i,j=1,\ldots,n\\ Y_{ij}=\sum_{A=1}^{N}x_{A,i}\partial_{y_{A,j}}+x_{A,j}\partial_{y_{A,i}}&i<j=1,\ldots,n\\ Z_{ij}=\sum_{A=1}^{N}y_{A,i}\partial_{x_{A,j}}+y_{A,j}\partial_{x_{A,i}}&i<j=1,\ldots,n\end{cases}\]
where \((x_{A,1},\ldots,x_{A,n},y_{A,1},\ldots,y_{A,n})\) denote the coordinates of the vector variable \(\underline{u}_{A}\).
We now define
\[\Xi_{N}:=\begin{pmatrix}x_{11}&x_{12}&\cdots&x_{1N}\\ x_{21}&x_{22}&\cdots&x_{2N}\\ \vdots&\vdots&\ddots&\vdots\\ x_{N1}&x_{N2}&\cdots&x_{NN}\end{pmatrix}\in\mathbb{R}^{N\times N}\]
The highest weight vector for \((\lambda_{1},\ldots,\lambda_{N})_{\mathfrak{sp}(2n)}\) is then given by
\[w_{\lambda_{1},\ldots,\lambda_{N}}:\,=\Xi_{1}^{\lambda_{1}-\lambda_{2}}\det( \Xi_{2})^{\lambda_{2}-\lambda_{3}}\cdots\det(\Xi_{N})^{\lambda_{N}}=\prod_{j =1}^{N}\det(\Xi_{j})^{\lambda_{j}-\lambda_{j+1}},\]
where \(\lambda_{j}=0\) for \(j>N\). Proceeding as in the proof of Proposition 3.6 now concludes the proof.
## 4. Symplectic simplicial monogenics
Now that we know that the spaces of symplectic simplicial harmonics \(\mathcal{H}^{s}_{\lambda_{1},\ldots,\lambda_{N}}(\mathbb{R}^{2n\times N}, \mathbb{C})\) form a polynomial model for finite-dimensional irreducible \(\mathfrak{sp}(2n)\) representations of highest weight \((\lambda_{1},\ldots,\lambda_{N})_{\mathfrak{sp}(2n)}\) we consider a class of infinite-dimensional \(\mathfrak{sp}(2n)\)-representations. This special class of representations is induced by the symplectic Dirac operator which we will now define. We start with the symplectic version of the Clifford algebra:
**Definition 4.1**.: Let \((V,\omega)\) be a symplectic vector space. The _symplectic Clifford algebra_\(\mathsf{Cl}_{s}(V,\omega)\) is defined as the quotient algebra of the tensor algebra \(T(V)\) of \(V\), by the two-sided ideal
\[\mathcal{I}(V,\omega):=\{v\otimes u-u\otimes v+\omega(v,u):u,v\in V\}.\]
In other words \(\mathsf{Cl}_{s}(V,\omega):=T(V)/\mathcal{I}(V,\omega)\) is the algebra generated by \(V\) in terms of the relation \([v,u]=-\omega(v,u)\), where we have omitted the tensor product symbols.
**Remark 4.2**.: Note that the symplectic Clifford algebra is often introduced as the Weyl algebra generated by the commuting variables \(z_{1},\ldots,z_{n}\) and their associated partial derivatives (\(2n\) generators in total), where e.g. \([\partial_{z_{i}},z_{j}]=\delta_{ij}\).
### The symplectic Dirac operator
Following Habermann [23] on the flat symplectic space \((\mathbb{R}^{2n},\omega_{0})\), we define the _symplectic Dirac operator_ by means of
\[D_{s}:\mathcal{P}(\mathbb{R}^{2n},\mathbb{C})\otimes\mathbb{S}^{\infty}\to \mathcal{P}(\mathbb{R}^{2n},\mathbb{C})\otimes\mathbb{S}^{\infty},\quad f \mapsto(\langle\underline{z},\underline{\partial}_{y}\rangle-\langle \underline{\partial}_{x},\underline{\partial}_{z}\rangle)f,\]
where \(\langle\cdot,\cdot\rangle\) is the usual Euclidean inner product on \(\mathbb{R}^{n}\). Note that \(\underline{z}\in\mathbb{R}^{n}\) plays a different role than \((\underline{x},\underline{y})\in\mathbb{R}^{2n}\) here (it is used to model the symplectic spinors as polynomials in \(\underline{z}\)). The adjoint (with respect to the symplectic Fischer inner product, see [9]) is given by \(X_{s}=\langle\underline{x},\underline{z}\rangle+\langle\underline{y},\underline {\partial}_{z}\rangle\).
**Lemma 4.3** ([14]).: _The symplectic Lie algebra \(\mathfrak{sp}(2n)\) has the following realisation on the space of symplectic spinor-valued polynomials \(\mathcal{P}(\mathbb{R}^{2n},\mathbb{C})\otimes\mathcal{P}(\mathbb{R}^{n}, \mathbb{C})\):_
\[\begin{cases}X_{jk}=x_{j}\partial_{x_{k}}-y_{k}\partial_{y_{j}}-(z_{k}\partial_ {z_{j}}+\frac{1}{2}\delta_{jk})&j,k=1,\ldots,n&n^{2}\\ Y_{jk}=x_{j}\partial_{y_{k}}+x_{k}\partial_{y_{j}}-\partial_{z_{j}}\partial_{z_{ k}}&j<k=1,\ldots,n&\frac{n(n-1)}{2}\\ Z_{jk}=y_{j}\partial_{x_{k}}+y_{k}\partial_{x_{j}}+z_{j}z_{k}&j<k=1,\ldots,n& \frac{n(n-1)}{2}\\ Y_{jj}=x_{j}\partial_{y_{j}}-\frac{1}{2}\partial_{z_{j}}^{2}&j=1,\ldots,n&n\\ Z_{jj}=y_{j}\partial_{x_{j}}+\frac{1}{2}z_{j}^{2}&j=1,\ldots,n&n\\ \end{cases} \tag{4}\]
_The Cartan algebra \(\mathfrak{h}\) is given by the operators \(X_{jj}\) for \(1\leq j\leq n\) and the positive root vectors are \(X_{jk},Y_{jk}\) and \(Y_{jj}\) for \(1\leq j<k\leq n\)._
**Corollary 4.4**.: _The symplectic Dirac operator \(D_{s}\) is a \(\mathfrak{sp}(2n)\)-invariant operator with respect to the realisation from Lemma 4.3._
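As an illustrative aside (not part of the original text), the invariance stated in Corollary 4.4 can be checked directly on polynomials with sympy; the value \(n=2\) and the test polynomial below are arbitrary choices of ours.

```python
# Sketch (assumes sympy): check that D_s = <z, d_y> - <d_x, d_z> commutes with
# every generator of the realisation (4) of sp(2n) for n = 2.
import sympy as sp

n = 2
x = sp.symbols(f'x1:{n+1}'); y = sp.symbols(f'y1:{n+1}'); z = sp.symbols(f'z1:{n+1}')

def Ds(p):   # symplectic Dirac operator
    return sum(z[j]*sp.diff(p, y[j]) - sp.diff(p, x[j], z[j]) for j in range(n))

def generators():
    """The operators of the realisation (4), as functions acting on polynomials."""
    gens = []
    for j in range(n):
        gens.append(lambda p, j=j: x[j]*sp.diff(p, y[j]) - sp.Rational(1, 2)*sp.diff(p, z[j], 2))   # Y_jj
        gens.append(lambda p, j=j: y[j]*sp.diff(p, x[j]) + sp.Rational(1, 2)*z[j]**2*p)             # Z_jj
        for k in range(n):
            gens.append(lambda p, j=j, k=k: x[j]*sp.diff(p, x[k]) - y[k]*sp.diff(p, y[j])
                        - z[k]*sp.diff(p, z[j]) - sp.Rational(int(j == k), 2)*p)                    # X_jk
        for k in range(j + 1, n):
            gens.append(lambda p, j=j, k=k: x[j]*sp.diff(p, y[k]) + x[k]*sp.diff(p, y[j])
                        - sp.diff(p, z[j], z[k]))                                                   # Y_jk
            gens.append(lambda p, j=j, k=k: y[j]*sp.diff(p, x[k]) + y[k]*sp.diff(p, x[j])
                        + z[j]*z[k]*p)                                                              # Z_jk
    return gens

p = x[0]**2*y[1]*z[0] + x[1]*y[0]*z[1]**3 - 2*x[0]*x[1]*z[0]*z[1]    # arbitrary test polynomial
assert all(sp.expand(Ds(A(p)) - A(Ds(p))) == 0 for A in generators())
print("[D_s, g] = 0 on the test polynomial for every generator g")
```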
We define the space of \(k\)-homogeneous symplectic monogenics as
\[\mathcal{M}_{k}^{s}:=\ker_{k}(D_{s})=\ker(D_{s})\cap\left(\mathcal{P}_{k}( \mathbb{R}^{2n},\mathbb{C})\otimes\mathbb{S}^{\infty}\right).\]
We will see in the next subsection that these spaces correspond to the direct sum of the highest weights:
\[\left(k-\frac{1}{2},\ldots,-\frac{1}{2}\right)_{\mathfrak{sp}(2n)}\oplus \left(k-\frac{1}{2},\ldots,-\frac{3}{2}\right)_{\mathfrak{sp}(2n)}.\]
Our goal will be to associate with the infinite-dimensional representation
\[\left(\lambda_{1}-\frac{1}{2},\ldots,\lambda_{N}-\frac{1}{2}\right)_{ \mathfrak{sp}(2n)}\oplus\left(\lambda_{1}-\frac{1}{2},\ldots,\lambda_{N}- \frac{3}{2}\right)_{\mathfrak{sp}(2n)}\]
a certain space of \((\lambda_{1},\ldots,\lambda_{N})\)-homogeneous polynomials intersected with some \(\mathfrak{sp}(2n)\)-invariant differential operators (associated with the symplectic Dirac operator). In order to obtain a characterisation of this space in terms of a Cartan product of two \(\mathfrak{sp}(2n)\)-irreducible representations, we need some more background. This is provided in the following subsection.
### Completely pointed modules
We will now introduce the algebraic background for computing tensor products of finite-dimensional \(\mathfrak{sp}(2n)\)-representations with the infinite-dimensional \(\mathfrak{sp}(2n)\)-module \(\mathbb{S}^{\infty}\). We first recall some basic terminology considering the root system of \(\mathfrak{sp}(2n)\). We denote by \(\epsilon_{i}=(0,\cdots,1,\cdots,0)\) the standard basis of \(\mathbb{R}^{n}\). The roots of the symplectic algebra \(\mathfrak{sp}(2n)\) fall into two categories: \(2n\) so-called _long roots_\(\pm 2\epsilon_{i}\) and \(2(n^{2}-n)\)_short roots_\(\pm\epsilon_{i}\pm\epsilon_{j}\), where \(1\leq i\leq n\) and \(i\neq j\). This means that the full set of roots \(\Phi\) equals
\[\Phi=\{\pm(\epsilon_{i}\pm\epsilon_{j}):1\leq i<j\leq n\}\cup\{\pm 2\epsilon_{i} :1\leq i\leq n\}.\]
The simple roots are chosen to be
\[\Delta=\{\alpha_{i}:=\epsilon_{i}-\epsilon_{i+1}:1\leq i\leq n-1\}\cup\{\alpha_{n}:=2\epsilon_{n}\}.\]
The fundamental weights are now given by \(\omega_{j}=\epsilon_{1}+\cdots+\epsilon_{j}.\) The two bases of the dual of the Cartan algebra \(\mathfrak{h}^{*}\) relate as follows
\[\omega_{1}=\epsilon_{1},\quad\omega_{2}=\epsilon_{1}+\epsilon_{2},\quad\ldots \quad\omega_{n}=\epsilon_{1}+\epsilon_{2}+\cdots+\epsilon_{n}.\]
We denote by \(\mathbb{V}_{\lambda}\) the irreducible highest weight module with highest weight \(\lambda\). Note that we will sometimes denote this by \(\mathbb{V}(\lambda)\) for some specific value of \(\lambda\). We will now introduce a specific type of module of which the irreducible components of \(\mathbb{S}^{\infty}\) will turn out to be examples.
**Definition 4.5**.: Let \(\mathfrak{g}\) be a semisimple Lie algebra with Cartan algebra \(\mathfrak{h}\) and let \(\mathbb{V}\) be \(\mathfrak{g}\)-module.
1. We say that \(\mathbb{V}\) is a module with _bounded multiplicities_ if there exists a \(k\in\mathbb{N}_{0}\) such that for every decomposition of the form \(\mathbb{V}=\bigoplus_{\lambda}\mathbb{V}_{\lambda}\), we have that \(\dim\mathbb{V}_{\lambda}\leq k\). The minimal \(k\) for which this upper bound holds is called the _degree of \(\mathbb{V}\)_ and it is denoted by \(\deg(\mathbb{V})=k\).
2. The module \(\mathbb{V}\) is then called _completely pointed_ if \(\deg(\mathbb{V})=1\).
The Lie algebras \(\mathfrak{g}\) admitting infinite-dimensional modules with bounded multiplicities are rather limited: they exist if and only if \(\mathfrak{g}=\mathfrak{sl}(n)\) or \(\mathfrak{g}=\mathfrak{sp}(2n)\) (see for example [4]). The following theorem restricts the possibilities for the completely pointed modules in the case of the symplectic algebra.
**Theorem 4.6** (Britten, Hooper & Lemire [4]).: _Let \(\mathbb{V}_{\lambda}\) be a highest weight module of the symplectic Lie algebra \(\mathfrak{sp}(2n)\). Then it is completely pointed if and only if \(\lambda=0,\omega_{1},-\frac{1}{2}\omega_{n}\) or \(\omega_{n-1}-\frac{3}{2}\omega_{n}\) (or \(\omega_{n}\) if \(n=2\) or \(3\))._
1. _If_ \(\lambda=0\) _and_ \(\lambda=\omega_{1}\) _the module is finite-dimensional, more specifically:_ \(\mathbb{V}_{0}=\mathbb{C}v\) _is the trivial representation and_ \(\mathbb{V}_{\omega_{1}}\cong\mathbb{C}^{2n}\) _is the fundamental representation. Moreover, in the special case_ \(n=2,3\) _the module_ \(\mathbb{V}_{\omega_{n}}\) _is also finite-dimensional._
2. \(\mathbb{V}_{\lambda}\) _for_ \(\lambda=-\frac{1}{2}\omega_{n}\) _and_ \(\lambda=\omega_{n-1}-\frac{3}{2}\omega_{n}\) _are infinite-dimensional_ \(\mathfrak{sp}(2n)\)_-modules. Their highest weight are given by_ \[\left(-\frac{1}{2},\ldots,-\frac{1}{2}\right)_{\mathfrak{sp}(2n)}\quad\text{ and }\quad\left(-\frac{1}{2},\ldots,-\frac{3}{2}\right)_{\mathfrak{ sp}(2n)}.\]
These completely pointed modules have the following interpretation:
**Theorem 4.7**.: _The infinite-dimensional symplectic spinor space \(\mathbb{S}^{\infty}\) is a reducible \(\mathfrak{sp}(2n)\)-module; its irreducible components exactly coincide with the two completely pointed weight modules from above. The corresponding highest weight vector is given by \(1\oplus z_{n}\) (when considering the realisation from Lemma 4.3)._
Proof.: Recall from Lemma 4.3 that the \(n\) Cartan elements (we only focus on the spinor part here) of the Lie algebra \(\mathfrak{sp}(2n)\) are given by
\[H_{j}:=X_{jj}=-\left(z_{j}\partial_{z_{j}}+\frac{1}{2}\right)\qquad(1\leq j \leq n).\]
We now directly verify that the completely pointed module \(\mathbb{V}\left(-\frac{1}{2}\omega_{n}\right)\) has highest weight vector \(1\) and \(\mathbb{V}\left(\omega_{n-1}-\frac{3}{2}\omega_{n}\right)\) has highest weight vector \(z_{n}\). This implies that the highest weight vector of \(\mathbb{S}^{\infty}\) is \(w=1\oplus z_{n}\).
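A small sympy sanity check of this verification (ours, with the arbitrary choice \(n=3\)): the vectors \(1\) and \(z_{n}\) are annihilated by the spinor parts of the positive root vectors of Lemma 4.3 and carry the weights \(\left(-\frac{1}{2},\ldots,-\frac{1}{2}\right)\) and \(\left(-\frac{1}{2},\ldots,-\frac{3}{2}\right)\).

```python
# Sketch (assumes sympy): weights of the vectors 1 and z_n in the spinor
# realisation of Lemma 4.3 (only the z-dependent parts of the operators act).
import sympy as sp

n = 3
z = sp.symbols(f'z1:{n+1}')

def H(j, p):                                     # Cartan element -(z_j d_{z_j} + 1/2)
    return -(z[j]*sp.diff(p, z[j]) + sp.Rational(1, 2)*p)

def positive_roots(p):                           # spinor parts of Y_jj, X_jk (j<k), Y_jk
    for j in range(n):
        yield -sp.Rational(1, 2)*sp.diff(p, z[j], 2)            # Y_jj
        for k in range(j + 1, n):
            yield -z[k]*sp.diff(p, z[j])                        # X_jk
            yield -sp.diff(p, z[j], z[k])                       # Y_jk

half = sp.Rational(1, 2)
for vec, weight in [(sp.Integer(1), [-half]*n),
                    (z[n-1],        [-half]*(n-1) + [-3*half])]:
    assert all(sp.expand(r) == 0 for r in positive_roots(vec))
    assert [sp.simplify(H(j, vec)/vec) for j in range(n)] == weight
print("weights of 1 and z_n verified")
```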
The following theorem will allow us to compute the tensor product of finite-dimensional symplectic modules with the infinite-dimensional symplectic modules \(\mathbb{S}^{\infty}_{\pm}\).
**Theorem 4.8** (Britten & Lemire [5]).: _Let \(\mathbb{V}(\lambda)\) be a finite-dimensional \(\mathfrak{sp}(2n)\)-irreducible representation of highest weight \((\lambda_{1},\ldots,\lambda_{n})_{\mathfrak{sp}(2n)}\). Then, the tensor products with the infinite-dimensional completely pointed modules decompose as follows:_
\[\mathbb{V}\left(-\frac{1}{2}\omega_{n}\right)\otimes\mathbb{V}(\lambda) \cong\bigoplus_{\kappa\in T_{\lambda}}\mathbb{V}\left(-\frac{1}{2} \omega_{n}+\kappa\right)\]
_where \(\kappa\) is of the following form_
\[T_{\lambda}=\left\{\kappa\mid\kappa=\lambda-\sum_{i=1}^{n}d_{i}\epsilon_{i}\right\}\]
_subject to the properties:_
1. \(d_{i}\in\mathbb{N}\),
2. \(\sum_{i=1}^{n}d_{i}\in 2\mathbb{N}\),
3. \(0\leq d_{i}\leq\nu_{i}\) _for_ \(i=1,\ldots,n-1\)_,_
4. \(0\leq d_{n}\leq 2\nu_{n}+1\)_._
### The symplectic monogenic Fischer decomposition
Since we know from Proposition 3.2 that \((k)_{\mathfrak{sp}(2n)}\leftrightarrow\mathcal{P}_{k}(\mathbb{R}^{2n},\mathbb{C})\), we now obtain the following result, which is in fact the symplectic counterpart of Theorem 2.2, proven in [10].
**Theorem 4.9** (Symplectic monogenic FD, [10]).: _Under the joint action of \(\mathsf{Mp}(2n)\times\mathfrak{sl}(2)\) the space of polynomials with values in the space of symplectic spinors decomposes as follows:_
\[\mathcal{P}(\mathbb{R}^{2n},\mathbb{C})\otimes\mathbb{S}^{\infty}=\bigoplus_{k= 0}^{\infty}\bigoplus_{j=0}^{\infty}X_{s}^{j}\mathcal{M}_{k}^{s}, \tag{5}\]
_where \(\mathcal{M}_{k}^{s}=\ker(D_{s})\cap(\mathcal{P}_{k}(\mathbb{R}^{2n},\mathbb{ C})\otimes\mathcal{P}(\mathbb{R}^{n},\mathbb{C}))\)._
**Remark 4.10**.: The module \(\mathcal{M}_{k}^{s}\) is \(\mathfrak{sp}(2n)\)-reducible and decomposes into two irreducible parts (an even and an odd part), \(\mathcal{M}_{k}^{s}=(\mathcal{M}_{k}^{s})^{+}\oplus(\mathcal{M}_{k}^{s})^{-}\), which are irreducible \(\mathfrak{sp}(2n)\)-modules of highest weight
\[(\mathcal{M}_{k}^{s})^{+} \leftrightarrow\left(k-\frac{1}{2},-\frac{1}{2},\ldots,-\frac{1 }{2}\right)_{\mathfrak{sp}(2n)}\] \[(\mathcal{M}_{k}^{s})^{-} \leftrightarrow\left(k-\frac{1}{2},-\frac{1}{2},\ldots,-\frac{3 }{2}\right)_{\mathfrak{sp}(2n)}.\]
As in the orthogonal case, we now see that the space of \(k\)-homogeneous symplectic monogenics can be defined in a purely algebraic way:
**Definition 4.11**.: The \(k\)-homogeneous polynomial solutions of the symplectic Dirac operator \(D_{s}\) are called _symplectic monogenics_; the space they form is denoted by \(\mathcal{M}_{k}^{s}\). It is algebraically determined by
\[\mathcal{M}_{k}^{s}:=(k)_{\mathfrak{sp}(2n)}\boxtimes\left(\left(-\frac{1}{2},-\frac{1}{2},\ldots,-\frac{1}{2}\right)_{\mathfrak{sp}(2n)}\oplus\left(- \frac{1}{2},-\frac{1}{2},\ldots,-\frac{3}{2}\right)_{\mathfrak{sp}(2n)}\right)\]
**Corollary 4.12**.: _The \(\mathfrak{sp}(2n)\)-module \(\mathcal{M}_{k}^{s}\) has highest weight vector \(x_{1}^{k}\otimes(1\oplus z_{n})\)._
Proof.: This follows from Proposition 3.2 and Theorem 4.7.
### The case of \(N\) vector variables
We will now be interested in systems of \(N\) symplectic Dirac operators. We will begin by studying the case of \(N=2\). Recall from Section 3 that the _finite-dimensional_ representations of highest weight \((\lambda_{1},\lambda_{2})_{\mathfrak{sp}(2n)}\) are irreducible and can be explicitly realised as the space of symplectic simplicial harmonics. We now want to gain knowledge on the _infinite-dimensional_ irreducible representations of highest weight
\[(\lambda_{1},\lambda_{2})^{\prime}_{\mathfrak{sp}(2n)}:=\left(\lambda_{1}- \frac{1}{2},\lambda_{2}-\frac{1}{2},\ldots,-\frac{1}{2}\right)_{\mathfrak{sp} (2n)}\oplus\left(\lambda_{1}-\frac{1}{2},\lambda_{2}-\frac{1}{2},\ldots,- \frac{3}{2}\right)_{\mathfrak{sp}(2n)}.\]
As one might intuitively predict, the space \((\lambda_{1},\lambda_{2})^{\prime}_{\mathfrak{sp}(2n)}\) will be defined by means of the Cartan product \((\lambda_{1},\lambda_{2})_{\mathfrak{sp}(2n)}\boxtimes\mathbb{S}^{\infty}\), but the question obviously arises as to how to define these spaces as kernel spaces of differential operators. We start by proving that the intuitive guess is indeed correct.
**Lemma 4.13**.: _The Cartan product of \((\lambda_{1},\lambda_{2})_{\mathfrak{sp}(2n)}\boxtimes\mathbb{S}^{\infty}\) is given by the direct sum of:_
\[\mathbb{V}\left(-\frac{1}{2}\omega_{n}+\lambda_{1}\epsilon_{1}+ \lambda_{2}\epsilon_{2}\right) =\left(\lambda_{1}-\frac{1}{2},\lambda_{2}-\frac{1}{2},-\frac{1}{ 2},\ldots,-\frac{1}{2}\right)_{\mathfrak{sp}(2n)}\] \[\mathbb{V}\left(\omega_{n-1}-\frac{3}{2}\omega_{n}+\lambda_{1} \epsilon_{1}+\lambda_{2}\epsilon_{2}\right) =\left(\lambda_{1}-\frac{1}{2},\lambda_{2}-\frac{1}{2},-\frac{1}{ 2},\ldots,-\frac{3}{2}\right)_{\mathfrak{sp}(2n)}\]
Proof.: In order to check this, we need to calculate (a part of) the tensor product decomposition of \((\lambda_{1},\lambda_{2})_{\mathfrak{sp}(2n)}\otimes\mathbb{S}^{\infty}\). This can be done using Theorem 4.8 where \(\mathbb{V}(\lambda)=(\lambda_{1},\lambda_{2})_{\mathfrak{sp}(2n)}\). We use the Young convention, so in other words we write \(\lambda=\lambda_{1}\epsilon_{1}+\lambda_{2}\epsilon_{2}\). Then \(\kappa\) from Theorem 4.8 is given by
\[\kappa=(\lambda_{1}-d_{1})\epsilon_{1}+(\lambda_{2}-d_{2})\epsilon_{2}-d_{n} \epsilon_{n}\]
Taking \(d_{1}=d_{2}=d_{n}=0\) (as this will give the highest weight) leaves us with
\[\kappa=\lambda_{1}\epsilon_{1}+\lambda_{2}\epsilon_{2}.\]
In other words, \(\mathbb{V}\left(-\frac{1}{2}\omega_{n}+\kappa\right)\oplus\mathbb{V}\left(\omega_{n-1}-\frac{3}{2}\omega_{n}+\kappa\right)\) is the Cartan product.
We will now characterise the Cartan product \((\lambda_{1},\lambda_{2})_{\mathfrak{sp}(2n)}\boxtimes\mathbb{S}^{\infty}\) as the polynomial kernel space of certain differential operators. This will be done by _extending_ the Howe dual partner \(\mathfrak{so}(2N)\) from Section 3.2. We already know from the symplectic Fischer decomposition that this dual partner is \(\mathfrak{sl}(2)\) for \(N=1\). As we are working with two vector variables here, we have two symplectic Dirac operators (one for each vector variable) and two adjoint operators which we define as follows:
\[D_{s,x}=\langle\underline{z},\underline{\partial}_{y}\rangle-\langle\underline{\partial}_{z},\underline{\partial}_{x}\rangle\qquad\text{and}\qquad X_{s,x}=\langle\underline{y},\underline{\partial}_{z}\rangle+\langle\underline{x},\underline{z}\rangle\] \[D_{s,u}=\langle\underline{z},\underline{\partial}_{v}\rangle-\langle\underline{\partial}_{z},\underline{\partial}_{u}\rangle\qquad\text{and}\qquad X_{s,u}=\langle\underline{v},\underline{\partial}_{z}\rangle+\langle\underline{u},\underline{z}\rangle\]
Recall that \(\langle\cdot,\cdot\rangle\) is the usual Euclidean inner product here. We now turn to the question whether these operators close as a Lie algebra.
**Lemma 4.14**.: _The algebra generated by the Dirac operators \(D_{s,x}\) and \(D_{s,u}\) and their adjoints \(X_{s,x}\) and \(X_{s,u}\) gives rise to a copy of the Lie algebra \(\mathfrak{so}(5)\)._
Proof.: This follows from calculating the commutators between these operators; we have, for instance:
\[[X_{s,u},D_{s,u}]=\mathbb{E}_{u}+\mathbb{E}_{v}+n\qquad[X_{s,x},D_{s,x}]=\mathbb{E}_{x}+\mathbb{E}_{y}+n\qquad[X_{s,u},D_{s,x}]=\langle\underline{u},\underline{\partial}_{x}\rangle\] \[[X_{s,x},D_{s,u}]=\langle\underline{x},\underline{\partial}_{u}\rangle\qquad[D_{s,u},D_{s,x}]=\langle\underline{\partial}_{x},\underline{\partial}_{u}\rangle_{s}\qquad[X_{s,u},X_{s,x}]=\langle\underline{x},\underline{u}\rangle_{s}\]
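Two of these commutators can be double-checked with sympy; the following sketch is ours (not part of the original proof), and the choice \(n=2\) together with the test polynomial is arbitrary.

```python
# Sketch (assumes sympy): verify [X_{s,u}, D_{s,u}] = E_u + E_v + n and
# [D_{s,u}, D_{s,x}] = <d_x, d_u>_s on a test polynomial for n = 2.
import sympy as sp

n = 2
x = sp.symbols(f'x1:{n+1}'); y = sp.symbols(f'y1:{n+1}')
u = sp.symbols(f'u1:{n+1}'); v = sp.symbols(f'v1:{n+1}')
z = sp.symbols(f'z1:{n+1}')

def Dsx(p): return sum(z[j]*sp.diff(p, y[j]) - sp.diff(p, z[j], x[j]) for j in range(n))
def Dsu(p): return sum(z[j]*sp.diff(p, v[j]) - sp.diff(p, z[j], u[j]) for j in range(n))
def Xsu(p): return sum(v[j]*sp.diff(p, z[j]) + u[j]*z[j]*p for j in range(n))

def Euler(vars_, p): return sum(t*sp.diff(p, t) for t in vars_)

p = x[0]*u[1]*z[0]**2 + y[1]*v[0]*z[1] - 3*u[0]*v[1]*x[1]*z[0]*z[1]   # arbitrary test polynomial

lhs1 = Xsu(Dsu(p)) - Dsu(Xsu(p))
rhs1 = Euler(u, p) + Euler(v, p) + n*p                                 # E_u + E_v + n
assert sp.expand(lhs1 - rhs1) == 0

lhs2 = Dsu(Dsx(p)) - Dsx(Dsu(p))
rhs2 = sum(sp.diff(p, x[j], v[j]) - sp.diff(p, y[j], u[j]) for j in range(n))   # <d_x, d_u>_s
assert sp.expand(lhs2 - rhs2) == 0
print("so(5) commutators verified on the test polynomial")
```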
This now leads to the following generalisation of symplectic monogenics:
**Definition 4.15**.: We define the space of _symplectic simplicial monogenics_ for \(N=2\) as follows:
\[\mathcal{S}^{s}_{\lambda_{1},\lambda_{2}}(\mathbb{R}^{4n},\mathbb{S}^{\infty}):=\mathcal{P}_{\lambda_{1},\lambda_{2}}(\mathbb{R}^{4n},\mathbb{S}^{\infty})\cap\ker(D_{s,x},D_{s,u},\langle\underline{x},\underline{\partial}_{u}\rangle,\langle\underline{\partial}_{x},\underline{\partial}_{u}\rangle_{s}).\]
We obtain the following characterisation of the Cartan product \((\lambda_{1},\lambda_{2})_{\mathfrak{sp}(2n)}\boxtimes\mathbb{S}^{\infty}\):
**Theorem 4.16**.: _The space of symplectic simplicial monogenics \(\mathcal{S}^{s}_{\lambda_{1},\lambda_{2}}(\mathbb{R}^{4n},\mathbb{S}^{\infty})\) is the direct sum of two irreducible \(\mathfrak{sp}(2n)\)-module of highest weight_
\[\left(\lambda_{1}-\frac{1}{2},\lambda_{2}-\frac{1}{2},-\frac{1}{2},\ldots,- \frac{1}{2}\right)_{\mathfrak{sp}(2n)}\oplus\left(\lambda_{1}-\frac{1}{2}, \lambda_{2}-\frac{1}{2},-\frac{1}{2},\ldots,-\frac{3}{2}\right)_{\mathfrak{sp}(2 n)}\]
_with highest weight vector \(x_{1}^{\lambda_{1}-\lambda_{2}}(x_{1}u_{2}-x_{2}u_{1})^{\lambda_{2}}\otimes(1 \oplus z_{n})\)._
Proof.: The highest weight vector is obtained as the tensor product of the scalar highest weight vector from Theorem 3.10 and the spinor highest weight vector \(1\oplus z_{n}\) from Theorem 4.7, i.e. \(w_{\lambda_{1},\lambda_{2}}(\underline{x},\underline{u})\otimes(1\oplus z_{n})\). The rest is a straightforward computation.
**Remark 4.17**.: In order to obtain the full decomposition of \((\lambda_{1},\lambda_{2})_{\mathfrak{sp}(2n)}\otimes\mathbb{S}^{\infty}\) (this will lead to a new Fischer decomposition, which has to be interpreted as the \(N=2\) generalisation of 4.9) we need to take into account the other values for the parameters \(d_{1},d_{2},d_{n}\in\mathbb{N}\) (see Lemma 4.13). One must pay some attention here: in the \(N=1\) case, the space \(\mathcal{P}_{k}(\mathbb{R}^{2n},\mathbb{C})\) is \(\mathfrak{sp}(2n)\)-irreducible and the operator \(X_{s}\) (and its powers) served as the symplectic embedding factors making the isomorphism into an equality. In the case of two vector variables, the space \(\mathcal{P}_{\lambda_{1},\lambda_{2}}(\mathbb{R}^{4n},\mathbb{C})\) is _not_ irreducible (see Example 3.5). By the general theory of Howe dualities, this space further decomposes into the symplectic simplicial harmonics \(\mathcal{H}^{s}_{\lambda_{1},\lambda_{2}}(\mathbb{R}^{4n},\mathbb{C})\). Thus, when \(N>1\) the 'analogy' with the classical (orthogonal case) is restored in a sense: the space \(\mathcal{P}_{\lambda_{1},\ldots,\lambda_{N}}(\mathbb{R}^{2nN},\mathbb{C})\) has a'symplectic harmonic' Fischer decomposition with _symplectic harmonic Howe dual pair_\(\mathfrak{sp}(2n)\times\mathfrak{so}(2N)\). One could then look for a'monogenic refinement' which is then governed by the Howe dual pair \(\mathfrak{sp}(2n)\times\mathfrak{so}(2N+1)\). For example, in the case of \(N=2\) the latter would result in computing the tensor product \(\mathcal{H}^{s}_{\lambda_{1},\lambda_{2}}(\mathbb{R}^{4n},\mathbb{C})\otimes \mathbb{S}^{\infty}\) where we already know that \(\mathcal{H}^{s}_{\lambda_{1},\lambda_{2}}(\mathbb{R}^{4n},\mathbb{C})\boxtimes \mathbb{S}^{\infty}=\mathcal{S}^{s}_{\lambda_{1},\lambda_{2}}(\mathbb{R}^{4n },\mathbb{C})\). This would yield the'symplectic translations' of the results from [31, 26]. We plan to do this in a subsequent paper.
In the case of \(N\) vector variables we obtain:
**Lemma 4.18**.: _The Cartan product of \((\lambda_{1},\lambda_{2},\ldots,\lambda_{N})_{\mathfrak{sp}(2n)}\boxtimes \mathbb{S}^{\infty}\) is given by the direct sum of:_
\[\mathbb{V}\left(-\frac{1}{2}\omega_{n}+\sum_{j=1}^{N}\lambda_{j}\epsilon_{j}\right)=\left(\lambda_{1}-\frac{1}{2},\lambda_{2}-\frac{1}{2},\ldots,\lambda_{N}-\frac{1}{2}\right)_{\mathfrak{sp}(2n)}\] \[\mathbb{V}\left(\omega_{n-1}-\frac{3}{2}\omega_{n}+\sum_{j=1}^{N}\lambda_{j}\epsilon_{j}\right)=\left(\lambda_{1}-\frac{1}{2},\lambda_{2}-\frac{1}{2},\ldots,\lambda_{N}-\frac{3}{2}\right)_{\mathfrak{sp}(2n)}\]
Proof.: This is again a consequence of Theorem 4.8 where one now takes \(\mathbb{V}(\lambda)=(\lambda_{1},\lambda_{2},\ldots,\lambda_{N})_{\mathfrak{ sp}(2n)}\) and \(\kappa=\sum_{j=1}^{N}\lambda_{j}\epsilon_{j}\).
**Theorem 4.19**.: _Consider a system of \(N\) symplectic Dirac operators and their (Fischer) adjoints. These operators give rise to a copy of the orthogonal Lie algebra \(\mathfrak{so}(2N+1)\), which has dimension \(N(2N+1)\)._
Proof.: We use the short-hand notation \(D_{a}=D_{s,u_{a}}\) and \(X_{a}=X_{s,u_{a}}\) for \(1\leq a\leq N\). Then, the following commutation relations hold:
\[[[D_{a},X_{b}],X_{c}]=-2\delta_{ac}X_{b}\qquad[[X_{a},D_{b}],X_{c}]=2\delta_{bc}X_{a}\] \[[[D_{a},D_{b}],X_{c}]=2\delta_{bc}D_{a}-2\delta_{ac}D_{b}\qquad[[X_{a},X_{b}],D_{c}]=2\delta_{bc}X_{a}-2\delta_{ac}X_{b}\] \[[[D_{a},D_{b}],D_{c}]=[[X_{a},X_{b}],X_{c}]=0\]
for all \(a,b,c\in\{1,\ldots,N\}\).
**Remark 4.20**.: The theorem above allows us to link the system of \(N\) symplectic Dirac operators and their adjoints to the theory of so-called parastatistic algebras from physics in the following way. The parafermion algebra was introduced by Green [22]
by means of a system of \(N\) parafermion creation and annihilation operators \(f_{j}^{\pm}\) for \(j=1,\ldots,N\) satisfying
\[[[f_{j}^{\xi},f_{k}^{\eta}],f_{\ell}^{\epsilon}]=|\epsilon-\eta|\delta_{k\ell}f_{j}^{\xi}-|\epsilon-\xi|\delta_{j\ell}f_{k}^{\eta},\]
where \(j,k,\ell\in\{1,\ldots,N\}\) and \(\xi,\epsilon,\eta\in\{+,-\}\). It is well-established that this algebra is in fact isomorphic to the orthogonal Lie algebra \(\mathfrak{so}(2N+1)\) (see for instance [28]). In other words, the operators \(D_{a}\) and \(X_{a}\) for \(a\in\{1,\ldots,N\}\) can be interpreted as parafermion creation and annihilation operators.
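As a sample check (our own illustration, not part of the original argument), identify \(f_{a}^{-}\leftrightarrow D_{a}\) and \(f_{a}^{+}\leftrightarrow X_{a}\), read the labels \(\pm\) as \(\pm 1\), and take \(\xi=-\), \(\eta=+\), \(\epsilon=+\) with \(j=a\), \(k=b\), \(\ell=c\) in the parafermion relation. This reproduces the first commutation relation used in the proof of Theorem 4.19:
\[[[D_{a},X_{b}],X_{c}]=|1-1|\,\delta_{bc}D_{a}-|1-(-1)|\,\delta_{ac}X_{b}=-2\delta_{ac}X_{b}.\]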
The Cartan product from Lemma 4.18 now gets the following interpretation:
**Definition 4.21**.: We define the space of _symplectic simplicial monogenics_ for \(N\) vector variables as follows:
\[\mathcal{S}^{s}_{\lambda_{1},\ldots,\lambda_{N}}(\mathbb{R}^{2n\times N},\mathbb{S}^{\infty}):=\mathcal{P}_{\lambda_{1},\ldots,\lambda_{N}}(\mathbb{R}^{2n\times N},\mathbb{S}^{\infty})\cap\ker(D_{s,u_{a}},\langle\underline{u}_{r},\underline{\partial}_{u_{s}}\rangle,\langle\underline{\partial}_{u_{p}},\underline{\partial}_{u_{q}}\rangle_{s}),\]
where \(1\leq a\leq N\), \(1\leq r<s\leq N\) and \(1\leq p\neq q\leq N\).
## 5. The symplectic Rarita-Schwinger operator
### Transvector algebras
Let \(\mathfrak{g}\) be a Lie algebra and let \(\mathfrak{s}\subseteq\mathfrak{g}\) be a reductive subalgebra. We then have the decomposition \(\mathfrak{g}=\mathfrak{s}\oplus\mathfrak{t}\), where \(\mathfrak{t}\) carries an \(\mathfrak{s}\)-action for the commutator (i.e. \([\mathfrak{s},\mathfrak{t}]\subseteq\mathfrak{t}\)). Fix a Cartan subalgebra \(\mathfrak{h}\) for \(\mathfrak{s}\) and a triangular decomposition \(\mathfrak{s}=\mathfrak{s}^{-}\oplus\mathfrak{h}\oplus\mathfrak{s}^{+}\), where \(\mathfrak{s}^{\pm}\) consists of the positive (resp. negative) roots \(e_{\alpha}\) (resp. \(e_{-\alpha}\)) with \(\alpha\in\Phi^{+}(\mathfrak{s})\). Define a left ideal \(J\subseteq\mathcal{U}(\mathfrak{g})\) in the universal enveloping algebra \(\mathcal{U}(\mathfrak{g})\) by means of \(\mathcal{U}(\mathfrak{g})\mathfrak{s}^{+}\) and further define the _normaliser_ as
\[\operatorname{Norm}(J):=\{u\in\mathcal{U}(\mathfrak{g})\mid Ju\subseteq J\}.\]
We now have that \(J\) is a two-sided ideal of \(\operatorname{Norm}(J)\), so that one can form the quotient algebra \(\mathcal{S}(\mathfrak{g},\mathfrak{s})=\operatorname{Norm}(J)/J\), which is known as the _Mickelsson algebra_.
Let us now consider an extension of \(\mathcal{U}(\mathfrak{g})\) to a suitable localisation \(\mathcal{U}^{\prime}(\mathfrak{g})\) given by
\[\mathcal{U}^{\prime}(\mathfrak{g})=\mathcal{U}(\mathfrak{g})\otimes_{ \mathcal{U}(\mathfrak{h})}\operatorname{Frac}(\mathcal{U}(\mathfrak{h}))\,\]
where \(\operatorname{Frac}(\mathcal{U}(\mathfrak{h}))\) is the field of fractions in the (universal enveloping algebra of the) Cartan algebra. The ideal \(J^{\prime}\) can be introduced for this extension too (in a completely similar way) and the quotient algebra \(\mathcal{Z}(\mathfrak{g},\mathfrak{s}):=\operatorname{Norm}(J^{\prime})/J^{\prime}\) is the _Mickelsson-Zhelobenko algebra_. These two algebras are naturally identified, since one has
\[\mathcal{Z}(\mathfrak{g},\mathfrak{s})=\mathcal{S}(\mathfrak{g},\mathfrak{s} )\otimes_{\mathcal{U}(\mathfrak{h})}\operatorname{Frac}(\mathcal{U}( \mathfrak{h})).\]
We refer to this algebra as a 'transvector algebra'.
**Definition 5.1**.: The _extremal projector_ for the Lie algebra \(\mathfrak{sl}(2)=\operatorname{Alg}\{X,Y,H\}\) is the idempotent operator \(\pi\) (meaning that \(\pi^{2}=\pi\)) given by the expression
\[\pi:=1+\sum_{j=1}^{\infty}\frac{(-1)^{j}}{j!}\frac{\Gamma(H+2)}{\Gamma(H+2+j)} Y^{j}X^{j}, \tag{6}\]
where
\[\frac{\Gamma(H+2)}{\Gamma(H+2+j)}=\frac{(H+1)!}{(H+j+1)!}\]
is defined using the gamma function \(\Gamma\). Note that we will sometimes write \(\pi_{\mathfrak{sl}(2)}\) in order to refer to the underlying Lie algebra \(\mathfrak{sl}(2)\).
**Remark 5.2**.: This operator satisfies \(X\pi=\pi Y=0\) and can therefore be used as a natural object to project onto the kernel \(\ker(X)\). The operator is defined on an extension \(\mathcal{U}^{\prime}(\mathfrak{sl}(2))\) of the universal enveloping algebra, so that formal series containing the operator \(H\) in the denominator are well-defined.
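For concreteness, writing out the first two terms of formula (6) (our own expansion, obtained by evaluating the gamma quotients for \(j=1,2\)) gives
\[\pi=1-\frac{1}{H+2}\,YX+\frac{1}{2}\,\frac{1}{(H+2)(H+3)}\,Y^{2}X^{2}-\cdots,\]
which already makes visible why denominators in \(H\) occur and hence why the extension \(\mathcal{U}^{\prime}(\mathfrak{sl}(2))\) is needed.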
### Higher spin Dirac operators
The Rarita-Schwinger operator was first introduced in theoretical physics by W. Rarita and J. Schwinger [27] in 1941 to describe fermionic particles of spin-\(\frac{3}{2}\), as a generalisation of the Dirac operator which describes particles of spin-\(\frac{1}{2}\). As a matter of fact, this operator is an example of a so-called _higher spin Dirac operator_ (HSD, [10]) given by
\[\mathcal{Q}_{\lambda}:\mathcal{C}^{\infty}(\mathbb{R}^{n},\mathbb{V}_{\lambda })\to\mathcal{C}^{\infty}(\mathbb{R}^{n},\mathbb{V}_{\lambda}),\]
where \(\mathbb{V}_{\lambda}\) is an irreducible \(\mathsf{Spin}(n)\)-representation (we assume that \(n\) is odd here to avoid the parity of the spinors \(\mathbb{S}\) and write \(m:=\lfloor\frac{n}{2}\rfloor\) for notational convenience) with highest weight
\[\lambda=\left(\lambda_{1}+\frac{1}{2},\ldots,\lambda_{m-1}+\frac{1}{2},\frac{ 1}{2}\right)_{\mathfrak{so}(n)}.\]
These are often called _higher spin representations_. Recall from Theorem 2.5 that these higher spin representations can be explicitly realised as the vector space of \(\mathbb{S}\)-valued polynomials in several vector variables intersected with the kernel of some differential operators (the simplicial monogenics).
* We note that the Dirac operator \[\underline{\partial}_{x}:\mathcal{C}^{\infty}(\mathbb{R}^{n},\mathbb{S})\to \mathcal{C}^{\infty}(\mathbb{R}^{n},\mathbb{S})\] is the easiest example of the operator \(\mathcal{Q}_{\lambda}\) where we take \(\lambda_{1}=\cdots=\lambda_{m-1}=0\), so that we have \(\mathbb{V}_{\lambda}=\mathbb{S}\).
* The next type of operator is obtained when taking the highest weight \(\lambda=\left(k+\frac{1}{2},\ldots,\frac{1}{2}\right)\), such that \(\mathbb{V}_{\lambda}=\mathcal{M}_{k}(\mathbb{R}^{n},\mathbb{S})\) (see Section 2.2). The corresponding higher spin Dirac operator is called the _Rarita-Schwinger operator_ \[\mathcal{R}_{k}:\mathcal{C}^{\infty}(\mathbb{R}^{n},\mathcal{M}_{k}(\mathbb{R}^{n},\mathbb{S}))\to\mathcal{C}^{\infty}(\mathbb{R}^{n},\mathcal{M}_{k}(\mathbb{R}^{n},\mathbb{S})).\] Note that the classical Rarita-Schwinger operator from theoretical physics corresponds to the operator \(\mathcal{R}_{1}\).
The monogenic refinement of the FD (see Theorem 2.1) in the vector variable \(\underline{u}\in\mathbb{R}^{n}\) can be used to obtain an explicit expression for the first-order differential operator \(\mathcal{R}_{k}\). The decomposition
\[H_{k}(\underline{u})=M_{k}(\underline{u})+\underline{u}M_{k-1}(\underline{u})\]
gives rise to the diagram shown in Figure 1. Note that there are other arrows that one could study in this scheme (giving rise to e.g. the twistor operator) [10]. However, we will not consider these here.
**Remark 5.3**.: Every function \(f\in\mathcal{C}^{\infty}(\mathbb{R}^{n},\mathcal{M}_{k})\) with values in \(\mathcal{M}_{k}\) can be thought of as a function depending on two vector variables \(f(\underline{x};\underline{u})\) such that for fixed \(\underline{x}\in\mathbb{R}^{n}\) the function \(f(\underline{x};\underline{u})=f_{\underline{x}}(\underline{u})\in\mathcal{M}_{k}(\mathbb{R}^{n},\mathbb{S})\), i.e. \(f(\underline{x};\underline{u})\in\ker\underline{\partial}_{\underline{u}}\). One often calls the variable \(\underline{u}\) a _dummy variable_. This illustrates the importance of the polynomial models for \(\mathsf{Spin}(n)\) (see Theorem 2.5).
Figure 1. The Rarita-Schwinger operator arising from the Fischer decomposition.
We summarise these results as follows:
**Theorem 5.4** (Bures et al., [7]).: _Let \(f(\underline{x};\underline{u})\in\mathcal{C}^{\infty}(\mathbb{R}^{n},\mathcal{M}_{k})\). The Rarita-Schwinger operator is the unique \(\mathsf{Spin}(n)\)-invariant first-order differential operator_
\[\mathcal{R}_{k}:\mathcal{C}^{\infty}(\mathbb{R}^{n},\mathcal{M}_{k}(\mathbb{R}^{n},\mathbb{S}))\to\mathcal{C}^{\infty}(\mathbb{R}^{n},\mathcal{M}_{k}(\mathbb{R}^{n},\mathbb{S}))\] \[f(\underline{x};\underline{u})\mapsto\left(1+\frac{\underline{u}\,\underline{\partial}_{u}}{n+2k-2}\right)\underline{\partial}_{x}f.\]
**Remark 5.5**.: Note that the more general higher spin Dirac operators can be defined using a transvector algebra (see the work of Eelbode & Raeymaekers [17]). We will follow a similar approach in the symplectic setting instead of the more ad hoc approach in Figure 1.
### Higher metaplectic Dirac operators
Inspired by the definition of higher spin Dirac operators \(\mathcal{Q}_{\lambda}\), we now investigate operators
\[\mathcal{Q}_{k}^{s}:\mathcal{C}^{\infty}(\mathbb{R}^{2n},\mathbb{V}_{\lambda })\to\mathcal{C}^{\infty}(\mathbb{R}^{2n},\mathbb{V}_{\lambda}),\]
with values in the (infinite-dimensional) \(\mathsf{Mp}(2n)\)-representations \(\mathbb{V}_{\lambda}\) where
\[\lambda=\left(\lambda_{1}-\frac{1}{2},\ldots,\lambda_{n}-\frac{1}{2}\right)_{ \mathfrak{sp}(2n)}\oplus\left(\lambda_{1}-\frac{1}{2},\ldots,\lambda_{n}- \frac{3}{2}\right)_{\mathfrak{sp}(2n)}.\]
We refer to the operator \(\mathcal{Q}_{k}^{s}\) as a _higher metaplectic Dirac operator_ (or HMD in short). To our knowledge, these operators have not yet been studied in the literature.
**Example 5.6**.: By taking \(\lambda_{1}=0=\cdots=\lambda_{n}=0\) we have \(\mathbb{V}_{\lambda}=\mathbb{S}^{\infty}\) and we recover the symplectic Dirac operator \(D_{s}:\mathcal{C}^{\infty}(\mathbb{R}^{2n},\mathbb{S}^{\infty})\to\mathcal{C}^ {\infty}(\mathbb{R}^{2n},\mathbb{S}^{\infty})\).
We now take \(\lambda_{1}=k\) and \(\lambda_{2}=\cdots=\lambda_{n}=0\). Then the corresponding highest weight is given by
\[\lambda=\left(k-\frac{1}{2},\ldots,-\frac{1}{2}\right)_{\mathfrak{sp}(2n)} \oplus\left(k-\frac{1}{2},\ldots,-\frac{3}{2}\right)_{\mathfrak{sp}(2n)}.\]
In this case, we know that the space of \(k\)-homogeneous symplectic monogenics \(\mathcal{M}_{k}^{s}(\mathbb{R}^{2n},\mathbb{S})\) is a model for \(\mathbb{V}_{\lambda}\). We consider the elements of \(\mathcal{C}^{\infty}(\mathbb{R}^{2n},\mathcal{M}_{k}^{s})\) as functions in two vector variables \(f(\underline{x};\underline{u})\) which will allow us to use the results obtained in Section 4.4, in which we defined the symplectic Dirac operators \(D_{s,x}\) and \(D_{s,u}\). Recall from Lemma 4.14 that the Lie algebra generated by the symplectic Dirac operators \(D_{s,x}\) and \(D_{s,u}\) and their adjoints \(X_{s,x}\) and \(X_{s,u}\) is isomorphic to \(\mathfrak{so}(5)\). Straightforward computation of commutators leads to:
**Lemma 5.7**.: _The Lie algebra \(\mathfrak{sl}(2)=\mathrm{Alg}(D_{s,u},X_{s,u})\) is reductive in_
\[\mathfrak{so}(5)=\mathrm{Alg}(D_{s,x},D_{s,u},X_{s,x},X_{s,u}).\]
_This means that we can write \(\mathfrak{so}(5)=\mathfrak{sl}(2)\oplus\mathfrak{t}\) with \(\mathfrak{t}\) a subspace satisfying \([\mathfrak{sl}(2),\mathfrak{t}]\subseteq\mathfrak{t}\)._
This means that we can construct the transvector algebra \(\mathcal{Z}(\mathfrak{so}(5),\mathfrak{sl}(2))\). Taking into account the suitable rescaling the Lie algebra \(\mathfrak{sl}(2)\cong\mathrm{Alg}(X,Y,H)\) can be realised as follows (note that this has no effect on the lemma above or the related commutators):
\[X=\sqrt{2}D_{s,u},\quad Y=\sqrt{2}X_{s,u},\quad H=-(\mathbb{E}_{u}+\mathbb{E}_ {v}+n)\]
Recall that the extremal projector \(\pi_{\mathfrak{sl}(2)}\) of the Lie algebra \(\mathfrak{sl}(2)\) is formally given by the expression
\[\pi_{\mathfrak{sl}(2)}=1+\sum_{j=1}^{+\infty}\frac{(-1)^{j}}{j!}\frac{\Gamma(H+2)}{\Gamma(H+2+j)}Y^{j}X^{j}.\]
**Lemma 5.8**.: _The transvector projection of the operator \(D_{s,x}\) is given by_
\[\pi_{\mathfrak{sl}(2)}D_{s,x}=\left(1-\frac{YX}{(H+2)}\right)D_{s,x}.\]
Proof.: We easily find that \([D_{s,x},D_{s,u}]=-\langle\underline{\partial}_{x},\underline{\partial}_{u}\rangle_{s}\) and \([\langle\underline{\partial}_{x},\underline{\partial}_{u}\rangle_{s},D_{s,x}]=0\), so that the infinite sum reduces to the finite sum from the lemma.
We now come to the following definition of the symplectic Rarita-Schwinger operator:
**Definition 5.9**.: We define the _symplectic Rarita-Schwinger_ operator \(\mathcal{R}^{s}_{k}\) as
\[\mathcal{R}^{s}_{k}:\mathcal{C}^{\infty}(\mathbb{R}^{2n},\mathcal{ M}^{s}_{k}) \to\mathcal{C}^{\infty}(\mathbb{R}^{2n},\mathcal{M}^{s}_{k})\] \[f(\underline{x},\underline{u}) \mapsto\left(1+\frac{2X_{s,u}D_{s,u}}{k+n+2}\right)D_{s,x}f( \underline{x},\underline{u}).\]
**Remark 5.10**.: Note that the name'symplectic Rarita-Schwinger operator' already appears in the literature; see for example the work of Krysl [25] and the references therein. However, their symplectic Rarita-Schwinger operator is defined on symplectic spinor-valued _forms_ whereas our operator acts on symplectic spinor-valued _polynomials_.
## 6. Conclusion and further research
In this paper we introduced the notion of symplectic simplicial harmonics and monogenics and related them to the irreducible representations of the symplectic Lie algebra. These results could also be useful for constructing solutions for the symplectic Dirac operator. There are two useful mechanisms in classical Clifford analysis to construct monogenic functions: first of all, there is the Fueter theorem, and secondly the Cauchy-Kowalewskaya extension (CK-extension in short). In the symplectic framework, the Fueter theorem was proven in [13]. These new polynomial models might be useful to get a grasp on the _symplectic CK-extension_ (which has not been proven yet). Let us focus on the orthogonal case to illustrate what we mean. The association between the irreducible representation \(\mathbb{V}_{\lambda}\) and the polynomial model as a kernel space of certain differential operators corresponds with the first dotted horizontal line in Figure 2.
The second step, visualised as the left downwards arrow, is to study the behaviour of the \(\mathfrak{so}(n)\)-irreducible representation \(\mathbb{V}_{\lambda}\) as a representation for the subalgebra \(\mathfrak{so}(n-1)\). The representation \(\mathbb{V}_{\lambda}\) will of course no longer be irreducible, but will decompose into lower-dimensional irreducible representations (which may occur multiple times) for the Lie algebra \(\mathfrak{so}(n-1)\). These lower-dimensional summands of irreducible representations will also admit polynomial models (this gives rise to the horizontal dotted arrow at the bottom). This means that we are now left with two polynomial models. The problem
Figure 2. Branching, polynomials models and the CK-theorem.
of how these two models interact is then solved by the CK-extension. In other words: the CK-extension can be viewed as an _inverse branching_ procedure.
Moreover, one could develop the function theory for the symplectic Rarita-Schwinger operator (and for the more general HMD) like it was done in the orthogonal case (see e.g. [29]). Last of all, we wish to mention that the connection between Dirac-type operators and parastatistic algebras has recently emerged in [1]. In this paper, the authors consider a system of \(N\) Grassmann Dirac operators and their duals. Recall that the classical Dirac operator \(\underline{\partial}_{x}=\sum_{j=1}^{n}e_{j}\partial_{x_{j}}\) is a contraction between Clifford algebra generators and partial derivatives. When one replaces the derivatives \(\partial_{x_{j}}\) by Grassmann derivatives (partial derivatives with respect to an anti-commuting or Grassmann variable) \(\partial_{\theta_{j}}\) the resulting Dirac-type operator \(\underline{\partial}_{\theta}=\sum_{j=1}^{n}e_{j}\partial_{\theta_{j}}\) is called the Grassmann Dirac operator and acts on spinor-valued _forms_. The striking difference with the usual Dirac operator \(\underline{\partial}_{x}\) is that \(\underline{\partial}_{\theta}\) and its dual give rise to a copy of the Lie algebra \(\mathfrak{sl}(2)\) instead of the Lie super algebra \(\mathfrak{osp}(1|2)\), we refer to [30] for the details. When considering \(N\) Grassmann Dirac operators and their duals, the authors of [1] show that the algebra closes as \(\mathfrak{so}(2N+1)\). Note that we found the same algebra in Theorem 4.19 but in terms of symplectic Dirac operators acting on symplectic spinor-valued _polynomials_. This yields a nice application of supersymmetry, in which both the orthogonal and symplectic framework can be combined. This is planned in future work.
|
2309.15140 | A Review on AI Algorithms for Energy Management in E-Mobility Services | E-mobility, or electric mobility, has emerged as a pivotal solution to
address pressing environmental and sustainability concerns in the
transportation sector. The depletion of fossil fuels, escalating greenhouse gas
emissions, and the imperative to combat climate change underscore the
significance of transitioning to electric vehicles (EVs). This paper seeks to
explore the potential of artificial intelligence (AI) in addressing various
challenges related to effective energy management in e-mobility systems (EMS).
These challenges encompass critical factors such as range anxiety, charge rate
optimization, and the longevity of energy storage in EVs. By analyzing existing
literature, we delve into the role that AI can play in tackling these
challenges and enabling efficient energy management in EMS. Our objectives are
twofold: to provide an overview of the current state-of-the-art in this
research domain and propose effective avenues for future investigations.
Through this analysis, we aim to contribute to the advancement of sustainable
and efficient e-mobility solutions, shaping a greener and more sustainable
future for transportation. | Sen Yan, Maqsood Hussain Shah, Ji Li, Noel O'Connor, Mingming Liu | 2023-09-26T16:34:35Z | http://arxiv.org/abs/2309.15140v1 | # A Review on AI Algorithms for Energy Management in E-Mobility Services
###### Abstract
E-mobility, or electric mobility, has emerged as a pivotal solution to address pressing environmental and sustainability concerns in the transportation sector. The depletion of fossil fuels, escalating greenhouse gas emissions, and the imperative to combat climate change underscore the significance of transitioning to electric vehicles (EVs). This paper seeks to explore the potential of artificial intelligence (AI) in addressing various challenges related to effective energy management in e-mobility systems (EMS). These challenges encompass critical factors such as range anxiety, charge rate optimization, and the longevity of energy storage in EVs. By analyzing existing literature, we delve into the role that AI can play in tackling these challenges and enabling efficient energy management in EMS. Our objectives are twofold: to provide an overview of the current state-of-the-art in this research domain and propose effective avenues for future investigations. Through this analysis, we aim to contribute to the advancement of sustainable and efficient e-mobility solutions, shaping a greener and more sustainable future for transportation.
Electric Mobility, Energy Management, Energy Consumption Estimation, Artificial Intelligent, Machine Learning
## I Introduction
Electric Mobility Service (EMS) refers to the use of electric-powered vehicles, including E-bikes, E-scooters, Hybrid Electric Vehicles (HEVs), and Plug-in Hybrid Electric Vehicles (PHEVs), for transportation needs. EMS has rapidly transformed the transportation landscape, offering sustainable alternatives to traditional combustion engine vehicles. These Electric Vehicles (EVs) not only address environmental concerns but also contribute to the development of an interconnected transportation ecosystem, advancing intelligent transportation systems (ITS). By embracing EMS, we promote a future where interconnected vehicles, advanced data analytics, and smart infrastructure combine to create a safer, more efficient, and sustainable transportation network. Energy management is crucial in EMS to ensure the efficient operation of electric vehicles and their charging infrastructure. It involves controlling and optimizing energy flow to meet specific requirements. Three key concerns in EMS energy management include ensuring a reliable range (often referred to as range anxiety), optimizing charging rates, and maximizing energy storage lifespan. Achieving this requires coordinating electrical energy resources like charging stations, renewable energy sources, and energy storage systems to facilitate electric vehicle charging.
Effective energy management is crucial for multiple reasons. One important aspect is ensuring the availability of charging infrastructure to meet the rising demand for electric vehicle charging [1]. As the number of EVs continues to grow, the charging load on the power grid can become substantial. Therefore, meticulous management is essential to prevent overloading the grid and potential blackouts. Another key benefit of energy management is optimizing the utilization of energy resources, minimizing wastage, and maximizing the efficiency of the charging process. This not only helps reduce operational costs but also enhances the overall sustainability of EMS systems. Additionally, energy management facilitates grid integration and empowers EVs to contribute to the grid by providing ancillary services or participating in vehicle-to-grid systems, thus strengthening the grid's stability and responsiveness.
Artificial Intelligence (AI) technologies offer a transformative solution to the limitations of traditional energy management techniques in EMS [2]. Conventional methods, which are primarily based on predetermined charging schedules and basic load balancing algorithms, struggle to meet the dynamic optimization requirements and growing complexity of modern EMS [3]. In contrast, AI leverages advanced algorithms and real-time data analysis to optimize charging strategies intelligently. By adapting to changing conditions, utilizing predictive modeling, and employing multi-objective optimization, AI enables more efficient and effective energy management in EMS, addressing the demand for optimal charging solutions.
The potential of AI in transforming energy management for EMS lies in its computational techniques, including Machine Learning (ML) and Deep Learning (DL). AI algorithms and data-driven approaches enable intelligent systems to adapt to varying conditions, optimize charging operations, predict user behavior, and manage energy resources in real time. AI facilitates dynamic load balancing, efficient energy allocation, and demand-response strategies, resulting in improved charging infrastructure utilization, reduced energy costs, and enhanced grid integration. This paper comprehensively reviews AI technologies and techniques for energy management in EMS, covering energy consumption modeling, estimation, and prediction. It also discusses current challenges and proposes a research roadmap for future advancements. By assessing the state of AI-based energy management, this paper contributes to the development of effective and sustainable EMS solutions.
The paper is organized as follows. Section II presents the methodology used in our paper, proposes the research questions we plan to investigate and summarizes and compares other existing surveys. Section III provides an overview of conventional energy management systems, discussing their advantages and limitations. Section IV focuses on AI approaches for energy management, delving into the current state of affairs in this domain. Section V provides some discussion and introduces challenges to AI-based energy management methods. Finally, Section VI offers a brief conclusion summarizing the entire paper and presents future research directions.
## II Review Methodology
The literature survey process was executed meticulously in five distinct phases, namely planning, literature research, reporting and interpretation of findings, and the synthesis of challenges and potential research directions for the future. This section provides a comprehensive account of the pivotal research questions to be explored and expounds upon the systematic methodology employed in conducting the literature search.
### _Research Questions_
This paper aims at answering the following questions in relation to the application of AI methods in EMS:
1. What are the existing AI technologies and techniques used for energy management in EMS?
2. How are AI-based approaches employed in energy consumption modeling, estimation, and prediction in EMS?
3. What are the current challenges and limitations of AI methods in energy management for EMS?
4. What is the future research roadmap for advancements in AI-based energy management for EMS?
5. How does the use and focus of AI approaches vary among different EMS?
### _Literature Retrieval_
We conducted a systematic search of peer-reviewed research publications to collect studies that employed AI approaches to address issues related to energy management in EMS. Our screening process involved a thorough review of the literature to identify papers that addressed the structural challenges of EMS energy management and utilized AI methods. We utilized reputable online databases, including Google Scholar, ACM Digital Library, Springer, MDPI, IEEE, and Science Direct, which index a wide range of computer science and technology research, to ensure comprehensive coverage of relevant studies.
The literature search process was conducted using a set of specific keywords, including "energy management", "electric mobility service", "machine learning", "EV", "e-bike", "e-scooter" and "energy consumption prediction". Only research papers written in English were included in the search. As a result of this comprehensive search, a total of approximately 30 papers were retrieved for review. Among these papers, 1 of them specifically focused on E-scooters [4], 1 paper focused on E-bikes [5], and the remaining papers centered on EVs.
The results show that few existing survey papers in the literature focused on the applications of AI methods for energy management in E-micromobility systems, such as E-bikes and E-scooters. All selected papers for review are relevant in our context, which highlights the applicability of AI-based approaches in dealing with energy consumption prediction problems in different aspects.
### _Existing Survey_
In this section, we provide a comprehensive summary and comparison of existing surveys pertaining to energy management, e.g., the estimation of battery State of Charge (SoC), in EMS. It is evident that the field commonly accepts the use of three main categories of estimation approaches: electrochemical models, equivalent circuit models, and data-driven models. However, in recent years (2019 to 2022), there has been a notable emphasis on "data-driven methods" (such as AI approaches) and "connected environments" in the future direction section of these surveys. This highlights the growing importance and attention given by researchers to these areas in energy management systems.
Given this context, our work primarily aims to summarize the various modeling, estimation, and prediction approaches utilized in this domain. The objective is to provide readers with a concise understanding of the available models or algorithms and offer suggestions for their appropriate selection based on
different cases and scenarios. By offering this overview, we aim to assist researchers and practitioners in making informed decisions regarding the most suitable approaches for their specific energy management requirements within EMS.
## III Conventional Approaches
Based on the literature, conventional energy management methods for EVs (HEVs or PHEVs) can be classified into two main categories: rule-based and optimization-based. A brief summary of the advantages and limitations of conventional methods is provided in Table II, and Fig. 1 shows the hierarchical categorization of classical energy management systems.
Rule-based methods have been widely employed in early HEVs due to their simplicity and feasibility [12]. These methods focus on coordinating the operation of the internal combustion engine to improve fuel economy and emission performance by transferring the working points of the engine from low to high-efficiency zones [12]. Deterministic rule-based methods utilize heuristics, intuition, and mathematical models to develop control rules based on prior knowledge of the driving cycle [13]. Fuzzy rule-based EMS, on the other hand, incorporates fuzzy logic control to enhance adaptability and robustness [14].
Optimization-based methods are categorized as global and real-time optimization methods. Various global optimization methods have been employed, including dynamic programming, Pontryagin's Minimum Principle (PMP), Evolutionary Algorithms, and Game Theory. Dynamic programming breaks down the decision process into discrete steps and has been used to solve the optimization problem of multi-step decision processes [15]. PMP finds optimal control signals for time-varying non-linear systems subject to constraints [16]. Evolutionary Algorithms encompass swarm-based algorithms such as Particle Swarm Optimization and Genetic Algorithms [17]. Game Theory treats the energy management problem as a game among decision-makers [18].
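To make the dynamic-programming idea above concrete, the following sketch is a hedged, self-contained illustration (our own toy example, not taken from [15]): the battery state of charge is discretized, and a backward recursion computes a minimum-fuel power split for a hybrid powertrain. The demand profile, grid resolution and fuel-cost model are all assumptions made purely for demonstration.

```python
# Minimal backward dynamic-programming sketch for an HEV power-split problem.
# Toy model: discretized SoC grid, fixed power-demand profile, convex fuel cost.
import numpy as np

soc_grid = np.linspace(0.3, 0.8, 51)          # admissible battery SoC levels
demand = np.array([20.0, 35.0, 15.0, 40.0])   # kW requested at each time step
dt = 1.0                                      # hours per step
batt_kwh = 10.0                               # usable battery capacity

def fuel_cost(engine_kw):
    # toy convex fuel model (litres consumed in one step)
    return 0.08 * engine_kw + 0.002 * engine_kw ** 2

T = len(demand)
V = np.zeros((T + 1, len(soc_grid)))          # cost-to-go, terminal cost = 0
for t in range(T - 1, -1, -1):
    for i, soc in enumerate(soc_grid):
        best = np.inf
        for j, soc_next in enumerate(soc_grid):
            batt_kw = (soc - soc_next) * batt_kwh / dt   # > 0 when discharging
            engine_kw = demand[t] - batt_kw
            if engine_kw < 0:                            # no regeneration in this toy model
                continue
            best = min(best, fuel_cost(engine_kw) + V[t + 1, j])
        V[t, i] = best

start = np.argmin(np.abs(soc_grid - 0.6))
print("minimum fuel over the horizon starting at SoC = 0.6:", round(V[0, start], 3))
```

The same backward-recursion structure carries over to richer state spaces (e.g. adding vehicle speed), at the cost of the well-known exponential growth of the grid.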
Real-time optimization methods aim at minimizing energy consumption dynamically and include methods such as the Equivalent Consumption Minimization Strategy (ECMS) and Model Predictive Control (MPC). ECMS converts electric energy into equivalent fuel consumption, allowing for compromise optimization of the vehicle's dynamic performance, fuel economy, and emission performance [19]. MPC utilizes a prediction horizon and rolling optimization to determine optimal control actions in real time [20].
As the complexity of energy management systems continues to rise, conventional approaches are being surpassed by more advanced AI methods, offering enhanced energy management capabilities.
## IV Artificial Intelligent Approaches
In the realm of energy management for EMS, AI-based approaches emerge as markedly superior to conventional methods [2]. This distinction arises from the inherent attributes of AI algorithms, which encompass dynamic adaptability, data-driven precision, and the capacity for continuous learning. Consequently, these algorithms possess the capability to process substantial real-time data and promptly adapt to changing circumstances, leading to effective energy consumption optimization [21]. In the subsequent discussion, we review AI strategies deployed within EMS energy management, examining them from two vantage points: traditional ML methods and DL methods.
### _Traditional Machine Learning Methods_
ML methods leverage the inherent patterns present within the data to facilitate the learning and adaptation of the system, thereby enabling accurate predictions for previously unseen data. Various ML approaches, including Linear Regression (LR) [22], Multiple Linear Regression (MLR) [23, 24, 25], Support Vector Machine (SVM) or Support Vector Regression (SVR) [25, 26, 27], Decision Tree (DT) [25, 26], Random Forest (RF) [4, 26, 28], eXtreme Gradient Boosting (XGB) [1, 23], Light Gradient Boosting Machine (LGBM) [23], k-Nearest Neighbor (kNN) [4, 26, 28], and Artificial Neural Networks (ANN) [23, 25, 26, 27], have been widely employed to address the challenges associated with energy consumption modeling or prediction for EMS. A brief summary of the advantages and limitations of traditional ML methods is provided in Table III.
These ML algorithms or models were individually applied and compared in various case studies. For example, MLR, DT, SVM and several neural network-based models were implemented on the data collected from electric buses in [25]. Furthermore, in [23], the authors employed MLR, ANN, XGB and LGBM to predict EV energy consumption using a dataset collected in Japan. The results demonstrate the superiority of XGB and LGBM over the other selected algorithms, based on a lower mean absolute error.
On the other hand, combining ML models may lead to improved performance. For instance, based on the combination of DT, RF and kNN, the authors designed a new method named Ensemble Stacked Generalization [26] to predict the energy consumption of EVs, and evaluated its performance on the same dataset collected in Japan. The results showed that, despite longer running times, the proposed method outperformed the baselines (i.e., DT, RF and kNN). Therefore, the authors concluded that adopting stacking techniques can enhance the accuracy of predictive models for EV energy consumption.
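To make the stacking idea tangible, the sketch below is our own hedged illustration using scikit-learn; the synthetic features stand in for trip attributes such as speed, acceleration and ambient temperature and do not reproduce the dataset or exact configuration of [26]. It stacks DT, RF and kNN base learners under a linear meta-learner and reports the mean absolute error.

```python
# Illustrative stacked-generalization regressor for EV energy-consumption prediction.
# Synthetic data is used in place of real trip records.
import numpy as np
from sklearn.ensemble import RandomForestRegressor, StackingRegressor
from sklearn.tree import DecisionTreeRegressor
from sklearn.neighbors import KNeighborsRegressor
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(0)
# hypothetical features: mean speed, mean acceleration, ambient temperature, trip length
X = rng.uniform([10, 0.1, -5, 1], [120, 2.0, 35, 50], size=(2000, 4))
# hypothetical energy consumption (kWh) with noise, for demonstration only
y = 0.15 * X[:, 3] + 0.002 * X[:, 0] * X[:, 1] - 0.01 * X[:, 2] + rng.normal(0, 0.3, 2000)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
stack = StackingRegressor(
    estimators=[
        ("dt", DecisionTreeRegressor(max_depth=8)),
        ("rf", RandomForestRegressor(n_estimators=100, random_state=0)),
        ("knn", KNeighborsRegressor(n_neighbors=7)),
    ],
    final_estimator=Ridge(),
)
stack.fit(X_tr, y_tr)
print("stacked MAE:", mean_absolute_error(y_te, stack.predict(X_te)))
```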
### _Deep Learning Methods_
DL methods, a subset of ML, possess the capability to extract intricate patterns from transportation data. In contrast to conventional ML models, such as RF and SVM, DL models leverage neural network architectures comprising multiple hidden layers to capture intricate relationships within traffic big data. These DL models excel at learning high-level representations of data, surpassing the limitations of human-designed features [29].
To illustrate the significance of DL techniques, one can consider a representative scenario involving dynamic range
optimization in electric fleet management. Traditional methods used to estimate the remaining operational range in EV fleet management often rely on simplistic rules that struggle to accommodate real-world factors like traffic patterns and atmospheric conditions [30]. In contrast, DL methods offer a more advanced solution, leveraging their capacity to assimilate diverse features such as GPS data, weather conditions, and driver behaviors. This synthesis of data enables real-time adaptation of EV range predictions [31]. Moreover, DL can not only predict range more accurately and optimize it by suggesting efficient routes and driving modes but also personalize estimates for each driver to continuously improve the prediction performance based on collected data.
To provide further specific examples of DL models from the literature, the researchers in [32], for instance, employed various depths of Deep Neural Network (DNN) models to estimate the SoC of EV batteries. These models utilized open-circuit voltage measurements at different ambient temperatures as input variables. Moreover, Recurrent Neural Network (RNN) models, including the Long Short-Term Memory
(LSTM) variant, have found extensive utility in EMS due to their aptitude for capturing temporal dependencies in data. As presented in [33], LSTM was applied to predict multiple targets, e.g., voltage, temperature, and SoC, based on time series data, showcasing precise online prediction and robustness.
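As a hedged illustration of this kind of sequence model (a minimal PyTorch sketch of our own, not the architecture of [33]; the window length, layer sizes and synthetic signals are assumptions), an LSTM can be trained to map a window of recent measurements such as voltage, current and temperature to the next SoC value:

```python
# Minimal LSTM regressor mapping a measurement window to the next SoC value.
# Synthetic sequences are used purely for demonstration.
import torch
import torch.nn as nn

class SocLSTM(nn.Module):
    def __init__(self, n_features=3, hidden=32):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):               # x: (batch, time, features)
        out, _ = self.lstm(x)
        return self.head(out[:, -1])    # predict from the last time step

model = SocLSTM()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# synthetic batch: 64 windows of 50 steps x (voltage, current, temperature)
x = torch.randn(64, 50, 3)
y = torch.rand(64, 1)                   # placeholder SoC targets in [0, 1]

for _ in range(5):                      # a few illustrative training steps
    opt.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    opt.step()
print("training loss:", loss.item())
```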
Furthermore, Convolutional Neural Networks (CNN) models have found application in similar research areas by converting time series data into image representations. In particular, [34] used the Gramian Angular Field approach to convert time series data into images, which were then fed into a CNN model to estimate EV energy consumption. The CNN model's performance was evaluated against other baseline models such as ANN and MLR.
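The Gramian Angular Field transform itself is compact enough to write down directly; the sketch below (our own illustration of the summation variant, not the exact pipeline of [34]) rescales a univariate series to \([-1,1]\), maps it to angles, and builds the image that a CNN would then consume:

```python
# Gramian Angular Summation Field (GASF) for a univariate time series.
# The resulting 2D "image" can be fed to a CNN for energy-consumption estimation.
import numpy as np

def gasf(series):
    s = np.asarray(series, dtype=float)
    s = 2 * (s - s.min()) / (s.max() - s.min() + 1e-12) - 1   # rescale to [-1, 1]
    phi = np.arccos(np.clip(s, -1, 1))                        # polar-coordinate angles
    return np.cos(phi[:, None] + phi[None, :])                # GASF(i, j) = cos(phi_i + phi_j)

speed_profile = np.sin(np.linspace(0, 6, 64)) + 0.1 * np.random.randn(64)
image = gasf(speed_profile)            # shape (64, 64), values in [-1, 1]
print(image.shape)
```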
In the context of reinforcement learning (RL), [6] provided a comprehensive overview of various RL methods. Two notable methods include Deep Deterministic Policy Gradient (DDPG) and a novel approach combining deep RL with the PMP method. In [35], the authors introduced a DDPG-based car-following model designed for connected and autonomous EVs, aimed at mitigating traffic fluctuations caused by human drivers, known as stop-and-go traffic waves, while optimizing electrical energy consumption. Moreover, [21] developed an energy management system that integrates deep RL and the PMP algorithm, demonstrating substantial performance improvements compared to traditional PMP-based energy management systems.
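For readers unfamiliar with how such RL formulations look in code, the skeleton below is our own simplified example: the state variables, reward weights and dynamics are placeholders rather than those of [35] or [21]. It defines a Gymnasium-style car-following environment whose reward trades off gap-keeping against a crude traction-energy proxy; an off-the-shelf DDPG implementation (for example from stable-baselines3) could then be trained on it.

```python
# Toy car-following environment for an EV: the action is an acceleration command,
# and the reward penalizes both deviation from a desired gap and energy use.
import numpy as np
import gymnasium as gym
from gymnasium import spaces

class EvFollowEnv(gym.Env):
    def __init__(self, target_gap=20.0, dt=0.5):
        super().__init__()
        self.target_gap, self.dt = target_gap, dt
        self.action_space = spaces.Box(-3.0, 3.0, shape=(1,))             # accel [m/s^2]
        self.observation_space = spaces.Box(-np.inf, np.inf, shape=(3,))  # gap, ego v, lead v

    def reset(self, seed=None, options=None):
        super().reset(seed=seed)
        self.gap, self.v, self.v_lead, self.t = 30.0, 15.0, 15.0, 0
        return np.array([self.gap, self.v, self.v_lead], dtype=np.float32), {}

    def step(self, action):
        a = float(np.clip(action[0], -3.0, 3.0))
        self.v = max(0.0, self.v + a * self.dt)
        self.v_lead = 15.0 + 2.0 * np.sin(0.05 * self.t)   # toy leader behaviour
        self.gap += (self.v_lead - self.v) * self.dt
        energy = max(0.0, self.v * a) * self.dt            # crude traction-energy proxy
        reward = -abs(self.gap - self.target_gap) - 0.5 * energy
        self.t += 1
        terminated = self.gap <= 1.0
        truncated = self.t >= 500
        obs = np.array([self.gap, self.v, self.v_lead], dtype=np.float32)
        return obs, reward, terminated, truncated, {}
```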
A brief summary of the advantages and limitations of DL methods is provided in Table IV below.
## V Discussion & Challenges
The future trajectory of energy consumption modeling, estimation, and prediction in the context of e-mobility unfolds a multitude of challenges that necessitate meticulous attention and innovative solutions to further propel this domain. These challenges encompass diverse dimensions, encapsulating data availability, model complexity, real-time prediction capabilities, integration with renewable energy sources, and the management of uncertainty and risk factors.
### _Data Availability_
The availability of high-quality, real-world data plays a pivotal role in e-mobility energy consumption modeling. On the one hand, achieving the convergence of either the ML model or the control policy is a protracted undertaking necessitating the acquisition of data from numerous iterative simulations [36, 37]. On the other hand, many researchers have predominantly employed small-scaled datasets derived from conventional standardized driving cycles or constrained real-world driving scenarios to train predictive models. Such an approach potentially compromises the precision of these models when applied to authentic real-world driving contexts [37, 38]. Therefore, it is imperative to acquire diverse and comprehensive datasets that encompass a wide range of driving conditions, vehicle types (corresponding to the extreme imbalance in E-bike, E-scooter and EV systems), and user behaviors to ensure the accuracy and reliability of the models and predictions. Thus, efforts should be made to collect large-scale datasets with high granularity, including vehicle parameters (e.g., battery capacity and efficiency), driving patterns (e.g., speed profiles and acceleration patterns), and environmental factors (e.g., temperature and road conditions). Collaboration among researchers, industry partners, and policymakers can help overcome data accessibility and privacy concerns, allowing for the development of robust models.
### _Model Complexity & Interpretability_
With the increasing complexity of e-mobility systems, it is imperative that the models employed for energy consumption estimation and prediction possess the necessary capability to capture the dynamic nature and intricate interactions within the system. These models must be designed to be scalable, capable
Fig. 1: Hierarchical categorization of classical EMS.
of handling large-scale deployments of EVs, and adaptable to accommodate diverse vehicle types, environmental factors, and user preferences.
Advanced AI methodologies, including DL and RL, offer avenues for the creation of intricate models adept at capturing nuanced interdependencies and nonlinear dynamics inherent to the system. Nevertheless, it is important to acknowledge that the elevated intricacy of these models will likely result in computational requisites that surpass the capabilities of the electronic control unit embedded within an operational vehicle powertrain, particularly when utilized as an online controller [36, 37, 39].
However, it should be noted that the interpretability of intricate models is relatively lower compared to physics-based methods. This is because data-driven methods rely on black-box models where detailed information is not known [37, 40]. Thus, exploring hybrid models merging physics-based and data-driven techniques becomes relevant. These hybrids integrate fundamental EV principles with data-driven methods, capturing real-world intricacies. Balancing accuracy and efficiency, they offer the potential for valuable insights.
### _Real-time Prediction_
Real-time prediction of energy consumption is crucial for optimizing charging and discharging strategies, managing grid integration, and providing accurate range estimation to EV users. However, achieving real-time predictions while considering dynamic factors such as traffic conditions, weather, and user behavior poses a significant challenge.
Real-time prediction models can leverage techniques such as online learning, adaptive control, and model-based reinforcement learning. These approaches enable continuous learning from new data and allow for dynamic adaptation to changing conditions. Integration with real-time data sources, such as traffic information, weather forecasts, and vehicle-to-grid communication, can further enhance the accuracy of real-time predictions.
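One simple way to realize such continuous adaptation is incremental (online) learning, sketched below with scikit-learn's `partial_fit` interface; this is our own illustration, and the streaming features and targets are synthetic placeholders for live trip, traffic and weather data.

```python
# Online energy-consumption predictor that is updated as new trip data arrives.
import numpy as np
from sklearn.linear_model import SGDRegressor
from sklearn.preprocessing import StandardScaler

scaler = StandardScaler()
model = SGDRegressor(learning_rate="adaptive", eta0=0.01)

rng = np.random.default_rng(1)
for batch in range(100):                       # each batch = newly observed trips
    X_new = rng.normal(size=(32, 5))           # e.g. speed, accel, temperature, slope, load
    y_new = X_new @ np.array([0.4, 0.2, -0.1, 0.3, 0.1]) + rng.normal(0, 0.05, 32)
    X_std = scaler.partial_fit(X_new).transform(X_new)
    model.partial_fit(X_std, y_new)            # incremental update, no full retraining

print("current coefficients:", np.round(model.coef_, 2))
```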
### _Integration with Renewable Energy Sources_
Integrating e-mobility systems with renewable energy sources adds complexity to energy modeling and prediction. Renewable energy's intermittent nature and the need to balance supply and demand require precise predictions and optimization. Models must factor in the availability and variability of sources like solar and wind power, considering energy consumption patterns. We can employ techniques like probabilistic forecasting, optimization algorithms, and energy management systems to optimize renewable energy use, minimize grid strain, and reduce carbon emissions.
### _Uncertainty & Risk Management_
Uncertainties tied to factors like user behavior, charging infrastructure, and battery wear and tear present hurdles in accurately estimating and predicting energy consumption in e-mobility systems [37].
To tackle these uncertainties and their associated risks, we can employ probabilistic models, uncertainty quantification techniques, and risk analysis frameworks. Among these approaches, AI, particularly reinforcement learning (RL), stands out as a suitable solution. RL algorithms enable adaptive decision-making by utilizing real-time feedback, empowering e-mobility systems to optimize their energy management. For instance, RL can model user behavior to find optimal charging schedules based on preferences and past patterns, address battery wear by optimizing charging and discharging profiles, and optimize the use of charging infrastructure by allocating resources intelligently. By incorporating RL models, e-mobility systems can effectively handle uncertainties, boost operational efficiency, and ensure reliable performance.
In summary, future challenges in e-mobility energy management include data availability and quality, model complexity and scalability, real-time prediction, renewable energy integration, and uncertainty management. Addressing these challenges demands interdisciplinary collaboration, advanced machine learning techniques, solid data infrastructure, and policy support to foster the development of dependable and efficient e-mobility systems.
## VI Conclusion & Future Directions
In this survey paper, we meticulously examined the methods for modeling, estimating, and predicting energy consumption in electric mobility. We categorized these approaches into two main groups: conventional methods and AI-based algorithms. We synthesized insights from relevant surveys in this domain. Our analysis reveals significant progress in understanding energy consumption dynamics. Conventional methods provide foundational insights, while traditional machine learning algorithms excel in capturing patterns to make accurate predictions. Deep learning algorithms, on the other hand, excel in addressing intricate, non-linear dynamics. However, we also identify several challenges in this research area, including but not limited to the acquisition of diverse and high-quality datasets, addressing model complexity, achieving real-time predictions, integrating renewable energy sources, and effectively managing uncertainty.
For future work, we recommend further exploring data-driven methods and real-time data integration to boost accuracy and performance. Additionally, the limited research in micro-mobility areas like E-bikes and E-scooters highlights the need for a thorough investigation in this domain.
## Acknowledgment
This work has emanated from research supported in part by Science Foundation Ireland under Grant Number _21/FFP-P/10266_ and _SFI/12/RC/2289_P2_ (Insight SFI Research Centre for Data Analytics), co-funded by the European Regional Development Fund in collaboration with the SFI Insight Centre for Data Analytics at Dublin City University.
|
2309.05353 | Applied design thinking in urban air mobility: creating the airtaxi
cabin design of the future from a user perspective | In the course of developing digital and future aviation cabin concepts at the
German Aerospace Center, the exploration of user-centered and
acceptance-enhancing methods plays a central role. The challenge here is to
identify the flexible range of requirements of different user groups for a
previously non-existent transport concept, to translate these into a concept
and to generate a rapid evaluation process by the user groups. Therefore, this
paper aims to demonstrate the application of the user-centered Design Thinking
method in the design of cabin for future air taxis. Based on the Design
Thinking approach and its iterative process steps, the direct implementation is
described on the combined airport shuttle and intracity UAM concept. The main
focus is on the identification of key user requirements by means of a focus
group study and the evaluation of initial cabin designs and key ideas by means
of an online survey. Consequently, the creative design process of a digital
prototype will be presented. In addition to an increased awareness and
acceptance among the population towards a novel mode of transportation, the
application of the Design Thinking methodology offers a flexible and
user-centered approach for further testing and simulation scenarios. | F. Reimer, J. Herzig, L. Winkler, J. Biedermann, F. Meller, B. Nagel | 2023-09-11T09:53:44Z | http://arxiv.org/abs/2309.05353v1 | Applied Design Thinking in Urban Air Mobility: Creating the Airtaxi Cabin Design of the Future From a User Perspective
###### Abstract
Design thinking is essential for user-centered cabin design concepts in future transportation vehicles, as it facilitates the identification of user needs, creative problem-solving and iterative development to ensure optimal user experiences and satisfaction. In the exploration of future air taxi cabins, user acceptance is widely recognized as a critical factor. To ensure a high level of acceptance for such concepts, the direct involvement of potential user groups in the early design process through user-centered design approaches, offers a highly effective solution to provide a time efficient and requirement-based concept development process for novel transportation concepts.
In the course of developing digital and future aviation cabin concepts at the German Aerospace Center, the exploration of user-centered and acceptance-enhancing methods plays a central role. The challenge here is to identify the flexible range of requirements of different user groups for a previously non-existent transport concept, to translate these into a concept and to generate a rapid evaluation process by the user groups. Therefore, this paper aims to demonstrate the application of the user-centered Design Thinking method in the design of cabin for future air taxis. Based on the Design Thinking approach and its iterative process steps, the direct implementation is described on the combined airport shuttle and intracity UAM concept. The main focus is on the identification of key user requirements by means of a focus group study and the evaluation of initial cabin designs and key ideas by means of an online survey. Consequently, the creative design process of a digital prototype will be presented. In addition to an increased awareness and acceptance among the population towards a novel mode of transportation, the application of the Design Thinking methodology offers a flexible and user-centered approach for further testing and simulation scenarios.
Design Thinking, User Centered, Cabin Design, Airtaxi, Urban Air Mobility, Acceptance
## 1 Introduction
Urban air mobility (UAM) is an emerging transportation concept that involves the use of aerial vehicles (air taxis) in urban environments. While UAM has the potential to revolutionize the way people and goods are transported, its successful implementation depends on public acceptance. As the interface between air taxi and the passengers, the cabin and the interior design play a crucial role in shaping passenger's perception and the attitude towards the safety, comfort, usability, flight experience and might have a substantial impact on people's acceptance towards future air taxis. However, as transportation concepts like air taxis have not existed before, the low range of user experiences and the identification of potential requirements and needs are particularly challenging.
Despite the advanced technological development of air taxis and expected flights in the coming years by Volocopter [1]-, the German Aerospace Center (DLR) has conducted various studies to explore the question of societal acceptance
Consequently, it can be displayed, that the public acceptance plays a crucial role in the development and realization of drones in the future as stated by Eifield et. al., investigating the acceptance of Civil Drones in Germany in 2020 [2].
As part of a study involving 832 participants, the societal acceptance towards drones was examined. While 38% of the subjects had a predominantly negative attitude towards drones, 53% demonstrated a predominantly positive attitude.
The disagreement and rather sceptical attitude among the general population towards air taxis were further demonstrated in a subsequent study by End et al. [3]. In this study, 19% of the participants exhibited a very negative attitude, while 8% held a positive stance.
In addition, the literature highlights the factors of safety and privacy as particular significant factors contributing to the sceptical attitude of the public towards air taxis [4], [5], [6].
Another reason for the potential rejection of air taxis, as stated by the German Aerospace Industries Association (BDLI). Within an acceptance study about drones and aitrais in 2022 and 2.055 testsubjects in Germany, the limited information and knowledge about air taxis among the general population was discovered to be a major factor [7].
This shows parallels to the literature review by Kellermann and Fischer (2020), where the provision of information as well as the transparency towards the public play an essential role in terms of an increased level of acceptance [8].
In terms of providing a high level of acceptance of future innovations by the general public, user-centered methods such as Design Thinking have proven to be particulary effective. n the development of user-centered innovations, the Design Thinking methodology has demonstrated its effectiveness. It was initially developed by Professors Kelley and Winograd at Stanford University and later established in Europe by co-founder Hasso Plattner from 2007 Design Thinking is an approach that emphasizes comprehensive understanding and deep, empathetic insight into user behaviours, desires, fears, and environments [9]. According to Meinel et. al., Design Thinking combines a focus on the end user with various multidisciplinary approaches and iterative and continuous optimization [10].
Therefore, it is crucial to understand users' needs and integrate them into the early design process of future aitraxi cabin concepts to ensure a high level of acceptance for this new transportation mode. At the German Aerospace Center (DLR), aeronautical cabin research is focused on developing and digitally assessing cabin systems and designs for future user groups and their needs. The DLR project Horizon UAM aims to develop a holistic, digital, and user-centered UAM concept. Following the "inside Out" approach, the comprehensive development and description of the UAM system are based on cabin design. In addition to technical considerations, certification requirements, and infrastructure factors, acceptance, the development of appropriate user testing scenarios, and the inclusion of different user groups in the digital development process play a central role.
This paper intends to close the gap between user and designers adapting the user-centered design method Design Thinking on the cabin design approach for future Aitraxi cabin design concepts.
A central focus lies in the step-by-step description and implementation of the methodological pillars of the classic Design Thinking approach in the cabin design process of future air taxi cabins. The initial process steps prioritize the identification of key requirements, fears, needs, and experiences of potential user groups. Another focus is on describing the conceptual design outcomes and the parallel evaluation process involving user groups. The goal of these endeavors is to present an applied and methodological framework for user-centered design of future air taxi cabins. This involves emphasizing conceptual development while actively involving the population in the development process of this novel mode of transportation to enhance transparency and acceptance.
## 2 Fundamentals
This section provides a description of the Design Thinking methodology in the context of Human Centered Design, along with the general definition of the six characteristic process steps and terminologies as the foundation for practical application in the design process of the UAM cabin concept. Furthermore, it positions the conceptual development of a user-centered cabin concept within the background of the DLR project Horizon UAM and the overall system and vehicle development of an air taxi concept.
### _Design Thinking Method_
Design Thinking (DT) is a human-centered design approach aimed at generating innovative concepts based on a deep understanding of human needs [11].
Within the spectrum of human centered design methods, Design Thinking offers a creative and effective approach that emphasizes users' emotions, enabling more effective solutions for various stakeholder requirements [12].
The high potential of this method for developing complex systems is evident from the increasing number of scientific publications related to Design Thinking [13].
Fundamentally, Design Thinking follows a user-centered, empathetic, and analytical approach to solving complex problems. However, since there are no strictly defined or fixed process steps, different interpretations of the process steps can be observed in industry and research. IBM, for example, follows the DT approach called "The Loop," which emphasizes an iterative and continuous cycle consisting of the process steps of observing, reflecting, and making (FIG 1) [14]. All three steps each cover a particular part of an infinite loop, which depicts the continuous design and feedback process throughout a new design.
Fig 1: IBM Design Thinking process “The Loop” [15]
A further approach is provided by the so-called "Double Diamond" process (FIG 2). The official Double Diamond design model has four stages: **Discovery, Definition, Development and Delivery** and incorporates a divergent and a convergent design stage [16].
In Design Thinking, the individual phases and the iterative process between them play a crucial role. In the empathy phase, the focus is on observing and understanding the users, capturing their habits and behaviors in their environment. In the Define phase, the central problems of the user are filtered and defined collaboratively. In the Ideate phase, different ideas and radical designs are generated. Quantity is emphasized over quality, allowing for a wide range of ideas for initial decision-making. In the prototype and test phase, selected ideas are typically created as rapid prototypes and tested, while always keeping a loop back to the initial problem statement. In the evaluation phase, users and other participants can provide feedback as experts or recipients, assisting in the assessment process. Key characteristics include a free and creative approach and the constant iterative testing and evaluation of ideas following the "Fail-Fast" principle [19].
This variant, with its six process steps, forms the methodological foundation for the user-centered and iterative design process of future air taxi cabin concepts within the scope of this paper.
### _Horizon UAM Project and Operation Scenarios_
Since 2020, the German Aerospace Center (DLR) has been conducting research on Urban Air Mobility as part of the Horizon UAM project, focusing on factors such as efficiency, safety, feasibility, sustainability, and more (FIG 4). All areas are part of the overall air taxi system and have an influence on the acceptance of the population. In collaboration with ten DLR institutes and external partners like NASA and Bauhaus Luftfahrt, the collaborative research of these subsystems with a focus on acceptance is the essential core of DLR's exploration and research of future air taxi technologies.
As a starting point for the concept development, various UAM use cases and technology scenarios were designed, including Intra-City, Mega-City, Airport Shuttle, Suburban-Commuter, and Intercity. The characteristics of the scenarios were defined in the course of a joint workshop and on the basis of technically feasible parameters. The main focus for the present work is the combination of the following use cases:
**Intra-City Scenario:**
* Transport range: up to 50 km
* Speed: up to 100 km/h
* Seats: up to four
**Airport Shuttle Scenario:**
* Transport range: up to 30 km
* Speed: up to 150 km/h
* Seats: up to four
To develop a collaborative concept based on the Design Thinking method, a combination of both use cases was chosen to create a user-centered cabin design for a multifunctional and versatile short-range air taxi concept (FIG 5). The figure shows a graphical representation of different and self-contained scenarios that could be connected in perspective.
Fig 4: DLR project Horizon UAM framework [20]
Fig 2: Design Thinking "Double Diamond" [17]
Nevertheless, the most well-known approach is the Stanford d.school Design Thinking approach with its six steps ‘Empathize’, ‘Define’, ‘Ideate’, ‘Prototype’, ‘Test’ and ‘Assess’ (FIG 3).
Fig 3: Design Thinking model proposed by the Hasso-Plattner Institute of Design at Stanford (d.school) [18]
## 3 Methodical Development of an Intracity/Airportshuttle Cabin Concept
The following section describes the particular process steps of Design Thinking (Stanford d.school) in the course of the design process of an airport shuttle and intracity air taxi cabin concept. Based on the characteristic steps of Design Thinking, the main outcomes of the design process are listed and explained.
### Empathy
In the empathy phase, two factors are particularly important. Firstly, to create an overview of the current state of the art in interior design of air taxis, and secondly, to understand and observe participants in a focus group study. Alongside the state-of-the-art analysis, the execution of the focus group study is described.
### State of the Art:
Similar to the research on air taxis, autonomous transportation of people plays a significant role in the automotive industry. Unlike in air taxi research, there are already advanced innovations in the field of cabin design in this sector, which provide a suitable basis for initial analyses of the state of the art. For instance, innovative models often feature strong color contrasts, such as dark brown/grey tones for a luxurious look and white/cream tones for cleanliness and high quality. Green tones and wood-look surfaces are used to convey durability and environmental consciousness. Bionic forms in storage compartments or ceiling columns imitate nature, emphasizing the connection to sustainable design. Large windows provide passengers with a clear view of their surroundings, fostering a connection to nature. First impressions are crucial in automotive innovation, leading to unique door concepts that aim to captivate potential customers. Vehicle designs serve as essential indicators for feasible innovations in the UAM domain. Leveraging the familiarity of automobile design is important for creating recognition value and a sense of familiarity and security [22].
Research on the existing UAM vehicles shows that this branch has learned from the innovation in the automotive sector. Many of the companies developing a UAM vehicle are still in the early stages of the design process and have not revealed their cabin concepts yet. Those who have show similarities with the design approaches of most modern cars. The interiors are based on strong colour contrasts and minimalistic design, conveying a sense of connection to the automotive sector. Clarity in the design is likewise achieved through bionic window shapes and large windows, meant to enhance the flight experience. Seats are most often arranged according to the automotive standard, creating recognition value. An exemplary comparison between a modern car interior design (Moia) and the Lilium eVTOL cabin design is shown in Figure 6.
In the course of the state-of-the-art analysis, different future concepts from the automotive and eVTOL industries were investigated to provide a basic overview for the following design process steps.
### Focus Group Study on passenger requirements for future air taxis
In December 2020, a focus group study was conducted to gain a fundamental understanding of potential user groups' attitudes towards UAM and to gather information about the spectrum of requirements for cabin design [25]. Four focus groups with a total of 16 participants from Germany were investigated. The study was divided into three parts, and the execution of each part is described below. Additionally, a summary of the key findings for the conceptual development of a cabin concept within the Design Thinking approach is provided.
* Part 1: Public Transport Preferences In the first part of the focus group study, participants were asked about their preferences for public transportation. Alongside naming their favorite modes of transportation, participants were also requested to identify both positive and negative aspects. This part was conducted to gather individual experiential insights from public transportation and raise participants' awareness of air taxis as a transportation mode.
* Part 2: Public Transport Interior Preferences In the following step, the participants were asked about their general preferences regarding the interior of public transportation. The results were recorded and divided into four levels according to Sorrel Brown's importance Likert scale [26].
Figure 5: Horizon UAM Use Cases [21]
Figure 6: Volocopter (left) [23] and Moia (right) [24] interior examples
The results can be seen in the following table (TAB 1).
Requirements with high or very high priority include comfort-related parameters such as seat comfort, thermal comfort, and low noise levels. Easy-to-understand operation, individual comfort settings, and the option to adjust privacy levels were also emphasized.
Less prioritized were aviation-specific features like in-flight dining, toilets, or on-board entertainment. Availability of laptop storage, seat heating, or pleasant floor temperature were also considered less important [27].
* Part 3: Collaborative Cabin Design In the third and final part, a collaborative cabin design task was carried out. The task was conducted using the collaborative platform Mural and the communication platform Skype for Business. An exemplary board with solution approaches is shown in the following figure (FIG 7).
Based on the first two parts and the collaborative cabin design process, various solution approaches were developed, leading to four main areas of focus for the requirements: Comfort & Experience, Safety & Security, Luggage Storage, and Seating & Configuration.
### _Define phase_
In the Define phase, the focus was set on summarizing and organizing the insights gained previously. Typically, the central task was to consolidate key requirements using personas. Additionally, and as part of a complete air taxi system, the definition of architectural space parameters and central weight parameters played a crucial role.
### _Persona Definition_
To make the diverse range of requirements from different user groups more tangible, personas were developed within the framework of the Design Thinking method. Personas serve as representative profiles for a broad range of users, facilitating a more targeted design process based on defined requirements [28]. To cover a wide range of age groups, three personas were created.
* Greta Hermann, female, 62 years old, small town, teacher, generation _Boomer_
* Tim Klaussen, male, 35 years old, suburban, consultant, generation _Y_
* Clara Meyer, female, 19 years old, metropolitan, student, generation _Z_
The exemplary definition of the persona profile for "Clara Meyer" can be seen in the following figure (FIG 8). In addition to the demographic data on the right, the profile contains persona characteristics related to travel behavior (bottom), biographical data (middle, top), and character traits (right). In addition, the collected key requirements for air taxis are bundled on the basis of this persona (middle, bottom).
Within the definition of a cabin design, ergonomic considerations play an important role. For the integration process of the cabin into the overall vehicle system, central requirement parameters have been defined as a first baseline:
* Payload weight: 90 kg (+20 kg optional and additional luggage weight)
* Piloted vehicle with the option for autonomous flight
* Four Seats
* Usability for PRM (persons with reduced mobility), including storage of a wheelchair
For the detailed design and definition of an initial cabin layout, the previously defined requirements, as well as ergonomic and anthropometric standards, were considered. Comfort parameters related to seat spacing and width were derived from common business-class dimensions in commercial aviation.
Fig 8: Exemplary Persona Definition; (Credits:DLR)
Fig 7: Exemplary Overview: Red (Dreamer), Yellow (Critics), Green (Realist); (Credits:DLR)
The necessary storage space was defined based on common carry-on luggage dimensions, travel suitcase measurements, and the dimensions of standard wheelchairs.
Basically, a layout was designed that meets the basic anthropometric requirements of passengers with the smallest and largest possible physical dimensions. Due to the airport shuttle use case, it was determined that at least four pieces of luggage should be carried in the cabin. In addition, a seat layout with two rows and two seats per row was specified. The described layout can be seen in the following figure (FIG 9), which shows a schematic representation of the central and anthropometric limit parameters for the different areas of the cabin.
A combination of the ergonomic requirements of different passenger types and the technical requirements from the overall vehicle and system development process resulted in a final layout with defined dimensions for the further design process. The described layout can be seen in the following figure (FIG 10).
### Ideation phase
#### Idea development process
In the Ideate phase, various ideas and concept focuses were developed based on the defined constraints from the Empathize and Define phases. In addition to different group seating scenarios, ideas for customizable areas through the strategic use of partitions were designed. Seat designs with recognizable characteristics from the automotive and aviation sectors were also created as exemplary ideas for the seating arrangement in a future air taxi cabin scenario. Given the significant role of the Airport Shuttle and Intracity use case, different options for accommodating large luggage were developed. The thematic breakdown and overview of the design ideas can be seen in the following diagram (FIG 11).
#### Online survey: Definition of air taxi cabin design parameters and evaluation of design ideas
The process of ideation, involving potential user groups and central requirements for overall system dimensioning, is complex. To involve as many potential user groups as possible directly in the ideation and decision-making process, an online survey was conducted in Germany in 2021.
The online survey resulted in 202 valid datasets from participants of various demographic groups in Germany. Within the study, six UAM cabin designs were presented and evaluated [29].
Within the context of the previously defined priority areas in the Focus Group Study (see Section 3.1), idea approaches related to Safety & Security, Comfort & Experience, Luggage, and Seating & Configuration were evaluated. With regard to the defined short-haul use case, participants were presented with a scenario of a fully occupied air taxi with a travel time of 10 to 15 minutes.
The key findings of this study are described as follows:
* Seating & Configuration:
In the first part of the study, eight different ideas for the arrangement of a four-seat configuration were presented. In the front area, a pilot seat was placed, which was not included in the seat selection evaluation. The respondents were asked to rate the concepts based on two perspectives: First, from the perspective of a flight without an accompanying person and then from the perspective of traveling with an accompanying person. Responses were given using a single-choice format with six options: 'very bad', 'bad', 'neutral', 'good', 'very good', and 'no answer'.
The key results are depicted in the following figure (FIG 12).
Figure 11: Ideation breakdown overview; (Credits:DLR)
Figure 10: Final and usable space for cabin integration; (Credits:DLR)
Figure 9: Cabin sizing based on ergonomic parameters; (Credits:DLR)
Overall, it was demonstrated that communication and proximity with eye contact play an important role in the scenario of traveling with an accompanying person. In the scenario without an accompanying person, configurations with direct eye contact with strangers or significant distance from accompanying persons were rated the least favorably.
* Safety & Security:
Based on the previously described focus group study, it became evident that travelers value protection from fellow passengers and privacy during the flight. In the second part of this study, participants were presented with six different ideas for physical separation from fellow passengers for evaluation. The scenario and evaluation criteria were consistent with the "Seating & Configuration" scenario assessment (FIG 13).
In the scenario with an accompanying person, it became evident once again that communication and proximity to the accompanying person in the adjacent seat play an important role. Complete separation through partitions from fellow passengers was rejected in this scenario. In the scenario without an accompanying person, separate areas with a stranger and direct eye contact were strongly rejected. Isolation from the rest of the cabin, constant eye contact, and sharing a common foot space with a stranger were cited as key reasons for the negative ratings.
* Comfort & Experience:
To assess different seating ideas and the associated seating comfort, five different seat concepts inspired by known concepts from the aviation industry (business class seat, first-class seat), the automotive industry, as well as novel seat concepts were presented for evaluation by the participants. The key findings are illustrated in the following figure (FIG 14).
The seats inspired by higher-class airline seats were rated particularly positively. Reasons for this included the expected seating comfort, the presence of armrests, and especially the U-shaped modern headrests for increased privacy. Less familiar seat models with novel shapes and no armrests were rated negatively.
* Luggage & Storage:
For the evaluation of different luggage storage concepts, seven different approaches were presented. Participants could choose between the response options 'impractical', 'rather impractical', 'partly practical', 'rather practical', and 'no answer'. The approaches with the highest approval are shown in the following figure (FIG 15).
Key factors for a positive rating included sufficient storage space, easy accessibility of luggage during the journey, individual storage options, and secure attachment of the luggage.
### _Prototype phase_
Prototyping is an experimental process used to create digital or physical prototypes based on the insights gained from previous stages. The following section illustrates and describes the creation of a prototype using digital sketches including 3D designs within the Design Thinking process step 'Prototype'. The results obtained from this stage form the basis for the final evaluation process. The prototyping process is described below, based on the previously defined priority areas.
**Seating & Configuration**
Figure 14: Results Comfort & Experience (Credits:DLR)
Figure 12: Results Seating & Configuration (Credits:DLR)
Figure 15: Results Luggage & Storage (Credits:DLR)
In response to the high demand and user preference for a more traditional layout, a configuration with two seats per row positioned in the direction of flight was chosen (FIG 16).
In addition to the adequate seat pitch, seat width and additional space between the seats, a cockpit was also integrated in the front-left area. In the course of the overall concept within the main work package, three passengers and one pilot were previously defined as concept payload. The detailed design of the cockpit was not the main focus here, which is why only a control panel was integrated as a placeholder. Accordingly, possible requirements with regard to protective devices, control modules and screens were not considered.
### Safety & Security
Looking at the acceptance aspect of future air taxis, the area of safety and security plays a fundamental role [3]. Based on the user requirements, numerous concerns and wishes were expressed to ensure protection from fellow passengers (e.g. violence by fellow passengers) and individualizable protection concepts for increased privacy when traveling with strangers. To ensure this, three protection concepts were developed for the overall concept.
A first approach is the novel positioning of the seats based on a rotated arrangement, which is shown below by means of an example (FIG 17).
The novel position offers various advantages for passengers. On the one hand, the decision for this seating position was made to differentiate the concept from previous and familiar seat layouts and to convey the novelty and modernity of this means of transportation. On the other hand, the novel seating position guides passengers to look out of the window and experience the flight more intensely, and it ensures an easier boarding and deboarding process. At the same time, this position naturally provides additional distance from fellow passengers, which can lead to an increased sense of safety and privacy.
Another protection option is offered by the U-shaped headrest, which is integrated into all seats. This form of headrest is currently already being used in a variety of cabin seats or public transportation systems to create a sense of spatial, visual, and acoustic separation from fellow passengers in the simplest way possible. The following figure (FIG 18) shows a first draft for this concept.
As a further protection and separation option, a central separation module was integrated between the rows of seats (FIG 19). This type of separation is already used in the automotive sector and serves as an additional separation option between passengers in the air taxi concept. In addition, the use of this separation module creates an individual area for each passenger, which could have a positive influence on the individual perception of comfort.
### Comfort & Experience
In the course of the user research phase, the area of comfort & experience was identified as a key requirement and plays a decisive role in the acceptance of future air taxi cabin concepts. The following figure (FIG 20) shows an idea sketch for the area of seat comfort.
Figure 16: Conceptual Seat Layout (Credits:DLR)
Figure 19: Separation module (Credits:DLR)
Figure 17: Example of rotated seat idea (Credits:DLR)
Figure 18: Headrest and seat idea (Credits:DLR)
Since passengers will spend most of their travel time in a seat during flight, the seat is a key element and a main interface between the user and the overall cabin. As a reference for the seat positioning, a minimum seat pitch of 31 inches was defined as a basis, and a minimum width of 17 inches was defined for the seats. The dimensions were taken from common positioning dimensions of aircraft cabins with increased comfort standards. To ensure sufficient headroom at a cabin height of 1.60 m, no storage compartments or displays were placed above the seating areas. In the detailed design of the passenger seats, a separate armrest and headrest for increased privacy and optimum seating comfort also play a special role. The following figure (FIG 21) shows an idea sketch for the positioning and storage of folded standard wheelchairs.
To ensure a barrier-free cabin design, it must be possible to carry one's own wheelchair. For the overall concept, the storage space in the rear area was defined first. With the help of a ground crew and/or a pilot, the wheelchair could be stowed in the rear area of an air taxi and reached via a side flap from outside of the vehicle.
Another concept focus was set on the flight experience (FIG 22). Providing a positive passenger experience plays a special role in the acceptance of new types of transportation and must be translated precisely into requirements, especially in cabin design [5]. In the course of the focus group study and the survey, the desire for a good view was particularly emphasized, so a good view for all passengers must be guaranteed in the sub-concept.
A further aspect of the flight experience is a good balance between privacy, safety, and group travel options, including the possibility to share this new experience with other passengers. The division of the cabin into two rows with two seats each already provides a basic level of distance to other passengers. Depending on individual wishes (group or solo travel), seats in the same row or one behind the other can be selected. Within the seat rows, the armrests as well as the headrests serve as elements that automatically create distance between the passengers.
### Luggage
Baggage stowage was defined as a central comfort parameter for the user groups and can have a decisive influence on the travel experience and comfort. Since accessibility during the flight was named as an important aspect, the stowage sub-concepts were already designed for placement in front of the passenger seats during ideation. The sub-concepts are shown and described below.
* Luggage Sub Concept 1 (Fixation Bars):
In this concept, curved bars with two-point fixation were integrated in the floor in front of each seat. The shape of the bend makes it possible to place standard travel suitcases. This simple holding device makes it possible to fix the suitcase simply and directly in front of the seat (FIG 23).
Figure 21: Wheelchair accessanbility idea (Credits:DLR)
Figure 20: Comfort seat arrangement example (Credits:DLR)
Figure 22: Outside view and experience (Credits:DLR)
* Luggage Sub Concept 2: Fixation Belt
Additionally, suitcases can be fixed with the help of a belt mechanism (FIG 24). Since this mechanism might already be familiar to most passengers from the automotive sector, it is particularly promising due to its simple and intuitive operation. In addition, the belt function offers a particularly simple and intuitive option for fixing folded standard wheelchairs or strollers.
* Luggage Sub Concept 3: Protection Option
Due to the frequent passenger changes expected in short-haul vehicle operation, a high degree of wear and tear of components caused by suitcases during loading is to be expected (FIG 25). For this reason, an impact protection device was designed as a sub-concept.
### Integrated and final concept
Figure 26 shows the overall view of the integrated concept in the 3D model. In the overall concept, three seats have been rotated in the direction of the windows, while the seat in the front part on the right-hand side remains unchanged. In this scenario, a pilot is included, but the concept is already designed to remain usable for autonomous-operation cabin designs. Accordingly, the installation space for the pilot seat was chosen to provide an easily adaptable seating scenario. The cockpit has not been further detailed in the course of the development, but a modular control panel was added as a control unit.
In addition to the seats' rotatability, the U-shaped headrest is a distinctive component of the concept. The rotating headrest allows passengers to determine their own level of privacy, creating further options for an increased sense of privacy. A central module between the seats is used to store smaller items or to charge cell phones via a small surface on top.
Figure 24: Luggage Sub Concept 2 (Credits:DLR)
Figure 25: Luggage Sub Concept 3 (Credits:DLR)
Figure 23: Luggage Sub Concept 1 (Credits:DLR)
Figure 26: Perspective view final UAM Cabin Design Concept (Credits:DLR)
The placement of the luggage directly in front of the seats ensures easy stowage and removal (FIG 27). At the same time, simplified accessibility during the flight is possible. In the rear area, sufficient storage space is provided for additional luggage as well as for the stowage of a folded standard wheelchair. The wheelchair is secured by a belt system in the back end of the cabin (FIG 28).
The central component of the overall concept is the seat. This is a lightweight seat consisting of a padded aluminum frame and a natural fiber fabric. Functionally, the principle was adopted from camping chairs, so that the upholstery fabric with light padding is already sufficient to support the back and buttocks.
In addition to the cabin design, the cabin was exemplarily integrated into a tiltrotor vehicle concept, which was developed in the course of the overall fleet development and system integration in the Horizon UAM project (FIG 29) [30].
### Contribution and benefits for overall UAM system
Within the Horizon UAM project, various factors were considered in the examination and development of the overall UAM system. By applying the Design Thinking approach to the cabin design process for future air taxis, important advantages for the development process of the air taxi system as a whole were identified. The key findings are described below:
### Potential for weight reduction
Initial estimates based on commercial aircraft interior masses suggest a possible total cabin weight of 766 kg (for the Airport Shuttle concept) and 639 kg (for the Intracity concept). By applying DT and using lightweight components, a minimalist seat design, and a simplified luggage storage solution, the cabin mass could be reduced substantially, to between 380 kg and 579 kg. The weight reduction can have a positive impact on the sizing of the battery and the required power for air taxi transportation.
In addition, the user-centered design process enables further potential for cost savings. The early feedback process as well as the direct translation of main requirements into a concept can help to avoid cost-intensive adjustments in the late development process.
### Acceptance & Safety
The direct involvement of different user groups in the design process offers great potential for creating awareness about UAM, disseminating information, and addressing concerns and fears. Particularly during the focus group study, an increased acceptance of this novel mode of transportation was observed. The passengers' influence on the design can have a positive impact on acceptance and the perception of safety, leading to a higher willingness to use air taxis among the general population. Moreover, by addressing fears, desires, and concerns directly and incorporating them into the design concept in collaboration with user groups, the development process of autonomously operated air taxis can lead to increased acceptance in the next step.
### Flexibility & Convenience
By combining the Airport Shuttle and Intracity scenarios, a multifunctional cabin concept has been developed covering two different application scenarios. The versatility of the cabin concept leads to lower development costs compared to designing separate cabins for each use case. At the same time, the recognition value of the concept increases when it is used in multiple scenarios, which can positively impact the perceived safety and usability of the cabin features.
Figure 28: Back/perspective view (Credits:DLR)
Figure 29: Final cabin concept integrated in DLR Tiltrotor concept (Credits:DLR)
Figure 27: Side view (Credits:DLR)
With its minimalist and interchangeable cockpit design, the cabin can also be changed into a fully autonomous scenario with four passengers in the future.
In addition to the improved seating comfort, optimized storage compartments, minimalist design and customizable privacy features, the cabin design incorporates various comfort parameters based on the feedback from potential user groups. The deliberate combination of minimalist and easily understandable functions with futuristic and complex design elements enhances the overall comfort and user experience.
## 4 Conclusion and Outlook
This study demonstrated how future user groups can be actively involved in the cabin design process for future air taxis using the creative method of Design Thinking. The Design Thinking variant of the d.school was utilized and the process steps were tailored to the specific design process. In the initial process step (Empathize), different user groups were specifically involved from the beginning to gain an understanding of their experiences in public transportation and to identify their requirements and desires for the design of an air taxi cabin. Further engagement with user groups was conducted through an online study, where ideas based on the Focus Group Study and the identified priority areas were evaluated. Following the Empathize, Define, and Ideate process steps, a digital prototype was subsequently developed. This concept serves as a basis for the development of a complete vehicle and the related disciplines within the Horizon UAM project, such as integration into the overall system (Airport Shuttle & Intracity concept), maintenance, battery system design, rotor concept development, and other areas. The next step focuses on the final process step, Assess. By employing immersive testing and simulation scenarios through mixed reality and physical mockups, different cabin designs and functional elements can be tested by user groups. An effective evaluation process can increase the maturity of the overall air taxi concept and optimize acceptance investigations of various cabin scenarios. Initial studies have already been conducted within the project, indicating a high potential for further exploration and development of air taxi concepts [31][32][33].
## Declarations
### Competing interests
The authors have no competing interests to declare that are relevant to the content of this article.
|
2309.13531 | Robust Principal Component Analysis using Density Power Divergence | Principal component analysis (PCA) is a widely employed statistical tool used
primarily for dimensionality reduction. However, it is known to be adversely
affected by the presence of outlying observations in the sample, which is quite
common. Robust PCA methods using M-estimators have theoretical benefits, but
their robustness drop substantially for high dimensional data. On the other end
of the spectrum, robust PCA algorithms solving principal component pursuit or
similar optimization problems have high breakdown, but lack theoretical
richness and demand high computational power compared to the M-estimators. We
introduce a novel robust PCA estimator based on the minimum density power
divergence estimator. This combines the theoretical strength of the
M-estimators and the minimum divergence estimators with a high breakdown
guarantee regardless of data dimension. We present a computationally efficient
algorithm for this estimate. Our theoretical findings are supported by
extensive simulations and comparisons with existing robust PCA methods. We also
showcase the proposed algorithm's applicability on two benchmark datasets and a
credit card transactions dataset for fraud detection. | Subhrajyoty Roy, Ayanendranath Basu, Abhik Ghosh | 2023-09-24T02:59:39Z | http://arxiv.org/abs/2309.13531v1 | # Robust Principal Component Analysis using Density Power Divergence
###### Abstract
Principal component analysis (PCA) is a widely employed statistical tool used primarily for dimensionality reduction. However, it is known to be adversely affected by the presence of outlying observations in the sample, which is quite common. Robust PCA methods using M-estimators have theoretical benefits, but their robustness drops substantially for high dimensional data. On the other end of the spectrum, robust PCA algorithms solving principal component pursuit or similar optimization problems have high breakdown, but lack theoretical richness and demand high computational power compared to the M-estimators. We introduce a novel robust PCA estimator based on the minimum density power divergence estimator. This combines the theoretical strength of the M-estimators and the minimum divergence estimators with a high breakdown guarantee regardless of data dimension. We present a computationally efficient algorithm for this estimate. Our theoretical findings are supported by extensive simulations and comparisons with existing robust PCA methods. We also showcase the proposed algorithm's applicability on two benchmark datasets and a credit card transactions dataset for fraud detection.
_Keywords:_ Robust PCA, Eigen Decomposition, Matrix Factorization, Density Power Divergence, Breakdown Point
## 1 Introduction
The classical problem of finding the principal components aims to approximate the covariance structure of a high dimensional sample of many features by the covariance structure of a lower dimensional sample of "principal components", obtained as linear combinations of the original feature variables. Mathematically, starting with an independent and identically distributed (i.i.d.) sample \(\mathbf{X}_{1},\mathbf{X}_{2},\ldots\mathbf{X}_{n}\), where each \(\mathbf{X}_{i}\in\mathbb{R}^{p}\), and a scale measure \(S_{n}(y_{1},\ldots y_{n})\) to measure the dispersion in a univariate sample \(\{y_{1},\ldots y_{n}\}\), the first eigenvector associated with the principal components is defined as the unit length vector maximizing the function
\[\mathbf{v}\to S_{n}(\mathbf{v}^{\intercal}\mathbf{X}_{1},\ldots\mathbf{v}^{\intercal}\mathbf{X}_{n });\ \mathbf{v}\in\mathbb{R}^{p}. \tag{1.1}\]
Similarly, assuming that the first \((k-1)\) eigenvectors \(\widehat{\mathbf{v}}_{1},\widehat{\mathbf{v}}_{2},\ldots\widehat{\mathbf{v}}_{k-1}\) has already been found, one can obtain the subsequent \(k\)-th eigenvector as the unit vector maximizing the same function given in Eq. (1.1), but under the set of restrictions \(\mathbf{v}^{\intercal}\widehat{\mathbf{v}}_{i}=0\) for all \(i=1,\ldots(k-1)\). The corresponding eigenvalues are defined as the maximum values of the scale function, i.e.,
\[\widehat{\lambda}_{k}=S_{n}(\widehat{\mathbf{v}}_{k}^{\intercal}\mathbf{X}_{1},\ldots \widehat{\mathbf{v}}_{k}^{\intercal}\mathbf{X}_{n}).\]
In essence, principal component analysis (PCA) takes input \(n\) observations of dimension \(p\), where \(p\) is presumably very large, and outputs a set of pairs \(\{(\widehat{\lambda}_{k},\widehat{\mathbf{v}}_{k}):k=1,2,\ldots r\}\) where \(r\) is a pre-specified number of components, generally much smaller compared to both \(n\) and \(p\). For each \(k\), the former of the pair \(\widehat{\lambda}_{k}\) denotes the maximum variability expressed by the \(k\)-th principal component, and the latter of the pair \(\widehat{\mathbf{v}}_{k}\) denotes the direction along which this maximum variability can be found in the given i.i.d. sample. The \(k\)-th principal component is then defined by the variable obtained from projecting the observations along the \(k\)-th eigenvector scaled by the \(k\)-th eigenvalue, i.e., \(\{\widehat{\lambda}_{k}\widehat{\mathbf{v}}_{k}^{\intercal}\mathbf{X}_{i}:i=1,\ldots n\}\).
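For illustration only (this example is not part of the original text), the sketch below carries out the above procedure in the classical case: when \(S_{n}\) is taken as the sample standard deviation, the sequential maximization over unit vectors reduces to an eigendecomposition of the sample covariance matrix. The synthetic data, the variable names, and the chosen rank \(r\) are assumptions made for the example.

```python
import numpy as np

def classical_pca(X, r):
    """With S_n equal to the sample standard deviation, the sequential
    maximization of Eq. (1.1) reduces to the eigendecomposition of the
    sample covariance matrix of X_1, ..., X_n."""
    Xc = X - X.mean(axis=0)                  # center the sample
    cov = Xc.T @ Xc / (X.shape[0] - 1)       # sample covariance matrix
    eigval, eigvec = np.linalg.eigh(cov)     # eigenvalues in ascending order
    order = np.argsort(eigval)[::-1][:r]     # indices of the r largest eigenvalues
    lam = np.sqrt(eigval[order])             # hat(lambda)_k: scale of the k-th projection
    V = eigvec[:, order]                     # eigenvectors hat(v)_1, ..., hat(v)_r
    scores = (Xc @ V) * lam                  # k-th component: lambda_k * v_k^T X_i
    return lam, V, scores

# illustrative usage on synthetic data
rng = np.random.default_rng(0)
X = rng.multivariate_normal([0.0, 0.0, 0.0],
                            [[4.0, 1.0, 0.0],
                             [1.0, 2.0, 0.0],
                             [0.0, 0.0, 1.0]], size=500)
lam, V, scores = classical_pca(X, r=2)
```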
Since a small number of principal components can explain most of the variation present in the random sample, it is primarily used for the purpose of dimensionality reduction. PCA provides a simple method of visualizing any high-dimensional data by plotting the first two or three principal components, and subsequently one can identify potential outliers (Locantore et al., 1999). Jolliffe (2002) also provides an application of PCA for variable selection in the regression context. In machine learning and pattern recognition, PCA has been used abundantly for both supervised and unsupervised paradigms (Vathy-Fogarassy and Abonyi, 2013). PCA has also found its applications across many disciplines ranging from multi-sensor data fusion (Lock et al., 2013), signal processing, image compression (Bouwmans et al., 2018), video event detection (Roy et al., 2021) to material and chemical sciences (Smilde et al., 2005). The readers are referred to see Sanguansat (2012) and the references therein for further details on the multitude of applications of PCA.
In the classical PCA, the scale estimator \(S_{n}(y_{1},y_{2},\ldots y_{n})\) is chosen to be the square root of the sample variance. As a result, the eigenvalues and the eigenvectors of the sample covariance matrix of \(\mathbf{X}_{1},\ldots\mathbf{X}_{n}\) become the solution to the aforementioned principal components problem. It is well known that the sample covariance matrix is very sensitive to outliers, hence the principal components resulting from classical PCA also suffer from the presence of outlying observations in the data (Hubert et al., 2005; Candes et al., 2011). In the context of the high dimensional datasets pertaining to the above applications, it is very challenging to locate these outlying observations beforehand in order to discard them. Thus, any practitioner relying solely on the classical PCA to interpret multivariate data may end up with a distorted visualization of the data, false detection of outliers, and a wrong conclusion about the data. Several robustified versions of PCA have been proposed to date to provide reliable estimates of the principal components even under the presence of outlying observations (Jolliffe, 2002). A brief discussion of the existing literature in this area is provided in the following subsection.
### Existing Literature
Most of the early literature to derive a robust principal component analysis (RPCA) followed one of the two primary approaches. The first class of estimators estimated the principal components robustly from the eigenvalues and the eigenvectors of a robust covariance matrix of the sample. Notable among this class of estimators are those due to Maronna (1976) and Campbell (1980), where the authors create affine-equivariant principal component estimates from robust M-estimators of the covariance matrix. Devlin et al. (1981) proposed to use minimum covariance determinant (MCD)
estimator and minimum volume ellipsoid (MVE) estimator (Rousseeuw, 1985) for this purpose due to their high breakdown compared to the M-estimators.
The other approach considered robustifying PCA by using a robust scale function \(S_{n}\) in Eq. (1.1). This idea was first presented by Li and Chen (1985) and was further developed later by Croux and Ruiz-Gazen (1996) where they considered the median absolute deviation about sample median as the scale function. Various theoretical properties like the influence function, asymptotic distribution and the breakdown point of this estimator have also been established in the literature (Croux and Haesbroeck, 2000; Croux and Ruiz-Gazen, 2005). These estimators and their variants primarily restricted their attention to the elliptically symmetric family of distributional models, i.e., the random observations \(\mathbf{X}_{i}\) for \(i=1,2\ldots n\) were assumed to follow a density function of the form
\[f(\mathbf{x})\propto g\left((\mathbf{x}-\mathbf{\mu})^{\intercal}\mathbf{\Sigma}^{-1}(\mathbf{x}- \mathbf{\mu})\right), \tag{1.2}\]
where \(g:\mathbb{R}^{+}\to\mathbb{R}\) is a known function governing the shape of the density. It turns out that under this model, \(\mathbb{E}(\mathbf{X}_{i})=\mathbf{\mu}\) and \(\mathbb{E}\left((\mathbf{X}_{i}-\mathbf{\mu})(\mathbf{X}_{i}-\mathbf{\mu})^{\intercal}\right)= \mathbf{\Sigma}\) (Based on the usual notation for elliptically symmetric family, the variance of \(\mathbf{X}_{i}\) is \(k_{g}\mathbf{\Sigma}\) where \(k_{g}\) is a constant depending on the function \(g\), but we assume that such \(k_{g}\) is included in the dispersion matrix \(\mathbf{\Sigma}\) itself by modifying the function \(g\) appropriately). Even though these statistical RPCA approaches guarantee the highest possible asymptotic breakdown point of \(1/2\), they show low asymptotic efficiency and sometimes large bias even at considerably lower levels of contaminations than their breakdowns (Fishbone and Mili, 2023).
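As a hedged illustration of this second approach (not the cited authors' implementations), the sketch below replaces the scale function in Eq. (1.1) by the median absolute deviation and, for tractability, searches each direction only among candidate unit vectors obtained from the centered observations, a heuristic commonly used in projection-pursuit algorithms; the coordinate-wise median used as the center is a simplification of the robust location estimates discussed later.

```python
import numpy as np

def mad(y):
    # median absolute deviation about the median: a robust scale S_n
    return np.median(np.abs(y - np.median(y)))

def projection_pursuit_pca(X, r, scale=mad):
    """Each eigenvector maximizes the robust scale of the projected (and
    deflated) sample; the search is restricted to candidate directions
    given by the centered observations themselves."""
    Xd = X - np.median(X, axis=0)            # coordinate-wise median as a simple center
    vecs, scales = [], []
    for _ in range(r):
        norms = np.linalg.norm(Xd, axis=1)
        keep = norms > 1e-12
        cand = Xd[keep] / norms[keep][:, None]          # candidate unit directions
        obj = np.array([scale(Xd @ a) for a in cand])   # robust scale of each projection
        v = cand[np.argmax(obj)]
        vecs.append(v)
        scales.append(obj.max())
        Xd = Xd - np.outer(Xd @ v, v)        # deflate: remove the component just found
    return np.array(scales), np.array(vecs)
```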
Recent advances in the area of RPCA view the estimation of the principal components in a different light through the guise of the factor model. Wright et al. (2009) define the RPCA problem as the problem of recovering \(\mathbf{L}\) from the unknown decomposition of the data matrix \(\mathbf{X}=\mathbf{L}+\mathbf{S}\), where \(\mathbf{L}\) is a low rank matrix and \(\mathbf{S}\) is a sparse noise component. The direct solution to this problem would consider the optimization problem
\[\min_{\mathbf{L},\mathbf{S}}\operatorname{Rank}\left(\mathbf{L}\right)+\gamma\|\mathbf{S}\|_{ 0}, \tag{1.3}\]
subject to the restriction that \(\|\mathbf{S}\|_{0}\leq k\) and \(\mathbf{X}=\mathbf{L}+\mathbf{S}\), for a predetermined value of \(k\). Here, \(\|\mathbf{A}\|_{0}\) denotes the \(L_{0}\)-norm of the matrix \(\mathbf{A}\), i.e., the sum of the nonzero entries of \(\mathbf{A}\) and \(\gamma\) is a tuning parameter to control the balance between the rank of \(\mathbf{L}\) and the sparsity of \(\mathbf{S}\). As noted in Candes et al. (2011), the classical PCA seeks the best low-rank component \(\mathbf{L}\) in terms of minimizing the usual Euclidean \(L_{2}\) norm, i.e., it is related to the optimization problem \(\min_{\mathbf{L}}\|\mathbf{X}-\mathbf{L}\|_{2}\) subject to the restriction that \(\operatorname{Rank}\left(\mathbf{L}\right)\leq k\). However, the problem in Eq. (1.3) is notoriously difficult to solve, hence Wright et al. (2009) and Candes et al. (2011) considered the convex optimization problem \(\min_{\mathbf{L},\mathbf{S}}\|\mathbf{L}\|_{*}+\gamma\|\mathbf{S}\|_{1}\) where \(\|\mathbf{L}\|_{*}\) is the nuclear norm of the matrix \(\mathbf{L}\), i.e., the sum of its singular values and \(\|\mathbf{S}\|_{1}\) is the \(L_{1}\) norm of the matrix \(\mathbf{S}\). Various algorithmic techniques like principal component pursuit (PCP) method (Candes et al., 2011), augmented Lagrange multiplier (ALM) method (Lin et al., 2010) and alternating projection (AltProj) algorithm (Cai et al., 2019) have been developed to solve this optimization problem efficiently. This new approach radically differs from the traditional statistical methods: these methods are non-parametric in nature and assume that the data matrix \(\mathbf{X}\) is non-stochastic, rather the only source of randomness comes from the positions of the nonzero entries of the sparse matrix \(\mathbf{S}\). The convergence and correctness guarantees of these methods are then provided based on the bounds on the entries of these matrices \(\mathbf{L}\) and \(\mathbf{S}\) directly. This exact decomposition is often far from practical applications as every entry of the data matrix \(\mathbf{X}\) is subject to measurement errors. To mitigate this, Zhou et al. (2010) considered the decomposition
\[\mathbf{X}=\mathbf{L}+\mathbf{S}+\mathbf{E}, \tag{1.4}\]
where \(\mathbf{E}\) is a dense perturbation matrix (such as a matrix with i.i.d. mean zero and homoscedastic entries). Although such a decomposition is considered, the analysis of the algorithm still assumed \(\mathbf{X}\) to be deterministic and considered \(\|\mathbf{E}\|_{2}\leq\delta\), a prespecified level of noise variance, to maintain a high signal-to-noise ratio.
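To make this algorithmic line concrete, the following sketch (an added illustration using a generic inexact ALM scheme with common default parameter choices, not necessarily the exact PCP, ALM, or AltProj variants cited above) alternates singular value thresholding for \(\mathbf{L}\) and entrywise soft thresholding for \(\mathbf{S}\) to solve \(\min\|\mathbf{L}\|_{*}+\gamma\|\mathbf{S}\|_{1}\) subject to \(\mathbf{X}=\mathbf{L}+\mathbf{S}\).

```python
import numpy as np

def svt(M, tau):
    # singular value thresholding: proximal operator of tau * nuclear norm
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def soft(M, tau):
    # entrywise soft thresholding: proximal operator of tau * L1 norm
    return np.sign(M) * np.maximum(np.abs(M) - tau, 0.0)

def pcp_inexact_alm(X, gamma=None, mu=None, tol=1e-7, max_iter=500):
    """Principal component pursuit by inexact augmented Lagrange multipliers.
    X is assumed to be a real (float) n-by-p matrix; gamma = 1/sqrt(max(n, p))
    is a common default for the sparsity weight."""
    n, p = X.shape
    gamma = 1.0 / np.sqrt(max(n, p)) if gamma is None else gamma
    mu = n * p / (4.0 * np.abs(X).sum()) if mu is None else mu
    L, S, Y = np.zeros_like(X), np.zeros_like(X), np.zeros_like(X)
    for _ in range(max_iter):
        L = svt(X - S + Y / mu, 1.0 / mu)        # low-rank update
        S = soft(X - L + Y / mu, gamma / mu)     # sparse update
        R = X - L - S                            # constraint residual
        Y = Y + mu * R                           # multiplier update
        if np.linalg.norm(R) <= tol * np.linalg.norm(X):
            break
    return L, S
```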
### Connection between RPCA Approaches and Our Contributions
The two existing RPCA approaches, one based on the maximization of the scale function as in Eq. (1.1) and another based on the minimization of the objective function in Eq. (1.3) with matrix decomposition, are usually not equivalent except for the trivial cases of classical PCA. In this paper, we consider a combination of both approaches by taking the decomposition given in Eq. (1.4) but with stochastic modelling of the data matrix \(\mathbf{X}\). We assume that the rows of the data matrix \(\mathbf{X}\), namely \(\mathbf{X}_{1},\ldots\mathbf{X}_{n}\), are i.i.d. observations generated from an elliptically symmetric family of distributions having a density function of the form as in Eq. (1.2). Clearly, the sample observations can be expressed as \(\mathbf{X}_{i}=\mathbf{\mu}+\mathbf{\Sigma}^{1/2}\mathbf{Z}_{i}\), for \(i=1,2,\ldots n\), where \(\mathbf{Z}_{i}\) are i.i.d. random variables with \(\mathbb{E}(\mathbf{Z}_{i})=0\) and \(\mathbb{E}(\mathbf{Z}_{i}\mathbf{Z}_{i}^{\intercal})=\mathbf{I}_{p}\), the identity matrix of order \(p\). The density function of the random variable \(\mathbf{Z}_{i}\) depends on the specific form of the \(g\) function. Then, incorporating the eigendecomposition of \(\mathbf{\Sigma}=\sum_{k=1}^{p}\gamma_{k}\mathbf{v}_{k}\mathbf{v}_{k}^{\intercal}\) (with \(\gamma_{1}\geq\gamma_{2}\geq\cdots\geq\gamma_{p}\)), we can rewrite the data matrix as
\[\mathbf{X}=\mathbf{1}_{n}\mathbf{\mu}^{\intercal}+\sum_{k=1}^{p}\sqrt{\gamma_{k}}\mathbf{Z}\mathbf{v}_{k}\mathbf{v}_{k}^{\intercal}, \tag{1.5}\]
where \(\mathbf{1}_{n}\) denotes the \(n\)-length column vector with all elements equal to \(1\) and \(\mathbf{Z}\) is the \(n\times p\) matrix whose \(i\)-th row is equal to \(\mathbf{Z}_{i}^{\intercal}\). Denoting \(\mathbf{u}_{k}=\mathbf{Z}\mathbf{v}_{k}/\sqrt{n}\) for \(k=1,2,\ldots p\), one can easily see that \(\mathbf{u}_{k}\)s form a set of orthonormal vectors in expectation, i.e., \(\mathbb{E}(\mathbf{u}_{k}^{\intercal}\mathbf{u}_{l})=\delta_{kl}\), the Kronecker delta function. This enables us to rewrite Eq. (1.5) as
\[\mathbf{X}=\mathbf{1}_{n}\mathbf{\mu}^{\intercal}+\sum_{k=1}^{r}\sqrt{n\gamma_{k}}\mathbf{u}_ {k}\mathbf{v}_{k}^{\intercal}+\sum_{k=(r+1)}^{p}\sqrt{n\gamma_{k}}\mathbf{u}_{k}\mathbf{ v}_{k}^{\intercal}, \tag{1.6}\]
for some prespecified rank \(r\). Ignoring the location, the rest of the decomposition is \(\mathbf{X}=\mathbf{L}+\mathbf{E}\) which is a subset of the model given in Eq. (1.4) without any sparse component. In the presence of outlying observations in the data matrix \(\mathbf{X}\), the resulting error matrix \(\mathbf{E}\) will contain occasional spikes which can be separated into the sparse component \(\mathbf{S}\) giving rise to the decomposition in Eq. (1.4). Connecting the low rank matrix \(\mathbf{L}\) in Eq. (1.4) to the sum \(\sum_{k=1}^{r}\sqrt{n\gamma_{k}}\mathbf{u}_{k}\mathbf{v}_{k}^{\intercal}\) in Eq. (1.6), it is now evident that maximizing the scale function of Eq. (1.1) would result in the eigenvectors \(\mathbf{v}_{k}\)s which are the right singular vectors of the \(\mathbf{L}\) matrix. This provides a connection between the two approaches when the rows of the data matrix are i.i.d. observations from an elliptically symmetric family of distributions. Thus, in this paper, we propose a fast, scalable novel robust PCA algorithm based on the popular minimum density power divergence estimation (MDPDE) approach [22] for the aforementioned setup along with a decomposition as in Eq. (1.6). The major contributions of this paper are as follows:
1. We propose a novel robust PCA estimator (to be henceforth called rPCAdpd) based on the popular MDPDE, which allows balancing the robustness and efficiency in estimation by simply tuning a robustness parameter \(\alpha\) and is able to work under a general decomposition model as in Eq. (1.4).
2. We propose a fast, parallelizable iterative algorithm to obtain the rPCAdpd estimate based on alternating regression; this contrasts with the existing robust PCA algorithms which do not scale well due to large matrix inversion steps.
3. We also derive various theoretical properties such as equivariance, \(\sqrt{n}\)-consistency and asymptotic distribution of the proposed rPCAdpd estimator akin to the widely used robust M-estimators. There exists little literature on the theoretical behaviour of the existing PCP methods and often the asymptotic distributions of these estimators are non-Gaussian (Bickel et al., 2018).
4. We also theoretically demonstrate the robustness of the proposed rPCAdpd estimator by demonstrating that its influence function is bounded, and by deriving a lower bound of its asymptotic breakdown point which is independent of the data dimension \(p\) but only a function of the robustness tuning parameter \(\alpha\). This ensures the scalability of the proposed rPCAdpd estimator for arbitrarily high dimensional random samples.
5. We corroborate our theoretical findings with extensive simulations. For all the simulation setups considered, rPCAdpd performs better (and sometimes closely on par) than the existing RPCA algorithms.
6. We also compare the performances of the existing robust PCA algorithms with the rPCAdpd for a few benchmark datasets, and demonstrate how the estimator can be used to detect fraudulent transactions for a credit card transactions dataset.
The rest of the paper is structured as follows: our proposed rPCAdpd estimator is described in detail in Section 2.1 when the model family is elliptically symmetric. In Section 2.2, we derive a computationally efficient iterative technique to obtain the rPCAdpd estimator using the solution to an alternating regression problem. Section 3 describes the necessary theoretical results regarding the convergence of the algorithm, equivariance properties, consistency and asymptotic distribution of the estimator. All of these theoretical results are then corroborated by extensive simulation studies in Section 4, where we compare the performance of the rPCAdpd estimator with several existing robust PCA algorithms. Finally, in Section 5, we demonstrate the practical applicability of the proposed estimator for two popular benchmark datasets (namely the Car dataset and Octane dataset in Hubert et al. (2005)) and a Credit Card Fraud Detection dataset. For streamlining the presentation, the proofs of all of the theoretical results are deferred till the Appendix.
## 2 The rPCAdpd Estimator
Before proceeding with the description of the proposed rPCAdpd estimator, we introduce some notations to be used throughout the paper unless otherwise specified. Let, for a matrix \(\mathbf{A}\), \(\text{Diag}\left(\mathbf{A}\right)\) denote the vector comprising the diagonal elements of \(\mathbf{A}\). The notations \(\mathbf{I}_{n}\) and \(\mathbf{1}_{n}\) denote the \(n\times n\)-size identity matrix and \(n\)-length vector of \(1\)s respectively. The transpose, rank and the trace of a matrix \(\mathbf{A}\) will be denoted as \(\mathbf{A}^{\intercal}\), \(\text{Rank}\left(\mathbf{A}\right)\) and \(\text{Trace}\left(\mathbf{A}\right)\). For any two matrices \(\mathbf{A}\) and \(\mathbf{B}\), their usual matrix product will be denoted as \(\mathbf{A}\mathbf{B}\) and the Kronecker product will be denoted as \(\mathbf{A}\otimes\mathbf{B}\). We shall use the symbol \(\|\mathbf{x}\|_{2}\) and \(\|\mathbf{A}\|_{2}\) to denote the usual Euclidean norm of a vector \(\mathbf{x}\) and the Frobenius norm of the matrix \(\mathbf{A}\) respectively. The notation \(f_{\mathbf{\theta}}(\mathbf{x})\) will denote a generic symbol of the probability density function of a random variable \(\mathbf{X}\) following a distribution parametrized by \(\mathbf{\theta}\) and evaluated at a point \(\mathbf{x}\). The expectation and the covariance operator will be denoted by \(\mathbb{E}(\cdot)\) and \(\text{Var}(\cdot)\) respectively.
### Description of the rPCAdpd Estimator
Let \(\mathbf{X}_{1},\ldots\mathbf{X}_{n}\) be a \(p\)-variate sample such that each of the observations \(\mathbf{X}_{i}\) follows an elliptically symmetric family of distributions with a density function of the form
\[f_{\mathbf{\theta}}(\mathbf{x})=c_{g}^{-1}\det(\mathbf{\Sigma})^{-1/2}\exp\left[g\left(\mathbf{ x}^{\intercal}\sum_{k=1}^{p}\gamma_{k}^{-1}\mathbf{v}_{k}\mathbf{v}_{k}^{\intercal}\mathbf{x} \right)\right], \tag{2.1}\]
where \(\mathbf{\Sigma}=\sum_{k=1}^{p}\gamma_{k}\mathbf{v}_{k}\mathbf{v}_{k}^{\intercal}\) is the eigendecomposition of the dispersion matrix. The parameter \(\mathbf{\theta}=(\gamma_{1},\ldots\gamma_{p},\mathbf{\eta})\) in Eq. (2.1) consists of the eigenvalues \(\gamma_{1},\ldots\gamma_{p}\) of the dispersion matrix \(\mathbf{\Sigma}_{p\times p}\) and the parameter \(\mathbf{\eta}\) parametrizing the eigenvectors \(\mathbf{v}_{1},\ldots\mathbf{v}_{k}\) residing in the Stiefel manifold \(S_{(p-1)}^{p}\), i.e., the space of all \(p\times p\) orthogonal matrices. Here, \(g:\mathbb{R}^{+}\to\mathbb{R}\) is a scalar function that parametrizes the family of distribution and is assumed to be known. For instance, the multivariate Gaussian family of distributions corresponds to \(g(x)=(-x/2)\). Note that, since the principal components primarily deal with the variance structure of the data, the location parameter \(\mathbf{\mu}=\mathbb{E}(\mathbf{X}_{i})\) is a nuisance parameter, hence it is assumed to be a known constant. Without the loss of generality, we take this known location parameter equal to \(\mathbf{0}\), otherwise, one may treat \(\mathbf{Y}_{i}=\mathbf{X}_{i}-\mathbf{\mu}\) as the i.i.d sample under consideration. However, for all practical purposes when it is unknown, one can substitute \(\mathbf{\mu}\) by any consistent robust estimate of the location parameter (some choices will be described later in Section 2.3). We shall show later in Section 3 that the choice of this location estimator does not affect the asymptotic properties of the robust estimator of \(\mathbf{\theta}\) we will propose.
Based on the above formulation, we shall use the popular minimum density power divergence estimator (MDPDE) to estimate these parameters in \(\mathbf{\theta}\). As shown in several studies (Basu et al., 1998; Ghosh and Basu, 2013), the MDPDE is robust and highly efficient in inference and provides a smooth bridge between the efficient yet non-robust maximum likelihood estimator and the robust but less efficient minimum \(L_{2}\) distance estimator. Basu et al. (1998) introduced the density power divergence between two densities \(g\) and \(f\) as
\[d_{\alpha}(h,f)=\int f^{1+\alpha}dx-\left(1+\frac{1}{\alpha}\right)\int f^{ \alpha}hdx+\frac{1}{\alpha}\int h^{1+\alpha}dx,\ \alpha>0 \tag{2.2}\]
which provides a smooth bridge between the Kullback Leibler divergence and the \(L_{2}\) distance between \(h\) and \(f\) via the robustness tuning parameter \(\alpha\). Given the true distribution \(H\) with density \(h\) and a parametric model family of distributions \(\mathcal{F}=\{F_{\theta}:\theta\in\Theta\}\) with corresponding densities \(f_{\theta}\), the MDPD functional \(T(H)\) is defined as the value of the parameter \(\theta\in\Theta\) such that \(d_{\alpha}(h,f_{\theta})\) is minimized. Using the same objective function for MDPDE and substituting the empirical measure of the sample observations instead of the true distribution \(H\), our proposed estimator of robust principal components turns out to be the solution to the optimization problem
\[\widehat{\mathbf{\theta}}=\operatorname*{arg\,min}_{\mathbf{\theta}\in\mathbf{\Theta}} \int f_{\mathbf{\theta}}^{1+\alpha}(\mathbf{x})d\mathbf{x}-\left(1+\frac{1}{\alpha}\right) \frac{1}{n}\sum_{i=1}^{n}f_{\mathbf{\theta}}^{\alpha}(\mathbf{X}_{i}), \tag{2.3}\]
where \(f_{\mathbf{\theta}}(\mathbf{x})\) is as given in Eq. (2.1) and the parameter space \(\mathbf{\Theta}=(\mathbb{R}^{+})^{p}\times S\) where \(S\) is the parameter space for \(\mathbf{\eta}\). Combining Eq. (2.1) and Eq. (2.3), we can recover MDPDE as
\[\widehat{\mathbf{\theta}}=\operatorname*{arg\,min}_{\mathbf{\theta}\in\mathbf{\Theta}}c_{g}^{-\alpha}\prod_{k=1}^{p}\gamma_{k}^{-\alpha/2}\left[\frac{c_{(1+\alpha)g}}{c_{g}}-\left(1+\frac{1}{\alpha}\right)\frac{1}{n}\sum_{i=1}^{n}e^{\alpha g\left(\mathbf{X}_{i}^{\intercal}\sum_{k=1}^{p}\gamma_{k}^{-1}\mathbf{v}_{k}(\mathbf{\eta})\mathbf{v}_{k}^{\intercal}(\mathbf{\eta})\mathbf{X}_{i}\right)}\right]. \tag{2.4}\]
We refer to this as the rPCAdpd estimator of the principal components under the general elliptically symmetric family of distributions. This estimator assumes the description of the model family
through the specification of the completely known function \(g(\cdot)\). In particular, when \(g(x)=(-x/2)\), i.e., the model family is a \(p\)-variate Gaussian distribution, then the corresponding optimization problem in Eq. (2.4) becomes
\[\widehat{\mathbf{\theta}}=\operatorname*{arg\,min}_{\mathbf{\theta}\in \mathbf{\Theta}}(2\pi)^{-\alpha p/2}\prod_{k=1}^{p}\gamma_{k}^{-\alpha/2}\left[(1+ \alpha)^{-p/2}-\right.\\ \left.\left(1+\frac{1}{\alpha}\right)\frac{1}{n}\sum_{i=1}^{n}e^{ -\frac{\alpha}{2}\left(\mathbf{X}_{i}^{\intercal}\sum_{k=1}^{p}\gamma_{k}^{-1} \mathbf{v}_{k}(\mathbf{\eta})\mathbf{v}_{k}^{\intercal}(\mathbf{\eta})\mathbf{X}_{i}\right)} \right]. \tag{2.5}\]
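As a concrete illustration, the sketch below evaluates the objective being minimized in Eq. (2.5); the function name `gaussian_dpd_objective` and its arguments are hypothetical and introduced only for illustration.

```python
# A minimal sketch of the DPD objective of Eq. (2.5) for the Gaussian model,
# evaluated at a candidate theta = (gamma, eta) with V = V(eta) and mu = 0.
import numpy as np

def gaussian_dpd_objective(X, gamma, V, alpha):
    """Value of the Eq. (2.5) objective at eigenvalues gamma and eigenvectors V."""
    n, p = X.shape
    Sigma_inv = (V / gamma) @ V.T                     # sum_k gamma_k^{-1} v_k v_k^T
    quad = np.einsum("ij,jk,ik->i", X, Sigma_inv, X)  # X_i^T Sigma^{-1} X_i
    lead = (2 * np.pi) ** (-alpha * p / 2) * np.prod(gamma) ** (-alpha / 2)
    bracket = (1 + alpha) ** (-p / 2) - (1 + 1 / alpha) * np.mean(np.exp(-alpha * quad / 2))
    return lead * bracket
```

Minimizing this jointly over the eigenvalues and a parametrization of \(V\) is exactly the problem whose Stiefel-manifold part the next subsection avoids through an alternating scheme.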
### Algorithm for Efficient Computation of the rPCAdpd Estimator
Clearly, if the minimization given in Eq. (2.4) were to be performed on the entries of the dispersion matrix to obtain a robust estimate of the covariance directly, it would be difficult to restrict the optimization space to the space of all positive definite matrices. Thus, the optimization is deliberately made with respect to the eigenvectors and the eigenvalues of the dispersion matrix to ensure that the estimated dispersion matrix remains positive definite and symmetric. While it is easy to optimize the objective function in Eq. (2.4) with respect to the eigenvalues, it remains computationally expensive to solve for the eigenvectors, since one has to perform an optimization over the non-convex Stiefel manifold \(S_{p-1}^{p}\). Although there exist some efficient optimization algorithms on the Riemannian manifold as proposed by Wen and Yin (2013); Jiang and Dai (2015); Li et al. (2020), these general-purpose optimization techniques require complicated iteration steps via Cayley transformations and curvilinear searches. To circumvent this direct optimization, we apply a procedure similar to the alternating regression approach of the rSVDdpd algorithm by Roy et al. (2021).
We start by assuming that the unknown location parameter \(\mathbf{\mu}\) is already estimated using a robust consistent estimator of the location. For our purpose, we use the \(L_{1}\)-median as the location estimator; however, in Section 2.3, we shall describe some alternative choices that may be used. In the decomposition of Eq. (1.4), we assume that the elements of the error matrix \(\mathbf{E}\) are independent and identically distributed. For instance, when the model densities \(f_{\mathbf{\theta}}\) follow a multivariate Gaussian distribution (or multivariate \(t\)-distribution), the entries of \(\mathbf{E}\) follow approximately a univariate Gaussian distribution (or a univariate \(t\)-distribution), respectively. The sparse matrix \(\mathbf{S}\) has a few nonzero entries, which may be regarded as outlying observations in the original data matrix \(\mathbf{X}\) at the corresponding places. This is a classic setup for robust statistical inference, hence the MDPDE approach can be directly used to tackle this estimation problem. For ease of explanation, in the following text, we develop the proposed algorithm assuming the particular case of the Gaussian model as in Eq. (2.5). However, the same algorithm can be modified to fit any choice of \(g(\cdot)\) in Eq. (2.4) using the corresponding univariate distribution.
To estimate the principal components robustly, we perform a robust singular value decomposition of the centred data matrix using an iterative algorithm rSVDdpd (Roy et al., 2021). To illustrate the approach, we rewrite the decomposition model of Eq. (1.4) as
\[X_{ij}=\mu_{j}+\sum_{k=1}^{r}u_{ki}\beta_{kj}+\epsilon_{ij}=\mu_{j}+\sum_{k=1 }^{r}\alpha_{ki}v_{kj}+\epsilon_{ij},\ i=1,\ldots n;j=1,\ldots p, \tag{2.6}\]
where \(\beta_{kj}=\lambda_{k}v_{kj}\), \(\alpha_{ki}=\lambda_{k}u_{ki}\), \(u_{ki}\) is the \(i\)-th coordinate of \(\mathbf{u}_{k}\) and \(v_{kj}\) is the \(j\)-th coordinate of \(\mathbf{v}_{k}\). For a fixed choice of \(j\) and known values of \(r\) and the \(\mathbf{u}_{k}\)s (for \(k=1,\ldots r\)), Eq. (2.6) simply denotes a linear regression problem with intercept \(\mu_{j}\) and \(r\) slope coefficients \(\beta_{1j},\ldots\beta_{rj}\). Let \(\widehat{\mu}_{j}\)
be the robust consistent estimator of \(\mu_{j}\). Also, let \((\widehat{u}_{ki}^{(t)},\widehat{v}_{kj}^{(t)},\widehat{\lambda}_{k}^{(t)},( \widehat{\sigma}^{2})^{(t)})\) be the estimates at the \(t\)-th iteration of the algorithm and \(\widehat{\beta}_{kj}^{(t)}\) and \(\widehat{\alpha}_{ki}^{(t)}\) be defined accordingly. The iteration rule for the rSVDdpd algorithm is then defined by the system of equations
\[\begin{split}\left(\widehat{\beta}_{1j}^{(t+1)},\ldots\widehat{ \beta}_{rj}^{(t+1)}\right)&=\operatorname*{arg\,min}_{\beta_{1j },\ldots\beta_{rj}}\frac{1}{n}\sum_{i=1}^{n}V\left(Z_{ij};\widehat{u}_{ki}^{(t) },\beta_{kj},(\widehat{\sigma}^{2})^{(t)}\right),\\ \left(\widehat{\alpha}_{1i}^{(t+1)},\ldots\widehat{\alpha}_{ri}^ {(t+1)}\right)&=\operatorname*{arg\,min}_{\alpha_{1i},\ldots \alpha_{ri}}\frac{1}{p}\sum_{j=1}^{p}V\left(Z_{ij};\alpha_{ki},\tilde{v}_{kj}^ {(t+1)},(\widehat{\sigma}^{2})^{(t)}\right),\\ (\widehat{\sigma}^{2})^{(t+1)}&=\operatorname*{arg \,min}_{\sigma^{2}}\frac{1}{np}\sum_{i=1}^{n}\sum_{j=1}^{p}V\left(Z_{ij}; \widehat{\alpha}_{ki}^{(t)},\tilde{v}_{kj}^{(t+1)},\sigma^{2}\right).\end{split} \tag{2.7}\]
where
\[V(y;c,d,\sigma^{2})=\frac{1}{(2\pi)^{\alpha/2}\sigma^{\alpha}}\left[\frac{1}{ \sqrt{1+\alpha}}-\left(\frac{1+\alpha}{\alpha}\right)\exp\left\{-\alpha\frac{ (y-cd)^{2}}{2\sigma^{2}}\right\}\right],\]
with \(\alpha\) being the robustness tuning parameter as in Eq. (2.4). In between these steps, the vectors \((\widehat{\alpha}_{k1}^{(t)},\ldots\widehat{\alpha}_{kn}^{(t)})^{\intercal}\) and \((\widehat{\beta}_{k1}^{(t)},\ldots\widehat{\beta}_{kp}^{(t)})^{\intercal}\) are normalized accordingly to produce unit vectors \(\widehat{\boldsymbol{u}}_{k}^{(t)}=(\widehat{u}_{k1}^{(t)},\ldots,\widehat{u}_{kn}^{(t)})^{\intercal}\) and \(\widehat{\boldsymbol{v}}_{k}^{(t)}=(\widehat{v}_{k1}^{(t)},\ldots,\widehat{v}_{kp}^{(t)})^{\intercal}\), and the norm of the \(\beta\)-vector is regarded as the estimate \(\widehat{\lambda}_{k}^{(t)}\), the \(k\)-th singular value at the \(t\)-th step of the iteration.
We repeat these alternating steps until convergence. Using the converged estimates from the aforementioned rSVDdpd procedure, the unit vector \(\widehat{\boldsymbol{v}}_{k}^{(\infty)}\) and the quantity \((\widehat{\lambda}_{k}^{(\infty)})^{2}/n\) are returned as the \(k\)-th eigenvector and \(k\)-th eigenvalue corresponding to the principal components of the i.i.d. sample \(\boldsymbol{X}_{1},\ldots\boldsymbol{X}_{n}\), respectively. We shall call this entire procedure the robust principal component analysis using the density power divergence (rPCAdpd) algorithm.
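The following compressed sketch mirrors the alternating updates of Eq. (2.7) on the centred data matrix. It uses `scipy.optimize.minimize` as a generic stand-in for the actual rSVDdpd updates of Roy et al. (2021); the rank `r`, the helper `V_loss` and the plain variance update for \(\sigma^{2}\) are simplifications introduced here for illustration only.

```python
# A compressed sketch of the alternating updates in Eq. (2.7) on the centred
# matrix Z. scipy.optimize.minimize is used as a generic stand-in for the
# actual rSVDdpd updates of Roy et al. (2021); the plain variance update for
# sigma^2 and the warm starts are simplifications introduced for illustration.
import numpy as np
from scipy.optimize import minimize

def V_loss(resid, sigma2, alpha):
    """Per-cell loss V(.) of the text, with y - c*d given as a precomputed residual."""
    const = 1.0 / ((2 * np.pi) ** (alpha / 2) * sigma2 ** (alpha / 2))
    return const * (1 / np.sqrt(1 + alpha)
                    - ((1 + alpha) / alpha) * np.exp(-alpha * resid ** 2 / (2 * sigma2)))

def rsvd_dpd_sketch(Z, r, alpha, n_iter=20):
    """Alternating robust rank-r SVD of the centred (n x p) matrix Z."""
    n, p = Z.shape
    U0, s, Vt = np.linalg.svd(Z, full_matrices=False)    # classical SVD as a starting point
    U = U0[:, :r]                                        # unit vectors u_k of Eq. (2.6)
    B = Vt[:r, :].T * s[:r]                              # beta_kj = lambda_k v_kj
    sigma2 = np.var(Z - U @ B.T)
    for _ in range(n_iter):
        for j in range(p):                               # column regressions of Eq. (2.7)
            obj = lambda b: np.mean(V_loss(Z[:, j] - U @ b, sigma2, alpha))
            B[j] = minimize(obj, B[j], method="Nelder-Mead").x
        lam = np.linalg.norm(B, axis=0)                  # singular values at this step
        Vhat = B / lam                                   # normalized right vectors v_k
        A = U * lam                                      # alpha_ki = lambda_k u_ki
        for i in range(n):                               # row regressions of Eq. (2.7)
            obj = lambda a: np.mean(V_loss(Z[i, :] - Vhat @ a, sigma2, alpha))
            A[i] = minimize(obj, A[i], method="Nelder-Mead").x
        lam = np.linalg.norm(A, axis=0)
        U = A / lam                                      # normalized left vectors u_k
        B = Vhat * lam                                   # keep beta consistent for the next pass
        sigma2 = np.var(Z - U @ B.T)                     # simplified scale update
    return Vhat, lam ** 2 / n                            # eigenvector and eigenvalue estimates
```

The returned unit vectors and the quantities \(\widehat{\lambda}_{k}^{2}/n\) then play the roles of the eigenvectors and eigenvalues, as described above.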
### Choice of the Robust Location Estimator
There are several choices for the robust estimators of the location for the rPCAdpd algorithm. We shall discuss only a few of these estimators which are quick and simple since the primary focus is to estimate the principal components. As we will show later in Section 3, the asymptotic properties of the estimated principal components are free of the choice of this location estimator, as long as the location estimator is robust and asymptotically consistent.
Naturally, we may want to use the MDPDE (Basu et al., 1998) for a normal location model family, extended to a multivariate setup. However, estimating the location parameter in this way would force us to estimate the unknown dispersion matrix \(\boldsymbol{\Sigma}\) as well, which is already taken care of using the rPCAdpd algorithm. Also, as will be discussed later in Section 3.3, this multivariate MDPDE does not satisfy the desirable orthogonal equivariance property, and in particular, the permutation equivariance property. So instead, we can resort to a coordinatewise MDPDE under the normal location model family. In this case, the coordinates of the estimated location vector satisfy
\[\widehat{\mu}_{j}=\operatorname*{arg\,min}_{\mu}\min_{\sigma}\frac{1}{(2\pi)^ {\alpha/2}\sigma^{\alpha}}\left[\frac{1}{\sqrt{1+\alpha}}-\left(\frac{1+ \alpha}{\alpha}\right)\frac{1}{n}\sum_{i=1}^{n}\exp\left\{-\alpha\frac{(X_{ij} -\mu)^{2}}{2\sigma^{2}}\right\}\right],\ j=1,\ldots p,\]
where \(\alpha\) is the robustness parameter lying between \(0\) and \(1\), and \(X_{ij}\) is the \(j\)-th coordinate of \(\boldsymbol{X}_{i}\). This coordinatewise MDPDE still retains its robustness properties while being permutation and scale equivariant, but it does not satisfy orthogonal equivariance for general orthogonal matrices.
Alternative choices of a robust and consistent estimator of the location parameter would include the \(L_{1}\) median (Vardi and Zhang, 2000), coordinatewise median or any \(M\)-estimator for location (Huber, 1964). The \(L_{1}\) median possesses the desirable orthogonal equivariance property. Based on extensive simulation studies, we have found that \(L_{1}\) median fits our purpose and provides a desirable balance between speed (computational advantage) and accuracy (robustness and efficiency), and hence it is chosen to be used as a robust location estimator during the rPCAdpd algorithm for all our subsequent studies.
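For completeness, a minimal sketch of the \(L_{1}\)-median (spatial median) via Weiszfeld-type iterations is given below; `weiszfeld_l1_median` is an illustrative name and a simplified routine, not the algorithm of Vardi and Zhang (2000), although it targets the same fixed point.

```python
# A minimal sketch of the L1-median used as the robust location estimator.
import numpy as np

def weiszfeld_l1_median(X, n_iter=200, tol=1e-8):
    """Approximately minimize sum_i ||X_i - mu||_2 over mu for an (n x p) matrix X."""
    mu = np.median(X, axis=0)                     # coordinatewise median as a start
    for _ in range(n_iter):
        d = np.linalg.norm(X - mu, axis=1)
        d = np.maximum(d, 1e-12)                  # guard against division by zero
        w = 1.0 / d
        mu_new = (w[:, None] * X).sum(axis=0) / w.sum()
        if np.linalg.norm(mu_new - mu) < tol:
            return mu_new
        mu = mu_new
    return mu
```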
### Choices of Hyperparameters
The two hyperparameters associated with the rPCAdpd estimator are the rank of the \(\mathbf{L}\) matrix, i.e., the number of significant eigenvalues or the number of principal components to output, and the robustness parameter \(\alpha\) in the objective function (2.3).
To determine the rank of the matrix \(\mathbf{L}\), we robustly estimate all the \(\min(n,p)\) eigenvalues and the corresponding eigenvectors using the rPCAdpd algorithm. Subsequently, we select a rank \(r\leq\min(n,p)\), ensuring that the first \(r\) eigenvalues and corresponding eigenvectors can account for a proportion of variation of at least \((1-\delta)\). Common choices for \(\delta\) are typically \(0.1\) or \(0.25\). Thus, the rank of the matrix \(\mathbf{L}\) is estimated as
\[\widehat{r}=\min\left\{1\leq r\leq\min(n,p):\frac{\sum_{k=1}^{r}\widehat{ \gamma}_{k}^{(\alpha)}}{\sum_{k=1}^{\min(n,p)}\widehat{\gamma}_{k}^{(\alpha)} }>(1-\delta)\right\},\]
where \(\widehat{\gamma}_{k}^{(\alpha)}\) is the \(k\)-th eigenvalue as estimated by rPCAdpd method with robustness parameter \(\alpha\). Similar criteria have been used to determine the number of significant principal components by many authors (He et al., 2012; Xu et al., 2012).
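A one-line implementation of this rank rule might look as follows; `gamma_hat` is assumed to hold the rPCAdpd eigenvalue estimates in decreasing order and `delta` the tolerated unexplained proportion of variation.

```python
# A minimal sketch of the rank-selection rule for the matrix L.
import numpy as np

def select_rank(gamma_hat, delta=0.1):
    ratio = np.cumsum(gamma_hat) / np.sum(gamma_hat)
    return int(np.argmax(ratio > 1 - delta)) + 1   # smallest r with explained proportion > 1 - delta
```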
Applying the general result pertaining to the asymptotic breakdown of the MDPDE as in Roy et al. (2023), the asymptotic breakdown of the rPCAdpd estimator turns out to be at least \(\alpha/(1+\alpha)\). We discuss this in detail later in Section 3.6. Clearly, as \(\alpha\) increases to \(1\), one approaches the highest possible breakdown point of \(1/2\), at the cost of some efficiency in estimation. On the other hand, the efficiency is highest when \(\alpha\to 0\), but in that case the breakdown becomes unacceptably low for the rPCAdpd algorithm to be of any use as a robust PCA estimator. Therefore, there must be a balance between robustness and efficiency with an adaptive optimal choice of \(\alpha\in[0,1]\). Since we use the rSVDdpd procedure to obtain the estimates of the singular values from which we obtain the robust estimates of the principal components, we follow the same criterion as introduced by Roy et al. (2021). The authors consider that the optimal choice of the robustness parameter is the minimizer of a conditional MSE criterion
\[(n\,+\,p)(\widehat{\sigma}^{(\alpha)})^{2}\left(1+\frac{\alpha^{2}}{1+2\alpha }\right)^{3/2}\,+\,\frac{1}{r}\sum_{k=1}^{r}\|\widehat{\lambda}_{k}^{(\alpha) }\widehat{\mathbf{a}}_{k}^{(\alpha)}\,-\,\widehat{\lambda}_{k}^{(1)}\widehat{\bm {a}}_{k}^{(1)}\|_{2}^{2}\,+\,\frac{1}{r}\sum_{k=1}^{r}\|\widehat{\lambda}_{k}^ {(\alpha)}\widehat{\mathbf{b}}_{k}^{(\alpha)}\,-\,\widehat{\lambda}_{k}^{(1)} \widehat{\mathbf{b}}_{k}^{(1)}\|_{2}^{2},\]
where \(\widehat{\lambda}_{k}^{(\alpha)},\widehat{\mathbf{a}}_{k}^{(\alpha)},\widehat{ \mathbf{b}}_{k}^{(\alpha)}\) are the estimates of \(k\)-th singular value and vectors as obtained by the rSVDdpd procedure with robustness parameter \(\alpha\).
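In code, this tuning-parameter selection amounts to evaluating the criterion on a grid of \(\alpha\) values and keeping the minimizer; in the sketch below `fit_rpcadpd` is a hypothetical wrapper assumed to return \((\widehat{\sigma}^{(\alpha)},\widehat{\lambda}^{(\alpha)},\widehat{\mathbf{a}}^{(\alpha)},\widehat{\mathbf{b}}^{(\alpha)})\) for a given \(\alpha\); it is not part of the paper.

```python
# A sketch of the conditional-MSE criterion for choosing the robustness parameter.
import numpy as np

def select_alpha(X, r, fit_rpcadpd, alpha_grid=np.linspace(0.1, 1.0, 10)):
    n, p = X.shape
    sig1, lam1, A1, B1 = fit_rpcadpd(X, r, alpha=1.0)        # reference fit at alpha = 1
    scores = []
    for a in alpha_grid:
        sig, lam, A, B = fit_rpcadpd(X, r, alpha=a)          # columns of A, B are a_k, b_k
        term1 = (n + p) * sig ** 2 * (1 + a ** 2 / (1 + 2 * a)) ** 1.5
        term2 = np.mean(np.sum((A * lam - A1 * lam1) ** 2, axis=0))
        term3 = np.mean(np.sum((B * lam - B1 * lam1) ** 2, axis=0))
        scores.append(term1 + term2 + term3)
    return alpha_grid[int(np.argmin(scores))]
```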
## 3 Theoretical Properties
In this section, we explore various theoretical properties of the rPCAdpd estimator. First, we show the existence and the uniqueness of the estimator and that the proposed iterative algorithm
converges to the estimator for any finite \(n\). Next, we prove various equivariance properties, and asymptotic consistency, following which we derive the asymptotic distribution of the robust eigenvalues and eigenvectors estimated by the rPCAdpd estimator. Finally, we derive the influence function and asymptotic breakdown point of the estimator to demonstrate its robustness properties. All of these theoretical results hold for any location estimator that is robust, asymptotically consistent and equivariant under the orthogonal transformation (like \(L_{1}\)-median), used in the rPCAdpd algorithm.
### Existence of the Estimator
We start by writing the objective function in Eq. (2.4) as a function of the individual term of the parameter vector \(\mathbf{\theta}\) as
\[Q(\gamma_{1},\ldots,\gamma_{p},\mathbf{\eta})=\prod_{k=1}^{p}\gamma_{ k}^{-\alpha/2}\left[\frac{c_{(1+\alpha)g}}{c_{g}}\right.\\ \left.-\left(1+\frac{1}{\alpha}\right)\frac{1}{n}\sum_{i=1}^{n} \exp\left\{\alpha g\left((\mathbf{X}_{i}-\widehat{\mathbf{\mu}})^{\intercal}\sum_{k=1} ^{p}\gamma_{k}^{-1}\mathbf{v}_{k}(\mathbf{\eta})\mathbf{v}_{k}(\mathbf{\eta})^{\intercal}(\mathbf{ X}_{i}-\widehat{\mathbf{\mu}})\right)\right\}\right]. \tag{3.1}\]
where \(\widehat{\mathbf{\mu}}\) is a robust consistent estimate of the location including those described in Section 2.3. The following result establishes the existence of the rPCAdpd estimator.
**Theorem 3.1**.: If the generating function \(g:[0,\infty)\to\mathbb{R}\) of the elliptically symmetric family of distributions is a decreasing continuous function, then for a sufficiently large number of sample observations \(n\), there exists a minimum of the objective function \(Q(\cdot)\) given in Eq. (3.1) with probability tending to \(1\).
For instance, when the model family is \(p\)-variate \(t\)-distribution with \(\nu\) degrees of freedom, then \(g(x)\) turns out to be \(-\frac{\nu+p}{2}\log(1+x/\nu)\), which is a decreasing continuous function, hence the rPCAdpd estimator exists for the multivariate \(t\)-distribution family.
### Convergence of the Algorithm
Once the existence of the rPCAdpd estimator is established, the convergence of the algorithm follows directly from the convergence of the rSVDdpd procedure as presented in Roy et al. (2021). Observe that the iterations in Eq. (2.7) monotonically decrease the value of the objective function \(Q(\gamma_{1},\ldots,\gamma_{p},\mathbf{\eta})\), which is also continuous in its arguments. Since Theorem 3.1 asserts the existence of a minimizer, the sequence \(Q(\widehat{\gamma}_{1}^{(t)},\ldots,\widehat{\gamma}_{p}^{(t)},\widehat{\mathbf{\eta}}^{(t)})\) (where \(\widehat{\gamma}_{1}^{(t)}\) and \(\widehat{\mathbf{\eta}}^{(t)}\) denote the estimated parameters at the \(t\)-th iteration) is bounded below. Then an application of the monotone convergence theorem combined with the uniqueness of the rSVDdpd estimator asserts the convergence of the rPCAdpd estimator.
### Orthogonal Equivariance
As mentioned in Rousseeuw (1985), orthogonal equivariance is one of the fundamental properties that an estimator of the principal components should possess. Let \(\mathbf{Y}_{1},\ldots\mathbf{Y}_{n}\) be a transformed sample \(\mathbf{Y}_{i}=a\mathbf{P}\mathbf{X}_{i}+\mathbf{b}\) for \(i=1,2,\ldots n\), where \(\mathbf{P}_{p\times p}\) is an orthogonal matrix, \(a\in(0,\infty)\) and \(\mathbf{b}\) is a \(p\)-length vector. Then, an orthogonally equivariant estimator \(T_{\lambda}(\mathbf{X}_{1},\ldots\mathbf{X}_{n})\) of an eigenvalue should satisfy \(T_{\lambda}(\mathbf{Y}_{1},\ldots\mathbf{Y}_{n})=a^{2}T_{\lambda}(\mathbf{X}_{1},\ldots\mathbf{X}_{n})\). Similarly, for an orthogonally equivariant
estimate \(T_{\mathbf{v}}(\mathbf{X}_{1},\ldots\mathbf{X}_{n})\) of the corresponding eigenvector, it satisfies \(T_{\mathbf{v}}(\mathbf{Y}_{1},\ldots\mathbf{Y}_{n})=\mathbf{PT}_{\mathbf{v}}(\mathbf{X}_{1},\ldots\mathbf{X}_{n})\). For any orthogonally equivariant estimate of the principal components, both conditions should hold for all eigenvalues and their corresponding eigenvectors.
Since our primary focus is on the principal components, we will assume that the robust estimator of the location parameter is orthogonally equivariant. The choice of \(L_{1}\)-median as a robust estimator of location satisfies this property. Given the orthogonal equivariance property of the location estimator, it follows that the resulting rPCAdpd estimator also satisfies the same.
**Theorem 3.2**.: The rPCAdpd estimators of the eigenvalues and eigenvectors are equivariant under the transformation
\[\mathbf{Y}_{i}=a\mathbf{P}\mathbf{X}_{i}+\mathbf{b},\ i=1,2,\ldots n, \tag{3.2}\]
where \(\mathbf{P}_{p\times p}\) is an orthogonal matrix, \(a\in(0,\infty)\) and \(\mathbf{b}\) is a \(p\)-length vector, provided that the location estimator used in the rPCAdpd procedure also satisfies the same equivariance property.
**Corollary 3.1**.: As in the case of the rSVDdpd estimator discussed in Roy et al. (2021), the rPCAdpd estimator also satisfies scale and permutation equivariance. This follows from the observation that both are special cases of the transformation mentioned in Eq. (3.2). In particular, with \(\mathbf{P}=\mathbf{I}_{p}\), we get scale equivariance. If \(a=1\) and \(\mathbf{P}\) is a permutation matrix, then permutation equivariance follows.
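To see concretely what Theorem 3.2 asserts, the check below applies the transformation of Eq. (3.2) and verifies the equivariance numerically, using the classical eigendecomposition (the \(\alpha\to 0\) limit of rPCAdpd) purely as a stand-in for the robust estimator; the data, the matrix `P` and the helper `eig_pca` are hypothetical.

```python
# A small numerical illustration of the equivariance asserted in Theorem 3.2,
# using the classical eigendecomposition (the alpha -> 0 limit) as a stand-in
# for the rPCAdpd estimator; all data and names here are hypothetical.
import numpy as np
from scipy.stats import ortho_group

rng = np.random.default_rng(1)
n, p, a = 200, 5, 3.0
X = rng.standard_normal((n, p)) @ np.diag([3.0, 2.0, 1.5, 1.0, 0.5])
P = ortho_group.rvs(p, random_state=rng)
b = rng.standard_normal(p)
Y = a * X @ P.T + b                                  # rows satisfy Y_i = a P X_i + b

def eig_pca(Z):
    vals, vecs = np.linalg.eigh(np.cov(Z, rowvar=False))
    return vals[::-1], vecs[:, ::-1]                 # decreasing eigenvalues

vals_X, vecs_X = eig_pca(X)
vals_Y, vecs_Y = eig_pca(Y)
print(np.allclose(vals_Y, a ** 2 * vals_X))             # eigenvalues scale by a^2
print(np.allclose(np.abs(P @ vecs_X), np.abs(vecs_Y)))  # eigenvectors rotate by P (up to sign)
```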
### Consistency and Asymptotic Distribution
One of the integral components of the proposed rPCAdpd estimator is the MDPDE. As shown in Basu et al. (1998), the MDPDE, being an M-estimator and a minimum distance estimator, enjoys a vast set of nice asymptotic properties including consistency and asymptotic normality. In this subsection, we will investigate how these properties carry over to the special scenario of principal component estimation under elliptically symmetric models. Thus, throughout this entire subsection, unless otherwise specified, we will consider the setup that the sample observations \(\mathbf{X}_{1},\ldots\mathbf{X}_{n}\) are i.i.d. random variables from a \(p\)-variate elliptically symmetric distribution with unknown mean \(\mathbf{\mu}^{*}\) and unknown dispersion matrix \(\mathbf{\Sigma}^{*}\), having density function
\[f_{\mathbf{\theta}^{*}}(\mathbf{x})=c_{g}^{-1}\text{det}(\mathbf{\Sigma}^{*})^{-1/2}e^{g \left((\mathbf{x}-\mathbf{\mu}^{*})^{\intercal}(\mathbf{\Sigma}^{*})^{-1}(\mathbf{x}-\mathbf{\mu} ^{*})\right)},\ \mathbf{x}\in\mathbb{R}^{p}, \tag{3.3}\]
where \(g\) is the characterizing function of the elliptically symmetric family of distributions. The covariance matrix \(\mathbf{\Sigma}^{*}\) is assumed to have an eigendecomposition \(\mathbf{\Sigma}^{*}=\sum_{k=1}^{p}\gamma_{k}^{*}\mathbf{v}_{k}^{*}(\mathbf{v}_{k}^{*})^{\intercal}\) where \(\gamma_{k}^{*}\geq 0\) are eigenvalues and \(\mathbf{v}_{k}^{*}\)s are the corresponding eigenvectors of the covariance matrix. We wish to estimate the parameter of interest \(\mathbf{\theta}^{*}=(\gamma_{1}^{*},\ldots\gamma_{p}^{*},\mathbf{\eta}^{*})\), comprising the eigenvalues \(\gamma_{1}^{*},\ldots\gamma_{p}^{*}\) and the natural parameter \(\mathbf{\eta}^{*}\) parametrizing the eigenvectors in the Stiefel manifold \(S_{(p-1)}^{p}\). The location parameter \(\mathbf{\mu}^{*}\) is a nuisance parameter in this setup.
Following the footsteps of Basu et al. (1998), we consider the following quantities
\[\mathbf{\xi}_{\mathbf{\theta}}=\int u_{\mathbf{\theta}}(\mathbf{x})f_{\mathbf{\theta}}^{(1+\alpha)} dx,\ \mathbf{J}_{\mathbf{\theta}}=\int u_{\mathbf{\theta}}(\mathbf{x})u_{\mathbf{\theta}}(\mathbf{x})^{ \intercal}f_{\mathbf{\theta}}^{(1+\alpha)}dx,\ \mathbf{K}_{\mathbf{\theta}}=\int u_{\mathbf{\theta}}(\mathbf{x})u_{\mathbf{\theta}}(\mathbf{x})^{ \intercal}f_{\mathbf{\theta}}^{(1+2\alpha)}dx-\mathbf{\xi}_{\mathbf{\theta}}\mathbf{\xi}_{\bm {\theta}}^{\intercal},\]
which are essential for obtaining different asymptotic properties of the MDPDE. Here, \(f_{\mathbf{\theta}}(\mathbf{x})\) denotes the same family of distributions as in Eq. (3.3) at parameter \(\mathbf{\theta}\) and corresponding score function is denoted by \(u_{\mathbf{\theta}}(\mathbf{x})=\frac{\partial}{\partial\mathbf{\theta}}\log(f_{\mathbf{\theta} }(\mathbf{x}))\). To calculate all of these quantities, we will resort to the following assumptions.
1. The generating function \(g(\cdot)\) for the elliptically symmetric family of distributions is thrice differentiable and the third order derivative is continuous.
2. The true eigenvalues \(\gamma_{1}^{*},\ldots,\gamma_{p}^{*}\) are distinct.
3. The functions \(s^{2}g^{\prime}(s)e^{g(s)},s^{4}(g^{\prime}(s))^{2}e^{g(s)},\)\(s^{4}g^{\prime\prime}(s)e^{g(s)}\) and \(s^{4}g^{\prime\prime\prime}(s)e^{g(s)}\) are uniformly bounded above by some constant \(M^{*}\) for any \(s\geq 0\), where \(g^{\prime}(s),g^{\prime\prime}(s)\) and \(g^{\prime\prime\prime}(s)\) denotes the first, second and third order derivatives of \(g\).
Assumptions 1 and 3 are similar in spirit to the assumptions 1 and 2 of Ghosh and Basu (2013), which in turn imply the assumptions 1-3 of Basu et al. (1998). One of the standard regularity conditions for such asymptotic results is the exchangeability of the differentiation and integral signs, i.e., the integral \(\int f_{\mathbf{\theta}}^{(1+\alpha)}(\mathbf{z})d\mathbf{z}\) should be differentiable with respect to \(\mathbf{\theta}\) for any \(\alpha\in[0,1]\) and the derivative can be taken under the integral sign. However, this fact follows as a consequence of assumption 1 for the elliptically symmetric family of distributions. Assumption 2 makes the calculation simpler, but it is not strictly necessary to establish the asymptotic properties of the proposed estimator. However, it is also known that the set of random matrices with i.i.d. entries with a repeated eigenvalue is negligible (Tao, 2012). Kumar and Ahmed (2017) verify similar conclusions for a broader range of distribution of random matrices using numerical simulations. Thus, assumption 2 holds for almost all positive definite matrices \(\mathbf{\Sigma}^{*}\).
We begin with two generic lemmas describing the quantities \(\mathbf{\xi}_{\mathbf{\theta}}\) and \(\mathbf{J}_{\mathbf{\theta}}\) as functions of the integral of the model density function and its derivatives. These lemmas are generic: they apply to any MDPDE setup, not just the RPCA setting considered here.
**Lemma 3.1**.: Let \(c_{\alpha}(\mathbf{\theta})=\int f_{\mathbf{\theta}}^{(1+\alpha)}(\mathbf{x})d\mathbf{x}\). Then under the assumption of thrice differentiability of \(f_{\mathbf{\theta}}(\mathbf{x})\) and the exchangeability of the differentiation and integral signs,
\[\mathbf{\xi}_{\mathbf{\theta}}=(1+\alpha)^{-1}c_{\alpha}(\mathbf{\theta})\frac{\partial}{ \partial\mathbf{\theta}}\log(c_{\alpha}(\mathbf{\theta})).\]
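For the reader's convenience, here is a one-line verification of Lemma 3.1 under the stated exchangeability assumption: differentiating \(c_{\alpha}(\mathbf{\theta})\) under the integral sign and using \(u_{\mathbf{\theta}}(\mathbf{x})=\frac{\partial}{\partial\mathbf{\theta}}\log(f_{\mathbf{\theta}}(\mathbf{x}))\) gives

\[\frac{\partial}{\partial\mathbf{\theta}}c_{\alpha}(\mathbf{\theta})=\int\frac{\partial}{\partial\mathbf{\theta}}f_{\mathbf{\theta}}^{(1+\alpha)}(\mathbf{x})d\mathbf{x}=(1+\alpha)\int u_{\mathbf{\theta}}(\mathbf{x})f_{\mathbf{\theta}}^{(1+\alpha)}(\mathbf{x})d\mathbf{x}=(1+\alpha)\,\mathbf{\xi}_{\mathbf{\theta}},\]

so that \(\mathbf{\xi}_{\mathbf{\theta}}=(1+\alpha)^{-1}\partial c_{\alpha}(\mathbf{\theta})/\partial\mathbf{\theta}=(1+\alpha)^{-1}c_{\alpha}(\mathbf{\theta})\frac{\partial}{\partial\mathbf{\theta}}\log(c_{\alpha}(\mathbf{\theta}))\), which is precisely the claim.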
**Lemma 3.2**.: Under the assumption of thrice differentiability of \(f_{\mathbf{\theta}}(x)\) and the exchangeability of the differentiation and integral signs,
\[\mathbf{J}_{\mathbf{\theta}}=\frac{c_{\alpha}(\mathbf{\theta})}{(1+\alpha)^{2}}\left(i^{ h}(\mathbf{\theta})+\left(\frac{\partial}{\partial\mathbf{\theta}}\log(c_{\alpha}( \mathbf{\theta}))\right)\left(\frac{\partial}{\partial\mathbf{\theta}}\log(c_{\alpha} (\mathbf{\theta}))\right)^{\mathsf{T}}\right) \tag{3.4}\]
where \(i^{h}(\mathbf{\theta})\) is the expected Fisher information matrix for a single observation \(\mathbf{x}\) following the density function \(h_{\mathbf{\theta}}(\mathbf{x})=c_{\alpha}^{-1}(\mathbf{\theta})f_{\mathbf{\theta}}^{(1+ \alpha)}(\mathbf{x})\).
Before proceeding with the computation of these quantities \(\mathbf{\xi}_{\mathbf{\theta}},\mathbf{J}_{\mathbf{\theta}}\) and \(\mathbf{K}_{\mathbf{\theta}}\) for the particular setup of the rPCAdpd estimator, we recognize that the estimation of the principal components is essentially a two-step procedure. In the first step, we use a consistent robust estimator \(\widehat{\mathbf{\mu}}\) to estimate the location parameter \(\mathbf{\mu}^{*}\). In the next step, the rSVDdpd procedure was used to obtain the MDPDE of \(\mathbf{\theta}\) using the model family densities \(f_{\mathbf{\theta}}(x)\) as in Eq. (3.3) by replacing \(\mathbf{\mu}^{*}\) with its estimate \(\widehat{\mathbf{\mu}}\) from the first step. Therefore, in the following, we compute the quantities \(\mathbf{\xi}_{\mathbf{\theta}},\mathbf{J}_{\mathbf{\theta}}\) and \(\mathbf{K}_{\mathbf{\theta}}\) conditional on the value of \(\widehat{\mathbf{\mu}}\), which will lead to the conditional asymptotic distribution of \(\widehat{\mathbf{\theta}}\) (The proof of which is described in Appendix A.8). However, as we shall show later in Theorem 3.4, this conditional distribution turns out to be free of \(\widehat{\mathbf{\mu}}\), hence the unconditional asymptotic distribution of \(\widehat{\mathbf{\theta}}\) will also remain the same.
We start by using Lemma 3.1 in combination with Assumption 2 for our specific use case. To compactly write \(\mathbf{\xi}_{\mathbf{\theta}^{*}}\), we introduce the diagonal matrix \(\mathbf{\Gamma}_{p\times p}\) with nonzero entries \(\gamma_{1}^{*},\ldots\gamma_{p}^{*}\).
**Corollary 3.2**.: If \(f_{\mathbf{\theta}}(\mathbf{x})\) is a density function belonging to an elliptically symmetric family of distributions with generating function \(g(\cdot)\) as given in Eq. (3.3), then under assumptions (A1)-(A3) when the location parameter \(\mu^{*}\) is a fixed quantity,
\[\mathbf{\xi}_{\mathbf{\theta}^{*}}=\frac{c_{(1+\alpha)g}}{(1+\alpha)(c_{g})^{(1+\alpha) }}\prod_{k=1}^{p}(\gamma_{k}^{*})^{-\alpha/2}\begin{bmatrix}-\frac{\alpha}{2} \mathrm{Diag}\left(\mathbf{\Gamma}^{-1}\right)\\ 0\end{bmatrix}.\]
The quantity \(\mathbf{J}_{\mathbf{\theta}^{*}}\) for the current setup can be expressed similarly.
**Corollary 3.3**.: If \(f_{\mathbf{\theta}}(\mathbf{x})\) is a density function belonging to an elliptically symmetric family of distributions with generating function \(g(\cdot)\) as given in Eq. (3.3), then under assumptions (A1)-(A3) when the location parameter \(\mu^{*}\) is a fixed quantity,
\[\mathbf{J}_{\mathbf{\theta}^{*}}=\frac{c_{(1+\alpha)g}}{(1+\alpha)^{2}c_{g}^{(1+ \alpha)}}\prod_{k=1}^{p}(\gamma_{k}^{*})^{-1/2}\begin{bmatrix}i^{h}(\mathbf{ \gamma},\mathbf{\gamma})+\frac{\alpha^{2}}{4}\begin{pmatrix}\mathrm{Diag}\left( \mathbf{\Gamma}^{-1}\right)\end{pmatrix}\begin{pmatrix}\mathrm{Diag}\left(\mathbf{ \Gamma}^{-1}\right)\end{pmatrix}^{\intercal}&i^{h}(\mathbf{\gamma},\mathbf{\eta})\\ i^{h}(\mathbf{\gamma},\mathbf{\eta})^{\intercal}&i^{h}(\mathbf{\eta},\mathbf{\eta})\end{bmatrix}.\]
The quantities \(i^{h}(\cdot,\cdot)\) are given by the following formulae
\[i^{h}(\mathbf{\gamma},\mathbf{\gamma}) =-\frac{1}{4}\left(\mathrm{Diag}\left(\mathbf{\Gamma}^{-1}\right) \right)\left(\mathrm{Diag}\left(\mathbf{\Gamma}^{-1}\right)\right)^{\intercal}+ \mathbf{\Gamma}^{-2}\mathbf{V}^{\intercal}A_{4}((1+\alpha)g)\mathbf{V}\mathbf{\Gamma}^{-2},\] \[i^{h}(\mathbf{\gamma},\mathbf{\eta}) =-2\mathbf{\Gamma}^{-2}\mathbf{V}^{\intercal}(\mathbf{I}_{p}\otimes\mathbf{ \Gamma}^{-1})A_{4}((1+\alpha)g)\mathbf{G}^{\intercal},\] \[i^{h}(\mathbf{\eta},\mathbf{\eta}) =4\mathbf{G}(\mathbf{I}_{p}\otimes\mathbf{\Gamma}^{-1})A_{4}((1+\alpha)g)(\bm {I}_{p}\otimes\mathbf{\Gamma}^{-1})^{\intercal}\mathbf{G}^{\intercal}.\]
where
\[Q(\mathbf{x}) =\mathbf{x}^{\intercal}\sum_{k=1}^{p}(\gamma_{k}^{*})^{-1}\mathbf{v}_{k}^ {*}(\mathbf{v}_{k}^{*})^{\intercal}\mathbf{x},\] \[\mathbf{V}_{p^{2}\times p} =\begin{bmatrix}\mathbf{v}_{1}^{*}&0&\ldots&0\\ 0&\mathbf{v}_{2}^{*}&\ldots&0\\ \vdots&\vdots&\ddots&\vdots\\ 0&0&\ldots&\mathbf{v}_{p}^{*}\end{bmatrix},\] \[\mathbf{G}_{p(p+1)/2\times p^{2}} =\begin{bmatrix}\frac{\partial\mathbf{v}_{1}}{\partial\mathbf{\eta}}\mid _{\mathbf{\eta}=\mathbf{\eta}^{*}}&\frac{\partial\mathbf{v}_{2}}{\partial\mathbf{\eta}}\mid _{\mathbf{\eta}=\mathbf{\eta}^{*}}&\ldots&\frac{\partial\mathbf{v}_{p}}{\partial\mathbf{\eta}} \mid_{\mathbf{\eta}=\mathbf{\eta}^{*}}\end{bmatrix}^{\intercal},\]
and \(A_{4}(g)\) be the \(p^{2}\times p^{2}\) matrix comprising of the partitions \(A_{4}(g;\mathbf{v}_{i}^{*},\mathbf{v}_{j}^{*})\) for \(i,j=1,2,\ldots p\), where
\[A_{4}(g;\mathbf{u},\mathbf{v})=\int\left(g^{\prime}(Q(\mathbf{x}))\right)^{2}\mathbf{x}\mathbf{x}^{\intercal}\mathbf{u}\mathbf{v}^{\intercal}\mathbf{x}\mathbf{x}^{\intercal}c_{g}^{-1}\exp(g(Q(\mathbf{x})))d\mathbf{x}.\]
For the particular setup of principal components for the elliptically symmetric family, the assumptions (A1)-(A3) imply all the necessary assumptions (A1)-(A5) of Basu et al. (1998). Thus, we can readily use Theorem 2.2 therein to establish the asymptotic properties such as consistency and the asymptotic normality of the converged rPCAdpd estimator of the principal components. However, since the quantities \(\mathbf{\xi}_{\mathbf{\theta}},\mathbf{J}_{\mathbf{\theta}}\) are obtained for a fixed value of \(\widehat{\mathbf{\mu}}\), the resulting asymptotic normal distribution is also obtained conditional on the values of \(\widehat{\mathbf{\mu}}\). Nevertheless, the conditional asymptotic distribution is independent of \(\widehat{\mathbf{\mu}}\), hence the unconditional distribution also turns out to be the same. For the technical details, one may refer to Appendix A.8.
**Theorem 3.3**.: Under the Assumptions (A1)-(A3), for any \(\alpha\in[0,1]\), the converged rPCAdpd estimator \(\widehat{\mathbf{\theta}}=(\widehat{\gamma_{1}},\ldots\widehat{\gamma_{p}},\widehat{\mathbf{\eta}})\) as in Eq. (2.4) satisfies the following as the sample size \(n\to\infty\), provided that the location estimator \(\widehat{\mathbf{\mu}}\) is consistent for \(\mathbf{\mu}^{*}\):
1. The estimated eigenvalue \(\widehat{\gamma_{j}}\) is \(\sqrt{n}\)-consistent for \(\gamma_{j}^{*}\) for \(j=1,2,\ldots p\).
2. Similarly, the corresponding estimated eigenvector \(\widehat{\mathbf{v}}_{j}\) is also \(\sqrt{n}\)-consistent for the true eigenvector \(\mathbf{v}_{j}^{*}\) for \(j=1,2,\ldots p\).
**Remark 3.1**.: The consistency of \(\widehat{\mathbf{v}}_{j}\) for \(\mathbf{v}_{j}^{*}\) follows from the fact that \(\widehat{\mathbf{\eta}}\) is consistent for \(\mathbf{\eta}^{*}\) and the parameter \(\mathbf{\eta}\) is simply a parametrization of the Stiefel manifold, hence each of \(\mathbf{v}_{1},\ldots\mathbf{v}_{p}\) is a continuous and smooth function of \(\mathbf{\eta}\).
**Theorem 3.4**.: Under the Assumptions (A1)-(A3), the converged rPCAdpd estimator \(\widehat{\mathbf{\theta}}=(\widehat{\gamma_{1}},\ldots\widehat{\gamma_{p}},\widehat{\mathbf{\eta}})\) as defined in Eq. (2.4) for the general elliptically symmetric family has an asymptotic normal distribution as \(n\to\infty\) after proper centering and scaling, provided that the location estimator \(\widehat{\mathbf{\mu}}\) is consistent for \(\mathbf{\mu}^{*}\). In particular,
\[\sqrt{n}\mathbf{J}_{\mathbf{\theta}^{*}}\mathbf{K}_{\mathbf{\theta}^{*}}^{-1/2}\left(\widehat {\mathbf{\theta}}-\mathbf{\theta}^{*}\right)\]
converges in distribution to a standard multivariate normal random vector as \(n\to\infty\). Here,
\[\mathbf{J}_{\mathbf{\theta}^{*}} =\frac{c_{(1+\alpha)g}}{(1+\alpha)^{2}c_{g}^{(1+\alpha)}}\begin{bmatrix} \mathbf{J}_{11}&\mathbf{J}_{12}\\ \mathbf{J}_{12}^{\intercal}&\mathbf{J}_{22}\end{bmatrix},\] \[\mathbf{J}_{11} =\frac{(\alpha^{2}-1)}{4}\left(\operatorname{Diag}\left(\mathbf{ \Gamma}^{-1}\right)\right)\left(\operatorname{Diag}\left(\mathbf{\Gamma}^{-1} \right)\right)^{\intercal}+\mathbf{\Gamma}^{-2}\mathbf{V}^{\intercal}\mathbf{A}_{4}((1+ \alpha)g)\mathbf{V}\mathbf{\Gamma}^{-2},\] \[\mathbf{J}_{12} =-2\mathbf{\Gamma}^{-2}\mathbf{V}^{\intercal}\left(\mathbf{I}_{p}\otimes\mathbf{ \Gamma}^{-1}\right)A_{4}((1+\alpha)g)\mathbf{G}^{\intercal},\] \[\mathbf{J}_{22} =4\mathbf{G}\left(\mathbf{I}_{p}\otimes\mathbf{\Gamma}^{-1}\right)A_{4}((1+ \alpha)g)\left(\mathbf{I}_{p}\otimes\mathbf{\Gamma}^{-1}\right)^{\intercal}\mathbf{G}^{ \intercal},\]
and
\[\mathbf{K}_{\mathbf{\theta}^{*}} =\frac{c_{(1+2\alpha)g}}{(1+2\alpha)^{2}c_{g}^{(1+2\alpha)}} \begin{bmatrix}\mathbf{K}_{11}&\mathbf{K}_{12}\\ \mathbf{K}_{12}^{\intercal}&\mathbf{K}_{22}\end{bmatrix}-\frac{c_{(1+\alpha)g}^{2}}{( 1+\alpha)^{2}c_{g}^{(2+2\alpha)}}\begin{bmatrix}\frac{\alpha^{2}}{4} \operatorname{Diag}\left(\mathbf{\Gamma}^{-1}\right)\operatorname{Diag}\left(\mathbf{ \Gamma}^{-1}\right)^{\intercal}&0\\ 0&0\end{bmatrix},\] \[\mathbf{K}_{11} =\frac{(4\alpha^{2}-1)}{4}\left(\operatorname{Diag}\left(\mathbf{ \Gamma}^{-1}\right)\right)\left(\operatorname{Diag}\left(\mathbf{\Gamma}^{-1} \right)\right)^{\intercal}+\mathbf{\Gamma}^{-2}\mathbf{V}^{\intercal}\mathbf{A}_{4}((1+2 \alpha)g)\mathbf{V}\mathbf{\Gamma}^{-2},\] \[\mathbf{K}_{12} =-2\mathbf{\Gamma}^{-2}\mathbf{V}^{\intercal}\left(\mathbf{I}_{p}\otimes\mathbf{ \Gamma}^{-1}\right)A_{4}((1+2\alpha)g)\mathbf{G}^{\intercal},\] \[\mathbf{K}_{22} =4\mathbf{G}\left(\mathbf{I}_{p}\otimes\mathbf{\Gamma}^{-1}\right)A_{4}((1+2 \alpha)g)\left(\mathbf{I}_{p}\otimes\mathbf{\Gamma}^{-1}\right)^{\intercal}\mathbf{G}^{ \intercal}.\]
One may also be interested in the special case when the underlying elliptically symmetric distribution is assumed to be Gaussian. Formally, if we consider that the sample observations \(\mathbf{X}_{1},\ldots\mathbf{X}_{n}\) are distributed according to a \(p\)-variate normal distribution with unknown mean \(\mathbf{\mu}^{*}\) and unknown dispersion matrix \(\Sigma^{*}=\sum_{k=1}^{p}\gamma_{k}^{*}\mathbf{v}_{k}^{*}(\mathbf{v}_{k}^{*})^{\intercal}\), then it follows that under the same set of assumptions, one can establish the following corollary.
**Corollary 3.4**.: Under the Assumptions (A1)-(A3), and provided that the location estimator \(\widehat{\mathbf{\mu}}\) is consistent for \(\mathbf{\mu}^{*}\), the converged rPCAdpd estimator \(\widehat{\mathbf{\theta}}=(\widehat{\gamma_{1}},\ldots\widehat{\gamma_{p}},\widehat{\mathbf{\eta}})\) as in Eq. (2.5) for the Gaussian model family of distributions satisfies the following as the sample size \(n\to\infty\):
1. The estimated eigenvalue \(\widehat{\gamma}_{j}\) is consistent for \(\gamma_{j}^{*}\) and the estimated eigenvector \(\widehat{\mathbf{v}}_{j}\) is consistent for \(\mathbf{v}_{j}^{*}\) for \(j=1,2,\ldots p\).
2. The scaled and centred estimated principal component eigenvalues \[\sqrt{n}\left(\begin{bmatrix}\widehat{\gamma_{1}}\\ \ldots\\ \widehat{\gamma_{p}}\end{bmatrix}-\begin{bmatrix}\gamma_{1}^{*}\\ \ldots\\ \gamma_{p}^{*}\end{bmatrix}\right)\] has an asymptotic \(p\)-variate normal distribution with mean \(\mathbf{0}\) and dispersion matrix \[\frac{(1+\alpha)^{p+4}}{(1+2\alpha)^{p/2}}\mathbf{M}^{-1}\left(A_{1}(\alpha)\text{ Diag}\left(\mathbf{\Gamma}^{-1}\right)\text{Diag}\left(\mathbf{\Gamma}^{-1}\right)^{ \intercal}+\frac{1}{2(1+2\alpha)^{2}}\mathbf{\Gamma}^{-2}\right)\mathbf{M}^{-1},\] where \[\mathbf{M}=\left(\frac{\alpha^{2}}{4}\text{Diag}\left(\mathbf{\Gamma}^{-1}\right)\text {Diag}\left(\mathbf{\Gamma}^{-1}\right)^{\intercal}+\frac{1}{2}\mathbf{\Gamma}^{-2} \right),\ A_{1}(\alpha)=\alpha^{2}\left[\frac{1}{(1+2\alpha)^{2}}-\frac{(1+2 \alpha)^{p/2}}{4(1+\alpha)^{p+2}}\right].\]
3. The scaled and centered estimated \(\widehat{\mathbf{\eta}}\) corresponding to the principal component eigenvectors, i.e., \(\sqrt{n}(\widehat{\mathbf{\eta}}-\mathbf{\eta}^{*})\) has an asymptotic normal distribution with mean \(0\) and dispersion matrix \[\frac{(1+\alpha)^{p+4}}{(1+2\alpha)^{2+p/2}}\left(\sum_{k=1}^{p}\sum_{l=1}^{p }\left(1-\frac{\gamma_{k}^{*}}{\gamma_{l}^{*}}\right)\mathbf{G}_{k}(\mathbf{v}_{l}^{* })(\mathbf{v}_{k}^{*})^{\intercal}\mathbf{G}_{l}^{\intercal}\right)^{-1},\] where \(\mathbf{G}_{k}=\frac{\partial\mathbf{v}_{k}}{\partial\mathbf{\eta}}|_{\mathbf{ \eta}=\mathbf{\eta}^{*}}\), the matrix corresponding of the gradients of the eigenvector \(\mathbf{v}_{k}\) with respect to its natural parametrization \(\mathbf{\eta}\).
4. The rPCAdpd estimate of the eigenvalues \((\widehat{\gamma}_{1},\ldots,\widehat{\gamma}_{p})\) and estimate of the eigenvectors \((\widehat{\mathbf{v}}_{1},\ldots\widehat{\mathbf{v}}_{p})\) are asymptotically independent.
**Remark 3.2**.: The asymptotic independence of the rPCAdpd estimates of the eigenvalues and eigenvectors enables one to construct confidence intervals for the eigenvalues and eigenvectors separately. To construct the asymptotic confidence interval for the eigenvalues, knowledge of the corresponding eigenvalue estimates is sufficient. In contrast, the asymptotic confidence band for the eigenvectors requires both the eigenvalue and the eigenvector estimates.
**Remark 3.3**.: The density power divergence introduced in Basu et al. (1998) becomes the same as the Kullback-Leibler divergence between the true density and the model density \(f_{\mathbf{\theta}}(\cdot)\) as \(\alpha\to 0\). Thus, for \(\alpha\to 0\), the estimating equations for the MDPDE turn out to be equivalent to the estimating equations corresponding to the log-likelihood. Consequently, the MDPDE coincides with the maximum likelihood estimator as \(\alpha\to 0\). From Corollary 3.4 it then follows that the maximum likelihood estimates (MLE) of the eigenvalues of the covariance matrix under the Gaussian distribution are asymptotically normal with mean \(\gamma_{j}\) and covariance \(2\gamma_{j}^{2}/n\) and are asymptotically independent. This result has been well established in the literature; see Girshick (1939) for references. A similar result for the asymptotic distribution of the MLE of eigenvectors was derived by Anderson (1963). Results on the asymptotic independence between the MLE of the eigenvalues and eigenvectors were also derived by Tyler (1981) for a general setup with repeated eigenvalues. The Corollary 3.4 can be seen as a generalization of these results.
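As a quick sanity check of this remark against Corollary 3.4, substituting \(\alpha=0\) into the eigenvalue dispersion matrix given there yields

\[\mathbf{M}\big|_{\alpha=0}=\frac{1}{2}\mathbf{\Gamma}^{-2},\qquad A_{1}(0)=0,\qquad\text{so the dispersion matrix reduces to}\qquad\left(2\mathbf{\Gamma}^{2}\right)\left(\frac{1}{2}\mathbf{\Gamma}^{-2}\right)\left(2\mathbf{\Gamma}^{2}\right)=2\mathbf{\Gamma}^{2},\]

i.e., \(\sqrt{n}(\widehat{\gamma}_{j}-\gamma_{j}^{*})\) has asymptotic variance \(2(\gamma_{j}^{*})^{2}\) and the eigenvalue estimates are asymptotically uncorrelated, matching the classical maximum likelihood result cited above.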
**Remark 3.4**.: In contrast to Remark 3.3, for \(\alpha=1\), the density power divergence becomes the same as the \(L_{2}\) distance between the true density and the model density \(f_{\mathbf{\theta}}(\mathbf{x})\). If we denote the
minimum \(L_{2}\) distance estimator of the eigenvalues by \(\widetilde{\mathbf{\gamma}}=(\widetilde{\gamma}_{1},\ldots\widetilde{\gamma}_{p})^{ \intercal}\) and the true eigenvalues by \(\mathbf{\gamma}^{*}=\left(\gamma_{1}^{*},\ldots\gamma_{p}^{*}\right)^{\intercal}\), then
\[\sqrt{n}(\widetilde{\mathbf{\gamma}}-\mathbf{\gamma}^{*})\xrightarrow{d}\mathcal{N}_{p }\left(\mathbf{0},\mathcal{V}_{2}\right),\]
as \(n\to\infty\). Here, \(\xrightarrow{d}\) denotes the convergence in law. The asymptotic variance is given by
\[\mathcal{V}_{2}=\frac{2^{(p+8)}}{3^{(p/2)}}\mathbf{M}_{1}^{-1}\left(\left(\frac{1} {9}-\frac{3^{(p/2)}}{2^{(p+4)}}\right)\operatorname{Diag}\left(\mathbf{\Gamma}^{- 1}\right)\operatorname{Diag}\left(\mathbf{\Gamma}^{-1}\right)^{\intercal}+\frac{1 }{18}\mathbf{\Gamma}^{-1}\right)\mathbf{M}_{1}^{-1},\]
where \(\mathbf{M}_{1}=\left(\operatorname{Diag}\left(\mathbf{\Gamma}^{-1}\right)\operatorname {Diag}\left(\mathbf{\Gamma}^{-1}\right)^{\intercal}+2\mathbf{\Gamma}^{-2}\right)\). Since the quantity \(\frac{2^{(x+4)}}{3^{(x/2)}}\left(\frac{1}{9}-\frac{3^{(x/2)}}{2^{(x+4)}}\right)\) increases exponentially fast as \(x\) increases, the variance of the minimum \(L_{2}\)-distance estimator increases exponentially with increase in the dimension \(p\). This shows that by using the highly robust minimum \(L_{2}\) distance estimator to obtain the principal components, one sacrifices considerable efficiency in estimation.
### Influence Function Analysis
The influence function is a local measure of the sensitivity and robustness of an estimator (Hampel et al., 2011). In this section, we investigate the influence function of the rPCAdpd estimator for the Gaussian model family of distributions. For this particular choice, the asymptotic independence of the eigenvalues and the eigenvectors as shown in Theorem 3.4 helps in deriving the influence functions quite nicely. Let us assume that instead of the true distribution \(\Phi_{\mathbf{\theta}^{*}}(\mathbf{x})\), the observations \(\mathbf{X}_{i}\)s come from a contaminated distribution \(G_{\epsilon}(\mathbf{x})=(1-\epsilon)\Phi_{\mathbf{\theta}^{*}}(\mathbf{x})+\epsilon \delta_{\mathbf{y}}(\mathbf{x})\), where \(\delta_{\mathbf{y}}(\cdot)\) is the degenerate distribution at \(\mathbf{y}\in\mathbb{R}^{p}\). Let \(\phi_{\mathbf{\theta}^{*}}(\mathbf{x})\) be the density function corresponding to the Gaussian distribution function \(\Phi_{\mathbf{\theta}^{*}}(\mathbf{x})\). Then the influence of this contamination on the estimated principal components can be readily obtained from the influence function derived in Basu et al. (1998). Due to the asymptotic independence, the influence functions for the estimators of the eigenvalues and the eigenvectors can be separately obtained along with an application of the chain rule to incorporate the influence of the robust location estimator. It turns out that
\[I_{\alpha}(\Phi_{\mathbf{\theta}^{*}},\mathbf{\gamma};\mathbf{y})=\frac{4(1+ \alpha)^{2}}{C_{\alpha}}\left[\alpha^{2}\operatorname{Diag}\left(\mathbf{\Gamma} ^{-1}\right)\operatorname{Diag}\left(\mathbf{\Gamma}^{-1}\right)^{\intercal}+2\bm {\Gamma}^{-2}\right]^{-1}\\ \left[\begin{bmatrix}u_{\gamma_{1}^{*}}(\mathbf{y})\\ \vdots\\ u_{\gamma_{p}^{*}}(\mathbf{y})\end{bmatrix}\phi_{\mathbf{\theta}^{*}}^{\alpha}(\mathbf{y}) I(\Phi_{\mathbf{\theta}^{*}},\widehat{\mathbf{\mu}};\mathbf{y})-\begin{bmatrix}\xi_{\gamma_{1}^{*}} \\ \vdots\\ \xi_{\gamma_{p}^{*}}\end{bmatrix}\right],\]
\[I_{\alpha}(\Phi_{\mathbf{\theta}^{*}},\mathbf{\eta};\mathbf{y})=-\frac{(1+\alpha)^{2}}{C_ {\alpha}}\left[\sum_{k=1}^{p}\frac{G_{k}\mathbf{\Sigma}^{*}G_{k}^{\intercal}}{ \gamma_{k}^{*}}\right]^{-1}\sum_{k=1}^{p}\frac{G_{k}}{\gamma_{k}^{*}}(\mathbf{y}- \mathbf{\mu}^{*})(\mathbf{y}-\mathbf{\mu}^{*})^{\intercal}\mathbf{v}_{k}^{*}\phi_{\mathbf{\theta}}^ {\alpha}(\mathbf{y})I(\Phi_{\mathbf{\theta}^{*}},\widehat{\mathbf{\mu}};\mathbf{y}).\]
Here, \(u_{\gamma_{j}^{*}}(\mathbf{y})\) denotes the score function with respect to the \(j\)-th eigenvalue \(\gamma_{j}^{*}\) evaluated at the contaminating point \(\mathbf{y}\) and \(I(\Phi_{\mathbf{\theta}^{*}},\widehat{\mathbf{\mu}};\mathbf{y})\) is the influence function of the location estimator \(\widehat{\mathbf{\mu}}\) at \(\mathbf{y}\). We assume that the location estimator \(\widehat{\mathbf{\mu}}\) is robust and hence has a bounded influence function, which is true for the \(L_{1}\)-median. To show that both the above influence functions are bounded, one may note that the exponential quantity \(e^{-\alpha(\mathbf{y}-\mathbf{\mu}^{*})^{\intercal}(\mathbf{\Sigma}^{*})^{-1}(\mathbf{y}-\mathbf{ \mu}^{*})/2}\) present in the Gaussian density \(\phi_{\mathbf{\theta}^{*}}(\mathbf{y})\) is bounded below by \(e^{-\alpha\|\mathbf{y}-\mathbf{\mu}^{*}\|^{2}/2\gamma_{(p)}^{*}}\) and bounded above by \(e^{-\alpha\|\mathbf{y}-\mathbf{\mu}^{*}\|^{2}/2\gamma_{(1)}^{*}}\), where \(\gamma_{(1)}^{*}\) and \(\gamma_{(p)}^{*}\) are the largest and the smallest eigenvalues of \(\mathbf{\Sigma}^{*}\) respectively. Now the boundedness of the influence function follows from assumption (A3), which can be easily verified for \(g(x)=-x/2\)
corresponding to the Gaussian distribution. Thus, if the location estimator \(\hat{\mathbf{\mu}}\) is B-robust, the rPCAdpd estimator is also B-robust, satisfying one of the primary requirements for a robust estimator.
### Breakdown Point Analysis
Besides the influence function, the breakdown point is another accepted measure of the robustness of an estimator; it measures the highest level of contamination that an estimator can tolerate (Hampel, 1971). Given the true distribution \(H\), Ghosh and Basu (2013) consider the asymptotic breakdown point of an MDPDE functional \(T\) as the largest value of \(\epsilon\) such that there exists a sequence of distributions \(\{K_{m}\}\) with \(|T(H_{\epsilon,m})-T(H)|\to\infty\) as \(m\to\infty\), where
\[H_{\epsilon,m}=(1-\epsilon)H+\epsilon K_{m}. \tag{3.5}\]
However, such a definition makes sense only for location estimators. For general estimators, Maronna et al. (2019) say that a functional \(T\) breaks down under \(\epsilon\)-level contamination if \(T(H_{\epsilon,m})\to\theta_{\infty}\) as \(m\to\infty\), where \(\theta_{\infty}\in\partial\Theta\), the boundary of the parameter space \(\Theta\). In the case of the rPCAdpd estimator of eigenvalues and corresponding eigenvectors, the boundary of the parameter space \(\mathbf{\Theta}=(\mathbb{R}^{+})^{p}\times S\) is
\[\partial\mathbf{\Theta}=\left\{(\gamma_{1},\ldots\gamma_{p},\mathbf{\eta}):\mathbf{\eta} \in S,\text{ and there exists }k\in\left\{1,\ldots p\right\}\text{ with }\gamma_{k}\in\left\{0,\infty \right\}\right\},\]
indicating that the breakdown can happen when any of the estimated eigenvalues either explodes to infinity or implodes to \(0\).
Since the rPCAdpd algorithm is composed of two steps, namely location estimation and eigenvalue-eigenvector estimation using the rSVDdpd procedure, the asymptotic breakdown point of the entire procedure is the minimum of the asymptotic breakdown points of these individual steps. It is well known that the robust \(L_{1}\)-median (used as the location estimator in our entire study) has an asymptotic breakdown point of \(1/2\). Also, under fairly general conditions, Roy et al. (2023) showed that the robust MDPDE has a breakdown point of at least \(\alpha/(1+\alpha)\), where \(\alpha\) is the robustness parameter with \(\alpha\in[0,1]\). Hence, the resulting rPCAdpd estimator has an asymptotic breakdown point of at least \(\alpha/(1+\alpha)\), which is also free of the dimension \(p\), demonstrating the scalability aspect of the proposed estimator.
Let the distributions \(H_{\epsilon,m},H\) and \(K_{m}\) mentioned in the contamination model (3.5) have densities \(h_{\epsilon,m},h\) and \(k_{m}\) respectively. In Roy et al. (2023), the authors derive a lower bound of the breakdown point of the MDPDE in general under the following set of assumptions.
1. \(\int\min\{f_{\mathbf{\theta}}(x),k_{m}(x)\}dx\to 0\) uniformly as \(m\to\infty\) and \(\mathbf{\theta}\) is bounded away from the boundary \(\partial\mathbf{\Theta}\).
2. \(\int\min\{h(x),f_{\mathbf{\theta}_{m}}(x)\}dx\to 0\) as \(m\to\infty\) if \(\mathbf{\theta}_{m}\to\mathbf{\theta}_{\infty}\) where \(\mathbf{\theta}_{\infty}\) is some point on the boundary \(\partial\mathbf{\Theta}\).
3. \(M_{f_{\mathbf{\theta}_{m}}}\geq M_{k_{m}}\) for all \(m\geq M\) for sufficiently large \(M\) for any \(\mathbf{\theta}_{m}\to\mathbf{\theta}_{\infty}\) where \(\mathbf{\theta}_{\infty}\) is some point on the boundary \(\partial\mathbf{\Theta}\) and \(M_{f}=\int f^{1+\alpha}(x)dx\).
Assumptions (BP1) and (BP2) are quite standard assumptions for breakdown analysis. To verify assumption (BP3) for our setup, we note that \(M_{f_{\mathbf{\theta}_{m}}}=\frac{c_{(1+\alpha)g}}{c_{g}}\prod_{k=1}^{p}\gamma_{k, m}^{-\alpha/2}\) where \(\{\gamma_{k,m}\}\) is the sequence of eigenvalues in \(\mathbf{\theta}_{m}\). Clearly, when \(\{\mathbf{\theta}_{m}\}\) tends to a point on the boundary of the parameter space, for some \(k=1,\ldots p\), either \(\gamma_{k,m}\to 0\) or \(\gamma_{k,m}\to\infty\) as \(m\to\infty\). Since \(\alpha>0\)
either \(M_{f_{\mathbf{\theta}_{m}}}\to\infty\) or \(M_{f_{\mathbf{\theta}_{m}}}\to 0\) as \(m\to\infty\). When \(M_{f_{\mathbf{\theta}_{m}}}\) increases to \(\infty\), Assumption (BP3) holds trivially. When \(M_{f_{\mathbf{\theta}_{m}}}\) decreases to \(0\), Assumption (BP3) holds if \(M_{k_{m}}\) decreases to \(0\) at a faster rate than \(M_{f_{\mathbf{\theta}_{m}}}\). To ensure this, one particular choice would be to restrict the contaminating distribution to an elliptically symmetric family of distributions with a singular dispersion matrix, implying that the high-dimensional data have outlying values in some, but not all, of the \(p\) coordinates. Such outliers are more common when \(p\) is large; data in which outlyingness occurs in all \(p\) coordinates rarely arise in practice. Thus, we have the following corollary.
**Corollary 3.5**.: Under the assumptions (BP1)-(BP3), if the true density belongs to the model family of elliptically symmetric distributions, then the rPCAdpd estimator has a breakdown point at least as large as \(\alpha/(1+\alpha)\) for \(\alpha\in[0,1]\), provided that the robust location estimator used also has an asymptotic breakdown point larger than \(\alpha/(1+\alpha)\).
**Remark 3.5**.: Corollary 3.5 shows that by tuning the parameter \(\alpha\), one can change the breakdown point of the rPCAdpd estimator irrespective of the dimension \(p\) of the data. Also, note that as \(\alpha\to 0\), the lower bound of the breakdown becomes \(0\) suggesting a lack of robustness, while for \(\alpha=1\), one would get the highest possible breakdown \(1/2\).
Note that Corollary 3.5 contrasts with the breakdown point result obtained by Maronna (1976) for affine equivariant M-estimators, which states that an affine equivariant M-estimator has a breakdown point of at most \(1/(p+1)\), where \(p\) is the dimensionality of the data. As explained in Basu et al. (1998), the MDPDE is a special case of the M-estimator, and we showed the orthogonal equivariance property of the rPCAdpd estimator in Section 3.3. This discrepancy arises because the class of M-estimators considered by Maronna (1976) differs from the class of minimum divergence estimators to which the MDPDE belongs. In particular, Maronna (1976) considered the estimators given as the solution to the system of equations
\[\sum_{i=1}^{n}u_{1}\left((\mathbf{X}_{i}-\mathbf{\mu})^{\intercal}\mathbf{\Sigma}^{-1}(\bm {X}_{i}-\mathbf{\mu})\right)(\mathbf{X}_{i}-\mathbf{\mu}) =0,\]
\[\sum_{i=1}^{n}u_{2}\left((\mathbf{X}_{i}-\mathbf{\mu})^{\intercal}\mathbf{\Sigma}^{-1}(\bm {X}_{i}-\mathbf{\mu})\right)(\mathbf{X}_{i}-\mathbf{\mu})(\mathbf{X}_{i}-\mathbf{\mu})^{\intercal} =\mathbf{\Sigma},\]
where \(u_{1}(s)\) and \(u_{2}(s)\) are suitable nonincreasing functions for \(s\geq 0\). On the other hand, denoting \(\mathbf{\Sigma}=\sum_{k=1}^{p}\gamma_{k}\mathbf{v}_{k}\mathbf{v}_{k}^{\intercal}\), the estimating equations for MDPDE turn out to be
\[\sum_{i=1}^{n}\exp\left(-0.5\alpha(\mathbf{X}_{i}-\mathbf{\mu})^{\intercal}\mathbf{ \Sigma}^{-1}(\mathbf{X}_{i}-\mathbf{\mu})\right)(\mathbf{X}_{i}-\mathbf{\mu}) =0,\]
\[\sum_{i=1}^{n}\exp\left(-0.5\alpha(\mathbf{X}_{i}-\mathbf{\mu})^{\intercal}\mathbf{\Sigma} ^{-1}(\mathbf{X}_{i}-\mathbf{\mu})\right)((\mathbf{X}_{i}-\mathbf{\mu})(\mathbf{X}_{i}-\mathbf{\mu})^{ \intercal}-\mathbf{\Sigma}) =0,\]
under the Gaussian model as in Eq. (2.5). Therefore, the breakdown point results provided by Maronna (1976) do not apply to our proposed rPCAdpd estimator. This independence of the dimension \(p\) in the lower bound of the breakdown implies that in contrast to the classical M-estimator (Maronna, 1976), the rPCAdpd estimator can still remain useful for estimating principal components robustly in arbitrarily high dimensional data.
## 4 Simulation Studies
In this section, we perform a principal component analysis for data matrices with varying levels of contamination using the existing robust PCA algorithms and our proposed rPCAdpd algorithm.
Among the plethora of existing RPCA methods, we take the classical PCA (Jolliffe, 2002), spherical and elliptical PCA (LOC) (Locantore et al., 1999), ROBPCA algorithm by Hubert et al. (2005), projection pursuit based methods Proj and Grid (Croux and Ruiz-Gazen, 2005), robust PCA using robust covariance matrix estimation (RobCov) (Todorov and Filzmoser, 2010), principal component pursuit (PCP) algorithm by Candes et al. (2011) and Gmedian based robust principal component analysis (Gmed) by Cardot and Godichon-Baggioni (2017), for comparison purposes. We have performed the simulations with several variants of the rPCAdpd algorithm differing only in the location estimator used. Based on empirical performance, we have seen that \(L_{1}\)-median as a location estimator provides a desirable balance between robustness, efficiency and computational complexity, hence it is the only variant demonstrated in the results described in this section.
### Simulation Settings
In the simulation experiments, we consider a data matrix composed of i.i.d. rows. The rows \(\mathbf{X}_{i}\) are generated as \(\mathbf{X}_{i}=(1-\delta_{i})\mathbf{\widetilde{X}}_{i}+\delta_{i}\mathbf{\epsilon}_{i}\) for \(i=1,2,\ldots n\). The uncontaminated sample \(\mathbf{\widetilde{X}}_{i}\) is normally distributed with zero mean vector and a dispersion matrix \(\mathbf{\Sigma}\) whose elements are given by \(\mathbf{\Sigma}_{ij}=\min(i,j)/p\) for \(i,j=1,2,\ldots p\). This setup is similar to the one described in Cardot and Godichon-Baggioni (2017) and can be regarded as a discretized version of a Brownian motion within the unit \((0,1)\) interval. The random variables \(\delta_{i}\), which control the level of contamination, are i.i.d. Bernoulli random variables with success probability \(\delta\). The contaminating variables \(\mathbf{\epsilon}_{i}\) are chosen to possess different features compared to \(\mathbf{\widetilde{X}}_{i}\), and in this regard, we feel that the choice of the distribution of outliers as given in Cardot and Godichon-Baggioni (2017) is too restrictive. In comparison, Hubert et al. (2005) consider outliers that have changes in both mean and variance components separately, and hence we choose to work with them. In summary, we consider the following simulation scenarios.
1. (S1) \(\delta=0\), i.e., only pure data is present and there is no contamination.
2. (S2) Here a proportion of the observations is contaminated. The contaminating variables \(\mathbf{\epsilon}_{i}\) are i.i.d. \(p\)-variate normal random variables with mean \(\mu(f_{1})\) and variance \(\mathbf{\Sigma}/f_{2}\). The mean vector \(\mu(f_{1})\) is a \(p\)-length vector where \(10\%\) of the entries are equal to \(f_{1}\) while the rest of the entries are equal to \(0\).
    1. (S2a) \(f_{1}=3\), \(f_{2}=1\) and \(\delta=0.1\). Therefore, on average \(10\%\) of the data will be contaminated.
    2. (S2b) \(f_{1}=3\), \(f_{2}=1\) and \(\delta=0.2\). Therefore, on average \(20\%\) of the data will be contaminated.
    3. (S2c) Similar to (S2a) but with \(f_{2}=5\).
    4. (S2d) Similar to (S2b) but with \(f_{2}=5\).
3. This is similar to simulation scenario (S2), but the contaminating variables \(\mathbf{\epsilon}_{i}\) are i.i.d. from a \(p\)-variate \(t\)-distribution with 5 degrees of freedom, with dispersion matrix \(\mathbf{\Sigma}/f_{2}\) and non-centrality parameter \(\mu(f_{1})\). This scenario is used to understand the behaviour of the PCA algorithms for heavy-tailed contaminating variables.
    1. \(f_{1}=3\), \(f_{2}=1\) and \(\delta=0.1\).
    2. \(f_{1}=3\), \(f_{2}=1\) and \(\delta=0.2\).
    3. \(f_{1}=3\), \(f_{2}=5\) and \(\delta=0.1\).
    4. \(f_{1}=3\), \(f_{2}=5\) and \(\delta=0.2\).
In each of the above simulation scenarios, we consider five different situations with the number of samples \(n=50\) but with different dimensions ranging from very small to moderately large (\(p=10,25,50,100,250\)). Based on 1000 repetitions of each exercise, we obtained estimates of the bias and the mean absolute error (MAE) of the estimated eigenvalues as
\[\text{Bias}_{k}=\frac{1}{B}\sum_{b=1}^{B}\widehat{\gamma}_{k}^{(b)}-\gamma_{k},\ \text{MAE}_{k}=\frac{1}{B}\sum_{b=1}^{B}\left|\widehat{\gamma}_{k}^{(b)}-\gamma _{k}\right|,\]
where \(\widehat{\gamma}_{k}^{(b)},\gamma_{k}\) respectively denote the estimate and the true \(k\)-th eigenvalue for the \(b\)-th sample. Similarly, to measure the discrepancy in the estimated eigenvectors, we look at the Subspace Recovery Error (SRE) given by
\[\text{SRE}=\frac{1}{B}\sum_{b=1}^{B}2\left(r-\text{Trace}\left(\widehat{ \boldsymbol{P}}_{b}\boldsymbol{P}\right)\right),\]
where \(\widehat{\boldsymbol{P}}_{b}=\sum_{k=1}^{r}\widehat{\boldsymbol{v}}_{k}^{(b) }(\widehat{\boldsymbol{v}}_{k}^{(b)})^{\intercal}\) is the projection matrix onto the span of the estimated eigenvectors corresponding to the largest \(r\) eigenvalues from \(b\)-th sample, and \(\boldsymbol{P}=\sum_{k=1}^{r}\boldsymbol{v}_{k}\boldsymbol{v}_{k}^{\intercal}\) be the corresponding projection matrix from the true eigenvectors. In each of these simulation scenarios, we keep the choice of \(r=5\) fixed, as more than \(90\%\) of the variability can be explained by the first 5 principal components.
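A minimal sketch of the data-generating mechanism of scenario (S2) and of the SRE metric is given below; `make_scenario_s2` and `subspace_recovery_error` are hypothetical helper names, and the reported SRE averages the per-replicate quantity over the \(B\) repetitions.

```python
# A minimal sketch of the contaminated data-generating mechanism of scenario
# (S2) and of the SRE metric for a single replicate; make_scenario_s2 and
# subspace_recovery_error are hypothetical helpers introduced for illustration.
import numpy as np

def make_scenario_s2(n, p, f1=3.0, f2=1.0, delta=0.1, seed=None):
    rng = np.random.default_rng(seed)
    i, j = np.indices((p, p)) + 1
    Sigma = np.minimum(i, j) / p                          # Sigma_ij = min(i, j)/p
    mu_out = np.zeros(p)
    mu_out[: p // 10] = f1                                # 10% of entries equal to f1 (assumes p >= 10)
    clean = rng.multivariate_normal(np.zeros(p), Sigma, size=n)
    outlier = rng.multivariate_normal(mu_out, Sigma / f2, size=n)
    is_out = rng.binomial(1, delta, size=n).astype(bool)  # delta_i ~ Bernoulli(delta)
    return np.where(is_out[:, None], outlier, clean), Sigma

def subspace_recovery_error(V_hat, V_true, r=5):
    """SRE for one replicate: 2 * (r - trace(P_hat P_true)), columns sorted by eigenvalue."""
    P_hat = V_hat[:, :r] @ V_hat[:, :r].T
    P_true = V_true[:, :r] @ V_true[:, :r].T
    return 2 * (r - np.trace(P_hat @ P_true))
```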
### Simulation Results
The simulation results from the aforementioned algorithms are demonstrated in Tables 1-9. In these tables, we denote the rPCAdpd estimator with the \(L_{1}\)-median as the location estimator by DPD, with the robustness parameter shown in parentheses. Also, the RobCov algorithm (Todorov and Filzmoser, 2010) uses MCD-based robust covariance estimation for RPCA. Thus, it is inapplicable when variables outnumber samples (\(n\leq p\)), and those entries are marked as NA in these tables.
Table 1 presents metrics for various PCA algorithms in setup (S1). For uncontaminated data, classical PCA outperforms all robust methods across all metrics. Gmed and ROBPCA exhibit
\begin{table}
\begin{tabular}{c c c c c c c c c c c c c c} \hline \hline Metric & \(p\) & Classical & LOC & ROBPCA & Proj & RobCov & Grid & Gmed & PCP & \begin{tabular}{c} DPD \\ (0.25) \\ \end{tabular} & \begin{tabular}{c} DPD \\ (0.5) \\ \end{tabular} & \begin{tabular}{c} DPD \\ (0.75) \\ \end{tabular} &
\begin{tabular}{c} DPD \\ (1) \\ \end{tabular} \\ \hline \multirow{6}{*}{Bias} & 10 & 0.059 & 0.723 & 0.194 & 0.229 & 0.431 & 0.416 & 0.043 & 1.066 & 0.06 & 0.062 & 0.065 & 0.068 \\ & 25 & 0.019 & 2.175 & 0.227 & 0.362 & 0.336 & 0.807 & 0.079 & 2.45 & 0.017 & 0.013 & 0.01 & 0.008 \\ & 50 & 0.031 & 4.572 & 0.519 & 0.467 & NA & 1.414 & 0.177 & 4.729 & 0.026 & 0.017 & 0.007 & 0.017 \\ & 100 & 0.194 & 9.366 & 0.944 & 1.058 & NA & 2.827 & 0.314 & 9.387 & 0.201 & 0.216 & 0.233 & 0.254 \\ & 250 & 0.154 & 23.76 & 2.847 & 2.239 & NA & 6.906 & 0.644 & 23.301 & 0.184 & 0.236 & 0.295 & 0.359 \\ \hline \multirow{6}{*}{MAE} & 10 & 17.919 & 72.334 & 26.756 & 33.798 & 45.663 & 50.391 & 19.122 & 106.477 & 17.921 & 17.936 & 17.981 & 18.054 \\ & 25 & 38.106 & 217.462 & 50.069 & 75.426 & 49.757 & 123.166 & 40.445 & 244.951 & 38.25 & 38.485 & 38.814 & 39.252 \\ & 50 & 73.085 & 457.212 & 110.189 & 137.425 & NA & 240.293 & 84.614 & 472.875 & 73.126 & 73.225 & 73.489 & 73.803 \\ & 100 & 143.086 & 936.571 & 200.658 & 263.141 & NA & 381.432 & 154.736 & 938.658 & 143.426 & 144.012 & 144.595 & 145.234 \\ & 250 & 395.183 & 2375.96 & 536.35 & 731.985 & NA & 1010.852 & 434.823 & 2330.092 & 395.893 & 396.863 & 397.827 & 396.641 \\ \hline \multirow{6}{*}{SRE} & 10 & 1 & 1.347 & 1.172 & 1.677 & 1.429 & 2.498 & 1.126 & 1.188 & 0.99 & 0.983 & 0.99 & 0.995 \\ & 25 & 0.829 & 1.417 & 1.028 & 1.76 & 1.127 & 3.573 & 1.004 & 1.136 & 0.833 & 0.84 & 0.845 & 0.872 \\ \cline{1-1} & 50 & 0.766 & 1.336 & 0.959 & 1.653 & NA & 4.038 & 0.931 & 0.871 & 0.766 & 0.771 & 0.795 & 0.818 \\ \cline{1-1} & 100 & 0.836 & 1.459 & 0.985 & 1.587 & NA & 3.313 & 1.045 & 0.923 & 0.84 & 0.847 & 0.857 & 0.872 \\ \cline{1-1} & 250 & 0.828 & 1.268 & 0.927 & 1.494 & NA & 3.265 & 0.939 & 0.899 & 0.828 & 0.832 & 0.84 & 0.859 \\ \hline \hline \end{tabular}
\end{table}
Table 1: Estimated Bias, Mean Absolute Error and Subspace Recovery Error (SRE) for different PCA algorithm for simulation scenario (S1)
relatively small efficiency loss. However, the proposed rPCAdpd consistently outperforms both, for any \(\alpha\in[0,1]\) and regardless of the location estimator used. Increasing \(\alpha\) increases the efficiency loss only moderately compared to the other methods. Although the \(L_{1}\)-median is quite inefficient (Huber, 2004; Hampel et al., 2011), its strong robustness properties allow rPCAdpd to achieve extremely low MAE.
Tables 2 and 3 respectively show results for setups (S2a) and (S2b), which differ in the level of contamination. As the level of contamination increases, classical PCA worsens as expected, spherical PCA (Locantore et al., 1999) yields biased estimates for a large number of variables (large \(p\)), and the projection pursuit-based methods also perform poorly under the considered simulation scenarios. The ROBPCA algorithm by Hubert et al. (2005) and the Gmedian algorithm by Cardot and Godichon-Baggioni (2017) stand out as the most promising among the existing methods. However, the Gmedian algorithm suits applications where the outlying distribution and the true distribution have the same theoretical mean but a different covariance structure. In contrast, the ROBPCA algorithm works well with significant changes in mean between the outlying and the true distributions. The proposed rPCAdpd algorithm, suited for similar scenarios with changes in mean, surpasses ROBPCA at high values of the robustness parameter \(\alpha\), and is significantly better in high dimensions. The PCP algorithm (Candes et al., 2011) has consistent results across setups (S1), (S2a), and (S2b). This is because the error comes only from the perturbation matrix \(\mathbf{E}\) in Eq. (1.4), which is inestimable by the PCP method. Tables 4 and 5 summarise the results obtained for scenarios (S2c) and (S2d); these results closely mirror those in scenarios (S2a) and (S2b) respectively.
In scenarios (S3a)-(S3d), the contaminating distribution changes to \(t\)-distribution with 5 degrees of freedom with a heavy tail. In these scenarios, ROBPCA (Hubert et al., 2005), Gmedian (Cardot and Godichon-Baggioni, 2017) algorithm and the proposed rPCAdpd methods perform closely. In (S3a), the rPCAdpd method excels for large values of \(\alpha\). As the contamination rises to 20%, as shown in Table 7, all of the chosen algorithms show a significant increase in MAE. However, the proposed estimator maintains a low bias for all components even for large \(p\) relative to \(n\), consistent with its theoretical breakdown point behaviour as pointed out in Section 3.6.
In essence, the proposed rPCAdpd algorithm excels at detecting and removing low-variance, different-location contaminating components, compared to the primary data distribution component. In all other cases, its performance is closely comparable to the existing algorithms. Also, across all
\begin{table}
\begin{tabular}{c c c c c c c c c c c c c c} \hline \hline Metric & \(p\) & Classical & LOC & ROBPCA & Proj & RobCov & Grid & Gmed & PCP & DPD & DPD & DPD \\ & 10 & 0.158 & 0.74 & 0.22 & 0.205 & 0.44 & 0.458 & 0.08 & 1.065 & 0.15 & 0.08 & 0.017 & 0.009 \\ & 25 & 0.304 & 2.183 & 0.386 & 0.416 & 0.459 & 1.046 & 0.152 & 2.452 & 0.281 & 0.097 & 0.023 & 0.025 \\ Bias & 50 & 0.874 & 4.585 & 0.807 & 1.074 & NA & 2.308 & 0.274 & 4.724 & 0.792 & 0.248 & 0.121 & 0.129 \\ & 100 & 1.627 & 9.382 & 1.41 & 1.63 & NA & 4.246 & 0.433 & 0.386 & 1.445 & 0.251 & 0.111 & 0.132 \\ & 250 & 4.567 & 23.777 & 3.114 & 4.573 & NA & 11.98 & 1.471 & 23.303 & 4.134 & 0.866 & 0.718 & 0.777 \\ \hline & 10 & 29.382 & 74.012 & 30.359 & 37.423 & 47.943 & 59.314 & 24.69 & 106.383 & 30.81 & 24.729 & 18.938 & 18.165 \\ & 25 & 62.13 & 218.321 & 62.944 & 83.99 & 61.246 & 141.267 & 56.95 & 245.214 & 63.996 & 47.073 & 39.385 & 39.137 \\ MAE & 50 & 138.901 & 458.488 & 124.057 & 163.26 & NA & 305.897 & 111.013 & 472.415 & 145.313 & 95.448 & 82.258 & 82.154 \\ & 100 & 258.437 & 938.246 & 213.108 & 296.41 & NA & 495.19 & 211.008 & 938.639 & 268.129 & 155.77 & 140.017 & 139.704 \\ & 250 & 693.852 & 2377.669 & 558.396 & 729.947 & NA & 1311.666 & 545.383 & 2330.337 & 709.073 & 398.446 & 380.915 & 383.593 \\ \hline & 10 & 1.779 & 1.875 & 1.056 & 2.016 & 1.405 & 2.697 & 1.843 & 1.171 & 1.787 & 1.46 & 1.063 & 1.005 \\ & 25 & 2.135 & 2.322 & 1.063 & 2.243 & 1.076 & 3.774 & 2.197 & 1.152 & 2.137 & 1.261 & 0.872 & 0.852 \\ SRE & 50 & 2.185 & 2.43 & 0.998 & 2.2 & NA & 4.395 & 2.263 & 0.924 & 2.172 & 1.145 & 0.847 & 0.871 \\ & 100 & 2.251 & 2.482 & 1.084 & 2.3 & NA & 3.544 & 2.351 & 0.986 & 2.228 & 1.075 & 0.898 & 0.901 \\ & 250 & 2.231 & 2.504 & 0.991 & 2.229 & NA & 3.599 & 2.317 & 0.912 & 2.196 & 0.936 & 0.869 & 0.882 \\ \hline \hline \end{tabular}
\end{table}
Table 2: Estimated Bias, Mean Absolute Error and Subspace Recovery Error (SRE) for different PCA algorithm for simulation scenario (S2a)
of the simulation setups considered, the proposed rPCAdpd algorithm yields significantly better estimates of principal components than the existing algorithms when the dimension of the data \(p\) is large, which is also theoretically justified by its dimension-independent asymptotic breakdown point.
## 5 Real Data Analysis
In this section, we demonstrate applications of the proposed rPCAdpd estimator on three real-life datasets. The first two datasets, namely the Car dataset and the Octane dataset are popular benchmark datasets used to compare performances of different RPCA algorithms (see Hubert et al. (2005) for details). We also consider a novel Credit Card Fraud Detection dataset to demonstrate how the proposed robust PCA estimator can serve as a preliminary preprocessing step to identify fraudulent transactions using credit cards before applying binary classification algorithms.
\begin{table}
\begin{tabular}{c c c c c c c c c c c c c c} \hline \hline Metric & \(p\) & Classical & LOC & ROBPCA & Proj & RobCov & Grid & Gmed & PCP & DPD & DPD & DPD & DPD \\ & 25 & 0.328 & 2.182 & 0.249 & 0.305 & 0.604 & 1.168 & 0.286 & 2.458 & 0.335 & 0.127 & 0.112 & 0.105 \\ Bias & 50 & 0.855 & 4.592 & 0.139 & 0.494 & NA & 2.04 & 0.845 & 4.745 & 0.944 & 0.356 & 0.348 & 0.347 \\ & 100 & 1.793 & 9.394 & 0.152 & 1.086 & NA & 3.896 & 1.713 & 9.403 & 1.861 & 0.858 & 0.8 & 0.775 \\ & 250 & 3.839 & 23.786 & 1.204 & 2.871 & NA & 10.833 & 3.264 & 23.392 & 4.119 & 1.396 & 1.331 & 1.275 \\ \hline \multirow{12}{*}{MAE} & 10 & 29.127 & 74.586 & 28.237 & 32.563 & 44.906 & 57.255 & 25.461 & 106.444 & 34.154 & 22.089 & 19.424 & 19.468 \\ & 25 & 60.747 & 218.249 & 56.749 & 76.163 & 87.508 & 156.242 & 60.369 & 24.518 & 64.18 & 44.461 & 42.378 & 42.696 \\ & 50 & 129.217 & 459.232 & 99.63 & 149.733 & NA & 314.016 & 127.928 & 474.518 & 140.922 & 83.036 & 82.464 & 81.935 \\ & 100 & 253.278 & 939.405 & 195.978 & 280.408 & NA & 510.712 & 251.441 & 940.342 & 266.573 & 163.12 & 158.681 & 158.084 \\ & 250 & 633.745 & 2378.584 & 496.981 & 745.615 & NA & 1303.445 & 623.223 & 2339.228 & 676.812 & 398.135 & 394.581 & 395.265 \\ \hline \multirow{12}{*}{SRE} & 10 & 1.815 & 2.014 & 1.099 & 2.118 & 1.485 & 2.823 & 1.87 & 1.194 & 1.821 & 1.159 & 1 & 0.997 \\ & 25 & 2.167 & 2.43 & 1.013 & 2.408 & 2.127 & 4.035 & 2.261 & 1.151 & 2.064 & 0.992 & 0.888 & 0.902 \\ \cline{1-1} & 50 & 2.221 & 2.47 & 1.043 & 2.531 & NA & 4.64 & 2.328 & 1.03 & 2.101 & 0.983 & 0.971 & 0.969 \\ \cline{1-1} & 100 & 2.242 & 2.472 & 1.028 & 2.531 & NA & 3.721 & 2.34 & 0.96 & 1.942 & 0.918 & 0.882 & 0.893 \\ \cline{1-1} & 250 & 2.251 & 2.515 & 0.963 & 2.535 & NA & 3.702 & 2.379 & 0.906 & 2.019 & 0.882 & 0.869 & 0.897 \\ \hline \hline \end{tabular}
\end{table}
Table 4: Estimated Bias, Mean Absolute Error and Subspace Recovery Error (SRE) for different PCA algorithm for simulation scenario (S2c)
\begin{table}
\begin{tabular}{c c c c c c c c c c c c c c} \hline \hline Metric & \(p\) & Classical & LOC & ROBPCA & Proj & RobCov & Grid & Gmed & PCP & DPD & DPD & DPD & DPD \\ & 25 & 0.328 & 2.182 & 0.249 & 0.305 & 0.604 & 1.168 & 0.286 & 2.458 & 0.335 & 0.127 & 0.112 & 0.105 \\ Bias & 50 & 0.855 & 4.592 & 0.139 & 0.494 & NA & 2.04 & 0.845 & 4.745 & 0.944 & 0.356 & 0.348 & 0.347 \\ & 100 & 1.793 & 9.394 & 0.152 & 1.086 & NA & 3.896 & 1.713 & 9.403 & 1.861 & 0.858 & 0.8 & 0.775 \\ & 250 & 3.839 & 23.786 & 1.204 & 2.871 & NA & 10.833 & 3.264 & 23.392 & 4.119 & 1.396 & 1.331 & 1.275 \\ \hline \multirow{12}{*}{MAE} & 10 & 29.127 & 74.586 & 28.237 & 32.563 & 44.906 & 57.255 & 25.461 & 106.444 & 34.154 & 22.089 & 19.424 & 19.468 \\ & 25 & 60.747 & 218.249 & 56.749 & 76.163 & 87.508 & 156.242 & 60.369 & 24.518 & 64.18 & 44.461 & 42.378 & 42.696 \\ \cline{1-1} & 50 & 129.217 & 459.232 & 99.63 & 149.733 & NA & 314.016 & 127.928 & 474.518 & 140.92 & 83.036 & 82.464 & 81.935 \\ \cline{1-1} & 100 & 253.278 & 939.405 & 195.978 & 280.408 & NA & 510.712 & 251.441 & 940.342 & 266.573 & 163.12 & 158.681 & 158.084 \\ \cline{1-1} & 250 & 633.745 & 2378.584 & 496.981 & 745.615 & NA & 1303.445 & 623.223 & 2339.228 & 676.812 & 398.135 & 394.581 & 395.265 \\ \hline \multirow{12}{*}{SRE} & 10 & 1.815 & 2.014 & 1.099 & 2.118 & 1.485 & 2.823 & 1.87 & 1.194 & 1.821 & 1.159 & 1 & 0.997 \\ & 25 & 2.167 & 2.43 & 1.013 & 2.408 & 2.127 & 4.035 & 2.261 & 1.151 & 2.064 & 0.992 & 0.888 & 0.902 \\ \cline{1-1} & 50 & 2.221 & 2.47 & 1.043 & 2.531 & NA & 4.64 & 2.328 & 1.03 & 2.101 & 0.983 & 0.971 & 0.969 \\ \cline{1-1} & 100 & 2.242 & 2.472 & 1.028 & 2.531 & NA & 3.721 & 2.34 & 0.96 & 1.942 & 0.918 & 0.882 & 0.893 \\ \cline{1-1} & 250 & 2.251 & 2.515 & 0.963 & 2.535 & NA & 3.702 & 2.379 & 0.906 & 2.019 & 0.882 & 0.869 & 0.897 \\ \hline \hline \end{tabular}
\end{table}
Table 3: Estimated Bias, Mean Absolute Error and Subspace Recovery Error (SRE) for different PCA algorithm for simulation scenario (S2b)
### Car Dataset
The Car dataset comprises \(n=111\) observations of cars with \(p=11\) variables, including the length, width, and height of the car. This dataset has served as a benchmark for various robust PCA methods (Hubert et al., 2005; Croux et al., 2007). We utilize it to assess the performance of our proposed rPCAdpd method on outlier detection. For visual evaluation, we adopt orthogonal and score distances as diagnostic metrics (Hubert et al., 2005).
Analyzing screeplots for both rPCAdpd and the classical PCA for the Car dataset reveals that the first four principal components capture more than 90% of the variance. We thus apply both algorithms to extract these components and compute orthogonal and score distances for each observation. Figure 1 illustrates this diagnostic analysis. Classical PCA identifies a cluster of influential points (observations \(25,30,32,34\), and \(36\)), which are also detected by the rPCAdpd estimator. These points share a value of \((-2)\) for \(4\) of the \(11\) variables: Rear.Hd, Rear.Seat, Rear.Shld, and Luggage. However, classical PCA assigns low orthogonal distances to these outliers,
\begin{table}
\begin{tabular}{c c c c c c c c c c c c c} \hline \hline Metric & \(p\) & Classical & LOC & ROBPCA & Proj & RobCov & Grid & Gmed & PCP & DPD & DPD & DPD & DPD \\ & 25 & 0.534 & 2.185 & 0.403 & 0.494 & 0.412 & 1.09 & 0.127 & 2.452 & 0.515 & 0.324 & 0.247 & 0.247 \\ Bias & 50 & 1.218 & 4.58 & 1.018 & 0.997 & NA & 2.249 & 0.296 & 4.717 & 1.159 & 0.715 & 0.493 & 0.455 \\ & 100 & 2.39 & 9.389 & 1.431 & 1.876 & NA & 4.388 & 0.597 & 9.372 & 2.218 & 1.201 & 0.873 & 0.857 \\ & 250 & 5.09 & 23.783 & 2.675 & 2.997 & NA & 9.718 & 1.631 & 23.309 & 4.503 & 1.83 & 1.424 & 1.451 \\ \hline \multirow{7}{*}{MAE} & 10 & 32.656 & 74.327 & 30.411 & 42.304 & 48.105 & 66.025 & 24.609 & 10.512 & 34.086 & 29.286 & 23.804 & 21.794 \\ & 25 & 76.302 & 218.539 & 58.718 & 28.371 & 62.299 & 144.168 & 53.527 & 24.173 & 78.951 & 61.355 & 54.034 & 54.426 \\ \cline{1-1} & 50 & 160.631 & 458.022 & 131.667 & 166.613 & NA & 302.974 & 113.246 & 47.168 & 166.973 & 127.601 & 105.154 & 100.407 \\ \cline{1-1} & 100 & 303.042 & 938.894 & 218.99 & 303.707 & NA & 503.095 & 218.692 & 937.193 & 314.138 & 220.16 & 188.221 & 189.425 \\ \cline{1-1} & 250 & 815.964 & 2378.289 & 589.99 & 858.143 & NA & 1279.957 & 662.041 & 2330.966 & 817.858 & 559.381 & 515.87 & 516.502 \\ \hline \multirow{7}{*}{SRE} & 10 & 1.822 & 1.882 & 1.152 & 1.945 & 1.537 & 2.698 & 1.822 & 1.155 & 1.818 & 1.636 & 1.292 & 1.158 \\ & 25 & 2.154 & 2.272 & 1.003 & 2.128 & 1.048 & 3.757 & 2.185 & 1.142 & 2.155 & 1.4 & 1.054 & 1.053 \\ \cline{1-1} & 50 & 2.204 & 2.368 & 1.017 & 2.152 & NA & 4.395 & 2.287 & 0.978 & 2.206 & 1.423 & 0.961 & 0.919 \\ \cline{1-1} & 100 & 2.251 & 2.475 & 1.02 & 2.279 & NA & 3.587 & 2.311 & 1.01 & 2.23 & 1.282 & 0.951 & 0.955 \\ \cline{1-1} & 250 & 2.264 & 2.521 & 1.072 & 2.202 & NA & 3.525 & 2.343 & 1.015 & 2.16 & 1.165 & 0.98 & 0.98 \\ \hline \hline \end{tabular}
\end{table}
Table 6: Estimated Bias, Mean Absolute Error and Subspace Recovery Error (SRE) for different PCA algorithm for simulation scenario (S3a)
\begin{table}
\begin{tabular}{c c c c c c c c c c c c c} \hline \hline Metric & \(p\) & Classical & LOC & ROBPCA & Proj & RobCov & Grid & Gmed & PCP & DPD & DPD & DPD \\ & 25 & 0.534 & 2.185 & 0.403 & 0.494 & 0.412 & 1.09 & 0.127 & 2.452 & 0.515 & 0.324 & 0.247 & 0.247 \\ Bias & 50 & 1.218 & 4.58 & 1.018 & 0.997 & NA & 2.249 & 0.296 & 4.717 & 1.159 & 0.715 & 0.493 & 0.455 \\ & 100 & 2.39 & 9.389 & 1.431 & 1.876 & NA & 4.388 & 0.597 & 9.372 & 2.218 & 1.201 & 0.873 & 0.857 \\ & 250 & 5.09 & 23.783 & 2.675 & 2.997 & NA & 9.718 & 1.631 & 23.309 & 4.503 & 1.83 & 1.424 & 1.451 \\ \hline \multirow{7}{*}{MAE} & 10 & 32.656 & 74.327 & 30.411 & 42.304 & 48.105 & 66.025 & 24.609 & 10.512 & 34.086 & 29.286 & 23.804 & 21.794 \\ & 25 & 76.302 & 218.539 & 58.718 & 28.371 & 62.299 & 144.168 & 53.527 & 24.173 & 78.951 & 61.355 & 54.034 & 54.426 \\ \cline{1-1} & 250 & 160.631 & 458.022 & 131.667 & 166.613 & NA & 302.974 & 113.246 & 47.168 & 166.973 & 127.601 & 105.154 & 100.407 \\ \cline{1-1} & 100 & 303.042 & 938.894 & 218.99 & 303.707 & NA & 503.095 & 218.692 & 937.193 & 314.138 & 220.16 & 188.221 & 189.425 \\ \cline{1-1} & 250 & 815.964 & 2378.289 & 589.99 & 858.143 & NA & 1279.957 & 662.041 & 2330.966 & 817.858 & 559.381 & 515.87 & 516.502 \\ \hline \multirow{7}{*}{SRE} & 10 & 1.822 & 1.882 & 1.152 & 1.945 & 1.537 & 2.698 & 1.822 & 1.155 & 1.818 & 1.636 & 1.292 & 1.158 \\ & 25 & 2.154 & 2.272 & 1.003 & 2.128 & 1.048 & 3.757 & 2.185 & 1.142 & 2.155 & 1.4 & 1.054 & 1.053 \\ \cline{1-1} & 250 & 2.204 & 2.368 & 1.017 & 2.152 & NA & 4.395 & 2.287 & 0.978 & 2.206 & 1.423 & 0.961 & 0.919 \\ \cline{1-1} & 100 & 2.251 & 2.475 & 1.02 & 2.279 & NA & 3.587 & 2.311 & 1.01 & 2.23 & 1.282 & 0.951 & 0.955 \\ \cline
indicating their good fit, thus inflating distances for most points. Conversely, rPCAdpd assigns high orthogonal distances to these outliers. Additionally, rPCAdpd identifies a different set of outliers (observations \(102-107,109\)), unnoticed by classical PCA, consistent with findings in Hubert et al. (2005). As demonstrated in Figure 1, ROBPCA and Gmedian algorithms also spot such outliers.
### Octane Data
The Octane dataset, sourced from Esbensen et al. (2002), features spectroscopic data with octane numbers derived from near-infrared (NIR) absorbance spectra of 39 gasoline samples. Measurements span 226 electromagnetic radiation wavelengths (1102 nm to 1552 nm), each of which gives rise to a feature. With 39 observations and 226 features, principal component analysis (PCA) is pivotal for dimension reduction and subsequent analysis. Six samples (25, 26, and \(36-39\)) contain additional alcohol, making them distinct (Hubert et al., 2005).
Similar to the Car dataset, a screeplot analysis reveals that there are only 2 significant principal
\begin{table}
\begin{tabular}{c c c c c c c c c c c c c c} \hline \hline Metric & \(p\) & Classical & LOC & ROBPCA & Proj & RobCov & Grid & Gmed & PCP & \begin{tabular}{c} DPD \\ (0.25) \\ \end{tabular} & \begin{tabular}{c} DPD \\ (0.5) \\ \end{tabular} & \begin{tabular}{c} DPD \\ (0.75) \\ \end{tabular} &
\begin{tabular}{c} DPD \\ (1) \\ \end{tabular} \\ \hline \multirow{4}{*}{Bias} & 10 & 0.454 & 0.756 & 0.376 & 0.415 & 0.509 & 0.749 & 0.172 & 1.065 & 0.478 & 0.399 & 0.283 & 0.207 \\ & 25 & 0.957 & 2.198 & 0.529 & 0.828 & 1.049 & 1.596 & 0.363 & 2.448 & 0.968 & 0.817 & 0.523 & 0.416 \\ & 50 & 2.053 & 4.603 & 0.829 & 1.571 & NA & 3.221 & 0.729 & 4.613 & 2.108 & 1.738 & 0.952 & 0.674 \\ & 100 & 4.085 & 9.41 & 1.438 & 2.712 & NA & 5.838 & 1.174 & 9.136 & 4.235 & 3.121 & 1.585 & 1.015 \\ & 250 & 10.553 & 23.793 & 5.036 & 8.705 & NA & 17.022 & 4.318 & 22.72 & 10.683 & 8.138 & 5.083 & 3.947 \\ \hline \multirow{4}{*}{MAE} & 10 & 52.663 & 75.648 & 40.059 & 50.066 & 53.751 & 81.22 & 29.654 & 106.464 & 56.791 & 50.076 & 39.234 & 32.751 \\ & 25 & 113.553 & 219.799 & 73.592 & 109.254 & 119.102 & 191.308 & 75.088 & 244.825 & 119.018 & 106.796 & 78.32 & 67.627 \\ & 50 & 236.518 & 60.337 & 122.695 & 206.081 & NA & 411.252 & 144.926 & 261.708 & 249.263 & 218.761 & 141.867 & 114.571 \\ & 100 & 492.217 & 941.042 & 241.644 & 399.289 & NA & 647.709 & 288.144 & 914.552 & 534.963 & 445.369 & 293.533 & 237.476 \\ & 250 & 1175.092 & 2379.306 & 659.666 & 1087.267 & NA & 1791.531 & 725.623 & 2273.85 & 1231.25 & 1015.697 & 716.547 & 604.369 \\ \hline \multirow{4}{*}{SRE} & 10 & 1.839 & 1.993 & 1.134 & 2.326 & 1.465 & 2.825 & 1.886 & 1.198 & 1.85 & 1.797 & 1.556 & 1.351 \\ & 25 & 2.22 & 2.414 & 1.071 & 2.784 & 2.217 & 4.183 & 2.249 & 1.234 & 2.237 & 2.026 & 1.356 & 1.176 \\ \cline{1-1} & 50 & 2.304 & 2.528 & 0.97 & 2.888 & NA & 4.909 & 2.358 & 2.287 & 2.312 & 2.149 & 1.43 & 1.196 \\ \cline{1-1} & 100 & 2.286 & 2.491 & 1.034 & 2.838 & NA & 3.809 & 2.307 & 2.288 & 2.309 & 1.993 & 1.223 & 0.995 \\ \cline{1-1} & 250 & 2.307 & 2.481 & 0.952 & 2.83 & NA & 3.847 & 2.34 & 2.308 & 2.317 & 1.997 & 1.348 & 1.172 \\ \hline \hline \end{tabular}
\end{table}
Table 7: Estimated Bias, Mean Absolute Error and Subspace Recovery Error (SRE) for different PCA algorithm for simulation scenario (S3b)
\begin{table}
\begin{tabular}{c c c c c c c c c c c c c c} \hline \hline Metric & \(p\) & Classical & LOC & ROBPCA & Proj & RobCov & Grid & Gmed & PCP & \begin{tabular}{c} DPD \\ (0.25) \\ \end{tabular} & \begin{tabular}{c} DPD \\ (0.5) \\ \end{tabular} & \begin{tabular}{c} DPD \\ (0.75) \\ \end{tabular} &
\begin{tabular}{c} DPD \\ (1) \\ \end{tabular} \\ \hline \multirow{4}{*}{Bias} & 10 & 0.149 & 0.746 & 0.228 & 0.181 & 0.464 & 0.505 & 0.123 & 1.065 & 0.163 & 0.081 & 0.043 & 0.041 \\ & 25 & 0.357 & 2.188 & 0.258 & 0.402 & 0.543 & 1.057 & 0.381 & 2.457 & 0.382 & 0.174 & 0.148 & 0.143 \\ & 50 & 0.777 & 4.59 & 0.33 & 0.455 & NA & 2.011 & 0.676 & 4.738 & 0.823 & 0.296 & 0.265 & 0.253 \\ & 100 & 1.517 & 9.387 & 0.667 & 1.276 & NA & 4.291 & 1.144 & 9.402 & 1.495 & 0.442 & 0.392 & 0.372 \\ & 250 & 3.886 & 23.785 & 1.012 & 2.727 & NA & 9.358 & 3.513 & 23.383 & 3.883 & 1.464 & 1.386 & 1.359 \\ \hline \multirow{4}{*}{MAE} & 10 & 25.443 & 74.609 & 32.294 & 37.131 & 50.803 & 64.494 & 22.821 & 106.453 & 26.579 & 19.682 & 15.649 & 15.393 \\ & 25 & 59.571 & 218.815 & 53.687 & 74.807 & 76.628 & 147.007 & 60.002 & 425.654 & 65.782 & 44.871 & 42.176 & 41.994 \\ \cline{1-1} & 50 & 133.973 & 459.029 & 108.565 & 159.38 & NA & 319.77 & 123.649 & 473.841 & 141.108 & 87.353 & 84.297 & 84.503 \\ \cline{1-1} & 100 & 256.543 & 938.692 & 193.825 & 284.691 & NA & 496.1 & 225.919 & 940.223 & 263.121 & 159.77 & 154.23 & 154.083 \\ \cline{1-1} & 250 & 676.504 & 2378.474 & 524.335 & 711.74 & NA & 1230.442 & 649.349 & 2338.346 & 678.719 & 435.267 & 427.714 & 430.439 \\ \hline \hline \multirow{4}{
components present in the Octane dataset. However, the first eigenvalue estimated by classical PCA (0.132) is more than an order of magnitude larger than the first eigenvalue estimated by rPCAdpd (0.01075); the latter aligns with the estimates obtained from existing robust PCA algorithms (Hubert et al., 2005). Diagnostic plots in Figure 2 demonstrate classical PCA's failure to detect the outliers, except observation 26, while rPCAdpd identifies the alcohol-mixed gasoline samples accurately. The ROBPCA algorithm also detects these outliers, with similar score and orthogonal distances. However, the Gmedian algorithm labels most of these points as orthogonal outliers only.
### Credit Card Fraud Detection
Credit card fraud detection is a very challenging problem because of the specific nature of transaction data and the labelling process. Most practical transaction data is highly imbalanced: the number of fraudulent transactions is far smaller than the extremely large number of valid transactions made on a day-to-day basis. There are primarily two kinds of strategies to detect such fraudulent transactions: the first models the situation as a binary classification problem with some sampling procedure to counter the class imbalance, while the second assumes that the fraudulent transactions are outliers in the data and applies an outlier detection algorithm. Many existing supervised and unsupervised machine learning algorithms (Carcillo et al., 2018, 2019) employ outlier detection to spot such fraudulent transactions. These methods often begin with a principal component analysis (PCA) to reduce dimensions and training time for real-time application.
To this end, we anticipate that the proposed robust PCA algorithm will outperform classical PCA in dimensionality reduction and provide reliable principal component estimates. We demonstrate this using the Credit Card Fraud Detection Dataset from Le Borgne and Bontempi (2004). The dataset encompasses 28 anonymized features over 284807 transactions, with only 492 (about \(0.17\%\)) being fraudulent. For demonstration, we randomly sample 5% of the dataset, including 19 fraudulent transactions. The first 5 principal components, explaining over 80% of the variation, are retained for both the classical and rPCAdpd algorithms. Diagnostic plots in Figure 3 portray the outcomes, with red squares denoting fraudulent transactions. As shown in Figure 3, the classical PCA method fails to separate most of the fraudulent transactions, correctly identifying only 5 (in red). In contrast,
\begin{table}
\begin{tabular}{c c c c c c c c c c c c c} \hline \hline Metric & \(p\) & Classical & LOC & ROBPCA & Proj & RobCov & Grid & Gmed & PCP & \begin{tabular}{c} DPD \\ (0.25) \\ \end{tabular} & \begin{tabular}{c} DPD \\ (0.5) \\ \end{tabular} & \begin{tabular}{c} DPD \\ (0.75) \\ \end{tabular} &
\begin{tabular}{c} DPD \\ (1) \\ \end{tabular} \\ \hline \multirow{6}{*}{Bias} & 10 & 0.268 & 0.79 & 0.139 & 0.347 & 0.633 & 0.724 & 0.183 & 1.066 & 0.324 & 0.217 & 0.115 & 0.113 \\ & 25 & 0.673 & 2.231 & 0.243 & 0.71 & 0.725 & 1.615 & 0.486 & 2.461 & 0.748 & 0.458 & 0.304 & 0.297 \\ & 50 & 1.325 & 4.641 & 0.13 & 1.474 & NA & 3.602 & 0.709 & 4.649 & 1.669 & 0.934 & 0.522 & 0.481 \\ & 100 & 2.772 & 9.436 & 0.101 & 2.798 & NA & 6.713 & 1.462 & 9.206 & 3.428 & 2.046 & 1.142 & 1.098 \\ & 250 & 6.861 & 23.832 & 0.34 & 6.809 & NA & 17.565 & 3.587 & 22.922 & 8.555 & 5.192 & 3.156 & 2.813 \\ \hline \multirow{6}{*}{MAE} & 10 & 35.048 & 79.01 & 31.374 & 49.556 & 67.262 & 81.659 & 27.681 & 106.519 & 39.974 & 32.114 & 20.677 & 19.517 \\ & 25 & 83.555 & 223.086 & 58.178 & 105.35 & 112.409 & 201.416 & 70.421 & 246.107 & 91.476 & 65.052 & 49.229 & 46.971 \\ \cline{1-1} & 50 & 179.148 & 44.061 & 112.288 & 227.377 & NA & 458.559 & 129.521 & 465.585 & 210.767 & 138.549 & 96.856 & 89.979 \\ \cline{1-1} & 100 & 370.199 & 943.586 & 216.485 & 440.313 & NA & 738.381 & 227.546 & 921.936 & 430.631 & 287.941 & 91.931 & 187.771 \\ \cline{1-1} & 250 & 908.806 & 2383.166 & 566.649 & 118.667 & NA & 1918.17 & 688.634 & 2294.489 & 1054.348 & 709.767 & 492.689 & 450.457 \\ \hline \multirow{6}{*}{SRE} & 10 & 1.849 & 2.045 & 1.084 & 2.42 & 2.053 & 3.117 & 1.955 & 1.026 & 1.851 & 1.608 & 1.225 & 1.16 \\ & 25 & 2.145 & 2.383 & 0.948 & 2.836 & 2.283 & 4.553 & 2.233 & 1.077 & 2.134 & 1.409 & 1.04 & 0.965 \\ \cline{1-1} & 50 & 2.235 & 2.495 & 0.983 & 2.951 & NA & 5.331 & 2.34 & 2.274 & 2.286 & 1.496 & 1.112 & 1.022 \\ \cline{1-1} & 100 & 2.269 & 2.529 & 0.985 & 2.928 & NA & 3.971 & 2.351 & 2.311 & 2.296 & 1.503 & 1.01 & 0.953 \\ \cline{1-1} & 250 & 2.28 & 2.506 & 1.039 & 3.009 & NA & 4.132 & 2.388 & 2.326 & 2.295 & 1.551 & 1.176 & 1.076 \\ \hline \hline \end{tabular}
\end{table}
Table 9: Estimated Bias, Mean Absolute Error and Subspace Recovery Error (SRE) for different PCA algorithm for simulation scenario (S3d)
the rPCAdpd algorithm separates out 13 out of 19 outliers. Existing robust PCA methods such as ROBPCA and GMedian spot 7 and 6 outliers respectively, which are better than classical PCA but at the cost of many false positives (outliers without red squares). Thus, substituting classical PCA with the robust rPCAdpd algorithm in the preprocessing or dimensionality reduction step of this analysis can greatly enhance the results of the existing machine learning algorithms. By doing so, valuable insights about fraudulent transactions can assist the existing outlier detection and classification algorithms on the transformed, lower-dimensional data.
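A rough sketch of this preprocessing-plus-flagging pipeline is given below. Classical PCA from scikit-learn is used only as a placeholder for the robust rPCAdpd fit, whose implementation is not reproduced here, and the quantile-based cutoffs are an illustrative assumption rather than the thresholds used in the paper.

```python
import numpy as np
from sklearn.decomposition import PCA

def pca_outlier_flags(X, r=5, od_quantile=0.975, sd_quantile=0.975):
    """Flag observations with a large orthogonal or score distance after projecting on r PCs.
    Classical PCA stands in for the robust rPCAdpd fit in this sketch."""
    pca = PCA(n_components=r).fit(X)
    scores = pca.transform(X)                         # (n, r) scores on the leading PCs
    X_proj = pca.inverse_transform(scores)            # back-projection onto the PC subspace
    od = np.linalg.norm(X - X_proj, axis=1)           # orthogonal distances
    sd = ((scores ** 2) / pca.explained_variance_).sum(axis=1)  # score distances, cf. Eq. (B.1)
    return (od > np.quantile(od, od_quantile)) | (sd > np.quantile(sd, sd_quantile))

# flags = pca_outlier_flags(X_sample, r=5)  # flagged rows are candidate fraudulent transactions
```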
## 6 Conclusion
As described in Section 1, a plethora of algorithms from an extensive range of disciplines use principal component analysis. Unfortunately, with the emergence of the era of big data, it has become increasingly difficult to check or validate the authenticity, trustworthiness and overall correctness of the data. As a result, most of the input data to these algorithms are highly susceptible to contamination by various forms of noise and outlying observations. Since classical PCA is heavily affected by such outliers, several robust PCA algorithms have been proposed in the last two decades. However, few of these are both fast and scalable. M-estimation based techniques are computationally efficient, but their breakdown point decays rapidly as the dimension increases, making them unsuitable for high dimensional data. On the other hand, MVE, MCD and other projection pursuit based methods are highly scalable, but they are either computationally very intensive or lack proper theoretical guarantees of consistency, asymptotic normality and bounded influence function along with a high breakdown point. We believe that this paper helps to fill this gap by providing a robust, scalable and efficient PCA estimator with
Figure 1: Diagnostic plots for the Car dataset
the help of the popular density power divergence. We demonstrate its various desirable theoretical properties in the present work. It also has a dimension-free breakdown point making it attractive to be used in arbitrarily high dimensional data analysis. Also, the robustness parameter \(\alpha\) in rPCAdpd can be tuned to provide a smooth bridge between efficiency in estimation and robustness capabilities.
In all the datasets used to illustrate the practical applicability of rPCAdpd, we determine the number of significant principal components to be extracted by thresholding the proportion of variation explained by the first few principal components. However, such a procedure requires estimating all principal components first and then computing the proportion. From a computational point of view, it is highly beneficial to estimate the rank of the low-rank matrix \(\mathbf{L}\) first, and then proceed with the estimation of the principal components. We will investigate this direction in a future study.
Figure 2: Diagnostic plots for the Octane dataset
Figure 3: Diagnostic plots for the Credit Card dataset for different robust PCA methods.
## Appendix A Proofs of the Results
### Normalization constant of Elliptically Symmetric Families of Distributions
Here we show that the normalizing constant for the elliptically symmetric family of densities is of the form \(c_{g}\det(\mathbf{\Sigma})^{1/2}\). To see this, we note that it can be expressed as
\[\mathcal{C}_{g}=\int_{\mathbb{R}^{p}}\exp\left[g\left((\mathbf{x}-\mathbf{\mu})^{ \intercal}\sum_{k=1}^{p}\gamma_{k}^{-1}\mathbf{v}_{k}\mathbf{v}_{k}^{\intercal}(\mathbf{x }-\mathbf{\mu})\right)\right]d\mathbf{x}.\]
Let \(\mathbf{P}\) be the \(p\times p\) orthogonal matrix whose rows are the vectors \(\mathbf{v}_{k}^{\intercal}\) for \(k=1,2,\ldots p\). Then, applying a change of variable \(\mathbf{z}=\mathbf{P}^{\intercal}(\mathbf{x}-\mathbf{\mu})\), we can rewrite the integral as
\[\mathcal{C}_{g}=\int_{\mathbb{R}^{p}}\exp\left[g\left(\sum_{k=1}^{p}\gamma_{k} ^{-1}z_{k}^{2}\right)\right]d\mathbf{z},\]
where \(\mathbf{z}=(z_{1},z_{2},\ldots z_{p})^{\intercal}\). Finally, another change of variable with \(u_{k}=z_{k}/\sqrt{\gamma_{k}}\) for \(k=1,2,\ldots p\) yields,
\[\mathcal{C}_{g}=\int_{\mathbb{R}^{p}}\prod_{k=1}^{p}\gamma_{k}^{1/2}\exp\left[g\left(\sum_{k=1}^{p}u_{k}^{2}\right)\right]du_{1}du_{2}\ldots du_{p}=\det(\mathbf{\Sigma})^{1/2}c_{g},\]
where the constant \(c_{g}\) denotes the remaining integral. Clearly, \(c_{g}\) is free of the mean \(\mathbf{\mu}\) and the dispersion matrix \(\mathbf{\Sigma}\), and hence is a constant depending only on the function \(g\).
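For the Gaussian choice \(g(x)=-x/2\), for which \(c_{g}=(2\pi)^{p/2}\), this factorization can be sanity-checked numerically by importance sampling. The sketch below is only an illustration of the identity and is not part of the argument; the proposal variance and sample size are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)
p = 3
A = rng.standard_normal((p, p))
Sigma = A @ A.T + p * np.eye(p)            # an arbitrary positive definite dispersion matrix
Sigma_inv = np.linalg.inv(Sigma)
g = lambda q: -q / 2.0                     # Gaussian generator, so c_g = (2 * pi) ** (p / 2)

# Importance sampling with a wide Gaussian proposal N(0, s2 * I); mu = 0 without loss of generality
s2 = 4.0 * np.max(np.linalg.eigvalsh(Sigma))
m = 200_000
Z = rng.standard_normal((m, p)) * np.sqrt(s2)
log_q = -0.5 * (Z ** 2).sum(axis=1) / s2 - 0.5 * p * np.log(2 * np.pi * s2)
Q = np.einsum('ij,jk,ik->i', Z, Sigma_inv, Z)
C_hat = np.mean(np.exp(g(Q) - log_q))      # Monte Carlo estimate of the integral C_g
C_exact = np.sqrt(np.linalg.det(Sigma)) * (2 * np.pi) ** (p / 2)
print(C_hat, C_exact)                      # the two numbers should roughly agree
```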
### Proof of Theorem 3.1
First note that the eigenvectors \(\mathbf{v}_{k}\) lie in the Stiefel manifold of order \(p\), which is a closed and bounded subset of \(\mathbb{R}^{p}\), hence is compact.
Also, since \(g(x)\) is a continuous decreasing function, \(\lim_{x\to\infty}e^{g(x)}=0\). Otherwise, if \(\lim_{x\to\infty}e^{g(x)}=\epsilon>0\), the integral \(\int_{0}^{\infty}e^{g(x)}dx\) diverges by the comparison test, contradicting the existence of the elliptically symmetric probability density function.
Fixing \(\mathbf{\mu}\in\mathbb{R}^{p}\), let us now observe how the objective function \(Q\) behaves for extreme values of the eigenvalues \(\gamma_{1},\ldots\gamma_{p}\). If \(\gamma_{1}\to 0\), then it follows that
\[\lim_{\gamma_{1}\to 0}Q(\gamma_{1},\ldots,\gamma_{p},\mathbf{\eta})=\lim_{ \gamma_{1}\to 0}\gamma_{1}^{-1/2}\left[\frac{c_{(1+\alpha)g}}{c_{g}}-\lim_{x\to \infty}e^{g(x)}\right]\geq 0,\]
since \(c_{g}>0\) for any choice of \(g\) function by definition. On the other hand, if \(\gamma_{1}\to\infty\), the quadratic form
\[(\mathbf{X}_{i}-\mathbf{\mu})^{\intercal}\sum_{k=1}^{p}\gamma_{k}^{-1}\mathbf{v}_{k}(\mathbf{ \eta})\mathbf{v}_{k}(\mathbf{\eta})^{\intercal}(\mathbf{X}_{i}-\mathbf{\mu})\to(\mathbf{X}_{i}-\bm {\mu})^{\intercal}\sum_{k=2}^{p}\gamma_{k}^{-1}\mathbf{v}_{k}(\mathbf{\eta})\mathbf{v}_{k}( \mathbf{\eta})^{\intercal}(\mathbf{X}_{i}-\mathbf{\mu}).\]
Then by the strong law of large numbers, it follows that for sufficiently large \(n\), with probability \(1\),
\[\frac{1}{n}\sum_{i=1}^{n}\exp\left\{\alpha g\left((\mathbf{X}_{i}-\mathbf{ \mu})^{\intercal}\sum_{k=2}^{p}\gamma_{k}^{-1}\mathbf{v}_{k}(\mathbf{\eta})\mathbf{v}_{k}( \mathbf{\eta})^{\intercal}(\mathbf{X}_{i}-\mathbf{\mu})\right)\right\}\] \[\rightarrow \mathbb{E}\left[\exp\left\{\alpha g\left((\mathbf{X}-\mathbf{\mu})^{ \intercal}\sum_{k=2}^{p}\gamma_{k}^{-1}\mathbf{v}_{k}(\mathbf{\eta})\mathbf{v}_{k}(\mathbf{ \eta})^{\intercal}(\mathbf{X}-\mathbf{\mu})\right)\right\}\right]\] \[\geq \mathbb{E}\left[\exp\left\{\alpha g\left((\mathbf{X}-\mathbf{\mu})^{ \intercal}\sum_{k=1}^{p}\gamma_{k}^{-1}\mathbf{v}_{k}(\mathbf{\eta})\mathbf{v}_{k}(\mathbf{ \eta})^{\intercal}(\mathbf{X}-\mathbf{\mu})\right)\right\}\right]\] \[= \int_{R^{p}}\exp\left\{(1+\alpha)g\left((\mathbf{x}-\mathbf{\mu})^{ \intercal}\sum_{k=1}^{p}\gamma_{k}^{-1}\mathbf{v}_{k}(\mathbf{\eta})\mathbf{v}_{k}(\mathbf{ \eta})^{\intercal}(\mathbf{x}-\mathbf{\mu})\right)\right\}d\mathbf{x}\] \[= \frac{c_{(1+\alpha)g}}{c_{g}},\]
where the inequality uses the fact that \(g\) is monotonically decreasing. Therefore, for sufficiently large \(n\), with probability \(1\), \(Q(\gamma_{1},\ldots\gamma_{p},\mathbf{\eta})\) increases to \(0\) as \(\gamma_{1}\) increases to \(\infty\). Hence, for any given \(\epsilon>0\), there exists \(0<a_{1}<b_{1}<\infty\) such that \(Q(\gamma_{1},\gamma_{2},\ldots\gamma_{p},\mathbf{\eta})>(-\epsilon)\) for any \(\gamma_{1}\not\in[a_{1},b_{1}]\). Note that, since \(\gamma_{1}\) is chosen arbitrarily, the same conclusion also holds for all other eigenvalues, possibly with different choices of \(a_{k}\) and \(b_{k}\) for \(k=1,2,\ldots p\). Letting \(\epsilon=-\inf Q(\gamma_{1},\ldots\gamma_{p},\mathbf{\eta})/2\) (which is finite by the continuity of \(Q\) and the limiting behaviour described above) and considering the set \(K=\prod_{k=1}^{p}[a_{k},b_{k}]\times S\), we note that the infimum of \(Q\) must lie within the set \(K\). Since \(K\) is compact, it follows by the Extreme Value Theorem that the infimum must be attained. This proves the existence of the rPCAdpd estimator for any arbitrary value of \(\mathbf{\mu}\), including the location estimate \(\widehat{\mathbf{\mu}}\).
### Proof of Theorem 3.2
Let \(\widehat{\mathbf{\mu}}_{Y}\) and \(\widehat{\mathbf{\mu}}_{X}\) be the robust estimates of the location based on the sample \(\mathbf{Y}_{1},\ldots\mathbf{Y}_{n}\) and \(\mathbf{X}_{1},\ldots\mathbf{X}_{n}\) respectively. Then by the orthogonal equivariance of the location estimator, we have that \(\widehat{\mathbf{\mu}}_{Y}=a\mathbf{P}\widehat{\mathbf{\mu}}_{X}+\mathbf{b}\). The equivariance property for the estimated eigenvalues and eigenvectors by the rPCAdpd algorithm then follows from the observation that the quadratic form of the transformed data can be expressed as
\[(\mathbf{Y}_{i}-\widehat{\mathbf{\mu}}_{Y})^{\intercal}\left(\sum_{k=1}^{p}\gamma_{k}^{-1}\mathbf{v}_{k}\mathbf{v}_{k}^{\intercal}\right)(\mathbf{Y}_{i}-\widehat{\mathbf{\mu}}_{Y}) =a^{2}(\mathbf{X}_{i}-\widehat{\mathbf{\mu}}_{X})^{\intercal}\mathbf{P}^{\intercal}\left(\sum_{k=1}^{p}\gamma_{k}^{-1}\mathbf{v}_{k}\mathbf{v}_{k}^{\intercal}\right)\mathbf{P}(\mathbf{X}_{i}-\widehat{\mathbf{\mu}}_{X})\] \[=(\mathbf{X}_{i}-\widehat{\mathbf{\mu}}_{X})^{\intercal}\left(\sum_{k=1}^{p}(\gamma_{k}/a^{2})^{-1}\mathbf{P}^{\intercal}\mathbf{v}_{k}\mathbf{v}_{k}^{\intercal}\mathbf{P}\right)(\mathbf{X}_{i}-\widehat{\mathbf{\mu}}_{X}).\]
It shows that, if the rPCAdpd estimates of the eigenvalues for the sample \(\mathbf{X}_{1},\ldots\mathbf{X}_{n}\) are \(\gamma_{k}^{*}\) for \(k=1,2,\ldots p\) respectively, then the rPCAdpd estimates of the eigenvalues for the transformed sample are \(a^{2}\gamma_{k}^{*}\). A similar conclusion can be drawn for the rPCAdpd estimates of the eigenvectors as well.
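This equivariance can also be checked numerically. In the sketch below, classical PCA (an eigendecomposition of the sample covariance matrix) stands in for the robust fit, since the scaling of the eigenvalues by \(a^{2}\) and the rotation of the eigenvectors by \(\mathbf{P}\) follow from the same argument for any orthogonally equivariant estimator.

```python
import numpy as np

rng = np.random.default_rng(1)
n, p, a = 200, 4, 3.0
X = rng.multivariate_normal(np.zeros(p), np.diag([4.0, 3.0, 2.0, 1.0]), size=n)
P, _ = np.linalg.qr(rng.standard_normal((p, p)))    # a random orthogonal matrix
b = rng.standard_normal(p)
Y = a * X @ P.T + b                                 # row-wise Y_i = a * P @ X_i + b

def pca_eig(Z):
    return np.linalg.eigh(np.cov(Z, rowvar=False))  # (eigenvalues, eigenvectors), ascending order

lam_x, V_x = pca_eig(X)
lam_y, V_y = pca_eig(Y)
print(np.allclose(lam_y, a ** 2 * lam_x, atol=1e-6))         # eigenvalues scale by a^2
print(np.allclose(np.abs(P @ V_x), np.abs(V_y), atol=1e-6))  # eigenvectors rotate by P (up to sign)
```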
### Proof of Lemma 3.1
Let, \(h_{\mathbf{\theta}}(\mathbf{x})=c_{\alpha}^{-1}(\mathbf{\theta})f_{\mathbf{\theta}}^{(1+\alpha) }(\mathbf{x})\) be another density function. Note that
\[\frac{\partial}{\partial\mathbf{\theta}}\log(h_{\mathbf{\theta}}(\mathbf{x}))=-\frac{ \partial}{\partial\mathbf{\theta}}\log(c_{\alpha}(\mathbf{\theta}))+(1+\alpha)u_{\mathbf{ \theta}}(\mathbf{x}),\]
where \(u_{\theta}(\mathbf{x})\) is the score function corresponding to \(f_{\mathbf{\theta}}(\mathbf{x})\). Under the standard regularity conditions, one can exchange the differentiation and the integral sign to obtain that the expectation of the score function is equal to \(0\). Therefore,
\[0=\int\frac{\partial}{\partial\mathbf{\theta}}\log(h_{\mathbf{\theta}}(\mathbf{x}))h_{\mathbf{ \theta}}(\mathbf{x})d\mathbf{x}=-\frac{\partial}{\partial\mathbf{\theta}}\log(c_{\alpha}( \mathbf{\theta}))+\frac{(1+\alpha)}{c_{\alpha}(\mathbf{\theta})}\xi_{\mathbf{\theta}}.\]
Interchanging the sides and solving for \(\xi_{\mathbf{\theta}}\) yields the result.
### Proof of Lemma 3.2
Starting with the decomposition
\[u_{\mathbf{\theta}}^{h}(\mathbf{x})=\frac{\partial}{\partial\mathbf{\theta}}\log(h_{\mathbf{ \theta}}(\mathbf{x}))=-\frac{\partial}{\partial\mathbf{\theta}}\log(c_{\alpha}(\mathbf{ \theta}))+(1+\alpha)u_{\mathbf{\theta}}(\mathbf{x}),\]
it follows that
\[\left(u_{\mathbf{\theta}}^{h}(\mathbf{x})\right)\left(u_{\mathbf{\theta}}^{h}(\mathbf{x})\right)^{\intercal} =\left(\frac{\partial}{\partial\mathbf{\theta}}\log(c_{\alpha}(\mathbf{\theta}))\right)\left(\frac{\partial}{\partial\mathbf{\theta}}\log(c_{\alpha}(\mathbf{\theta}))\right)^{\intercal}\] \[-(1+\alpha)\left[u_{\mathbf{\theta}}(\mathbf{x})\left(\frac{\partial}{\partial\mathbf{\theta}}\log(c_{\alpha}(\mathbf{\theta}))\right)^{\intercal}+\left(\frac{\partial}{\partial\mathbf{\theta}}\log(c_{\alpha}(\mathbf{\theta}))\right)u_{\mathbf{\theta}}^{\intercal}(\mathbf{x})\right]+(1+\alpha)^{2}u_{\mathbf{\theta}}(\mathbf{x})u_{\mathbf{\theta}}^{\intercal}(\mathbf{x}).\]
Multiplying both sides with \(h_{\mathbf{\theta}}(\mathbf{x})\) and integrating with respect to \(\mathbf{x}\) yields
\[i^{h}(\mathbf{\theta})=\left(\frac{\nabla_{\mathbf{\theta}}c_{\alpha}(\mathbf{\theta})}{c _{\alpha}(\mathbf{\theta})}\right)\left(\frac{\nabla_{\mathbf{\theta}}c_{\alpha}(\mathbf{ \theta})}{c_{\alpha}(\mathbf{\theta})}\right)^{\intercal}-2\left(\frac{\nabla_{ \mathbf{\theta}}c_{\alpha}(\mathbf{\theta})}{c_{\alpha}(\mathbf{\theta})}\right)\left( \frac{\nabla_{\mathbf{\theta}}c_{\alpha}(\mathbf{\theta})}{c_{\alpha}(\mathbf{\theta})} \right)^{\intercal}+\frac{(1+\alpha)^{2}}{c_{\alpha}(\mathbf{\theta})}J_{\mathbf{ \theta}},\]
where \(\nabla_{\mathbf{\theta}}c_{\alpha}(\mathbf{\theta})=\frac{\partial c_{\alpha}(\mathbf{ \theta})}{\partial\mathbf{\theta}}\). Solving for \(J_{\mathbf{\theta}}\) yields Eq. (3.4).
### Proof of Corollary 3.2
Since the normalized density \(c_{\alpha}^{-1}(\mathbf{\theta})f_{\mathbf{\theta}}^{(1+\alpha)}\) also belongs to an elliptically symmetric class of densities, it follows that
\[c_{\alpha}(\mathbf{\theta})=c_{(1+\alpha)g}\prod_{k=1}^{p}(\gamma_{k})^{1/2}c_{g}^ {-(1+\alpha)}\prod_{k=1}^{p}(\gamma_{k})^{-(1+\alpha)/2}.\]
Putting this value and its derivative with respect to \(\mathbf{\theta}\) into Lemma 3.1 yields Corollary 3.2.
### Proof of Corollary 3.3
We start by defining a few notations as follows:
\[Q(\mathbf{x}) =(\mathbf{x}-\mathbf{\mu}^{*})^{\intercal}\left(\sum_{k=1}^{p}\gamma_{k}^ {-1}\mathbf{v}_{k}\mathbf{v}_{k}^{\intercal}\right)(\mathbf{x}-\mathbf{\mu}^{*}),\] \[A_{2}(g) =\int g^{\prime}(Q(\mathbf{x}))(\mathbf{x}-\mathbf{\mu}^{*})(\mathbf{x}-\mathbf{\mu} ^{*})^{\intercal}c_{0}(\mathbf{\theta})^{-1}\exp(g(Q(\mathbf{x})))d\mathbf{x},\] \[A_{4}(g;\mathbf{u},\mathbf{v}) =\int\left(g^{\prime}(Q(\mathbf{x}))\right)^{2}(\mathbf{x}-\mathbf{\mu}^{*}) (\mathbf{x}-\mathbf{\mu}^{*})^{\intercal}\mathbf{u}\mathbf{v}^{\intercal}(\mathbf{x}-\mathbf{\mu}^{*}) (\mathbf{x}-\mathbf{\mu}^{*})^{\intercal}c_{0}(\mathbf{\theta})^{-1}\exp(g(Q(\mathbf{x})))d \mathbf{x}.\]
Here, \(c_{0}(\mathbf{\theta})\) is the normalizing constant for the elliptically symmetric density proportional to \(\exp(g(Q(\mathbf{x})))\). Clearly, \(c_{0}(\mathbf{\theta})=c_{g}\prod_{k=1}^{p}\gamma_{k}^{1/2}\). All of these quantities are well
defined due to Assumptions (A1) and (A3). Also, let \(\mathbf{G}_{k}=\frac{\partial\mathbf{v}_{k}}{\partial\mathbf{\eta}}\) denote the \(p(p+1)/2\times p\) matrix whose columns are the gradients of the entries \(v_{kj}\), \(j=1,2,\ldots p\), of \(\mathbf{v}_{k}\) with respect to the parameter \(\mathbf{\eta}\). One important aspect to note is that the quantities \(A_{2}(g)\) and \(A_{4}(g;\mathbf{u},\mathbf{v})\) are free of \(\mathbf{\mu}^{*}\), which can be verified by a simple substitution in the integral.
Starting with the identity
\[c_{0}(\mathbf{\theta})=\int\exp\left(g(Q(\mathbf{x}))\right)d\mathbf{x},\]
and differentiating both sides by \(\gamma_{k}\) and \(\mathbf{\eta}\) respectively, we obtain the identities
\[\gamma_{k}^{-2}\mathbf{v}_{k}^{\intercal}A_{2}(g)\mathbf{v}_{k}=-\frac{1}{2\gamma_{k }},\ \sum_{k=1}^{p}\gamma_{k}^{-1}\mathbf{G}_{k}A_{2}(g)\mathbf{v}_{k}=0,\] (A.1)
both of which will be used later in the proof.
Let \(h_{\mathbf{\theta}}(\mathbf{x})=c_{(1+\alpha)}(\mathbf{\theta})^{-1}e^{(1+\alpha)g(Q(\mathbf{ x}))}\) be a density belonging to the same elliptically symmetric family. Then, the score function \(u_{\mathbf{\theta}}^{h}(\mathbf{x})\) corresponding to \(h_{\mathbf{\theta}}\) can be expressed as
\[u_{\mathbf{\theta}}^{h}(\mathbf{x})=\begin{bmatrix}\frac{1}{2}\text{Diag}\left(\mathbf{ \Gamma}^{-1}\right)-(1+\alpha)g^{\prime}(Q(\mathbf{x}))\mathbf{\Gamma}^{-2}\mathbf{V}^{ \intercal}(\mathbf{I}_{p}\otimes(\mathbf{x}-\mathbf{\mu})(\mathbf{x}-\mathbf{\mu})^{\intercal})\bm {V}\\ 2(1+\alpha)g^{\prime}(Q(\mathbf{x}))\mathbf{G}(\mathbf{\Gamma}^{-1}\otimes(\mathbf{x}-\mathbf{\mu })(\mathbf{x}-\mathbf{\mu})^{\intercal})\mathbf{V}\mathbf{1}_{p}\end{bmatrix}.\] (A.2)
Using the expression for \(u_{\mathbf{\theta}}^{h}(\mathbf{x})\), we can further differentiate this with respect to the entries of \(\mathbf{\theta}\) and take expectation. This leads to the Fisher Information matrix in the partitioned form as follows,
\[i^{h}(\mathbf{\theta})=\begin{bmatrix}i^{h}(\gamma_{1},\gamma_{1})&\ldots&i^{h}( \gamma_{1},\gamma_{p})&i^{h}(\gamma_{1},\mathbf{\eta})\\ \vdots&\ddots&\vdots&\vdots\\ i^{h}(\gamma_{p},\gamma_{1})&\ldots&i^{h}(\gamma_{p},\gamma_{p})&i^{h}(\gamma _{p},\mathbf{\eta})\\ i^{h}(\gamma_{1},\mathbf{\eta})^{\intercal}&\ldots&i^{h}(\gamma_{p},\mathbf{\eta})^{ \intercal}&i^{h}(\mathbf{\eta},\mathbf{\eta})\end{bmatrix},\]
where,
\[i^{h}(\gamma_{k},\gamma_{l}) =\left(\frac{\partial q_{(1+\alpha)g}}{\partial\gamma_{k}} \right)\left(\frac{\partial q_{(1+\alpha)g}}{\partial\gamma_{l}}\right)+ \left(\frac{\partial q_{(1+\alpha)g}}{\partial\gamma_{k}}\right)\gamma_{l}^{ -2}\mathbf{v}_{l}^{\intercal}A_{2}((1+\alpha)g)\mathbf{v}_{l}\] \[\quad+\left(\frac{\partial q_{(1+\alpha)g}}{\partial\gamma_{l}} \right)\gamma_{k}^{-2}\mathbf{v}_{k}^{\intercal}A_{2}((1+\alpha)g)\mathbf{v}_{k}+ \frac{\mathbf{v}_{k}^{\intercal}A_{4}((1+\alpha)g;\mathbf{v}_{k},\mathbf{v}_{l})\mathbf{v}_{l }}{\gamma_{k}^{2}\gamma_{l}^{2}}\] \[=-\left(\frac{\partial q_{(1+\alpha)g}}{\partial\gamma_{k}} \right)\left(\frac{\partial q_{(1+\alpha)g}}{\partial\gamma_{l}}\right)+ \frac{\mathbf{v}_{k}^{\intercal}A_{4}((1+\alpha)g;\mathbf{v}_{k},\mathbf{v}_{l})\mathbf{v}_{l }}{\gamma_{k}^{2}\gamma_{l}^{2}},\ k,l=1,2,\ldots p\] \[i^{h}(\gamma_{k},\mathbf{\eta}) =-2\left(\frac{\partial q_{(1+\alpha)g}}{\partial\gamma_{k}} \right)\sum_{k=1}^{p}\gamma_{k}^{-1}\mathbf{G}_{k}A_{2}((1+\alpha)g)\mathbf{v}_{k}- \frac{2}{\gamma_{k}^{2}}\sum_{l=1}^{p}\gamma_{l}^{-1}\mathbf{v}_{k}^{\intercal}A_{4 }((1+\alpha)g;\mathbf{v}_{k},\mathbf{v}_{l})\mathbf{G}_{l}^{\intercal}\] \[=-\frac{2}{\gamma_{k}^{2}}\sum_{l=1}^{p}\gamma_{l}^{-1}\mathbf{v}_{k} ^{\intercal}A_{4}((1+\alpha)g;\mathbf{v}_{k},\mathbf{v}_{l})G_{l}^{\intercal},\ k=1, \ldots p\] \[i^{h}(\mathbf{\eta},\mathbf{\eta}) =4\sum_{k=1}^{p}\sum_{l=1}^{p}\gamma_{k}^{-1}\gamma_{l}^{-1}\mathbf{ G}_{k}A_{4}((1+\alpha)g;\mathbf{v}_{k},\mathbf{v}_{l})\mathbf{G}_{l}^{\intercal},\]
where we use the identities (A.1). In all of the above expressions, the quantity \(q_{g}\) denotes the logarithm of the normalizing constant, i.e., \(q_{g}=\log(c_{0}(\mathbf{\theta}))\) and \(q_{(1+\alpha)g}=\log(c_{\alpha}(\mathbf{\theta}))\). Finally, Corollary 3.3 follows from using Lemma 3.2 and the expression of \(\mathbf{\xi}_{\mathbf{\theta}}\) given in Corollary 3.2.
### Proof of the Theorem 3.4
The proof of the Theorem 3.4 closely resembles the proof of Theorem 3.1 of Ghosh and Basu (2013). For brevity, we shall only indicate the modifications pertinent to the special scenario of principal components. Given the location estimator \(\widehat{\mathbf{\mu}}\), using the same notation as in Ghosh and Basu (2013), we define
\[V(\mathbf{X},\mathbf{\theta})=\prod_{k=1}^{p}\gamma_{k}^{-\alpha/2}\left[\frac{c_{(1+\alpha)g}}{c_{g}}-\left(1+\frac{1}{\alpha}\right)e^{\alpha g\left((\mathbf{X}-\widehat{\mathbf{\mu}})^{\intercal}\sum_{k=1}^{p}\gamma_{k}^{-1}\mathbf{v}_{k}(\mathbf{\eta})\mathbf{v}_{k}(\mathbf{\eta})^{\intercal}(\mathbf{X}-\widehat{\mathbf{\mu}})\right)}\right]\]
which are the summands in the objective function in Eq. (3.1). Now, conditional on \(\widehat{\mathbf{\mu}}\), by an application of the Law of Large Numbers, we have
\[\frac{1}{n}\sum_{i=1}^{n}\nabla V(\mathbf{X}_{i},\mathbf{\theta}^{*})\mid\widehat{\bm {\mu}}\xrightarrow{P}0,\text{ and, }\ \frac{1}{n}\sum_{i=1}^{n}\nabla^{2}V(\mathbf{X}_{i},\mathbf{\theta}^{*})\mid\widehat{ \mathbf{\mu}}\xrightarrow{P}\mathbf{J}_{\mathbf{\theta}^{*}}\]
where \(\mathbf{\theta}^{*}\) is the true value of the parameters. Now, since the right-hand sides of both of these are continuous functions of \(\widehat{\mathbf{\mu}}\), and since \(\widehat{\mathbf{\mu}}\xrightarrow{P}\mathbf{\mu}^{*}\) (the true location parameter) due to the consistency of the location estimator, it follows that the unconditional random variables also converge in probability to the same values. As the support of the elliptically symmetric family of distributions is assumed to be the entire space \(\mathbb{R}^{p}\), \(\mathbf{J}_{\mathbf{\theta}^{*}}\) is free of the choice of location, which makes this convergence possible. One can then replicate the usual consistency argument to show that the rPCAdpd estimator is consistent.
To prove the asymptotic normality, we need to show that \(T_{n}=\frac{1}{\sqrt{n}}\sum_{i=1}^{n}\nabla V(\mathbf{X}_{i},\mathbf{\theta}^{*})\) converges in distribution to a random variable \(\mathbf{Z}\) following a multivariate normal distribution with mean \(0\) and variance \(\mathbf{K}_{\mathbf{\theta}^{*}}\). Due to Portmanteau's theorem, it is enough to show that for any bounded continuous function \(h\), \(|\mathbb{E}(h(T_{n}))-\mathbb{E}(h(\mathbf{Z}))|\to 0\) as \(n\to\infty\). An application of the Lindeberg-Levy Central Limit Theorem and Portmanteau's theorem yields that, as \(n\to\infty\),
\[|\mathbb{E}(h(T_{n})\mid\widehat{\mathbf{\mu}})-\mathbb{E}(h(\mathbf{Z}))|\to 0.\]
Since \(\mathbb{E}(h(T_{n})\mid\widehat{\mathbf{\mu}})\) is also a bounded and continuous function of \(\widehat{\mathbf{\mu}}\), it follows that
\[\mathbb{E}(h(T_{n}))=\mathbb{E}\left[\mathbb{E}(h(T_{n})\mid\widehat{\mathbf{\mu} })\right]\to\mathbb{E}\left[\mathbb{E}(h(\mathbf{Z})\mid\mathbf{\mu}^{*})\right]= \mathbb{E}(h(\mathbf{Z})),\text{ as }n\to\infty,\]
where the last equality follows from the fact that both the mean and the variance \(\mathbf{K}_{\mathbf{\theta}^{*}}\) of \(\mathbf{Z}\) are free of the choice of location \(\mathbf{\mu}^{*}\). The rest of the proof follows as in Ghosh and Basu (2013).
### Proof of the Corollary 3.4
The generating function for the Gaussian distribution in the elliptically symmetric family of distributions is \(g(x)=(-x/2)\). It follows that \(g^{\prime}(x)=-1/2\) and the normalizing constant \(\mathcal{C}_{g}=(2\pi)^{p/2}\prod_{k=1}^{p}\gamma_{k}^{1/2}\). For ease of notation, we also define
\[c_{\alpha}=\frac{\mathcal{C}_{(1+\alpha)g}}{\mathcal{C}_{g}}=(2\pi)^{-\alpha p /2}(1+\alpha)^{-p/2}\prod_{k=1}^{p}(\gamma_{k}^{*})^{-\alpha/2}.\]
Now, some standard calculation using properties of normal distribution and its quadratic forms (Petersen et al., 2008) reveals that \(A_{2}((1+\alpha)g)=(1+\alpha)\mathbf{\Sigma}^{*}/4\), and
\[A_{4}((1+\alpha)g;\mathbf{u},\mathbf{v})=\frac{1}{4}\left[\mathbf{\Sigma}^{*}\left(\mathbf{u} \mathbf{v}^{\intercal}+\mathbf{v}\mathbf{u}^{\intercal}\right)\Sigma^{*}+\operatorname{ Trace}\left(\mathbf{u}\mathbf{v}^{\intercal}\mathbf{\Sigma}^{*}\right)\mathbf{\Sigma}^{*} \right].\]
In particular, for any \(k,l=1,2,\ldots p\),
\[A_{4}((1+\alpha)g;\mathbf{v}_{k}^{*},\mathbf{v}_{l}^{*}) =\frac{1}{4}\left[\Sigma^{*}\left((\mathbf{v}_{k}^{*})(\mathbf{v}_{l}^{*}) ^{\intercal}+(\mathbf{v}_{l}^{*})(\mathbf{v}_{k}^{*})^{\intercal}\right)\Sigma^{*}+ \operatorname{Trace}\left((\mathbf{v}_{k}^{*})(\mathbf{v}_{l}^{*})^{\intercal}\mathbf{ \Sigma}^{*}\right)\mathbf{\Sigma}^{*}\right]\] \[=\frac{1}{4}\left[\gamma_{k}^{*}\gamma_{l}^{*}\left((\mathbf{v}_{k}^{ *})(\mathbf{v}_{l}^{*})^{\intercal}+(\mathbf{v}_{l}^{*})(\mathbf{v}_{k}^{*})^{\intercal} \right)+\mathbf{1}_{\{k=l\}}\gamma_{l}^{*}\mathbf{\Sigma}^{*}\right],\]
where we use the fact that \(\mathbf{v}_{k}^{*}\) is an eigenvector of \(\Sigma^{*}\) corresponding to the eigenvalue \(\gamma_{k}^{*}\). Thus, it turns out that \(j^{h}(\mathbf{\mu}^{*},\mathbf{\mu}^{*})=\frac{c_{\alpha}}{(1+\alpha)}(\Sigma^{*})^{-1}\), and
\[j^{h}(\gamma_{k}^{*},\gamma_{l}^{*})=\frac{c_{\alpha}}{4(1+\alpha)^{2}\gamma_ {k}^{*}\gamma_{l}^{*}}\left(\alpha^{2}+2\mathbf{1}_{\{k=l\}}\right)\]
and \(j^{h}(\gamma_{k}^{*},\mathbf{\eta}^{*})=0\), where we use the fact that \(\mathbf{G}_{k}\mathbf{v}_{k}^{*}=0\). This equality follows from differentiating both sides of the identity \((\mathbf{v}_{k}^{*})^{\intercal}(\mathbf{v}_{k}^{*})=1\) with respect to the parameter \(\mathbf{\eta}\) at \(\mathbf{\eta}=\mathbf{\eta}^{*}\). Similarly, differentiating the identity \((\mathbf{v}_{k}^{*})^{\intercal}(\mathbf{v}_{l}^{*})=0\) for \(k\neq l\) with respect to \(\mathbf{\eta}\) yields that \(\mathbf{G}_{k}\mathbf{v}_{l}^{*}+\mathbf{G}_{l}\mathbf{v}_{k}^{*}=0\). Some lengthy calculation and an application of this identity allows us to obtain
\[j^{h}(\mathbf{\eta}^{*},\mathbf{\eta}^{*})=\frac{c_{\alpha}}{(1+\alpha)^{2}}\left( \sum_{k=1}^{p}\sum_{l=1}^{p}\left(1-\frac{\gamma_{k}^{*}}{\gamma_{l}^{*}} \right)\mathbf{G}_{k}(\mathbf{v}_{l}^{*})(\mathbf{v}_{k}^{*})^{\intercal}\mathbf{G}_{l}^{ \intercal}\right).\]
A similar calculation may be performed to determine the entries of \(K_{\mathbf{\theta}^{*}}\). This completes the proof of the corollary, with a direct application of Theorem 3.4.
## Appendix B Performance Metrics for Assessment of Principal Components
Letting \(\widehat{\mathbf{P}}\) denote the estimated principal component matrix where we stack each principal component vector as columns, the quantity \(\widehat{\mathbf{X}}=\mathbf{X}\widehat{\mathbf{P}}\widehat{\mathbf{P}}^{\intercal}\) becomes the projection of the samples \(\mathbf{X}_{1},\ldots\mathbf{X}_{n}\) (here \(\mathbf{X}_{i}\) is the \(i\)-th row of \(\mathbf{X}\)) onto the principal component space (i.e., the vector space spanned by the eigenvectors corresponding to the first \(r\) eigenvalues). Then the orthogonal distance for the \(i\)-th datapoint is calculated as the Euclidean distance between \(\mathbf{X}_{i}\) and \(\widehat{\mathbf{X}}_{i}\) where \(\widehat{\mathbf{X}}_{i}\) denotes the \(i\)-th row of \(\widehat{\mathbf{X}}\). On the other hand, the score distance of the \(i\)-th datapoint would be given by
\[\text{Score distance}_{i}=\sum_{k=1}^{r}\frac{\widehat{X}_{ik}^{2}}{\widehat {\gamma}_{k}},\] (B.1)
where \(\widehat{\gamma}_{k}\) is the \(k\)-th eigenvalue and \(\widehat{X}_{ik}\) is the \(k\)-th element of the vector \(\widehat{\mathbf{X}}_{i}\). |
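In code, the two diagnostics can be computed directly from the estimated eigenpairs. A short sketch is given below; Xc is assumed to be the data matrix already centered at the chosen location estimate.

```python
import numpy as np

def orthogonal_and_score_distances(Xc, V_hat, gamma_hat, r):
    """Xc: (n, p) centered data; V_hat: (p, >=r) leading eigenvectors; gamma_hat: (>=r,) eigenvalues."""
    scores = Xc @ V_hat[:, :r]                        # coordinates of each row in the PC basis
    X_hat = scores @ V_hat[:, :r].T                   # projection onto the principal subspace
    od = np.linalg.norm(Xc - X_hat, axis=1)           # orthogonal distances ||X_i - Xhat_i||
    sd = ((scores ** 2) / gamma_hat[:r]).sum(axis=1)  # score distances as in Eq. (B.1)
    return od, sd
```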
2308.00102 | Can A Single Human Supervise A Swarm of 100 Heterogeneous Robots? | An open research question has been whether a single human can supervise a
true heterogeneous swarm of robots completing tasks in real world environments.
A general concern is whether or not the human's workload will be taxed to the
breaking point. The Defense Advanced Research Projects Agency's OFFensive
Swarm-Enabled Tactics program's field exercises that occurred at U.S. Army
urban training sites provided the opportunity to understand the impact of
achieving such swarm deployments. The Command and Control of Aggregate Swarm
Tactics integrator team's swarm commander uses the heterogeneous robot swarm
to conduct relevant missions. During the final OFFSET program field exercise,
the team collected objective and subjective metrics related to the swarm
commander's human performance. A multi-dimensional workload algorithm that
estimates overall workload based on five components of workload was used to
analyze the results. While the swarm commander's workload estimate did cross
the overload threshold frequently, the swarm commander was able to successfully
complete the missions, often under challenging operational conditions. The
presented results demonstrate that a single human can deploy a swarm of 100
heterogeneous robots to conduct real-world missions. | Julie A. Adams, Joshua Hamell, Phillip Walker | 2023-07-31T19:24:00Z | http://arxiv.org/abs/2308.00102v1 | # Can A Single Human Supervise A Swarm of 100 Heterogeneous Robots?
###### Abstract
An open research question has been whether a single human can supervise a true heterogeneous swarm of robots completing tasks in real world environments. A general concern is whether or not the human's workload will be taxed to the breaking point. The Defense Advanced Research Projects Agency's OFFensive Swarm-Enabled Tactics program's field exercises that occurred at U.S. Army urban training sites provided the opportunity to understand the impact of achieving such swarm deployments. The Command and Control of Aggregate Swarm Tactics integrator team's swarm commander uses the heterogeneous robot swarm to conduct relevant missions. During the final OFFSET program field exercise, the team collected objective and subjective metrics related to the swarm commander's human performance. A multi-dimensional workload algorithm that estimates overall workload based on five components of workload was used to analyze the results. While the swarm commander's workload estimates did cross the overload threshold frequently, the swarm commander was able to successfully complete the missions, often under challenging operational conditions. The presented results demonstrate that a single human can deploy a swarm of 100 heterogeneous robots to conduct real-world missions.
## 1 Introduction
Stated simply, the answer to the title's question is: Yes! The Defense Advanced Research Projects Agency's (DARPA) OFFensive Swarm-Enabled Tactics (OFFSET) program (DARPA, nd) created a unique opportunity to investigate a long-standing open question related to a single human's ability to supervise a true heterogeneous swarm of robots completing a complex mission in a complex urban environment. This manuscript presents the first human performance results for such real-world swarm deployments. Swarms of this nature have broad future application in domains such as disaster response (e.g., infrastructure safety inspections, wildland fire identification and tracking) and commercial applications (e.g., general logistics, deliveries).
The Command and Control of Aggregate Swarm Tactics (CCAST) DARPA OFFSET Program integrator team, led by Raytheon BBN and including personnel from Oregon State University and SIFT, LLC, developed
a heterogeneous swarm to advance and accelerate elements of enabling swarm technologies, focusing on the swarm autonomy and human-swarm teaming (Clark et al., 2021). A near-the-battle human supervisor, the Swarm Commander (SC), deployed the heterogeneous robot swarm using mission plans and SC generated tactics to complete the assigned missions. The SC used the CCAST Immersive Interaction Interface, a virtual reality based system, as the only human responsible for deploying the swarm.
The OFFSET program incorporated six Field Exercises (FXs) conducted in urban environments. CCAST supports approximately 200 hardware ground (UGV) and aerial (UAV) vehicles (summarized in Table 1) and 250 simulation vehicles that were deployed throughout the program at United States military urban operations training facilities, or Combined Arms Collective Training Facilities (CACTFs). The missions incorporated either hardware only, CCAST's multi-resolution simulation's virtual, or live-virtual (i.e., hardware and virtual vehicles) swarms. The CCAST system supports hardware and virtual vehicles identically, and the SC interactions are agnostic to the vehicles' instantiation.
The final field exercise, FX-6, occurred at Fort Campbell's Cassidy CACTF in November 2021. A human subjects evaluation collected performance metrics from the team's two SCs during shift deployments. Given the nature of the CCAST swarm, the SCs must be trained in deploying the swarm and in using the SC interface. The evaluation's results support the qualitative evidence generated during the prior field exercises that a single human SC can achieve the mission deployment and associated mission goals. The SCs' overall workload was assessed based on individual contributors to overall workload. Specifically, a multi-dimensional workload algorithm was used to estimate and continuously classify overall workload based on recorded measurements of the cognitive, speech, auditory, and physical workload components that were combined with separate visual workload model values. The SC's estimated overall workload was only classified as an overload state for 3.2% of the 12,181 usable workload estimates, and the algorithm demonstrated sensitivity to workload changes for this challenging human subjects evaluation environment.
The background, Section 2, provides overviews of the CCAST swarm implementation, the immersive virtual reality interface, and the multi-dimensional workload algorithm. The experimental methodology, including important context related to field exercises, and specifically FX-6, are provided in Section 3. An analysis of the evaluation results is provided in Section 4, with Section 5 providing conclusions.
## 2 Background
### CCAST System Overview
CCAST's heterogeneous autonomous hardware swarm is composed of physically small, inexpensive commercial-off-the-shelf vehicles that support large scale swarm operations in small congested areas. The robots' computational capabilities and payloads differ, as detailed in Table 1; however, all robots can be assigned the majority of the mission's tactics. For example, when the mission planner issues a Surveil tactic for the outside of a structure, the assigned UAVs must include the specified number of UAVs with forward facing cameras and a UAV with a downward facing camera. The UAVs are assigned to the tactic based only on their camera payload position, irrespective of the UAV hardware model. Robots designated for indoor operations (i.e., the Aion UGVs, UVify IFO-S, and Modal AI Seeker) have more expensive and capable payloads that support running computationally complex algorithms (e.g., simultaneous localization and mapping). Otherwise, all UAVs can be assigned the same tasks simultaneously.
The swarm vehicles maintain a communication link to support vehicle deconfliction and tasking. The individual vehicles communicate, via an LTE network, discovered obstacles and objectives of interest (i.e., artifacts), as well as a telemetry package to a centralized dispatcher that also enables communication with the SC. The LTE communication network requires the vehicles to have line-of-sight to the base station in order to maintain communications; as a result, vehicles are periodically out of communication.
The dispatcher translates the SC's commands, called tactics, into vehicle understandable instructions (Clark et al., 2021). If the SC explicitly specifies particular vehicles to execute a tactic, the dispatcher's commands are directed at those vehicles. However, the SC does not have to select specific vehicles, rather the dispatcher can automatically select and assign vehicles with the necessary capabilities that are proximally close to the specified tactic's goal execution location. The dispatcher deconflicts vehicle assignments for some tactics, but other tactics require explicit communication of the assigned vehicles' positions so that the vehicles can deconflict themselves.
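As a rough illustration of this capability- and proximity-based auto-allocation, the following sketch selects the nearest idle vehicles that satisfy a tactic's capability requirements. The class and function names (e.g., `Vehicle`, `allocate`) are hypothetical and do not correspond to the CCAST dispatcher's actual implementation.

```python
from dataclasses import dataclass, field
from math import hypot

@dataclass
class Vehicle:
    call_sign: str
    position: tuple                                   # (x, y) in meters, local frame
    capabilities: set = field(default_factory=set)    # e.g., {"forward_camera"}
    idle: bool = True

def allocate(vehicles, required_caps, goal, count):
    """Pick the `count` idle vehicles closest to `goal` that have all required capabilities."""
    eligible = [v for v in vehicles if v.idle and required_caps <= v.capabilities]
    eligible.sort(key=lambda v: hypot(v.position[0] - goal[0], v.position[1] - goal[1]))
    chosen = eligible[:count]
    for v in chosen:
        v.idle = False    # mark as tasked so later tactics skip these vehicles
    return chosen

# Example: a Surveil tactic needing two forward-camera UAVs near (120, 40)
fleet = [Vehicle("uav01", (100, 35), {"forward_camera"}),
         Vehicle("uav02", (300, 10), {"forward_camera"}),
         Vehicle("ugv01", (110, 45), {"lidar"})]
print([v.call_sign for v in allocate(fleet, {"forward_camera"}, (120, 40), 2)])
```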
The CCAST Swarm Tactics Exchange library incorporates both CCAST developed tactics and tactics developed by external collaborators (Prabhakar et al., 2020). Tactics include surveillance (Surveil) of structures or areas of interest, Cordon, Flocking, agent Following, Exploring the interior of buildings, etc. The swarm robots are assigned tactics, either as individuals, or as a coordinated team. The robots can automatically Swap in order to continue tactic execution when robot (i.e., UAV) battery levels become too low (Diehl and Adams, 2022). Once a tactic is assigned, the robots conduct real-time navigation planning using extensions to the real-time, rapidly exploring random tree star (RT-RRT*) algorithm (Naderi et al., 2015).
CCAST's Swarm Tactics and Operations Mission Planner is used prior to mission deployments for developing multi-dimensional, multi-phase mission plans. This planner is integrated with CCAST's multi-resolution swarm simulation, which facilitates evaluating and refining the plans. Once the hardware vehicles are staged in the launch area and powered on, the mission plan is instantiated, binding available vehicles on the LTE network to roles or groups. The SC loads the mission plan and either executes the entire mission plan, or portions of (i.e., signals within) a multi-phase mission plan.
The CCAST team extended Microsoft Research's AirSim (Shah et al., 2018) to provide a multi-fidelity
| **Aion Robotics R1** | **3DR Solo** | **UVify IFO-S** | **Modal AI** | **Modal AI Seeker** |
| --- | --- | --- | --- | --- |
| $3600 | $750 | $3900 | $2300 | $2700 |
| All FXs | All FXs | FX-4 & FX-6 | FX-6 | FX-6 |
| Custom expansion board | Nvidia Jetson TX2 co-processor | Nvidia Jetson Nano co-processor | VOXL companion board | VOXL companion board |
| Raspberry Pi 3B+ co-processor | Intel RealSense depth camera | Intel RealSense depth camera | Stereo camera | Stereo camera |
| 2D spinning lidar | Downward-facing Raspberry Pi Cam and optical flow | Forward-/Downward-facing cameras and optical flow | Downward facing camera and optical flow | Time-of-flight camera |
| USB LTE modem | USB LTE modem | Integrated LTE modem | Integrated LTE modem | Integrated LTE modem |

Table 1: CCAST's Swarm Robot Platforms
swarm simulator. The simulator facilitates system development, pre-field exercise (e.g., congestion testing) and pre-mission (e.g., mission planning) analysis with larger swarm sizes at a more rapid, cheaper and larger scale. The simulator capabilities directly support live-virtual swarms composed of hardware and virtual vehicles during field exercise mission deployments.
CCAST's 3D terrain elevation model includes obstacles and is used to generate a spatial database that also includes the swarm vehicles' telemetry information. Telemetry information is considered to be approximate, given the hardware vehicles' known GPS error.
The DARPA OFFSET program provided proxies for real world entities using AprilTags (Olson, 2011) that were easy for the vehicle's on-board image analysis tools to sense. The tags are placed on flat vertical and horizontal surfaces around the CACTF, such as outside and inside buildings, or on boxes with AprilTags on the four sides and top. The tags represent artifacts ranging from general navigation hints (e.g., building identifiers, ingress markers), non-combatants, hostiles, coded intelligence, and high-value targets. Some artifacts are _active_ and can interact with the vehicles, or vice versa, via Bluetooth. For example, a hostile or an explosive device can neutralize a vehicle before the vehicle neutralizes the hostile or explosive1. The imaging payload is used to recognize the AprilTag identifier that is matched via a look up table to the corresponding artifact, which triggers any necessary vehicle responses.
Footnote 1: Neutralization causes a vehicle to stop its tactic execution. The vehicle cannot execute tactics towards the mission objective until it is revived by a medic, but can be commanded to move about the CACTF.
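The artifact handling described above amounts to a lookup-and-dispatch step: a detected tag identifier is matched against a table of artifact types, and the type determines any triggered vehicle response. A minimal sketch follows; the table contents and responses are invented for illustration and do not reflect the actual OFFSET artifact encoding.

```python
# Hypothetical AprilTag ID -> artifact type table (illustrative values only).
ARTIFACT_TABLE = {
    17: "building_id",
    42: "non_combatant",
    56: "hostile",
    73: "high_value_target",
}

def on_tag_detected(tag_id, vehicle):
    """Match a detected AprilTag against the lookup table and trigger a response."""
    artifact = ARTIFACT_TABLE.get(tag_id, "unknown")
    if artifact == "hostile":
        # Active artifacts interact over Bluetooth; the interaction is stubbed out here.
        vehicle["status"] = "engaging"
    elif artifact == "high_value_target":
        vehicle["status"] = "reporting"
    return artifact

uav = {"call_sign": "uav07", "status": "surveil"}
print(on_tag_detected(56, uav), uav["status"])   # -> hostile engaging
```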
The program's constraints and the sheer expected swarm size, combined with the built urban deployment environments, created unique challenges from a robotics perspective. The missions required autonomous robots capable of navigating while avoiding obstacles and power lines, collecting intelligence, and responding to artifacts via Bluetooth, which in some cases required close proximity; these are challenging objectives for more advanced robots, and even more so for the CCAST swarm. The CACTFs presented common built environment challenges, such as curbs, steps, barriers, street signs, and power lines, as shown in Figure 1(a). UGVs can leverage the road network, but the CCAST 3D terrain elevation model also provided necessary context regarding obstacles, such as barriers, steps, and drainage ditches. While the UAVs autonomously ascended to a safe flight altitude, above buildings, trees, and other obstacles, for autonomous enroute navigation to tactic goal locations, the UAVs frequently performed autonomous tasks at lower altitudes within the built environment, which required avoiding trees, bridges between buildings, and power lines. A common Phase I mission plan tactic was to autonomously Surveil the CACTF's buildings' exteriors to collect intelligence. The UAVs with forward facing cameras descended into the built environment in order to detect and classify the AprilTags on the sides of buildings, such as in Figure 1(b). The FX-6 mission scenario necessitated using multiple UAVs to descend and interact via their Bluetooth beacons with active artifacts on the ground, an achievable but challenging accomplishment.
More specifically, the autonomous swarm robots are assigned the same tactics per either the mission plan
Figure 1: (a) The FX-4 Joint Base Lewis-McChord CACTF. (b) A UAV conducting a building Surveil. Photos courtesy of DARPA.
or the SC. A typical Phase I mission plan incorporated multiple Surveil tactics to gather information to inform specific Phase II mission plan actions (e.g., searching a specific location for additional information). All robots gathered information throughout the CACTF. UAVs with forward facing cameras typically gathered information on the sides of structures, those with downward facing cameras gathered information on the tops of structures or on artifacts lying flat on the ground, while UGVs gathered some information from structures as well as 3-D artifacts on the ground. UGVs conducted tactics in accessible buildings (e.g., those with doorways that UGVs can drive to), but the UVify IFO-S and Modal AI Seeker UAVs were integrated to extend the swarm's access to buildings inaccessible to the UGVs. All robots had either an electronic payload (e.g., for disabling improvised explosive devices) or an anti-personnel payload (e.g., for securing adversaries). Any UGV or UAV with an electronic payload was able to disable the adversaries' electronic systems; similarly, any UGV or UAV with an anti-personnel payload was able to secure an adversary. Some active artifacts required simultaneous interaction by multiple robots with the necessary payload combinations. The Phase II mission plan typically incorporated taking action based on the Phase I intelligence to locate the high-value target, while a Phase III mission plan focused on securing that target. Neutralized UGVs autonomously navigated to a known medic, while neutralized UAVs autonomously returned to the launch zone and landed. Once revived by a medic, the robots either continued their prior tactic or were assigned a new tactic.
### Immersive Interaction Swarm Commander Interface Overview
Real world command and control of heterogeneous swarms requires exploring new interface and control concepts. The CCAST swarm command necessitates a control system in which a single operator can efficiently task hundreds of robots, while maintaining awareness of the environment. OFFSET focused on urban operations across multiple urban blocks. The Immersive Interactive Interface (I3) is the CCAST team's solution (Walker et al., 2023). Traditional command and control stations often rely on two-dimensional top-down map views annotated with entity and tasking symbology, which cannot adequately support the types of missions and tactics required to conduct the OFFSET mission. The OFFSET swarm missions challenged CCAST to move away from these traditional control systems in order to accommodate:
* Swarm groupings, a fluid concept, potentially representing a collection of mixed capability robots.
* Verticality, critical when expressing urban terrain, especially for multi-story structures.
* The sides of structures, critically important for tactical tasking in urban environments.
* The volume of occupied space, especially along the vertical axis.
* Multiple perspective inspection of scenario elements, in terms of raw viewpoint and level of detail/abstraction.
The SC is assumed to be "near-the-battle" with reliable, low latency data links to the battlefield. The SC did not have line of sight observability of the swarm or urban environment. The swarm's UGV and UAV composition necessitates the SC's simultaneous viewing of both robot types. Further, since robots were deployed in the urban environment and entered buildings, the SC needed different viewing perspectives, including the swarms' egocentric perspective. The observability of the vehicles and artifacts can be obstructed by three dimensional virtual CACTF structures; however, the perceived benefits to the SC's overall awareness, including spatial awareness and the ability to precisely localize vehicles and artifacts, outweigh the negatives often associated with immersive interfaces.
I3's virtual reality interface is built within the Unity game engine and leverages SteamVR and the Valve Index hardware system (Walker et al., 2023). The virtual reality places the SC directly in the virtual battle space, enabling the SC to inspect and interact with the swarm at varying detail and control levels. The SC is assumed to be in a dedicated command center "near-the-battle", not physically in the battle field. The SC's laptop is connected to the swarm control network, but is positioned in a physically suitable environment that
supports safe usage of the virtual reality hardware. I3 receives live (or low latency) telemetry from all vehicles via the dispatcher, while the SC issues commands in the form of tactics and mission plan engagements. These aspects help to minimize the impacts of virtual reality induced motion sickness.
The SC's Valve Index head mounted display provides a three dimensional perspective. The two Valve Index handheld controllers are used to inspect, interact with, and navigate the virtual world. The Valve Index chest tracker enables separate reference frames for the head and body, which supports virtual side panels. The system relies on outside-in virtual reality tracking; thus, two tripod-mounted tracking beacons are used during field exercise deployments.
I3's virtual world began with a sand table concept (Walker et al., 2023), see the example in Figure 2(a). The sand table can support rapid perspective transitions, multimodal interaction, and unique visualization options unavailable elsewhere. The SC can manipulate the world space, effectively transforming the world around them, both in terms of navigation and interaction with proxy elements to engage real-world behaviors. The sand table is built upon a hierarchy of transformations, permitting the SC to manipulate rotation, scale, and translation, while still maintaining spatial relationships between modeled elements. Given the OFFSET program's field exercise locations' scale, it was sufficient to treat coordinate translation as a mapping between Latitude, Longitude, and Altitude (mean sea level) into an XYZ reference frame defined in meters. Static world elements defined the operational environment.
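Because each CACTF spans only a few urban blocks, a locally flat (equirectangular) approximation is adequate for the Latitude/Longitude/Altitude to XYZ mapping mentioned above. The sketch below shows one such mapping about a reference origin; it is a standard approximation, not I3's actual implementation.

```python
from math import cos, radians

EARTH_RADIUS_M = 6_371_000.0   # mean Earth radius; adequate at CACTF scale

def lla_to_local_xyz(lat, lon, alt, origin_lat, origin_lon, origin_alt):
    """Equirectangular approximation: meters east (x), north (y), and up (z) from the origin."""
    x = radians(lon - origin_lon) * EARTH_RADIUS_M * cos(radians(origin_lat))
    y = radians(lat - origin_lat) * EARTH_RADIUS_M
    z = alt - origin_alt
    return x, y, z

# Example: a point roughly 111 m north of, and 10 m above, the origin
print(lla_to_local_xyz(36.001, -87.0, 150.0, 36.0, -87.0, 140.0))
```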
The provided world map is composed of several layers that include a digital elevation model, human-defined and named obstacle and building boundaries, and a photogrammetry generated object model (Walker et al., 2023). During FX-6, externally generated building floor plans were integrated as geo-rectified images, enabling the SC to inspect the idealized interior of buildings while controlling the swarm.
The Valve Index hand controllers are a primary I3 input mechanism, with each hand assigned a controller. I3 is sensitive to the controllers' position in the world space, which permits accurate interaction contexts. The controllers' haptic feedback provides a cue to important events and supplements the corresponding visual context. The head mounted display's position and orientation within the virtual space are used to recognize the central view axis. Audio cues indicate incoming important information, such as the detection or neutralization of a hazard and neutralization of CCAST vehicles. Text-to-speech provides notifications of tactic failures (e.g., "Surveil failed").
I3 shifts the world around the user during virtual world navigation. The left hand controller, while the trigger is pressed, supports scaling (i.e., thumb tracker slide), rotating (i.e., joystick) and translating (i.e., moving the controller) the world. I3 also supports scenario-defined sand table transformations, which effectively map
Figure 2: (a) A sand table representation, including a neutralized UGV’s glyph. (b) A UAV glyph (indicated by the propeller), where the top bars indicate communication connectivity (blue) and battery level (multi-colored), the lightning bolt indicates an electronic warfare payload, the forward facing camera, the central icon indicates the task currently being executed, and the gray dashed box indicates the UAV is virtual.
into saved viewpoints. These capabilities facilitate changes in the visual display of the environment, and no very large viewport changes occur without an explicit action taken by the SC. This interaction requirement provides a means of reducing virtual reality induced motion sickness.
Various world model elements and entities are visualized. A priori static entities (e.g., buildings and obstacles) as well as dynamic entities are populated in the virtual space to represent both physical (e.g., vehicles) and synthetic (e.g., tactics) concepts. These entities are mapped to the sand table and each entity's visualization depends on its internal state, SC interactions, and a distance-based level of detail capability. Some static objects (e.g., buildings) have identifiers the SC can use when issuing tactics, which can simplify explicit tasking and provide important information to the swarm (e.g., ingress points).
Representations of the AprilTags (i.e., artifacts) or the entities (e.g., swarm, vehicle, artifact) have customized visualizations in the sand table with which the SC can interact. For instance, when a hostile adversarial artifact is recognized, the tag identifier and pose estimate are communicated to I3, which maps the coordinates to the virtual world space, displays the appropriate icon, adds a type-specific threat ring representing the range at which the hostile can interact with the swarm vehicles, and places it into the table, as shown in Figure 3(a). The very large field exercise scenarios prohibited visualizing all entities, particularly those that were not necessary to support the SC's situation awareness and interactions. Thus, the capability to enable or disable entity classes via toggling was implemented.
The SC typically conducts the mission by interacting at the swarm level (Walker et al., 2023). Individual vehicles are synthesized into swarm groups based on shared tactics, regardless of whether the assignment derives from the mission plan or SC-specified tactics. For example, vehicles assigned to a tactic, see Figure 3(b), are designated as a swarm group, with a common shade, and have a reference handle for tactic manipulation. Individual vehicles are represented with a generic object model corresponding to their type (i.e., UAV, UGV) that can represent both hardware (i.e., live) and virtual vehicles.
Entities can be inspected using a context-aware system. The right hand controller recognizes when the cursor intersects with an entity and constructs a summary glyph, see the example in Figure 2(b). The glyph communicates the vehicle's type, payload, remaining battery level, whether up-to-date vehicle telemetry is being received, its current tactic, and whether it is a hardware or virtual vehicle. The vehicle's planned navigation route is also displayed. The DARPA OFFSET scenario can cause vehicles to be neutralized; thus, the vehicle representation changes to indicate a neutralized status, as shown in Figure 2(a). The hazard (e.g., hostile) summary glyph is similar, but a line is visualized to the tasked vehicles on hover. Hovering over a tactic summary visualization highlights the associated vehicles or swarms, as shown in Figure 5(a).
The I3 SC can create dynamic geometry by entering an input mode that permits specifying a point, a polyline, a polygon, or an extruded polygon. The right hand controller is used to specify discrete vertices
Figure 3: Example of (a) an improvised explosive device artifact and associated threat ring, and (b) swarm visualizations, where six sub-swarms are beginning to execute a mission. Note, the fuchsia ring represents the spatial area the SC sees within the head mounted display.
and, optionally, a depth for extruded geometries. The resulting geometries can be used to explicitly specify swarm or individual vehicle tactics. For example, a polygon defining the area to Surveil.
The SC can display a menu system around the right controller's interaction location that facilitates interactions at the world level (e.g., visualization toggles, tactics menu) and context-sensitive queries or tactics. This menu placement allows the SC to maintain attention on the relevant information during its use. The menus are nested arbitrarily deep, contain custom icons and visualizations, support multiple widget types, and, for explicit hand controller buttons, support both long and short click behaviors. The primary menu is accessed by pressing the right hand controller's 'A' button and provides all available I3 actions (i.e., visualization toggles, geometry creation menu, tactics menu, and mission plan controls). The context menu supports query or engage behaviors, based upon what is in proximity of the controller's cursor. Most entities can be interacted with (i.e., buildings, artifacts, vehicles, swarms, tactic visualization nodes, and mission plan elements). The initial menu row contains references to applicable, possibly multiple, entities that, based on a threshold, are identified relative to the interaction point. This approach permits quick selection within a sparse location and enables interactions in dense locations. The context menu is typically used for explicit tactic invocation, for example, interacting with an at-altitude UAV to specify that it immediately return to the launch (RTL) area. A tactic can also be specified by interacting with a building element and requesting an immediate Surveil tactic, which may allow the dispatcher to auto-allocate suitable vehicles, or transition into the tactic calling menu with the pre-specified building as the tactic's target.
The CCAST system facilitates a large number of tactics. A complete description of those tactics is beyond this manuscript's scope; however, a tactic defines a behavior to be performed by the allocated vehicles, along with optional navigation and execution parameters. Tactics may explicitly reference vehicles, or the dispatcher may autonomously allocate vehicles based on required capabilities and physical proximity. The tactics menu is customized prior to an FX to provide the most relevant tactics, which are filtered by use case (see the FX-6 tactics menu in Figure 4). Three different agent specification levels exist. The most granular is the vehicle level, at which explicit vehicle(s) call sign(s) are provided. The next level uses swarm labels that cause any vehicle with the specified label to accept the tactic. The final level specifies (or accepts default) "wildcard" values that leave the vehicle selection to the dispatcher.
Tactics are called by I3 using three different mechanisms (Walker et al., 2023). The SC can _explicitly_ select the tactic and vehicle(s), which provides great control, but costs execution time. The context menu can be used to identify vehicle(s) or a target for which the menu system _pre-seeds_ one or more of the tactic fields, which the operator can refine or reject, but at the very least must manually engage the tactic. The context menu can also be used to _instantly_ execute a tactic without specifying the vehicles or other details, for example, to execute a simple Stop or RTL command on a vehicle, or an automated execution of a Surveil tactic on a building. Each called tactic has a visualization (see Figure 5(a)) that indicates the tactic type and any associated geometry (e.g., search area). This visualization changes as the top level tactic moves through its lifecycle, eventually disappearing upon tactic completion.
The CCAST mission plan is critical to achieving the mission objectives. Mission plans are developed a priori and the I3 SC loads the plan from a centralized repository. Mission plans contain nodes, each containing one or more tactics. Tactics may begin at mission start, commence upon explicit SC or software issued signals,
Figure 4: The FX-6 tactics menu.
or execute upon completion conditions asserted by predecessor tactics. The mission plan is visualized above the sand table as a hierarchical tree (see Figure 5(b)). The plan tree contains top level signals, and subsequent levels conforming to the tactic completion dependencies. The physical signal and tactic nodes' positions are generated from the centroid of deconflicted associated tactic geometries. The SC can trigger signals relative to the mission plan that gate the execution of one or more mission plan nodes. Typically, a mission has multiple phases, represented by the gated signals, that permit triggering the scenario phases as the SC determines conditions are suitable. For instance, most FX mission plans involve an initial series of Surveil tactics deconflicted by region to reduce the risk of mid-air UAV collisions during navigation to or from the launch area. Each region has an independent discrete signal. The SC engages the signals by interacting with the associated nodes. Hovering the right controller over a specific mission plan node causes the vehicles and geometries associated with the specific sub-tactics to be highlighted within the sand table.
Always-available information is provided via a heads-up display positioned relative to the SC's viewpoint, as shown in Figure 6(a), that incorporates current vehicles' telemetry status, which can indicate communication issues, and is constantly updated with available vehicle counts by type. A notification pane provides critical information, including new scenario intelligence sightings or vehicle neutralizations.
The Valve Index's chest tracker provides an inertial frame for displaying at-a-glance tactic and scenario panels to the SC's sides. The tactics panel, displayed on the SC's left side, lists tactics being executed, including a notion of tactic type, target, composition, and state. Selecting a tactic facilitates terminating it. The panel on the SC's right side lists hazards and artifacts of interest, emphasizing threat status, including whether or not they have been addressed.
Figure 5: (a) Example tactic and (b) mission plan visualizations, where the mission plan contains several nodes (white disks) gated by a single signal (red button with raised cover).
Figure 6: An example (a) heads-up display indicating 49 UAVs and 20 UGVs, and (b) a tactics panel showing five tactics. Both are displayed relative to the SC’s focus to support quick viewing.
The deployment of the CCAST swarm using I3 is sufficiently complex as to require a minimal level of training. This training can be used to vet the susceptibility of potential SCs to virtual reality induced motion sickness. Throughout the OFFSET program many untrained or minimally trained individuals used I3 via the virtual swarm capabilities with no ill effects.
#### 2.2.1 Typical Mission: Swarm Commander's Perspective
The SC, during a typical shift, executes at least one mission plan and specified tactics. At shift start, the SC loads the mission plan and when all systems are ready, requests either specific mission plan signals, or the entire plan be executed. Once the assigned vehicles clear the launch area, the SC often begins issuing tactics to the remaining vehicles. The initial phases seek to gather intelligence and identify important information (e.g., locations of high valued targets or the medic) needed to execute the mission. As UGVs encounter adversaries and are neutralized, they autonomously navigate to a medic, if it has been located. Otherwise, the neutralized UGVs RTL. Neutralized UAVs autonomously RTL. UAVs that have completed their tactics will also automatically RTL, while UGVs will wait in place for a new assignment.
As intelligence is gathered, the SC can customize the swarm's response to act on the information. For example, if the swarm finds information about a high value target's location, the SC may send vehicles to that location to investigate. The mission plan often includes phased signals intended to respond to the gathered intelligence that can facilitate continued intentional mission progress, including mission phases. A typical second mission phase acts on the gathered intelligence to localize a high value target, while the next phase focuses on neutralizing that target.
During FX-6, the DARPA-provided scenario quickly neutralized large numbers of vehicles. Thus, a mobile medic was introduced at the launch area to revive neutralized UAVs. The mobile medic required a human to walk through the launch zone with a Bluetooth-enabled device that revived the vehicles, which, for safety purposes, required waiting until a large number of the UAVs had RTL'ed. The long CACTF shifts required the UAV batteries to be swapped during mission execution, a task completed by humans.
### Multi-Dimensional Workload Estimation
Human supervisors (e.g., CCAST's SC) may experience erratic workload levels (Wickens et al., 2004; Sim et al., 2008), where performance tends to decline when workload is too high (overload) or too low (underload) (Wickens et al., 2004). An increase in overall workload does not necessarily mean task performance will decrease, as performance depends on the human's overall resources and whether there are competing resources (e.g., multiple tasks requiring human visual attention). Overall workload is frequently assessed as a discrete measurement of cognitive workload (e.g., (Kaber and Endsley, 2004; Schwarz and Fuchs, 2018)). However, overall workload can be decomposed into workload components (i.e., cognitive, auditory, visual, speech, and physical workload (McCracken and Aldrich, 1984; Mitchell, 2000)) in order to provide necessary insight into the factors contributing to the human's current workload state.
Non-invasive wearable devices can collect objective workload metrics (e.g., heart-rate variability), whose values have been found to correlate with one or more workload components (e.g., (Harriott et al., 2013; Harriott et al., 2015; Harriott, 2015)). Recent workload assessment algorithms have combined these objective metrics into a workload component classification using machine-learning techniques (e.g., (Durkee et al., 2016; Popovic et al., 2015)). These algorithms typically rely on metrics that are not viable for dynamic domains (e.g., eye-tracking, EEG) to classify cognitive workload, and tend to focus on only the normal load and overload classifications. The algorithms do not discern the state of the other workload components and fail to adequately classify the underload condition. The discrete classifications also do not allow for understanding workload trends (i.e., increasing, decreasing, or unchanged).
Reviews of relevant metrics exist, but do not address all of the problem's aspects (Harriott et al., 2013;
Harriott, 2015; Charles and Nixon, 2019). Heard and Adams reviewed relevant metrics and algorithms for assessing overall workload and its components (Heard et al., 2018). None of the workload assessment algorithms estimated each workload component and classified both the underload and overload workload states using workload metrics collected from wearable devices suitable for the DARPA OFFSET domain.
Many workload assessment algorithms rely on metrics collected from EEG headsets (Bian et al., 2019; Durkee et al., 2016; Gupta et al., 2021), cameras (Bloos et al., 2019; Heard et al., 2019; Paris et al., 2019), motion capture (Kubota et al., 2019), dedicated interaction systems (i.e., keyboards (Oliver et al., 2002; Popovic et al., 2015), and smartphones (Ronao and Cho, 2016)) to infer the human's cognitive workload. Typically, those systems only infer the normal and overload workload states. A critical aspect of such systems is their inability to adapt to most unstructured, dynamic environments.
A relevant tree classifier assessed overall workload accurately (Rusnock et al., 2015), but did not include multidimensional workload components. The MBioTracker is a multimodal wearable system designed to detect workload, but only classifies cognitive workload (Dell'Agnola et al., 2021). A closely related approach used proprietary algorithms to classify cognitive, visual, auditory, speech, and physical workload, but did not estimate overall workload (Popovic et al., 2015).
#### 2.3.1 Multi-Dimensional Workload Algorithm Overview
Heard and Adams' multi-dimensional workload algorithm estimates a human's workload components and the composite overall workload state (Fortune et al., 2020; Heard et al., 2019; Heard et al., 2019; Heard and Adams, 2019; Heard, 2019). This algorithm was developed specifically to support unstructured dynamic domains (e.g., disaster response, military) using primarily wearable, non-vision based sensors that can objectively measure the human's current performance (e.g., overall workload (Heard and Adams, 2019; Heard et al., 2019)). The multi-dimensional workload component states (i.e., auditory, cognitive, physical, speech, and visual (McCracken and Aldrich, 1984)) are estimated and are used to estimate and classify overall workload (i.e., underload, normal load, and overload). The algorithm incorporates objective physiologically-based metrics, available via wearable sensors, and a non-physiological environmental metric that correlate to overall workload and the multidimensional components (Fortune et al., 2020; Harriott et al., 2013; Harriott et al., 2015; Heard et al., 2018; Heard and Adams, 2019; Heard et al., 2019; Heard et al., 2019).
The multi-dimensional workload algorithm estimates overall workload and its components by extracting time-based features (i.e., mean, variance, average gradient, and slope) from thirty second epochs for each objective workload metric (e.g., heart-rate variability, posture magnitude, noise level). The time-based features serve as inputs to a corresponding neural network that estimates each workload component (Fortune et al., 2020; Heard and Adams, 2019; Heard, 2019). The means and variances capture the metrics' response to workload variations, but do not capture a metric's directional shift (e.g., whether the metric is increasing over the time window). The average gradient and slope features capture this directional change. Slope is the linear change over the window, while the gradient is the average change between each second in the window.
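For concreteness, a minimal sketch of the epoch feature extraction follows, assuming each metric is sampled at 1 Hz (the actual sampling rates vary by sensor); the function is illustrative rather than the published implementation.

```python
import numpy as np

def epoch_features(samples):
    """Mean, variance, average gradient, and slope for one 30 s epoch (1 Hz samples assumed)."""
    x = np.asarray(samples, dtype=float)
    t = np.arange(len(x))
    mean = x.mean()
    var = x.var()
    avg_gradient = np.diff(x).mean()    # average second-to-second change
    slope = np.polyfit(t, x, 1)[0]      # linear trend over the whole window
    return mean, var, avg_gradient, slope

# Example: a slowly rising heart-rate trace over a 30 s epoch
hr = 80 + 0.2 * np.arange(30) + np.random.default_rng(0).normal(0, 0.5, 30)
print(epoch_features(hr))
```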
The multi-dimensional workload assessment algorithm was trained and validated using IMPRINT Pro workload models (Heard et al., 2019). IMPRINT Pro (Archer et al., 2005) supports modeling complex task networks that designate start and stop times for each task and anchors each task to workload component values (i.e., a conversation is anchored to a speech workload component value of 4.0). The task networks and workload component values are used to derive continuous models across seven workload components: auditory, cognitive, visual, speech, gross motor, fine motor, and tactile. The approach in this manuscript combines the gross motor, fine motor, and tactile components into a physical workload component. An overall workload model is generated by uniformly aggregating the workload component models. IMPRINT Pro uses a linear workload model incorporating the workload components to classify a predicted overall workload, where a value of \(\geq 60\) is classified as overload. IMPRINT Pro does not provide an underload threshold. Specific IMPRINT Pro models must be developed to represent the underload, normal load, and overload conditions. The resulting IMPRINT Pro workload models represent predicted workload outcomes, are static, and do
not adjust in real-time to the current situation. These modeling constraints limit considerably the ability to use IMPRINT Pro in uncertain and dynamic environments; thus, the need for using the developed multi-dimensional workload assessment algorithm that was shown to have generalizability between task domains and environments (Heard et al., 2019b).
Dynamic environments contain time-varying contributions from multiple workload components and contextual features capture these time-varying workload contributions. Contextual features calculated from the IMPRINT Pro workload models are required by the multi-dimensional workload algorithm to produce more accurate workload estimates. Three contextual features exist: cognitive task composition, physical task composition, and auditory task composition, where task composition represents how much the respective workload component contributes to the human's overall workload. Speech task composition is not included as a contextual feature, due to using voice activity detection to determine if the human is speaking or not. These contextual features can be set to zero for an unfamiliar environment. Given that the multi-dimensional workload algorithm was trained using the supervisory-based IMPRINT Pro's calculated contextual feature values, the respective values are set to zero for the OFFSET FX-6 workload estimates.
The multi-dimensional workload algorithm estimates the cognitive, auditory, and physical workload components every five seconds. The speech workload component is estimated every second and is resampled to a five second frequency before estimating overall workload. A separate neural network exists for each workload component. Visual workload is estimated using a relevant IMPRINT Pro model. The component estimates are uniformly aggregated to estimate overall workload, which was mapped to a state (i.e., underload, normal load, or overload) using thresholds. Heard et al. conducted extensive validation of the multi-dimensional algorithm across supervisory and peer-based relationships, tasks, workload conditions and populations (Fortune et al., 2020; Heard et al., 2019a; Heard and Adams, 2019; Heard, 2019). These validations used IMPRINT Pro models, developed prior to conducting human subjects evaluations, as the comparison to the multi-dimensional workload algorithm's results. Separate IMPRINT Pro models were developed for the underload, normal load, and overload conditions in order to support the validation efforts.
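The resampling of the 1 Hz speech estimates to the five second cadence of the other components can be sketched as simple block averaging; the resampling method is not specified here, so the block-mean choice and the example values below are assumptions.

```python
import numpy as np

def resample_speech_to_5s(speech_1hz):
    """Collapse consecutive blocks of five 1 Hz speech estimates into 5 s estimates
    so they align with the cognitive, auditory, and physical estimates."""
    x = np.asarray(speech_1hz, dtype=float)
    usable = len(x) - len(x) % 5          # drop a trailing partial block, if any
    return x[:usable].reshape(-1, 5).mean(axis=1)

print(resample_speech_to_5s([3.1, 3.4, 3.0, 2.9, 3.2, 4.0, 4.1, 3.8, 3.9, 4.2]))
```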
It is well known that some physiological metrics (e.g., heart rate, respiration rate) are impacted by other human performance factors (e.g., stress). The multi-dimensional workload algorithm mitigates the impacts of other performance factors in two ways. It is common in the literature to equate overall workload and cognitive workload. Rather, the multi-dimensional algorithm estimates overall workload based on the individual workload components, where the individual workload components use different sets of metrics to estimate the corresponding component's workload value. This approach decreases the influence of a particular metric that may be influenced by another human performance factor. The cognitive and auditory workload components also incorporate noise level, as measured with a noise meter, a non-physiological metric. The second factor that contributes to mitigating the impact of potentially confounding factors (e.g., stress) is the incorporation of the time-based directional change features (i.e., average gradient and slope) that ensures the algorithm does not rely solely on a metric's overall magnitude.
Underload, normal load and overload IMPRINT Pro models were developed for a supervisory-based adaptive human-robot teaming architecture (Heard et al., 2020). The corresponding human subjects evaluation incorporated a physically expanded version of the NASA Multi-Attribute Task Battery (NASA MATB-II) (Comstock and Arnegard, 1992). The physically distributed NASA MATB-II simulated supervising a remotely piloted aircraft and incorporated four tasks: tracking, system monitoring, resource management, and communications. The tasks were distributed across different monitors, two of which required physically walking around a table. Workload was manipulated by changing various parameters of each task in order to determine the adaptive teaming system's effectiveness. The adaptive architecture was shown to select an appropriate level of autonomy or system interaction based on real-time workload estimates from the multi-dimensional workload algorithm, and resulted in improved overall task performance (Heard et al., 2020).
The nature of the FX-6 deployments does not support developing a priori IMPRINT Pro models to support an analysis similar to the prior analyses. The goal for the DARPA OFFSET program was to leverage an existing model to generate estimates for the SC during mission deployments. Therefore, the neural network models
developed for the supervisory-based adaptive human-robot teaming architecture validation (Heard et al., 2020) were used to provide the multi-dimensional workload component estimates and the overall workload estimates for FX-6. Heard et al. calculated overload and underload thresholds previously using prior multi-dimensional workload algorithm results and their underload, normal load, and overload models. The overload threshold was found to be 60, which matches IMPRINT Pro's threshold, and the underload threshold was determined to be 25. These thresholds are used in this manuscript to classify workload states.
## 3 Method
The human subjects evaluation's purpose was to understand a single SC's ability to conduct missions using I3 and the swarm. Unlike controlled laboratory evaluations, the OFFSET FXs include uncontrollable variances, such as extreme weather conditions impacting hardware functionality, which cause autonomous UAVs and UGVs to perceive the environment and conduct their tactics differently across shifts.
### SCs
The presented results are for two SCs, both of whom are core CCAST team members and system developers. The SCs are 31-40 years old, have at least a Bachelor's degree, and are highly proficient computer users, using such devices eight or more hours a week. The SCs play video games on average 3-8 hours a week and consider themselves proficient players. Finally, both SCs spend on average 3-8 hours a week using I3, with the virtual reality equipment, and consider themselves to be very to highly proficient system users.
The nature of the DARPA OFFSET field exercises, including the swarm's size, the developmental nature of the technology as well as the associated costs and safety concerns, implies that a team member acts as the SC. Both SCs became project team members in October 2017, when the program began. Swarm Commander 1 (\(SC_{1}\)) completed shifts at all field exercises, while Swarm Commander 2 (\(SC_{2}\)) only attended FX-3 and FX-6. During FX-3 and FX-6, the SCs traded off shifts, generally serving as SC for as close to an equivalent number of shifts as possible. \(SC_{1}\) was the sole SC at all other field exercises.
### Field Exercises
FX-6 was conducted Nov 3-19, 2021 at Fort Campbell. The FXs always include shifts for integrating the CCAST system with the government systems and dry runs. Exercise shifts, during which the CCAST team attempted to achieve the mission, account for the remaining shifts. It is noted that some early exercise shifts are effectively dry runs, as system modifications are the focus. Human subjects data collection commenced once the team transitioned to addressing the mission objectives. Even after this transition, some shifts encountered unavoidable technical difficulties (e.g., LTE communication failures). The CCAST team completed twenty shifts during FX-6, and human factors data collection occurred during twelve shifts.
#### 3.2.1 FX Operational Conditions
The FXs are physically and mentally draining, with shifts occurring seven days a week, with an average of 13.5 hours at the CACTF daily, often with additional work conducted in the evening. At FX-6, the team worked in a large tent, without climate control, potable running water, etc. Teams must supply their meals and beverages, as external sources are not easily accessible.
The shift preparation required distributing and preparing all hardware vehicles in the launch area and setting up the command center (C2) systems. The off-shift SC often contributed to the hardware vehicle distribution and set up, while the on-shift SC set up I3 in the C2. The CCAST system dispatcher was set up in another C2 area, sufficiently distant from the SC to prohibit direct communication. During shift preparation, the
SC verified communications between I3 and the dispatcher system. The CCAST team member setting up the dispatcher system verified communications between it and the LTE basestation. During a shift, dedicated CCAST team members were responsible for acting as in field safety spotters, managing the vehicle hardware (e.g., swapping UAV batteries), etc. Communication between the distributed human team members occurred via walkie-talkie. The human subjects experimenter was responsible for relaying communications that required SC response, or originated with the SC (e.g., "launching UAVs").
The FX-6 C2 was in a cinder block building (see Figure 7). The SC's I3 station was set up in a single room on the second floor that minimized light pollution. The I3 virtual reality headset and the C2's room location resulted in all swarm operations being beyond the SC's visual line of sight; however, the SC was able to hear the UAVs take off, depart, and RTL, but was unable to hear the UGVs.
Prior to mission start, a mission brief provided the mission's objectives. Upon shift completion, a shift debrief was conducted, usually followed by a brief break. After the break, the SCs frequently completed system development tasks, or provided demonstrations for visitors.
During FX-3, it was determined that the virtual reality hand controllers' functionality was impacted negatively by cold temperatures. The FX-6 cinder block C2 room was frequently many degrees colder than the outside ambient temperature. The hand controllers were placed inside the SC's clothing or hand warmers were used to maintain the controllers' responsiveness. During the shift, the hand controllers did not exhibit issues. A similar concern arose for the virtual reality chest tracker, which was also kept under the SC's clothing until the shift began; thus, avoiding temperature induced issues. Appendix A's Table 10 provides detailed weather conditions for each CACTF data collection day.
### FX Variances
Field exercises provide ecologically valid opportunities to assess SC performance with actual hardware systems in representative environments while conducting representative missions; however, they also create numerous challenges. Each OFFSET field exercise increasingly scaled the mission and swarm complexity. Both the CCAST swarm's number of vehicles and its heterogeneity increased with each FX. Table 1 indicates which hardware vehicles were used at each FX. The FX-3 swarm shifts included up to 55 UAVs and 30 UGVs, FX-4's swarms had up to 50 UAVs and 60 UGVs, while the FX-6 swarms incorporated up to 139 UAVs, 44 UGVs, and up to 100 virtual vehicles. Further, each field exercise was conducted at a different CACTF, where each CACTF's built environment varied substantially. The FX-6 Fort Campbell Cassidy CACTF is more compact than FX-4's Joint Base Lewis-McChord Leschi Town, but presents a denser urban environment (see Figure 8). The FX-6 C2 building is identified on the figure, and FX-6's launch area was located on the road in front of C2 and in the parking lot to its east.
Figure 7: The FX-6 I3 C2 demonstration environment. Note, the larger display, right side, supports explanations to external observers, and is not used to control the swarm. Photo courtesy of DARPA.
Each field exercise increased the mission complexity by significantly increasing the number and types of artifacts to be detected and responded to appropriately. The number of artifacts that neutralized vehicles increased with each field exercise, as did the complexity of responses that vehicles were to perform upon detecting an active artifact. As such, the mission objectives and the associated mission plans varied across the field exercises. Mission plans also varied across shifts within a FX as new information became available, artifacts were modified by the DARPA team, etc. FX-6's increased mission complexity resulted in a higher neutralization of vehicles before they were able to venture very far into the CACTF.
FX-6 shift durations varied from 1 to 3.5 hours. Longer shifts occurred later in the field exercise. The UAVs tend to have short battery lives (i.e., 10-20 minutes), which makes it necessary for the UAVs to autonomously RTL for battery swaps. Longer shifts often also result in more neutralized vehicles that need to go to the medic (UGVs) or RTL (UAVs). The swarm's mission progression varied significantly during the longer shifts, which may include additional mission plan phases or changes in the number and type of SC-specified tactics. Each of these factors can dramatically change the SC's actions across all shifts.
DARPA's invited distinguished visitor day was Nov 16\({}^{th}\). The visitors congregated in designated safe observation areas. CCAST's mission objectives for this day were to (a) place every operational UGV and UAV in the launch zone, and (b) deploy all of those vehicles immediately upon shift start and maintain a high vehicle activity deployment tempo for the entire observation period, the first thirty minutes of the shift. Additional relevant facts are provided in a more detailed analysis of this shift in the results section.
An FX-6 "surprise", announced on Nov 15\({}^{th}\), was the notion of both integrator teams' swarms2 performing the mission objectives during _Joint Shifts_. During these shifts, both DARPA OFFSET integrator teams deployed vehicles simultaneously. The CACTF was spatially divided, such that the CCAST team conducted their mission activities on the half of the CACTF closest to C2. Both Nov 18\({}^{th}\) shifts were conducted in a similar manner; however, during the 1330-1500 shift, the CCAST SCs jointly deployed the swarm.
Footnote 2: The second integrator team was led by Northrop Grumman.
Figure 8: The Fort Campbell Cassidy CACTF, the site of FX-6. The yellow area is the C2 building.
### Data Collection
#### 3.4.1 Physical Data Collection Configuration
The I3 SC and the human subjects evaluator shared a table in C2, as shown in Figure 9(a). The I3 SC requires the virtual reality equipment with associated charging cables, and the laptop that runs the I3 software. The evaluator's equipment is positioned to the right on the same table.
The evaluator's monitor, shown in Figure 9, is directly connected to the I3 laptop and displays the virtual environment and I3 interaction components in real-time. The evaluator's viewable area is larger than the SC's in the virtual reality headset; thus, an indicator assists the evaluator in understanding what the SC can currently view. The evaluator's tools include the laptop on which the data collection software runs, a second laptop for recording notes, events, and in situ responses, as well as all necessary sensors and their associated components, shown in Figure 9(b).
#### 3.4.2 Objective Data Collection Sensors
The multi-dimensional workload algorithm can estimate the cognitive, speech, auditory, visual and physical workload components, which are used to estimate overall workload. The visual workload estimate requires an eye tracker. The Valve Index headset does not incorporate an eye tracker, and the evaluator's eye tracker cannot be worn with the headset. Thus, visual workload was not objectively measured, but was estimated using an existing IMPRINT Pro model.
The multi-dimensional workload algorithm incorporates the physiological-based metrics: heart-rate, heart rate variability (HRV), respiration rate, posture magnitude, speech rate, voice pitch, intensity, and activity, as well as noise level (decibels: dB). These metrics are used to estimate the component workload levels that are combined into the overall workload estimate. Cognitive workload is estimated using heart rate, heart rate variability, and noise level variability. Physical workload relies on heart rate, respiration rate, and posture magnitude. The auditory workload is estimated using noise level variability, while speech workload is estimated using voice intensity, pitch and activity, as well as speech rate.
The heart rate, heart rate variability, respiration rate and posture magnitude are measured using a BioPac Bioharness(tm) sensor attached to a chest strap. A Reed R8080 decibel meter provides the noise level data. The 44.1 kHz dual-channel audio signal captured by a Shure PGX1 microphone is transformed into a mono-channel signal prior to calculating the speech rate, as well as voice intensity, activity and pitch metrics. Table 2 correlates the sensors to the respective workload component.
Figure 9: The I3 SC operational and the human subjects data collection area.
The SC wears the virtual reality headset and chest tracker, as well as the Bioharness BioPac chest strap and sensor, and the Shure microphone headset with the transmitter attached to the SC's pocket, both of which are visible in Figure 9(a). The Bioharness chest strap is worn underneath the SC's clothing. The noise meter is positioned on the table, on the left side of Figure 9(b).
All of the sensors are designed for indoor, controlled environments and are not hardened for use in extreme conditions. FX-6 was the first time the sensors were used outside of controlled laboratory conditions. Generally, the sensors performed as expected. The Bioharness transmits measurements in real-time to the data collection laptop via Bluetooth.
Prior to shift start, the SC donned the sensors. The Bioharness sensor must be placed on the side of the upper torso. If the sensor is improperly placed, the heart rate readings are very low (e.g., \(\leq 50\)). A correct heart rate reading is around 80, but varies by individual. The experimenter conducted a brief data collection to verify that the Bioharness data was accurate. The experimenter learned the expected heart rate values for each SC. After the SC donned and positioned the microphone, the experimenter asked the SC to speak a sentence, which was recorded using the Audacity software. If the speech was not adequately captured, the microphone was adjusted and the test repeated.
The speech data was not collected on Nov \(11^{th}\), due to a missing component, or on Nov \(12^{th}\), due to experimenter error. It is unclear why only two minutes of speech data were recorded during the Nov \(18^{th}\) 1330-1530 shift. The noise meter malfunctioned on each data collection shift. Often the noise meter functioned properly for a period of time, and then malfunctioned. There were a small number of instances where the sensor data collection was interrupted and then restarted, which are classified as "No data".
The sensor streams for cognitive, speech, auditory and physical were processed using the multi-dimensional workload algorithm's neural networks for the supervisory-based adaptive human-robot teaming architecture, see Section 2.3.1. The FX's variable nature makes developing OFFSET specific training data sets or corresponding IMPRINT Pro models difficult. Using the reduced set of workload components to estimate overall workload actually underestimates overall workload, as it does not incorporate the missing visual workload and for some shifts, the missing speech workload. The IMPRINT Pro models developed for the supervisory-based adaptive human-robot teaming architecture (Section 2.3.1) can be leveraged to provide reasonable estimates of overall workload for the missing workload components (i.e., all visual, and some speech).
The impact of the missing workload components on the overall workload estimate is a percent change relative to the current components' workload value. The resulting overall workload estimate needs to be normalized. The standard normalization equation (i.e., \((value-min)/(max-min)\)) can be reduced to Equation 1, where the _min_ and _max_ values map to the supervisory-based adaptive human-robot teaming architecture's IMPRINT Pro model's values, 0 and 70.4, respectively. This reduction results in a normalization equation, \(value/MaxOverallWorkloadVal\), where _MaxOverallWorkloadVal_ is the maximum raw value from the IMPRINT Pro model. The _value_ component usually is the estimated overall workload from the
\begin{table}
\begin{tabular}{|l|l|l|l|l|l|} \hline
**Sensor** & **Metric** & **Cognitive** & **Auditory** & **Speech** & **Physical** \\ \hline \multirow{4}{*}{BioHarness} & Heart rate & ✓ & & & ✓ \\ \cline{2-6} & HRV & ✓ & & & \\ \cline{2-6} & Respiration rate & & & & ✓ \\ \cline{2-6} & Postural magnitude & & & & ✓ \\ \hline \multirow{4}{*}{Microphone} & Speech rate & & & ✓ & \\ \cline{2-6} & Voice intensity & & & ✓ & \\ \cline{2-6} & Voice activity & & & ✓ & \\ \cline{2-6} & Pitch & & & ✓ & \\ \hline Reed decibel meter & Noise level & ✓ & ✓ & & \\ \hline \end{tabular}
\end{table}
Table 2: The correspondence between the sensors and the multi-dimensional workload components.
multi-dimensional workload algorithm; however, some component values (i.e., visual, sometimes speech) are missing. The missing components reduce the maximum overall workload value the algorithm can estimate, due to the uniform aggregation of the workload components. Thus, the algorithm's estimated overall workload value must be adjusted by \(value=RawVal+MissingComponentsVals\), where _RawVal_ is the multi-dimensional workload algorithm's estimated overall workload value without the missing components, and _MissingComponentsVals_ represents the respective average values from the missing components. Lastly, the result is multiplied by 100 to ensure the values are in the range 0 - 100, resulting in Equation 1's _ScaledNormalizedVal_ producing the estimated overall workload.
\[ScaledNormalizedVal=100*(RawVal+MissingComponentsVals)/MaxOverallWorkloadVal. \tag{1}\]
The lower the estimated workload using the FX data, the larger the impact of incorporating the missing components' contributions. It is important to recognize that the missing components are very unlikely to be at their maximum value, especially if other components are overloaded; thus, estimating the overall workload in this manner provides a more accurate overall workload estimate for the FX-6 results.
The overall workload estimates were classified into the workload levels (i.e., underload, normal load, overload) using the same thresholds as the prior work, see Section 2.3.1. The resulting overall workload estimate was classified as underload if the value was \(\leq 25\), overload if the value was \(\geq 60\), and normal load otherwise.
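The sketch below illustrates Equation 1 together with these thresholds. It is a minimal illustration assuming the reported IMPRINT Pro maximum of 70.4; the function and variable names are illustrative rather than the CCAST implementation.

```python
# Maximum raw overall workload value from the IMPRINT Pro model (see Equation 1).
MAX_OVERALL_WORKLOAD = 70.4

def scaled_overall_workload(raw_val: float, missing_components_vals: float) -> float:
    """Equation 1: scale the raw estimate plus the IMPRINT Pro-based values
    for the missing components into the 0-100 range."""
    return 100.0 * (raw_val + missing_components_vals) / MAX_OVERALL_WORKLOAD

def classify(workload: float) -> str:
    """Classify an overall workload estimate using the stated thresholds."""
    if workload <= 25:
        return "underload"
    if workload >= 60:
        return "overload"
    return "normal load"

# Example: a raw estimate of 30 with 5 units contributed by the missing components.
estimate = scaled_overall_workload(30.0, 5.0)   # ~49.7
level = classify(estimate)                      # "normal load"
```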
#### 3.4.3 Subjective Data Collection
In situ probes (Harriott et al., 2013) focused on the workload components, stress and fatigue. Approximately every ten minutes, the experimenter asked the SC to respond to each of the in situ probes with their subjective rating. The meanings of the in situ probe terms were defined for the SCs, who verified their understanding of the terms prior to data collection. Over time, the experimenter simply stated, for example, "[SC name]: cognitive", in order to minimize the disruption to the SC's current tasks.
The SCs rated their perceived workload components' (i.e., cognitive, auditory, speech, visual and physical), stress and fatigue levels on a scale from 1 (very low) to 7 (very high). All in situ probe responses were normalized to a value between 1 and 100. The SCs provided subjective workload component weightings post-FX. The SCs were instructed to weight each component relative to how much they felt each component impacted their overall workload, with the components' weights required to total 100. The normalized in situ responses for each workload component were averaged, and the respective subjective weighting was applied to create a weighted mean for each component. The weighted component means were summed to generate the overall subjective workload values.
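As a concrete illustration of this weighting scheme, the sketch below computes an overall subjective workload value from normalized per-component in situ responses. The weights shown are \(SC_{1}\)'s reported post-FX weightings (Section 4.1); the response values are made up for the example, and the function name is illustrative.

```python
from statistics import mean

# SC_1's reported post-FX component weightings (must total 100).
WEIGHTS = {"visual": 35, "cognitive": 25, "speech": 20, "auditory": 15, "physical": 5}

def subjective_overall_workload(responses: dict[str, list[float]],
                                weights: dict[str, int] = WEIGHTS) -> float:
    """Sum of the per-component mean normalized responses, weighted by the
    SC's subjective component weightings."""
    assert sum(weights.values()) == 100
    return sum(mean(responses[c]) * weights[c] / 100.0 for c in weights)

# Hypothetical normalized (1-100) in situ responses for one shift.
responses = {"visual": [34.0, 50.5], "cognitive": [34.0, 34.0],
             "speech": [18.0, 34.0], "auditory": [18.0, 18.0],
             "physical": [18.0, 18.0]}
overall = subjective_overall_workload(responses)
```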
#### 3.4.4 Dependent Variables
The objective dependent variables are the workload component estimates as well as the overall workload estimate. Speech workload metrics were not collected on Nov 11\({}^{th}\) (all shifts), Nov 12\({}^{th}\), and Nov 18\({}^{th}\) 1330-1530. Table 3 indicates, by shift, which workload component estimates were determined using the collected objective metrics (green), and which used the IMPRINT Pro-based model estimates (orange).
Due to an out-of-the-box default programming parameter, the noise meter stopped recording data during the data collection. Resetting the noise meter during a shift and debugging the issue did not resolve it. 35,562 good readings were recorded across five shifts during the FX (i.e., Nov 13, 14, 16, 17 1200-1400, 18 1000-1130), or 21.9% of those shifts' total data points. The weighted minimum raw noise meter reading across these shifts was 50.78 dB (i.e., moderate rainfall (The Decibel Pro App, 2022)) and the weighted maximum was 81.89 dB (i.e., an alarm clock). The weighted mean noise level across these data points was 60.75 dB (weighted standard deviation = 6.00, i.e., normal conversation). The analysis used all good recorded raw noise meter values. The bad readings were replaced with a point sampled from a Gaussian distribution with the weighted mean and weighted standard deviation as the \(\mu\) and \(\sigma\) (i.e., distribution parameters),
respectively. The sampled point was clipped to be within the weighted minimum and maximum. This approach is more representative of the actual auditory workload in the FX environment. As such, the light green in Table 3 for the Auditory component represents the use of actual and mean dB values for the shifts with valid recorded values, or the substituted dB values.
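A minimal sketch of this substitution, assuming the weighted statistics reported above and a pre-computed validity flag for each reading, is shown below; the function name and structure are illustrative only.

```python
import random

# Weighted statistics of the good noise meter readings, as reported above.
MEAN_DB, SD_DB = 60.75, 6.00
MIN_DB, MAX_DB = 50.78, 81.89

def impute_noise_reading(reading: float, is_valid: bool) -> float:
    """Keep valid readings; replace bad ones with a Gaussian sample clipped
    to the observed weighted minimum and maximum."""
    if is_valid:
        return reading
    sample = random.gauss(MEAN_DB, SD_DB)      # N(mu=60.75, sigma=6.00)
    return min(max(sample, MIN_DB), MAX_DB)    # clip to [50.78, 81.89]
```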
The missing workload components (i.e., orange in the table) were estimated using the respective IMPRINT Pro model's average, or mid-point, values. Specifically, the IMPRINT Pro model averages were used for the visual and some speech workload estimates. The estimated missing components are combined with the measured components' workload estimates to obtain the overall workload estimate. This approach is justified given that it is highly unlikely that all workload components will be overloaded simultaneously, which was confirmed via observation of the OFFSET SCs.
## 4 FX-6 Human Subjects Evaluation Results
Human subjects data was collected over eight days and twelve shifts, with swarms that differed in the numbers and combinations of hardware and virtual vehicles. The SCs generally selected amongst themselves who served as a shift's commander; however, the experimenter did discuss with them balancing the number of shifts and hours serving as the commander.
Rain caused Nov \(11^{th}\)'s five shifts to occur in the hotel conference room using only virtual vehicles. No objective data was collected for the 1100-1200 shift. Six dedicated CCAST shifts at the CACTF occurred from Nov \(12^{th}\) through the Nov \(17^{th}\) 1200-1400 shift. The remaining three CACTF shifts were Joint Shifts. During the final joint shift, both CCAST SCs jointly deployed the swarm. The primary data collection days, between Nov \(14^{th}\) and the Nov \(17^{th}\) 1200-1400 shift, had a range of 81 (Nov \(17^{th}\) 1200-1400: 10 UGVs and 71 UAVs) to 93 hardware platforms (Nov \(14^{th}\): 8 UGVs and 78 UAVs). During these same dates, the number of virtual vehicles ranged from 30 (three shifts: 10 UGVs, 20 UAVs) to 125 (Nov \(17^{th}\) 1200-1400: 20 UGVs and 105 UAVs). The largest number of vehicles was used for the Nov \(18^{th}\) 1330-1530 joint shift (Hardware: 30 UGVs, 110 UAVs; Virtual: 10 UGVs, 50 UAVs). Additional swarm vehicle composition details are provided in Appendix A Table 11.
Overall, \(SC_{1}\) completed eight shifts, totaling 15 hours, and \(SC_{2}\) had seven shifts totaling 12.5 hours. An
\begin{table}
\begin{tabular}{|c|c||c|c|c|c|c|} \hline \multicolumn{2}{|c||}{**Shift**} & \multicolumn{1}{c|}{**Cognitive**} & \multicolumn{1}{c|}{**Physical**} & \multicolumn{1}{c|}{**Speech**} & \multicolumn{1}{c|}{**Auditory**} & \multicolumn{1}{c|}{**Visual**} \\ \hline
**Date** & **Time** & & & & & \\ \hline \hline \multirow{4}{*}{11-Nov} & 1100-1200 & & & & & \\ \cline{2-6} & 1300-1400 & & & & & \\ \cline{2-6} & 1500-1600 & & & & & \\ \cline{2-6} & 1630-1730 & & & & & \\ \cline{2-6} & 1800-1900 & & & & & \\ \hline
12-Nov & 0830-1130 & & & & & \\ \hline
13-Nov & 1430-1630 & & & & & \\ \hline
14-Nov & 0800-1130 & & & & & \\ \hline
15-Nov & 1300-1630 & & & & & \\ \hline
16-Nov & 1000-1200 & & & & & \\ \hline
17-Nov & 1200-1400 & & & & \\ \cline{2-6} & 1400-1630 & & & & \\ \hline
18-Nov & 1000-1130 & & & & \\ \cline{2-6} & 1330-1530 & & & & \\ \hline \end{tabular}
\end{table}
Table 3: Objectively assessed workload components (Green) vs. components estimated using IMPRINT Pro-based model results (Orange), by shift. No data was collected for the two shifts (red).
I3 hardware failure caused \(SC_{1}\) to assume the SC role a few minutes into the Nov \(15^{th}\) shift. Due to this unexpected change, no data was collected for that shift. No data was recorded during the 1100-1200 Nov \(11^{th}\) shift, or during the last Nov \(18^{th}\) joint SC shift. As a result, 12.5 hours of data was collected for \(SC_{1}\) and 10 hours for \(SC_{2}\).
### Subjective Results
The in situ probes provide insight into the SC's state during a shift, as compared to post-shift (e.g., post-trial) tools, such as the NASA Task Load Index (Hart and Staveland, 1988). However, several known issues are associated with subjective metrics, including workload (Matthews et al., 2020). The normalized subjective in situ overall workload results are presented in Table 4. The gray rows represent \(SC_{2}\)'s shifts.
The SCs' subjective weightings differed across the workload components. \(SC_{1}\)'s weights were, from highest to lowest: Visual: 35%, Cognitive: 25%, Speech: 20%, Auditory: 15%, Physical: 5%, while \(SC_{2}\)'s responses were: Cognitive: 40%, Visual: 20%, Speech and Physical: 15%, Auditory: 10%. The mean overall subjective workload across all shifts calculated using the subjective weightings was 33 (Standard Deviation, SD = 5.83). Nov \(16^{th}\) resulted in the highest perceived overall workload, as shown in Table 4.
The normalized in situ component responses are provided in Appendix B.1's Table 12. The highest subjective CACTF shift responses for the Cognitive, Speech and Auditory components occurred on Nov \(16^{th}\), the distinguished visitors day. That day's Visual workload responses were effectively tied with Nov \(13^{th}\) for the highest rating, and the same was true for the Physical component on Nov \(16^{th}\) and Nov \(12^{th}\).
The in situ subjective stress and fatigue values were normalized; the descriptive statistics are in Table 4. The overall mean stress level across all shifts was 28.94 (SD = 10.8). Stress varied substantially across shifts, with the highest level reported on Nov \(16^{th}\). This high stress level led to \(SC_{2}\)'s mean CACTF shift stress being recorded as 44.67 (SD = 5.41). \(SC_{1}\)'s reported CACTF shift stress level was 40.41 (SD = 9.07).
Generally, fatigue was higher during the CACTF shifts; the exception was the last Nov \(11^{th}\) virtual shift, shown in Table 4. The virtual shifts were short (1 hour), with short breaks (30 minutes) between shifts and additional shifts added late in the day. The mean subjective fatigue level across all shifts was 32.81 (SD = 14.32). \(SC_{2}\)'s mean reported fatigue was 36.27 (SD = 19.7), while \(SC_{1}\)'s was 40.41 (SD = 9.1).
\begin{table}
\begin{tabular}{|c|c||c|l|l|} \hline \multicolumn{2}{|c||}{**Shift**} & \multicolumn{1}{c|}{**Overall Subjective**} & \multicolumn{1}{c|}{**Stress**} & \multicolumn{1}{c|}{**Fatigue**} \\ \hline
**Date** & **Time** & **Workload** & **Stress** & **Fatigue** \\ \hline \hline \multirow{4}{*}{11-Nov} & 1300 & 28.14 (5.27) & 47.2 (13.81) & 34 (0) \\ \cline{2-5} & 1500 & 22.06 (16.15) & 18 (0) & 18 (0) \\ \cline{2-5} & 1630 & 28.1 (**10.43**) & 14.6 (**7.6**) & 24.4 (**8.76**) \\ \cline{2-5} & 1800 & 31.88 (6.71) & 31.13 (15.54) & 50.5 (0) \\ \hline
12-Nov & 0830 & 36.57 (**16.92**) & 26.88 (**15.3**) & 51.92 (**19.13**) \\ \hline
13-Nov & 1430 & 35.21 (9.44) & 42.3 (11.55) & 42.25 (8.7) \\ \hline
14-Nov & 0800 & 37.24 (9.56) & 32.39 (15.68) & 52.33 (9.62) \\ \hline
16-Nov & 1000 & 43.93 (7.53) & 63.46 (18.51) & 44.71 (17.77) \\ \hline
17-Nov & 1200 & 32.66 (8.81) & 26.73 (8.36) & 43.09 (15.25) \\ \cline{2-5} & 1400 & 27.38 (15.77) & 22.36 (18.32) & 31.71 (6.05) \\ \hline
18-Nov & 1000 & 28.65 (**10.08**) & 18 (**0**) & 38.95 (**7.97**) \\ \cline{2-5} & 1330 & 37.51 (11.02) & 41.73 (22.83) & 35.35 (15.65) \\ \hline \end{tabular}
\end{table}
Table 4: The subjective in situ normalized overall workload, stress and fatigue descriptive statistics, mean (SD), by shift and SC. Gray cells represent \(SC_{2}\)’s results.
### Estimated Workload Results
The overall workload estimates were classified as normal workload if (\(25<X<60\)), where \(X\) represents the overall workload estimate. The estimates were classified as underload, if \(X\leq 25\) and overload if \(X\geq 60\), with these thresholds set as described in Section 2.3.1.
#### 4.2.1 Estimated Overall Workload Descriptive Statistics
The mean, standard deviation (SD), minimum (min), and maximum (max) overall workload estimates for each shift are presented in Table 5. A total of 12,181 usable data points were recorded for all shifts, see Table 6. The mean estimated overall workload weighted by the number of estimates per shift was 46.58 (SD = 6.4). The CACTF shifts' estimated weighted average overall workload was slightly lower, 46.27 (SD = 6.44). The weighted means for Nov 17\({}^{th}\) and 18\({}^{th}\) dropped marginally to 46.18 (SD = 6.24). The difference between the SCs' overall weighted means was 5.23. \(SC_{2}\) had the higher overall weighted mean workload estimate, 49.56 (SD = 6.23), as compared to \(SC_{1}\)'s, 44.23 (SD = 6.53). A larger difference, 6.61, existed when comparing the SCs using only the CACTF shifts' results. \(SC_{1}\)'s weighted mean estimate over four CACTF shifts was 42.98 (SD = 6.54), but was 49.60 (SD = 6.34) across \(SC_{2}\)'s four CACTF shifts. The minimum estimated overall workload across all shifts was 28.71, a normal load classification. The maximum estimated overall workload was classified as overload for eight, or two-thirds, of all shifts.
The joint shifts on Nov 17\({}^{th}\) 1400-1630 and Nov 18\({}^{th}\) resulted in \(SC_{1}\)'s lowest FX overall workload estimates across all shifts. \(SC_{2}\)'s estimated overall workload during the joint shift on Nov 18\({}^{th}\) was lower than in the two prior shifts, but was not this SC's lowest. A review of the CACTF shifts in Table 5 reveals that two shifts stand out as having the highest estimated workloads: both were \(SC_{2}\)'s, on Nov 16\({}^{th}\) and during the Nov 17\({}^{th}\) 1200-1400 shift. These two shifts are the focus of additional analysis in Section 4.2.3.
The descriptive statistics provide a high level perspective of the overall workload estimates, but also obscure more important and impactful results. These descriptive statistics results do not communicate the extent to which the SCs experienced overload or underload states, or the sensitivity of the multi-dimensional workload algorithm to changes in overall workload.
\begin{table}
\begin{tabular}{|c|c||c|c|} \hline \multicolumn{2}{|c||}{**Shift**} & \multicolumn{2}{c|}{**Overall Workload**} \\ \hline
**Date** & **Time** & **Mean (SD)** & **Min-Max** \\ \hline \hline \multirow{4}{*}{11-Nov} & 1300 & 45.25 (6.76) & 30.45-61.56 \\ \cline{2-4} & 1500 & 51.88 (6.68) & 32.34-67.19 \\ \cline{2-4} & 1630 & 47.66 (4.90) & 34.62-58.55 \\ \cline{2-4} & 1800 & 44.22 (5.28) & 32.60-54.08 \\ \hline
12-Nov & 0830 & 48.68 (6.33) & 32.91-64.61 \\ \hline
13-Nov & 1430 & 44.40 (6.52) & 31.01-64.19 \\ \hline
14-Nov & 0800 & 43.07 (6.96) & 28.71-63.61 \\ \hline
16-Nov & 1000 & 50.48 (6.25) & 32.11-70.23 \\ \hline \multirow{2}{*}{17-Nov} & 1200 & 50.25 (6.52) & 32.76-69.59 \\ \cline{2-4} & 1400 & 41.73 (4.90) & 29.87-54.63 \\ \hline \multirow{2}{*}{18-Nov} & 1000 & 48.42 (6.27) & 34.06-63.57 \\ \cline{2-4} & 1330 & 42.20 (6.88) & 30.76-58.36 \\ \hline \end{tabular}
\end{table}
Table 5: The overall workload estimates descriptive statistics by shift and SC.
#### 4.2.2 Estimated Workload State Frequencies
12,242 estimates were generated across all data collection shifts; however, 61 estimates were invalid, resulting in 12,181 usable estimates. Each usable overall workload estimate was classified using the defined thresholds, as normal load, overload, or underload. No underload instances existed. The frequency counts by shift, classification, and SC are summarized in Table 6. \(SC_{1}\) completed seven shifts with 6,712 (55.1%) usable overall workload estimates. \(SC_{2}\)'s five shifts resulted in 5,469 (44.9%) usable estimates. The 61 "No Data" estimates represent instances where the data recording software failed, but was restarted during the shift.
A total of 377 overload instances (3.19% of all usable estimates) occurred. \(SC_{2}\) encountered the majority of overload instances, 263 (2.22% of usable estimates), across this SC's four CACTF shifts. \(SC_{2}\) had no overload instances during the virtual shift. \(SC_{2}\)'s highest overload instance frequency, 203 (53.85% of all overload state instances), occurred across two days. The highest frequency, 127 (33.69%), occurred on Nov 16\({}^{th}\), while 76 (20.02%) instances occurred the next day. \(SC_{1}\)'s highest overload frequency, 72 (19.10%), occurred during a Nov 11\({}^{th}\) virtual shift, with the second highest frequency, 25 (6.63%), being the Nov 13\({}^{th}\) CACTF shift. \(SC_{1}\)'s overload frequencies occurred across four shifts, two virtual shifts on Nov 11\({}^{th}\) and two CACTF shifts, with no overload classifications during three of \(SC_{1}\)'s seven shifts. During \(SC_{1}\)'s two joint shifts, the last shifts on both Nov 17\({}^{th}\) and 18\({}^{th}\), all estimated overall workload instances were classified as normal load; however, \(SC_{2}\) experienced 25 overload state instances during the Nov 18\({}^{th}\) 1000-1130 joint shift.
#### 4.2.3 Individual Shift Overall Workload Estimate Analyses
The estimated overall workload state classification frequencies hint at differences within shifts. Those results also demonstrate that the SCs' estimated overall workload generally remained in the normal load range across the shifts, the numbers of hardware and virtual vehicles, and the mission plans. However, those results may also lead to the incorrect conclusion that a SC experienced the overload states consecutively during a particular shift. Plotting the individual overall workload estimates across a shift provides a better continuous representation. The plots of the individual overall workload estimate instances, estimated every five seconds per Section 2.3.1, were generated and analyzed for each shift, but due to space limitations only three are presented. Additional analysis for each shift relates the in situ subjective results, by their recorded times, to the associated overall workload estimates (i.e., the twelve estimates within the corresponding minute, given the five seconds between estimates), as sketched below. As well, the number of tactics issued, the number of tasked and active vehicles, and the number of vehicles blocked due to congestion are presented, as each is indicative of the SC's task demands that impact workload.
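The alignment between an in situ rating and the workload estimates can be sketched as follows, assuming timestamped estimates every five seconds and reporting the mean and SD of the twelve estimates in the rating's minute (as in Tables 7-9); the names and data layout are illustrative.

```python
from statistics import mean, stdev

WINDOW_S = 60   # twelve 5-second estimates per in situ rating

def estimates_for_probe(estimates: list[tuple[float, float]],
                        probe_time_s: float) -> tuple[float, float]:
    """Mean and SD of the (timestamp, workload) estimates whose timestamps
    fall within the minute starting at the in situ probe time."""
    window = [w for (t, w) in estimates
              if probe_time_s <= t < probe_time_s + WINDOW_S]
    return mean(window), stdev(window)

# Example: estimates every 5 s; a probe at t = 600 s covers t in [600, 660).
est = [(t, 45.0 + (t % 60) / 10.0) for t in range(0, 1200, 5)]
m, sd = estimates_for_probe(est, 600.0)
```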
\begin{table}
\begin{tabular}{|c|c||c|c|c||c|} \hline \multicolumn{2}{|c||}{**Shift**} & \multicolumn{2}{c|}{**Overall Workload**} & \multirow{2}{*}{**No Data**} & \multirow{2}{*}{**Total**} \\ \cline{1-4}
**Date** & **Time** & **Normal Load** & **Overload** & & \\ \hline \hline \multirow{4}{*}{11-Nov} & 1300-1400 & 569 & 5 & 0 & 574 \\ \cline{2-6} & 1500-1600 & 689 & 72 & 0 & 761 \\ \cline{2-6} & 1630-1730 & 401 & **0** & **0** & **401** \\ \cline{2-6} & 1800-1900 & 234 & 0 & 0 & 234 \\ \hline
12-Nov & 0830-1130 & 1,041 & **35** & 48 & 1,124 \\ \hline
13-Nov & 1430-1630 & 1,098 & 25 & 0 & 1,123 \\ \hline
14-Nov & 0800-1130 & 2,205 & 12 & 13 & 2,230 \\ \hline
16-Nov & 1000-1200 & 1,521 & 127 & 0 & 1,648 \\ \hline
17-Nov & 1200-1400 & 1,174 & 76 & 0 & 1,250 \\ \cline{2-6} & 1400-1630 & 757 & 0 & 0 & 757 \\ \hline
18-Nov & 1000-1130 & 1,069 & 25 & 0 & 1,094 \\ \cline{2-6} & 1330-1530 & 1,046 & 0 & 0 & 1,046 \\ \hline \hline \multicolumn{2}{|c||}{**Total**} & 11,804 & 377 & 61 & 12,242 \\ \hline \end{tabular}
\end{table}
Table 6: The overall workload estimate instances’ state classifications by shift, and SC.
Nov \(16^{th}\) Shift: This shift was the FX-6 distinguished visitor day, which is generally the most stressful for the entire CCAST team, including the SC. The FX-6 distinguished visitor day occurred 14 days into the FX and drew the largest visitor contingent of any FX. The distinguished visitors observed the shift's mission deployment from 1000 to about 1035. The mission plan was designed to deploy all 91 hardware vehicles (10 UGVs and 81 UAVs) immediately upon commencing the mission and to continue maximizing the number of deployed vehicles throughout the observation period.
\(SC_{2}\) was the shift's commander. The ambient temperature and the mean wind speeds were relatively reasonable, see Appendix A's Table 10 for details. During the first 35 minutes, 74 unique hardware vehicles were deployed, many multiple times. The number of tasked and active vehicles is shown in Figure 10. After the observation period completed, ten virtual UGVs and twenty virtual UAVs were added, and the SC deployed 103 unique vehicles. Note that the active agents in the figure only represent active hardware vehicles, as tasked hardware agents may fail to execute their tactics. Therefore, both metrics are plotted for hardware vehicles.
The tasked simulated vehicles automatically execute the tactics.
\(SC_{2}\)'s estimated overall workload throughout the shift is plotted in Figure 11. The dashed red line represents the overload threshold (i.e., 60), the dashed blue line represents the underload threshold (i.e., 25), and the green line represents when the majority of the distinguished visitors left the observation area. The estimated workload components were cognitive, speech, auditory, and physical, per Table 3.
The first takeaway is that the majority, 70, of the SC's overload classifications occurred during the distinguished visitor observation period, between 1000 and 1035, or 55% of all overload classifications for the entire shift. The highest overload estimate, 70.23, occurred at 1007. The longest sustained overload classification during the observation period was two minutes and ten seconds between 1023 and 1025. While the estimated overall workload values oscillated between normal load and overload during the distinguished visitors observation period, the estimates were generally classified as normal load, 95% of all estimates, after 1035. The longest sustained overload period occurred between 1043 and 1046, lasting 3 minutes and 35 seconds, with a range of 60.36 to 64.53.
The mission plan involved deploying all vehicles at the start of the shift (1000) to conduct various tactics around the CACTF. \(SC_{2}\) loaded the mission plan at exactly 1000 and launched a volley of vehicles within seconds. The mission plan contained eight tactics, as shown by the orange bar in Figure 12a, where each tactic tasked multiple vehicles, as shown in Figure 10.
Figure 10: The Nov \(16^{th}\) 1000-1200 shift’s tasked (by vehicle type) and active vehicles.
The assigned vehicles each plan a navigation path and autonomously begin executing the assigned tactic. Typically, deploying a large number of vehicles results in a large number of UGVs and UAVs becoming blocked, as shown in Figure 12(b). When a block occurs, the impacted vehicle attempts to autonomously resolve the issue. If a block continues for a prolonged period, the SC can issue a Nudge tactic, which causes the vehicle to move a predefined amount before attempting to plan a new navigation path. For example, nudged UAVs will increase in altitude in order to support generating a clear navigation path. The SC may also issue a Stop tactic followed by a new tactic with a new goal location, or a RTL tactic.
The SC specifies tactics by selecting from predefined tactic types with modifiable default parameters. These tactics range from complex ones, such as Surveil a building, to simpler ones, such as Goto a specific location, Stop, and RTL. The blue tactics shown in Figure 12(a) between 1001 and 1035 represent SC generated tactics. Tactic generation often leads to higher SC workload, especially if the SC specifies particular vehicles for a tactic, rather than allowing the CCAST system to automatically allocate vehicles to tactics. This increased workload is reflected in Figure 11, where the overall workload estimates increase before tactics are issued.
The overload instances in the first five minutes are due to tactic generation to resolve the vehicle blockages and deploy more vehicles. Typically the SC waits to allow the vehicles to launch, plan navigation paths and resolve any blockages autonomously. Since this shift was observed by the distinguished visitors, \(SC_{2}\) began generating tactics to resolve blockages earlier in order to move the swarm out over the CACTF. Figure 11 shows spikes in workload around 1003 related to generating the new tactics that were issued at approximately the same time, as shown in Figure 12(a).
The second overload instance, and the highest (70.23), is related to \(SC_{2}\) attempting to determine two things: the status of the remaining blocked vehicles over the launch area, and whether the returning vehicles had completed their tactic or were neutralized. A medic's location, near the launch area, was identified at the start of the mission. After locating the medic, neutralized UGVs autonomously navigated to it in order to be revived, otherwise they RTL. Neutralized UAVs autonomously RTL. UAVs are not revived until it is safe for a human
Figure 11: Overall workload estimates for the Nov 16\({}^{th}\) 1000-1200 shift. The majority of distinguished visitors moved on to other activities at 1035 (green line). Note, the blue line represents the underload threshold and the red line represents the overload threshold.
mobile medic to walk through the launch area. As soon as a vehicle is neutralized, it is no longer tasked. A large number of UGVs and UAVs were neutralized during the initial deployment, which is evident in the steep decline of active vehicles in Figure 10. The mobile medic was deployed around 1015, at which time the UAV batteries were also swapped. As that process was completing, \(SC_{2}\) queued tactics to be issued; thus, the spike in workload between 1017 and 1019.
\(SC_{2}\)'s estimated workload was in the overload range between 1020 and 1025 as the SC attempted to issue additional tactics with larger numbers of assigned vehicles, which also resulted in additional blockages. However, the number of active vehicles had again declined due to neutralizations. Just before 1035, \(SC_{2}\) specified and issued a tactic intended to assign close to sixty vehicles.
Figure 12: The Nov 16\({}^{th}\) 1000-1200 shift’s (a) issued tactics and (b) vehicle blockages by the minute.
Overall, the estimated overall workload mean during the distinguished visitors observation period (1000-1035) was 54.85 (SD = 4.98), with a minimum of 44.1 and a maximum (overload) estimate of 70.23. During this period, 432 workload estimates were generated, with 70 (16.2%) classified as overload.
The estimated workload was generally lower after the distinguished visitors departed at 1035. A total of 1,216 workload estimates were generated during this period, of which 57 (4.69%) were classified as overload. The mean estimated overall workload was 48.92 (SD = 5.91), with a minimum of 32.11 and a maximum of 64.53.
The longest sustained overload period, three minutes and thirty-five seconds, occurred between 1043 and 1046. During this time, \(SC_{2}\) was attempting to assess how many vehicles were still active or were neutralized, and how many UAVs needed battery changes. The mobile medic was deployed and the UAV batteries changed around 1050. The Phase II mission plan was loaded at 1053 and launched just before 1054. Throughout the remainder of the shift, \(SC_{2}\) generated vehicle tactics. During the 1054 and 1055 time frame, \(SC_{2}\) generated tactics to launch UAV sorties (7 UAVs each), resulting in a thirty second overload with a maximum value of 62.46. \(SC_{2}\) tasked UGVs to do various tactics, generally one to four UGVs per tactic, between 1119 and 1122. Forty-eight predominantly normal load estimates were generated during that time frame, with a mean of 52.24 (SD = 6.76), of which four were classified as overload.
\(SC_{2}\)'s in situ subjective fatigue level was low, 18, just prior to shift start through 1015, with a mean of 24.4 (SD = 8.76), as shown in Table 7. About 15 minutes prior to shift start, \(SC_{2}\) rated the in situ subjective stress level as 7 (i.e., 100 on the normalized scale), the maximum value. At the start of the shift, the in situ subjective stress was rated as a 6, normalized to 83.5 in the table. \(SC_{2}\)'s reported stress was 83.5 (SD = 11.67) during most of the observation period, but dropped to 52.29 (SD = 9.92) after 1035. The in situ subjective fatigue ratings gradually continued to increase over the remainder of the shift, resulting in a mean of 56 (SD = 8.25).
The mean overall workload estimates for the minutes at which the in situ subjective assessments were collected are presented in Table 7. Note that the 1000 in situ ratings were collected seconds before launching the mission plan. The estimated workload is substantially higher than the in situ workload ratings. Eight of \(SC_{2}\)'s thirteen reported in situ overall workload values were below the estimated overall workload value. All instances where \(SC_{2}\)'s reported in situ overall workload was above the estimated overall workload occurred after the observation period. The SC reported a high subjective stress level during the distinguished visitor observation period. During this time, the reported in situ overall workload was quite low, even though the SC was doing a large amount of work.
\begin{table}
\begin{tabular}{|c||l|l|l|l|} \hline \multirow{2}{*}{**Time**} & **Subj. Overall** & **Est. Overall** & \multirow{2}{*}{**Stress**} & \multirow{2}{*}{**Fatigue**} \\ & **Workload** & **Workload** & & \\ \hline \hline
1000 & 31.60 & 53.73 (1.8) & 83.5 & 18 \\ \hline
1012 & 44.73 & 50.51 (1.96) & 83.5 & 18 \\ \hline
1020 & 44.73 & 58.55 (1.30) & 83.5 & 34 \\ \hline
1030 & 46.38 & 51.95 (1.12) & 67 & 34 \\ \hline
1043 & 39.0 & 60.59 (2.15) & 50.5 & 50.5 \\ \hline
1051 & 48.03 & 45.64 (0.73) & 50.5 & 50.5 \\ \hline
1101 & 42.25 & 50.73 (1.11) & 67 & 50.5 \\ \hline
1111 & 51.15 & 49.63 (0.68) & 50.5 & 50.5 \\ \hline
1120 & 46.38 & 49.91 (2.37) & 50.5 & 50.5 \\ \hline
1130 & 45.55 & 43.74 (1.74) & 50.5 & 50.5 \\ \hline
1140 & 51.33 & 48.25 (2.18) & 34 & 67 \\ \hline
1150 & 43.98 & 50.46 (1.39) & 50.5 & 67 \\ \hline
1155 & 52.98 & 45.04 (2.86) & 67 & 67 \\ \hline \end{tabular}
\end{table}
Table 7: Nov 16\({}^{th}\) 1000-1200 shift's subjective (Subj.) in situ fatigue, stress and overall workload as well as the estimated (Est.) overall workload descriptive statistics recorded throughout the shift.
Stress is a known confound with some physiological metrics (e.g., heart rate). The multi-dimensional workload algorithm is designed to mitigate the effects of other human performance factors (e.g., stress). The individual workload component results (see Appendix B.2 Table 13) highlight that the cognitive workload component's metrics appear to be less susceptible to stress, but the physical workload component is influenced by stress and possibly fatigue. Throughout the shift the cognitive workload estimates loosely track the in situ cognitive workload. A limitation is the in situ query's 7 point Likert scale. However, the physical workload estimates are high relative to the corresponding in situ physical workload. This apparent overestimation appears to be due to the heart rate and respiration rate metrics, which are known to be impacted by other human performance factors and represent two of the three metrics for estimating physical workload. Two very high physical workload estimates, at 1000 (61.34) and 1020 (51.73), appear to be due to \(SC_{2}\)'s stress level (83.5). Two additional instances occurred at 1043 (62.8) and 1120 (51.73). At these times, \(SC_{2}\) reported moderate stress, but the fatigue level increased to a moderate level. It is known that \(SC_{2}\) was not physically active enough to obtain an overload physical workload estimate, which indicates a clear influence from stress early in the shift and the combination of stress and fatigue later in the shift.
During a post-FX debrief, \(SC_{2}\) commented that this shift resulted in the highest subjective stress level, and at the end of the shift, \(SC_{2}\) was very fatigued. After shift completion, \(SC_{2}\) indicated a lower stress level, as the major goal had been completed and the CCAST swarm had performed well.
Nov \(17^{th}\) 1200-1400 Shift: A particularly challenging shift occurred on Nov \(17^{th}\), during which the wind gusts, 28 MPH, were the highest CCAST had experienced while on shift (see Appendix A Table 10). The wind created a number of issues. The pre-mission brief indicated that 118 hardware vehicles (10 UGVs and 108 UAVs) were to be deployed during this shift. The estimated workload components were cognitive, speech, auditory and physical.
The intention at shift start was to test fly one 3DR Solo and one VOXEL M500, as the CCAST team had never flown the UAVs in such high winds. However, the LTE system became a continual problem for the first hour and a half, requiring multiple restarts. Each time the LTE restarts, all vehicles and the dispatcher must be restarted. An I3 restart is not required, but I3 is usually restarted. The LTE issues resulted in no vehicles being deployed early in the shift, as shown in Figure 13. It is also important to note that if the vehicles have intermittent, or no, communication with the dispatcher and I3, then the telemetry is not logged and cannot be represented in the figures related to the tasked and active vehicles.
Figure 13: The Nov \(17^{th}\) 1200-1400 shift’s tasked (by vehicle type) and active vehicles.
At approximately 1230, it was believed that the LTE issues were resolved and the objective of test flying the UAVs proceeded. \(SC_{2}\) generated the tactic at 1237, but the Unity engine required for I3 crashed and had to be restarted. \(SC_{2}\) issued the tactic at 1238, as shown in Figure 14(a). The two vehicles were tasked and active shortly thereafter, as shown in Figure 13. I3 did not show the tactic visualization, which the SC fixed on the fly, resulting in an estimated overload state at 1240, as shown in Figure 15.
The LTE issues persisted; at 1245 the team restarted the dispatcher and I3 using virtual vehicles, 10 UGVs and 20 UAVs. \(SC_{2}\) created explicit tactics and attempted to issue them, but system issues persisted. The longest sustained overload state duration across all the shifts occurred between 1244 and 1248, a duration of four minutes and fifteen seconds. The mean estimated overall workload during this time frame was 65.16 (SD = 2.35, min = 60.04). The shift's overall maximum estimate, 69.59, occurred during this time period as well. During this period, the cognitive workload estimate was consistently overloaded and
Figure 14: The Nov 17\({}^{th}\) 1200-1400 shift’s (a) issued tactics and (b) vehicle blockages by the minute.
the speech workload estimate was frequently overloaded, which is aligned with the SC's activities.
Given the persistent LTE issues, at 1256 the number of virtual vehicles was increased to 20 UGVs and 105 UAVs. \(SC_{2}\) loaded the mission plan at 1257, but before issuing the plan, wanted to verify that the mission plan was not going to task hardware vehicles in the launch area, given that the LTE was connected. After receiving such verification, the first mission plan signal sent vehicles to the West side of the CACTF at 1300. A second mission plan signal was issued at 1301, sending vehicles to the East side of the CACTF, and the final signal, within the same minute, sent vehicles to the center of the CACTF. This activity is shown in Figure 14a; recall that simulated vehicles cannot be blocked, so no vehicle blockages occurred per Figure 14b. The simulated vehicles were not providing artifacts, so the entire system was again shut down and restarted, which is shown as the drop in tasked simulated UAVs in Figure 13.
\(SC_{2}\) reloaded the mission plan at 1307 and began issuing the mission plan signals at 1308. \(SC_{2}\) generated and issued a number of tactics between 1310 and 1318 (see Figure 14a) that increased the number of tasked simulated vehicles to above 100. This activity increased the overall workload estimates, but they generally remained within the normal range.
At 1328 all tasking of the simulated vehicles stopped, and at 1330 the CCAST team restarted with hardware (10 UGVs, 71 UAVs) and virtual (10 UGVs, 20 UAVs) vehicles. The mission plan was executed at 1331, but I3 was not updating with the vehicle telemetry, and \(SC_{2}\) changed the communication port at 1335, which restored telemetry. Due to not receiving the telemetry, it incorrectly appears that no vehicles launched until 1336 in Figure 13. Almost immediately after the mission plan launch, 3DR Solos began dropping from the sky\({}^{3}\). Once the telemetry was restored, \(SC_{2}\) began attempting to command all 3DR Solos to RTL at 1337.
Footnote 3: As noted in Table 1, the 3DR Solos are an older technology. Two hypotheses exist as to why they failed. The primary hypothesis is that the wind caused the UAV to exceed its maximum configured pitch/roll, causing it to stop making adjustments. The alternative hypotheses are that the barometer configuration was a problem or a hardware failure occurred.
This overall period resulted in elevated overall workload estimates compared to other portions of the shift. During this period, the estimated cognitive, physical and speech workload values all increased. This period
Figure 15: Overall workload estimates for Nov 17\({}^{th}\) 1200-1400 shift.
of time was stressful given that \(SC_{2}\) was unable to issue tactics until telemetry was restored and team members were asking \(SC_{2}\) to get the tactics issued to RTL the vehicles. The increases in the cognitive and speech components appear to be similar in magnitude to other high workload periods representative of \(SC_{2}\)'s increased work. However, the high physical workload estimates are possibly due to \(SC_{2}\)'s increased stress. Due to the ten minute timing between in situ ratings, no such ratings were recorded during this period, and it is not possible to clearly align \(SC_{2}\)'s perceived stress with the high physical workload estimates.
For the remainder of the shift, \(SC_{2}\) was attempting to move UGVs from the launch zone to a building. Multiple UGVs were assigned tactics, and the UGVs were not responding as expected. \(SC_{2}\) was having a conversation with the team leader about this situation. \(SC_{2}\) was also verifying that tasked UGVs had tactics, and whether or not the vehicles were doing their tasks, while verifying information for and receiving instructions from the team leader. The overload estimates between 1346 and 1347 are a result of these efforts.
\(SC_{2}\) reported relatively low (18-34) in situ stress values throughout the shift, as shown in Table 8, with a mean of 27.6 (SD = 8.26). Similar to \(SC_{2}\)'s Nov 16\({}^{th}\) shift, the in situ fatigue level was low, 18, at shift start, and gradually increased over the shift, resulting in a mean of 45.6 (SD = 13.47).
The in situ subjective overall workload and the corresponding mean overall workload estimates are presented in Table 8. The estimated workload is generally higher than the in situ ratings. The only planned in situ data collection that corresponded with a high estimated overall workload, at 1240, was missed due to distraction.
The shift's mean estimated overall workload was 50.25 (SD = 6.52, min = 32.76, max = 69.59), see Table 5. 6.1% of the shift's overall workload estimates were classified as overload, per Table 6. The in situ component ratings and associated workload component estimates are provided in Appendix B.2 Table 14.
Joint Integrator and SCs Shift, Nov 18\({}^{th}\) 1330-1530: The joint integrator shifts were the first instances of both integrator teams operating on the CACTF simultaneously. DARPA's objective was to deploy the largest swarm ever. There was no direct communication between the two teams; rather, the CACTF was spatially divided, with the CCAST team being responsible for the South half closest to C2. The only information CCAST received was the other team's vehicle telemetry, via I3 glyphs similar to Figure 2b that did not have tactics or a tactic icon, showed either an empty (0%) or full (100%) battery, listed no vehicle capabilities (e.g., electronic warfare), and had a vehicle identifier that differed (e.g., atx10) from the CCAST identifiers.
During this shift both SCs simultaneously commanded the swarm. Each SC had their own I3 station, as the CCAST system communicates tactics and vehicle telemetry resulting from each SC. The I3 stations were set up in the C2 SC room, one on each side, as shown in Figure 16. The SCs split CCAST's assigned CACTF
\begin{table}
\begin{tabular}{|c||l|l|l|l|} \hline \multirow{2}{*}{**Time**} & **Subj. Overall** & **Est. Overall** & \multirow{2}{*}{**Stress**} & \multirow{2}{*}{**Fatigue**} \\ & **Workload** & **Workload** & & \\ \hline \hline
1220 & 24.4 & 50.58 (2.85) & 18 & 18 \\ \hline
1234 & 18 & 45.19 (2.51) & 18 & 34 \\ \hline
1250 & 30.05 & 44.18 (4.57) & 18 & 34 \\ \hline
1300 & 31.6 & 48.76 (3.42) & 34 & 50.5 \\ \hline
1310 & 33.3 & 50.93 (1.87) & 18 & 50.5 \\ \hline
1320 & 46.38 & 50.37 (1.96) & 34 & 50.5 \\ \hline
1330 & 30.98 & 50.62 (1.62) & 34 & 50.5 \\ \hline
1340 & 41.5 & 54.90 (1.81) & 34 & 50.5 \\ \hline
1350 & 40.6 & 48.52 (1.62) & 34 & 50.5 \\ \hline
1355 & 39.85 & 48.38 (3.80) & 34 & 67 \\ \hline \end{tabular}
\end{table}
Table 8: The Nov 17\({}^{th}\) 1200-1400 shift’s subjective in situ fatigue, stress and overall workload as well as the estimated overall workload descriptive statistics recorded throughout the shift.
area at the C2 building, with \(SC_{1}\) being responsible for the West side and \(SC_{2}\) having responsibility for the East. The two SCs were able to directly speak to one another, but were unable to see what tactics the other was creating until the tactics were issued and assigned to vehicles.
Per the mission brief, the CCAST team placed 140 vehicles, 30 UGVs and 110 UAVs in the launch area, while the other integrator team had 90 UAVs, 90 UGVs and one vertical takeoff and landing fixed wing aerial vehicle. CCAST added 40 virtual UAVs and 10 virtual UGVs later in the shift, totaling 190 unique CCAST vehicles. The CCAST SCs deployed 110 unique vehicles.
It was predetermined that \(SC_{2}\) was responsible for the mission plan and any associated signals. During this shift, workload and performance data was collected for \(SC_{1}\) only. The recorded data captured the cognitive and physical components. The auditory workload was estimated using the procedure described in Section 3.4.4. The microphone malfunctioned; thus, speech and visual workload were estimated using the respective IMPRINT Pro models' values. CCAST mission plan and telemetry data logging issues occurred during this shift, the source of which is not clear. Since dual SCs was not specifically a design consideration, the tactics log did not identify which SC explicitly issued tactics. As such, the analysis cannot distinguish who issued which tactics. The mission plan tactics were also not clearly logged.
The shift start was delayed until 1400, at which time \(SC_{2}\) loaded the mission plan and executed the first signal, intended to deploy all rovers around CCAST's assigned portion of the CACTF. The experimenter notes, video, and the tasked UGVs (green line in Figure 17) show that \(SC_{2}\) fetched and launched the mission
Figure 16: The C2 configuration accommodating two swarm commanders. Photo courtesy of DARPA.
Figure 17: The Nov 18\({}^{th}\) 1330-1530 shift’s tasked (by vehicle type) and active vehicles.
plan at 1400 and 1401, respectively. However, the mission plan tactics do not appear in the log file, hence they are not in Figure 18(a) (shown as orange for the prior shifts' results). The SCs immediately began stopping the UGVs' Surveil tactics and explicitly issued the tactics, which is shown in the tasked agents figure between 1403 and 1410. A number of tactics were issued between 1406 and 1411, including UAV tactics; however, not all UAVs actually launched. The active agents (blue line in the figure) during this time period were not accurately logged, perhaps due to the sheer volume of information from both teams creating logging issues. It is also likely that the LTE was beginning to demonstrate problems that became more evident later in the shift. It is also important to note that, for an unknown reason, the number of tasked vehicles only increased with each new tactic, and did not decrease, as shown in Figure 17.
Throughout this initial deployment period, and the entire shift, \(SC_{1}\)'s estimated overall workload remained in the normal range, as shown in Figure 19. \(SC_{1}\)'s estimated overall workload increased at 1411 after the
Figure 18: The Nov 18\({}^{th}\) 1330-1530 shift’s (a) issued tactics and (b) vehicle blockages by the minute.
tactics were issued, but vehicles did not launch. While the estimates oscillated a bit, these higher estimates persisted until 1419 as the team attempted to determine why vehicles were not launching.
\(SC_{2}\) fetched the mission plan again at 1420 and launched it a minute later; note that these tactics are not shown in Figure 18a due to the logging issues. The tasked vehicles in Figure 17 show these tactics at 1421. At this time, some of the active vehicles are shown as blocked in Figure 18b. The data log files do not indicate that the SCs attempted to mitigate the blockages by issuing explicit tactics. However, it is possible that some tactics were not logged due to communication issues. It is noted that \(SC_{1}\)'s estimated overall workload increased at this same time.
Beginning at about 1435, the SCs could not communicate with the vehicles. It was determined that an LTE sector problem existed, requiring a reset. All vehicles on the launch pad were restarted at 1442. The LTE restarts, followed by vehicle restarts, occurred again between 1450 and 1452, respectively. Telemetry data was not recorded during these time periods.
Virtual vehicles were added at 1450, to which the SCs issued explicit tactics for gathering information. \(SC_{1}\)'s estimated overall workload peaked just before the switch to virtual vehicles. The SCs were careful to not task hardware vehicles that were back in communication, as the LTE issues continued. During this time, \(SC_{1}\)'s overall workload was quite high due to having to select specific vehicles for the tactics. The selection of specific simulated vehicles was not impacted by the display of the other integrator team's vehicle telemetry, because of DARPA's intentional splitting of the CACTF between the two teams. It is hypothesized that if this spatial CACTF split did not exist, and the two teams' vehicles were intermixed, the SCs' overall workload associated with this task would increase due to having to differentiate between CCAST's hardware and software vehicles, as well as the other team's vehicles.
At 1500 \(SC_{1}\) indicated that there was a lot of stutter in the I3 display. The videos of the I3 display were recorded on the machine running I3. \(SC_{1}\) stopped and restarted the video at 1505-1506, which resolved the issue. \(SC_{1}\)'s spike in estimated overall workload was due to resolving this issue. During this period, the LTE was again reset at 1500 and the vehicles were powered up at 1505. The SCs were still issuing explicit tactics
Figure 19: Overall workload estimates for the last FX shift (Nov 18\({}^{th}\), 1330-1530), a joint integrator shift when both SCs simultaneously commanded the swarm. Results recorded for \(SC_{1}\) only.
to simulated vehicles between 1510 and 1514. The SCs began issuing explicit tactics for hardware vehicles around 1517, and \(SC_{1}\)'s estimated workload increased. \(SC_{2}\) issued a series of tactics to launch five UAVs, with the goal of neutralizing a fortified artifact that requires multiple vehicles to interact with the artifact simultaneously; however, only one UAV launched. Simultaneously, \(SC_{1}\) created a Surveil tactic using a large number of UAVs for a building near the fortified artifact.
Both SCs issued explicit tactics through the rest of the shift, as seen in Figures 17, 18a, and 18b. During this period, especially the last few minutes of the shift, the SCs were verbally coordinating with one another, as they were both issuing tactics to the East side of the CACTF. \(SC_{2}\) continued to focus on neutralizing the fortified artifact, while \(SC_{1}\) issued Surveil tactics for ten UAVs to investigate two buildings. Throughout this final push, \(SC_{1}\)'s estimated overall workload increased.
This shift resulted in some of \(SC_{1}\)'s highest reported in situ stress ratings, as shown in Table 9, with seven of ten responses being \(>50\). The SCs both reported that selecting virtual vehicles for explicit tactics, when some hardware vehicles may be in communication, but are not to be tasked, is stressful. \(SC_{1}\)'s fatigue level was moderate throughout the shift.
The in situ subjective overall workload and corresponding mean overall workload estimates are presented in Table 9. The estimated workload is generally higher than the in situ ratings, and \(SC_{1}\)'s results across the shifts show this was a common result. \(SC_{1}\) generally reported lower in situ subjective workload as compared to \(SC_{2}\) across shifts, as shown in Table 4. While \(SC_{1}\) subjectively reported higher overall workload at 1420 than at the two earlier points, \(SC_{1}\) was not actively doing anything, as the team was waiting for \(SC_{2}\) to issue the mission plan, which is reflected in the lower estimated workload value for the same minute. A similar situation existed at 1430. Both the estimated cognitive and physical workload components for these time periods are very low, as shown in Appendix B.2 Table 15, and do not appear to be impacted by stress. However, as the SCs created explicit tactics for the vehicles at 1450, \(SC_{1}\)'s reported subjective overall workload was lower than the estimated workload. At 1500, when \(SC_{1}\) was dealing with the visual stuttering issue, the reported workload and stress were high, with the associated estimated workload being lower. The physical workload component estimate is higher than expected, which may be caused by stress.
Overall, \(SC_{1}\)'s estimated overall workload remained in the normal range the entire shift, with a mean of 43.02 (SD = 6.74, min = 32.37, and max = 59.37), as shown in Table 5. \(SC_{1}\)'s mean estimated overall workload was the lowest of all this SC's CACTF shifts. While \(SC_{1}\) did not take responsibility for executing mission plans, both SCs explicitly generated tactics for vehicles during the shift. The use of two SCs and the clear East vs. West allocation of CACTF area responsibility appear to have reduced \(SC_{1}\)'s overall workload, even though this joint shift placed the highest number of CCAST hardware vehicles in the launch zone and incorporated two CCAST SCs for the very first time.
\begin{table}
\begin{tabular}{|c||l|l|l|l|} \hline \multirow{2}{*}{**Time**} & **Subj. Overall** & **Est. Overall** & \multirow{2}{*}{**Stress**} & **Fatigue** \\ & **Workload** & **Workload** & & \\ \hline \hline
1400 & 39.78 & 34.43 (2.67) & 50.5 & 34 \\ \hline
1410 & 38.13 & 51.83 (2.77) & 50.5 & 50.5 \\ \hline
1420 & 43.9 & 36.44 (1.78) & 67 & 34 \\ \hline
1430 & 46.38 & 34.96 (0.83) & 50.5 & 34 \\ \hline
1440 & 18.8 & 33.64 (0.60) & 18 & 34 \\ \hline
1450 & 34.83 & 51.56 (1.85) & 18 & 50.5 \\ \hline
1500 & 55.45 & 43.62 (6.48) & 67 & 50.5 \\ \hline
1510 & 37.38 & 48.79 (1.95) & 34 & 50.5 \\ \hline
1520 & 47.2 & 37.30 (1.00) & 67 & 34 \\ \hline
1525 & 48.85 & 49.01 (2.84) & 67 & 50.5 \\ \hline \end{tabular}
\end{table}
Table 9: The Nov 18\({}^{th}\) 1330-1530 shift’s subjective in situ fatigue, stress and overall workload as well as the estimated overall workload descriptive statistics recorded throughout the shift.
The overall workload estimates reveal important insights related to the SCs' workload, particularly over very long shifts, that are not attainable otherwise. The analysis across the three shifts provides additional evidence, beyond Adams' prior work with Fortune, Harriott and Heard, that the multi-dimensional workload algorithm demonstrates sensitivity to known changes in the SC's workload, even when some workload components are provided using a single supervisor-single UAV evaluation's IMPRINT Pro models (Heard et al., 2020). Overall, the reported results represent the first application of this estimation method to single human-swarm robot deployments in an actual urban operational environment.
Even though the focus of this section was the overall workload estimates, the individual instances for each workload component can be similarly plotted for additional analysis. While cognitive workload tends to be the primary research focus in the general literature, domains that deploy very complex systems in differing environmental operational conditions impact the workload components' contribution to overall workload differently. Thus, it is critical for safe operation to understand all aspects of workload.
#### 4.2.4 Workload Component Contributions
The cognitive workload component (i.e., channel) has traditionally been the focus of the relevant literature; however, other components can and do contribute to overall workload. The CCAST SC's supervisory interaction is one that is heavily dependent on visual perception, in particular multiple object visual perception, which implies the visual workload component will be a primary contributor to overall workload. However, traditional visual perception multiple object tracking research, for example (Wolfe, 2020) assumes all visual targets (e.g., vehicles, artifacts) exist on the display simultaneously. Visual tracking of multiple objects via a virtual reality head mounted display, such as that integrated into I3, is nascent and has a different focus (Kibleur et al., 2019) from traditional multiple object visual tracking research. Current efforts in Adams' group are using eye trackers to objectively estimate visual workload, but that technology was not feasible with the existing I3 virtual reality system.
The speech and auditory workload components are important for the CCAST SC. Often the SC was talking to others to communicate the swarm's current state, verifying that it was safe to issue the mission plan or SC generated tactics (i.e., ensuring that all personnel in the launch area were a safe distance away from the vehicles), or verifying received verbal information from the team member responsible for designating the mission plan and tactics to be issued (i.e., "Surveil building X in the Northwest corner"). Often associated with these speech acts were auditory components; thus, there is a reasonable demand on both the SC's speech and auditory channels throughout the mission.
Traditional supervisory workstations, in which the human uses a mouse or joystick to interact with a system via a two-dimensional interface (Heard et al., 2020; Cummings et al., 2019), do not place a high demand on the physical workload component. However, the CCAST SCs preferred to use I3 while standing, and \(SC_{1}\) frequently tended to physically move around the C2 workspace while supervising the swarm. \(SC_{1}\) learned to use the virtual reality tracking devices to determine when the SC's physical placement in the workspace was too far from the trackers, at which point the SC repositioned appropriately. Further, I3 relies on the virtual reality controllers as the SC's inputs to the system; thus, the SC's arms move frequently, contributing to physical workload. As a result, the physical workload component is expected to contribute more to the SCs' overall workload than in traditional workstation settings.
Stress and fatigue are known to impact heart rate, respiration rate, and to some extent heart rate variability. Thus, algorithms that estimate cognitive or overall workload using these metrics are affected by increased stress and fatigue. Decomposing workload into multiple components is one means of mitigating the impacts of stress and fatigue on the multi-dimensional workload estimation algorithm's estimates. The cognitive component estimates appear to be only minimally affected by stress and fatigue; however, physical workload appears to be overestimated. This impact is evident at the start of the Nov 16\({}^{th}\) shift, when \(SC_{2}\) reported high subjective stress and continued to report the same high stress level at 1012 and 1020 while generating a large number of tactics. The 1020 physical workload estimate appears to be potentially impacted by stress (see Appendix B.2 Table 13), but may also be associated with the generation and issuing of fourteen tactics, a very high number, between 1019 and 1020.
#### 4.2.5 Qualitative Swarm Commander Insights
The post-FX debrief provided some direct insights. The SCs were asked if they felt there were any days or shifts for which they were _unable to sustain their effort or performance due to being overloaded_. \(SC_{1}\) indicated feeling "red lined" when continual explicit tactic generation was needed and the SC was unable to trust the CCAST system to automatically allocate vehicles to tactics. \(SC_{1}\) indicated two cases during which this situation arose. During "one of the first [live-virtual] shifts when we realized the allocation routines would happily dish out virtual and real platforms to tactic requests. At one point I was instructed to pick through the staging area to only fire off simulated quads, knowing that a mistake there would be potentially dangerous to the safety spotter crew..." An example of this situation occurred when CCAST was testing sprinter integration technologies during the Nov \(13^{th}\) shift. This shift resulted in the estimated overall workload being classified as overload 25 times, or 2.26% of the estimates, \(SC_{1}\)'s second highest number of overload instances during a shift. \(SC_{1}\) also noted another situation that was perceived as causing an overloaded state. LTE communication issues resulted in significant latency and periods of time without telemetry updates. \(SC_{1}\) stated: "When the telemetry started backing up to such an extent we were at least half minute, maybe more, out of sync. We proceeded to task platforms and mission plan elements with full knowledge that I3 and Dispatcher had no accurate picture of the current platform positions. We were flying blind." \(SC_{2}\) did not subjectively perceive reaching a state where performance was impacted by being overloaded; however, \(SC_{2}\) did indicate that "during the longer shifts...(the \(>3\) hour ones), I consistently felt a good amount of physical fatigue near the end, and probably [would not] have lasted much longer standing up, but that [could have] been alleviated by sitting without really impacting my performance in I3."
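For reference, the overload counts and percentages reported above follow from simple bookkeeping over the stream of overall workload estimates. The sketch below shows that calculation; the overload threshold and the roughly 1,106 estimates implied by the 25/2.26% figures are assumptions used only for illustration.

```python
def overload_summary(estimates, overload_threshold=70.0):
    """Count and percentage of overall workload estimates classified as overload.

    `estimates` is a list of overall workload values; the threshold here is an
    illustrative assumption, not the value used in the reported analysis.
    """
    n_overload = sum(1 for w in estimates if w > overload_threshold)
    pct = 100.0 * n_overload / len(estimates) if estimates else 0.0
    return n_overload, pct

# Example: 25 overload classifications out of roughly 1,106 estimates is about 2.26%.
n, pct = overload_summary([65.0] * 1081 + [75.0] * 25)
print(n, round(pct, 2))  # 25 2.26
```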
The SCs were asked to comment on _what factors [you felt] increase your workload_? Both SCs mentioned needing to communicate with another team member. One SC noted: "X talking to me, especially when he was rapidly switching between asking for info vs. asking for new tasking." The other SC also commented that "Communication/coordination with people outside of I3 does incur a cost. I [would not] rate [the impact to be] large, except for the necessary context switch as we attempt to understand what is being asked of us, then re-submerging into I3."
\(SC_{1}\) felt that using mission plans had limited impact on increasing workload. "Until/unless the mission plans become more malleable through [I3], [they are] fairly low workload. Anxiety is _really high_ as the mission kicks off, and a single signal misstep can have very negative consequences, but [the SC is not] really _doing_ a lot. There is no real mechanism for [the SC] to consider modification or amendments to the mission plan structure, observing and reacting to new intelligence or threats appearing." Generally, the mission plans were used as defined and reaction to new intelligence or threats was handled by the SC generating and issuing new tactics.
Both SCs noted that generating tactics with an explicit vehicle selection impacted workload. One noted "... [single vehicle selection is] not only tedious, but error prone, [needing] to query platforms to cobble together enough to execute a Surveil object. Whenever we [could not] rely on [the system for automatic] allocation, [the SCs] were stuck in the mud, and [could not] split attention to focus on any higher level tasks."
A related tactic generation issue occurred when explicit waypoint designation was required. \(SC_{1}\) noted, "It can be painstaking to... lay down a waypoint within some tight bounds (2' of expected artifact location). There is a fair amount of spatial estimates to gauge inaccuracies in pose estimates, platform locations (GPS), as well as obstacle bounds and buffer space - also with some superficial understanding of how the route/path planner works, and it may discard positions too close together, or it may lock onto the road network under certain conditions - or even that sometimes the best path it finds from A [to] B will take it around the CACTF." This comment particularly applies to UGVs that were to use the CACTF's roads as their primary navigation routes. \(SC_{2}\) cited another impact on workload related to "trying to push rovers into just the
right position to neutralize [an artifact]." The vehicles had to be within a specific range of an active artifact in order for the Bluetooth communications to be active and neutralize the artifact, while also ensuring that the vehicle did not become neutralized itself. This situation was particularly challenging for artifacts that require multiple vehicles to simultaneously interact with the artifact.
The SCs' subjective overall workload decreased after the distinguished visitors day. \(SC_{1}\)'s objective metrics, both estimated overall workload and frequency of overload classifications, decreased, but \(SC_{2}\)'s were only slightly lower. The SCs were asked if they generally felt that their _workload during [their] shifts was lower after the distinguished visitor day_. \(SC_{1}\) felt that after that date most of the pressure and anxiety had been lifted, that swarm deployments at this CACTF had become less stressful and easier to process, and that the interaction tools added during the FX simplified tactic specification and provided better situation awareness. \(SC_{2}\) stated: "My stress level was definitely lower after [the distinguished visitor] day, because I felt the major goal had been accomplished and we did well, but because we were still [increasing the number of vehicles] and capabilities, I [do not] think my workload was much lower. It may have been "slightly" lower just because [I had] gained a lot of practice using I3 by that point."
The SCs were asked _How did the joint shifts (with a single SC) impact workload compared to the prior single team shifts?_\(SC_{2}\) indicated "I [do not] recall the joint shifts having any impact.... Especially since we largely ignored the [other] team." Recall that the CACTF was spatially split between the two teams. \(SC_{1}\) felt that the "spatial deconfliction was relatively straightforward." I3 did display the other team's vehicles' telemetry using the standard vehicle glyph that excluded some information, and "after a short discovery period it was obvious which generic assets types [the other team's] telemetry mapped into." However, the increased communication necessary to incorporate the other team's vehicle telemetry update did create latency that "became problematic for I3." This latency "[increased] application input lag..., making the overall system less responsive."
The SCs' decision to jointly operate the swarm was not an I3 system development or experimental data collection consideration. The SCs were asked if they _felt that situation increased or lowered [their] workload_. Recall that workload metrics were not collected for \(SC_{2}\), who handled the mission command signals, and that the SCs "split" the CCAST team's designated CACTF area. \(SC_{2}\) stated: "Yes, I subjectively felt higher workload during this shift" and cited the added chatter in the C2 room, "needing to tell [\(SC_{1}\)] about what I was doing when it might conflict" and "[determining] if something happening near the middle of the CACTF [the SCs' boundary] was due to my actions or [\(SC_{1}\)'s]." \(SC_{1}\), from whom workload metrics were collected, stated workload was "Lowered." Trust was an important element, as \(SC_{1}\) stated "I trusted the co-commander to see to their area of responsibility, and that they would explicitly coordinate when/if they needed to interact near the boundary we established." \(SC_{1}\) also noted "we were happy to be able to enjoy something new and novel which we had talked about for years, but was never part of the program goal."
### Discussion
The CCAST FX-6 results analysis supports the claim that a single human can supervise a true heterogeneous swarm of robots to complete mission relevant tasks in real world environments. The analysis also demonstrates that the multi-dimensional workload estimation algorithm provides results sensitive to actual SC workload changes. While both CCAST SCs experienced overload conditions, their estimated overall workload was within the normal range for 96% of the generated estimates across all data collection shifts.
Adams has led the multi-dimensional workload algorithm development, including the initial investigations into appropriate physiological metrics, since 2008 (Harriott, 2015). The algorithm was intentionally developed to be sensitive to changes in a human's individual workload components and overall workload, as different complex systems, environments, and application domains impact each workload component differently. The prior laboratory-based human subjects evaluations provided evidence that the algorithm performs well and is sensitive across domains, human-robot teaming relationships (i.e., supervisory, peer-based), and individual differences (Fortune et al., 2020; Heard and Adams, 2019; Heard et al., 2019). The algorithm has also been demonstrated to detect shifts in workload in real time in order to adapt a robot's interaction with the human and autonomously change task responsibilities when the human is over- or underloaded (Heard et al., 2020; Heard et al., 2022). However, the prior work depended on knowing a priori the evaluation trials' tasks, as well as the workload levels and transitions.
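As a rough illustration of the adaptation behavior described in that prior work, the sketch below triggers a hypothetical task reallocation whenever the estimated overall workload leaves an assumed normal band; the band limits and decision strings are placeholders, not the adaptive architecture of the cited work (Heard et al., 2020; Heard et al., 2022).

```python
# Illustrative workload band limits (assumed values, not from the cited work).
UNDERLOAD_LIMIT = 30.0
OVERLOAD_LIMIT = 70.0

def adapt_task_allocation(overall_workload: float) -> str:
    """Return a hypothetical adaptation decision for one workload estimate."""
    if overall_workload > OVERLOAD_LIMIT:
        return "shift a task to the autonomy"   # reduce the human's load
    if overall_workload < UNDERLOAD_LIMIT:
        return "return a task to the human"     # keep the human engaged
    return "no change"

for estimate in (25.0, 52.0, 78.0):
    print(estimate, "->", adapt_task_allocation(estimate))
```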
The DARPA OFFSET field exercise presented a unique opportunity to apply this algorithm to a hardware-based human-swarm team completing a complex mission in an actual urban environment. The challenges (i.e., weather conditions, constantly changing situations) generated an exceptionally messy and uncontrollable human subjects evaluation that cannot be replicated by laboratory-based evaluations. The result was a true test for the multi-modal human subjects metric sensors, the multi-dimensional workload algorithm, and the associated analysis. Overall, the physiological sensors generally performed as expected in the extreme FX conditions. The noise meter issues were associated with a factory default setting.
The nature of the OFFSET field exercise shifts makes it very difficult to develop representative underload, normal load and overload IMPRINT Pro models. As such, previously developed IMPRINT Pro models for a single human supervising a large UAV were used to represent the missing metrics. Previously validated neural network workload component models for the supervisory-based adaptive human-robot teaming architecture (Heard et al., 2020) were used when generating the workload component and overall workload estimates. While that domain differs quite a bit, especially in the number of vehicles, it was the most representative domain. The choice to use these existing trained models did facilitate an analysis of the shifts that demonstrates the algorithm's sensitivity to changing workload conditions.
As discussed, stress and fatigue appeared to have a limited impact on cognitive workload, but did impact physical workload. The physical workload component estimation was primarily dependent on heart rate and respiration rate, which, coupled with the SC's limited physical movements, led to overestimates of physical workload. A limitation of this overall representation of physical workload is that it does not clearly represent the three types of physical workload: gross motor, fine motor, and tactile. The CCAST SC's physical interactions were generally fine-grained and tactile, which the physical workload metrics struggle to assess. Data was collected using Myo devices on the SC's arms, intended to capture the fine-grained and tactile interactions, but these sensors were not yet integrated into the multi-dimensional workload algorithm. Adams' team has recently completed preliminary work to model these physical workload components (Bhagat Smith et al., 2022) using the Myo and other sensor results for another domain. The resulting estimates are dependent on metrics that are less susceptible to stress and fatigue, which can improve the reliability of the individual estimates and the overall workload estimates.
The use of the IMPRINT Pro models to estimate visual workload is a clear limitation. The Valve Index headset does not provide eye tracking, and the headset cannot be worn with an eye tracker, such as a Pupil Labs Pupil Core. The team discussed purchasing and integrating a new headset, but decided it was a low priority. Adams' research group has only very recently developed the visual workload estimation capability. The incorporation of a metric-driven visual estimate is expected to improve the reliability and accuracy of the overall workload estimates.
The prior laboratory-based human subjects experiments and associated multi-dimensional workload algorithm validations assume that the human's tasks are known. This task context has been shown to improve the accuracy of the component and overall workload assessments. It is important to note that while the presented analysis demonstrates sensitivity to workload changes in the SCs' task demands, task context was not available. The practical use of the multi-dimensional workload algorithm for actual military deployments, such as the one on which the OFFSET program was based, will require the ability to infer the SC's current task. Adams' group is actively developing a multi-dimensional task recognition approach, dependent on wearable sensors, that accommodates the breadth of tasks in such domains. The initial capabilities are focused on visual task recognition (Baskaran et al., 2022). Assuming such a system can reliably infer a SC's tasks, this context is hypothesized to also improve the component and overall workload estimates.
The use of the various physiological sensors at FX-6 represents the first time they were used in such harsh conditions. While the sensors generally performed well, these sensors are not hardened for daily use, let alone routine use in harsh mission conditions. The routine use of such sensors for disaster response and military domains will require miniaturization, reduced power consumption, hardening, etc.
The DARPA OFFSET program's assumption that the swarm vehicles are highly autonomous, the CCAST team's approach of relying on mission plans or SC-specified high-level tactics, and the I3 design decisions (e.g., an immersive visualization, vehicle and tactic glyphs, artifact icons and associated prioritization filtering) directly enable the SC's ability to deploy and supervise the swarm. Future swarms deployed in similar urban environments, which have high volumes of occupied space and require vertical maneuvering, will need similar interaction affordances. It may be desirable to provide very precise individual vehicle and goal point manipulation, and some missions may require such precision; however, achieving such precision may be very difficult for swarms, and if incorporated, this level of interaction is expected to increase the SC's workload.
The DARPA OFFSET program uses AprilTags to represent the scenario artifacts, which was done to allow the integrator teams to focus on scaling the hardware swarm's size. The CCAST system uses cameras (e.g., PiCam) and simple image recognition to perceive and differentiate the AprilTags. Assuming an AprilTag is perceived correctly, the tag identifier is mapped to an I3 icon. I3 automatically filters the artifacts so that only the most relevant artifacts are presented to the SC. Note, the SC can display all artifacts if desired. A different system that relies on sensor perception to identify artifacts (e.g., image processing, electronic signal recognition) may have higher perception error that generates false artifact identifications, or requires incorporating a representation of the system's recognition confidence level. This potentially higher error rate, or the increased complexity of incorporating confidence intervals may increase the SC's workload, but the true impact can only be hypothesized and will be highly dependent on the particular perception system's error rates, the user interface design, underlying decision support systems, the ability to associate confidence with the perceived artifact, the mission scenario, etc.
One may want to consider providing a live video feed to help recognize an artifact (i.e., not an AprilTag), but the mission complexity, the swarm size and heterogeneity, as well as the broader SC duties will impact the viability of such an approach. It is reasonable to believe that a SC can make use of a limited number of live video feeds, but even adding a small number of feeds is expected to increase workload. The impact on workload from such a feature will be highly dependent on how many live feeds are permitted, the steps required to enable/disable a feed, the feed's presentation within I3 and its association with the corresponding vehicle, the sensor's field of view, reliability and accuracy, as well as the purpose of the live feed. Even assuming those issues are solved, the available communication bandwidth, and even whether or not a vehicle is in communication, will impact the usability of such information, which in turn will impact the SC's workload. Live feeds were investigated in earlier DARPA OFFSET field exercises, and it was determined that, given the low quality images, limited available bandwidth, and the likelihood of vehicles being out of communication, the video feeds were not very useful for the outdoor mission elements. When the vehicles are within a building, they are out of communication and no live sensor feed is feasible.
## 5 Conclusions
The DARPA OFFSET program demonstrated that a single human can deploy and supervise a swarm of 100 heterogeneous robots. The CCAST team's earlier DARPA OFFSET program field exercise observations demonstrated a trained SC's ability to deploy swarms over shifts that were up to three hours in duration. The FX-6 outcomes provided further validation of these observations. The CCAST team collected various metrics across twelve shifts (eight CACTF shifts) that were used to estimate the SC's workload components (i.e., cognitive, speech, auditory and physical) and overall workload via the multi-dimensional workload algorithm. The estimated overall workload was manageable and generally remained within a reasonable normal range. SCs' perceived stress was manageable, but spiked during critical shifts, such as distinguished visitors day, and perceived fatigue was manageable, but varied for many reasons, including shift duration. Generally, the overall workload estimates increased as the SC's task demands increased, even though the physical workload estimates demonstrated some susceptibility to stress and fatigue. This human subjects data set represents the first known data set for a single human deploying a hardware swarm in an actual urban environment to complete a complex mission. The results have broader implications that indicate the viability of future civilian single SC-swarm applications, such as disaster response (e.g., infrastructure safety inspections, wildland fire identification and tracking) and commercial applications (e.g., general logistics, deliveries).
## Appendix A FX-6 Contextual Information
This appendix provides information pertaining to multiple aspects of the FX-6 shifts and the associated conditions that impact the shift deployments and the SC's associated effort and workload.
The weather conditions for each day are provided in Table 10. The weather was highly variable in temperature ranges, as well as wind speeds.
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|c|c|} \hline
**Date** & \begin{tabular}{c} **Low** \\ **Temp** \\ \end{tabular} & \begin{tabular}{c} **High** \\ **Temp** \\ \end{tabular} & \begin{tabular}{c} **Dew** \\ **Point** \\ \end{tabular} & \begin{tabular}{c} **Barometric** \\ **Pressure** \\ \end{tabular} & \begin{tabular}{c} **Mean** \\ **Wind** \\ \end{tabular} & \begin{tabular}{c} **Max** \\ **Sustained** \\ \end{tabular} &
**Gust** \\ \hline \hline
12-Nov & \(39^{\circ}\) & \(60^{\circ}\) & \(41^{\circ}\) & \(30.00\) & \(10\) & \(22\) & \(29\) \\ \hline
13-Nov & \(27^{\circ}\) & \(48^{\circ}\) & \(31^{\circ}\) & \(30.10\) & \(6\) & \(18\) & \(25\) \\ \hline
14-Nov & \(33^{\circ}\) & \(54^{\circ}\) & \(34^{\circ}\) & \(30.03\) & \(8\) & \(20\) & \(26\) \\ \hline
15-Nov & \(28^{\circ}\) & \(57^{\circ}\) & \(32^{\circ}\) & \(30.16\) & \(4\) & \(9\) & \(9\) \\ \hline
16-Nov & \(40^{\circ}\) & \(67^{\circ}\) & \(47^{\circ}\) & \(30.03\) & \(6\) & \(14\) & \(14\) \\ \hline
17-Nov & \(57^{\circ}\) & \(72^{\circ}\) & \(58^{\circ}\) & \(30.04\) & \(12\) & \(21\) & \(28\) \\ \hline
18-Nov & \(41^{\circ}\) & \(69^{\circ}\) & \(40^{\circ}\) & \(30.24\) & \(10\) & \(23\) & \(32\) \\ \hline \hline \end{tabular}
\end{table}
Table 10: FX-6 Climate Conditions (Temperature: \({}^{\circ}\)F, Pressure: inches Hg, Wind: MPH).
Each shift had a different composition of vehicles, as shown in Table 11. Most shifts included hardware vehicles, while some incorporated virtual vehicles.
\begin{table}
\begin{tabular}{|c|l||l|l|l|l||l|l|} \hline \multicolumn{2}{|c||}{**Shift**} & \multicolumn{2}{c|}{**Ground Vehicles**} & \multicolumn{2}{c|}{**Aerial Vehicles**} & \multicolumn{2}{c||}{**Total Vehicles**} \\ \hline
**Date** & **Time** & **Hardware** & **Virtual** & **Hardware** & **Virtual** & **Hardware** & **Virtual** \\ \hline \hline \multirow{4}{*}{11-Nov} & 1100-1200 & 0 & 20 & 0 & 80 & 0 & 100 \\ \cline{2-7} & 1300-1400 & 0 & 15 & 0 & 65 & 0 & 80 \\ \cline{2-7} & 1500-1600 & 0 & 6 & 0 & 20 & 0 & 26 \\ \cline{2-7} & 1630-1730 & 0 & 6 & 0 & 20 & 0 & 26 \\ \cline{2-7} & 1800-1900 & 0 & 2/23 & 0 & 15/70 & 0 & 17/93 \\ \hline
12-Nov & 0830-1130 & 10 & 5 & 44 & 20 & 55 & 25 \\ \hline
13-Nov & 1430-1630 & 8 & 10 & 66 & 10 & 74 & 20 \\ \hline
14-Nov & 0800-1130 & 8 & NR & 78 & NR & 93/0 & 118/80 \\ \hline
15-Nov & 1300-1630 & 10 & 10 & 78 & 20 & 88 & 30 \\ \hline
16-Nov & 1000-1200 & 10 & 10 & 81 & 20 & 91 & 30 \\ \hline \multirow{4}{*}{17-Nov} & 1200-1400 & 10/0/ & 0/10/ & 108/0/ & 0/20/ & 118/0/ & 0/30/ \\ \cline{2-7} & 0/10 & 20/10 & 0/71 & 105/20 & 0/81 & 125/30 \\ \cline{2-7} & 1400-1630 & 8 & 0 & 23 & 0 & 31 & 0 \\ \hline \multirow{2}{*}{18-Nov} & 1000-1130 & 10 & 0 & 17 & 0 & 27 & 0 \\ \cline{2-7} & 1330-1530 & 30 & 10 & 110 & 40 & 140 & 50 \\ \hline \end{tabular}
\end{table}
Table 11: FX-6 shift allocations by SC and pre-mission brief vehicle counts. Gray rows represent \(SC_{2}\)’s shifts, no data was collected for red rows, and NR means no data recorded.
## Appendix B Results
This appendix provides additional results.
### Subjective Results
The descriptive statistics for the in situ subjective workload component responses are provided in Table 12. These results are provided by shift and SC.
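The mean (SD) entries in Table 12 are ordinary descriptive statistics computed over the in situ responses within each shift. A minimal sketch of that computation is shown below; the column names and sample values are assumptions for illustration only.

```python
import pandas as pd

# Hypothetical in situ subjective responses (column names assumed for illustration).
responses = pd.DataFrame({
    "date":      ["12-Nov", "12-Nov", "12-Nov", "13-Nov", "13-Nov"],
    "shift":     ["0830", "0830", "0830", "1430", "1430"],
    "cognitive": [34.0, 50.5, 27.0, 43.0, 38.0],
    "speech":    [34.0, 50.5, 38.0, 27.0, 28.0],
})

# Mean and standard deviation per shift, matching the "mean (SD)" table format.
stats = responses.groupby(["date", "shift"]).agg(["mean", "std"])
print(stats.round(2))
```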
\begin{table}
\begin{tabular}{|c|c||c|c|c|c|c|} \hline \multicolumn{2}{|c||}{**Shift**} & \multicolumn{1}{c|}{**Cognitive**} & \multicolumn{1}{c|}{**Speech**} & \multicolumn{1}{c|}{**Auditory**} & \multicolumn{1}{c|}{**Visual**} & \multicolumn{1}{c|}{**Physical**} \\ \hline
**Date** & **Time** & & & & & & \\ \hline \hline \multirow{4}{*}{11-Nov} & 1300 & 27.6 (8.76) & 21.2 (7.16) & 18 (0) & 37.4 (13.27) & 24.2 (14.7) \\ \cline{2-7} & 1500 & 17.67 (16.5) & 17.67 (16.5) & 34 (0) & 23.17 (25.15) & 18 (0) \\ \cline{2-7} & 1630 & 28.67 (9.24) & 28.67 (9.24) & 45 (9.53) & 17.67 (16.5) & 28.67 (9.24) \\ \cline{2-7} & 1800 & 34.13 (13.27) & 22 (8) & 26 (9.39) & 38.13 (8.25) & 34 (0) \\ \hline
12-Nov & 0830 & 37.28 (21.32) & 40.96 (20.36) & 35.5 (19.09) & 31.92 (16.96) & 38.21 (12.29) \\ \hline
13-Nov & 1430 & 40.65 (11.43) & 27.65 (11.3) & 21.1 (10.33) & 42.3 (11.55) & 30.85 (10.23) \\ \hline
14-Nov & 0800 & 41.39 (16.14) & 30.53 (11.86) & 22.42 (10.9) & 43.22 (12.82) & 45.94 (10.97) \\ \hline
16-Nov & 1000 & 49.32 (7.83) & 37.61 (11.41) & 44.64 (13.82) & 42.29 (12.46) & 37.61 (11.41) \\ \hline \multirow{2}{*}{17-Nov} & 1200 & 37.09 (12.23) & 26.77 (11.11) & 32.64 (11.37) & 32.64 (17.1) & 26.77 (8.36) \\ \cline{2-7} & 1400 & 26.93 (18.71) & 24.79 (16) & 22.36 (18.32) & 31.71 (17.57) & 24.71 (12.89) \\ \hline \multirow{2}{*}{18-Nov} & 1000 & 32.5 (11.97) & 29.25 (10.91) & 26.05 (11.43) & 22.6 (13.53) & 27.6 (8.26) \\ \cline{2-7} & 1330 & 37.89 (17.92) & 36.58 (9.07) & 35.35 (10.43) & 39.19 (15.46) & 34.077 (9.38) \\ \hline \end{tabular}
\end{table}
Table 12: The subjective in situ workload component responses descriptive statistics, mean (SD), by shift and SC. Gray cells represent \(SC_{2}\)’s results.
### Individual Shift Estimated Workload Analysis
Nov \(16^{th}\) Shift: The comparison of the in situ workload component responses to the individual workload component estimates is provided in Table 13. The overall workload results are also provided for completeness.
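The subjective overall workload values in these appendix tables appear consistent with a weighted average of the five in situ component ratings. The sketch below computes such an average; the weights are an inference made here for illustration (they reproduce several tabulated values) and are not weights reported by the authors.

```python
# Illustrative component weights; they reproduce several tabulated subjective
# overall values but are an inference from the tables, not reported weights.
WEIGHTS = {"cognitive": 0.40, "speech": 0.15, "auditory": 0.10,
           "visual": 0.20, "physical": 0.15}

def subjective_overall(ratings: dict) -> float:
    """Weighted average of the five in situ component ratings."""
    return sum(WEIGHTS[c] * ratings[c] for c in WEIGHTS)

# Example using the 1000 time point of the Nov 16th shift (Table 13).
ratings_1000 = {"cognitive": 34, "speech": 34, "auditory": 34,
                "visual": 34, "physical": 18}
print(round(subjective_overall(ratings_1000), 2))  # 31.6
```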
\begin{table}
\begin{tabular}{|c|c||l|l|l|l|l|l|} \hline
**Time** & **Metric** & **Cognitive** & **Speech** & **Auditory** & **Visual** & **Physical** & **Overall** \\ \hline \hline \multirow{2}{*}{1000} & Subj. & 34 & 34 & 34 & 34 & 18 & 31.60 \\ & Est. & 43.52 (3.75) & 9.73 (0.00) & 59.20 (4.87) & – & 61.34 (4.06) & 53.73 (1.80) \\ \hline \multirow{2}{*}{1012} & Subj. & 50.5 & 50.5 & 50.5 & 34 & 34 & 44.73 \\ \cline{2-8} & Est. & 58.48 (4.94) & 12.54 (4.50) & 57.50 (3.39) & – & 18.30 (0.95) & 50.51 (1.96) \\ \hline \multirow{2}{*}{1020} & Subj. & 50.5 & 50.5 & 50.5 & 34 & 34 & 44.73 \\ \cline{2-8} & Est. & 56.30 (1.34) & 41.79 (5.94) & 62.26 (3.44) & – & 51.73 (2.93) & 58.55 (1.3) \\ \hline \multirow{2}{*}{1030} & Subj. & 50.5 & 50.5 & 67 & 34 & 34 & 46.38 \\ & Est. & 55.38 (5.26) & 47.09 (8.27) & 50.68 (2.98) & – & 24.53 (1.65) & 51.95 (1.12) \\ \hline \multirow{2}{*}{1043} & Subj. & 50.5 & 34 & 18 & 34 & 34 & 39.00 \\ \cline{2-8} & Est. & 56.45 (2.95) & 57.58 (2.57) & 51.16 (2.89) & – & 62.80 (5.02) & 60.59 (2.15) \\ \hline \multirow{2}{*}{1051} & Subj. & 50.5 & 34 & 50.5 & 50.5 & 50.5 & 48.03 \\ & Est. & 40.88 (3.17) & 27.73 (5.92) & 54.38 (1.67) & – & 22.59 (2.47) & 45.64 (0.73) \\ \hline \multirow{2}{*}{1101} & Subj. & 50.5 & 34 & 50.5 & 34 & 34 & 42.25 \\ \cline{2-8} & Est. & 58.07 (3.36) & 56.41 (5.79) & 48.66 (3.15) & – & 11.89 (0.99) & 50.73 (1.11) \\ \hline \multirow{2}{*}{1111} & Subj. & 50.5 & 50.5 & 67 & 50.5 & 50.5 & 52.15 \\ & Est. & 52.10 (1.52) & 28.49 (3.51) & 69.39 (3.44) & – & 14.66 (1.83) & 49.63 (0.68) \\ \hline \multirow{2}{*}{1120} & Subj. & 50.5 & 50.5 & 34 & 50.5 & 34 & 46.38 \\ & Est. & 56.30 (1.34) & 41.79 (5.94) & 50.41 (3.98) & – & 51.73 (2.93) & 49.91 (2.37) \\ \hline \multirow{2}{*}{1130} & Subj. & 50.5 & 34 & 50.5 & 50.5 & 34 & 45.55 \\ & Est. & 37.97 (4.83) & 54.77 (7.04) & 52.82 (1.80) & – & 11.24 (2.41) & 43.74 (1.74) \\ \hline \multirow{2}{*}{1140} & Subj. & 50.5 & 34 & 50.5 & 67 & 50.5 & 51.33 \\ & Est. & 46.74 (7.08) & 72.78 (3.31) & 42.52 (2.73) & – & 18.14 (4.18) & 48.25 (2.18) \\ \hline \multirow{2}{*}{1150} & Subj. & 50.5 & 18 & 34 & 50.5 & 50.5 & 43.98 \\ & Est. & 56.59 (4.52) & 57.61 (3.09) & 54.71 (3.07) & – & 9.80 (2.46) & 50.46 (1.39) \\ \hline \multirow{2}{*}{1155} & Subj. & 67 & 34 & 34 & 50.5 & 50.5 & 52.98 \\ & Est. & 39.13 (7.44) & 44.28 (4.82) & 55.57 (3.49) & – & 17.35 (2.66) & 45.04 (2.86) \\ \hline \end{tabular}
\end{table}
Table 13: The Nov \(16^{th}\) shift’s subjective in situ workload component responses and overall workload descriptive statistics along with the corresponding mean (SD) by in situ subjective data collection time point.
Nov \(17^{th}\) 1200-1400 Shift: The comparison of the in situ workload component responses to the individual workload component estimates is provided in Table 14. The overall workload results are also provided for completeness.
\begin{table}
\begin{tabular}{|c|c||l|l|l|l|l|l|} \hline
**Time** & **Metric** & **Cognitive** & **Speech** & **Auditory** & **Visual** & **Physical** & **Overall** \\ \hline \hline \multirow{2}{*}{1220} & Subj. & 34 & 18 & 18 & 18 & 18 & 18 & 24.4 \\ \cline{2-7} & Obj. & 32.04 (3.84) & 18.47 (5.63) & 61.73 (3.63) & – & 62.33 (9.86) & 50.58 (2.85) \\ \hline \multirow{2}{*}{1234} & Subj. & 18 & 18 & 18 & 18 & 18 & 18 \\ \cline{2-7} & Obj. & 38.53 (7.56) & 28.62 (2.10) & 57.56 (2.37) & – & 22.71 (2.57) & 45.19 (2.51) \\ \hline \multirow{2}{*}{1250} & Subj. & 34 & 34 & 50.5 & 18 & 18 & 30.05 \\ & Obj. & 36.06 (13.29) & 51.72 (9.02) & 53.94 (4.63) & – & 17.22 (1.67) & 44.18 (4.57) \\ \hline \multirow{2}{*}{1300} & Subj. & 34 & 18 & 34 & 34 & 34 & 31.6 \\ \cline{2-7} & Obj. & 37.45 (8.45) & 42.88 (4.13) & 64.39 (3.23) & – & 35.00 (1.65) & 48.76 (3.42) \\ \hline \multirow{2}{*}{1310} & Subj. & 34 & 18 & 18 & 50.5 & 34 & 33.3 \\ & Obj. & 53.98 (6.77) & 19.31 (6.04) & 58.60 (1.90) & – & 25.96 (4.72) & 50.93 (1.87) \\ \hline \multirow{2}{*}{1320} & Subj. & 50.5 & 50.5 & 34 & 50.5 & 34 & 46.38 \\ \cline{2-7} & Obj. & 43.45 (3.34) & 52.58 (4.98) & 54.78 (2.58) & – & 34.37 (4.36) & 50.37 (1.96) \\ \hline \multirow{2}{*}{1330} & Subj. & 18 & 18 & 34 & 50.5 & 50.5 & 30.98 \\ & Obj. & 46.73 (3.05) & 9.32 (1.40) & 51.73 (4.40) & – & 43.83 (4.18) & 50.62 (1.62) \\ \hline \multirow{2}{*}{1340} & Subj. & 50.5 & 34 & 34 & 50.5 & 18 & 41.5 \\ \cline{2-7} & Obj. & 58.31 (5.02) & 49.14 (5.71) & 44.93 (4.14) & – & 36.45 (1.86) & 54.90 (1.81) \\ \hline \multirow{2}{*}{1350} & Subj. & 50.5 & 34 & 34 & 34 & 34 & 40.6 \\ \cline{2-7} & Obj. & 42.51 (7.34) & 29.94 (4.05) & 52.64 (3.45) & – & 34.40 (6.69) & 48.52 (1.62) \\ \hline \multirow{2}{*}{1355} & Subj. & 50.5 & 34 & 50.5 & 34 & 18 & 39.85 \\ \cline{2-7} & Obj. & 31.22 (10.68) & 53.80 (9.56) & 60.30 (2.27) & – & 43.19 (3.10) & 48.38 (3.80) \\ \hline \end{tabular}
\end{table}
Table 14: The Nov \(17^{th}\) 1200-1400 shift’s subjective in situ workload component responses and overall workload descriptive statistics along with the corresponding mean (SD) by in situ subjective data collection time point.
Joint Integrator and SCs Shift, Nov \(18^{th}\) 1330-1530: The comparison of the in situ workload component responses to the individual workload component estimates is provided in Table 15. The overall workload results are also provided for completeness.
## Acknowledgments

The authors thank the entire CCAST team, including Drs. Shane Clark and David Diller. Adams thanks Prakash Baskaran, Robert Brown and Dr. Jamison Heard for contributing to the data collection tools and assisting with the data analysis. Importantly, the authors thank the CCAST SCs for their willingness to be subjected to this human subjects evaluation. This research was developed with funding from the Defense Advanced Research Projects Agency (DARPA). The views, opinions, and findings expressed are those of the authors and are not to be interpreted as representing the official views or policies of the Department of Defense or the U.S. Government. DISTRIBUTION STATEMENT A: Approved for public release: distribution unlimited.
|